
To deploy AI wisely, we must understand its limitations

By Irina Raicu and Alice Xiang

Alice Xiang is a research scientist at the Partnership on AI. Irina Raicu is the director of the Internet Ethics program at Santa Clara University’s Markkula Center for Applied Ethics.

Is artificial intelligence a kind of wise adult, the kind you might turn to, say, for guidance about whom you should marry? “It’s not so hard to see,” the historian Yuval Noah Harari has argued, “how AI could one day make better decisions than we do about careers, and perhaps even about relationships. But once we begin to count on AI to decide what to study, where to work, and whom to date or even marry,” he added, “our conception of life will need to change.”

Rather than change our conception of life, though, what we might need to change, and urgently, is our perception of AI.

It’s actually quite hard to see how AI might make “better” decisions than we do about whom we might marry, given that many of us have very different notions of what “better” would be. AI works well in contexts with easily quantifiable objectives; in areas where social norms are shifting, however, where there is no clear societal consensus on the right thing to do, handing over the decision-making to AI tools might actually hamper us.

Artificial intelligence and its machine-learning subset are powerful but limited tools. We need to understand their limitations (as much as their abilities) if we want those tools to be helpful to humanity.

First, current AI models work well on specific tasks; their effectiveness is generally not transferable to different tasks. Sometimes the effectiveness doesn’t transfer even to the same task: A predictive model that works really well on one data set, for example, might not be nearly as accurate on another. Researchers talk about such models being “brittle”: they break down easily.

Second, human decision-making, with its associated human biases, is baked into multiple layers of AI technology. Humans decide what data to collect in the first place, and what data to leave out. Humans decide how to categorize and label that data. Humans decide on the objectives of AI and the criteria by which to evaluate it. Subjectivity reflected in the data or in the AI development process does not disappear simply because the final algorithm takes a mathematical form.

Third, what AI tools are very good at is identifying patterns in vast data sets. They do that far more thoroughly, more quickly and at greater scale than human brains can. But they simply identify correlations rather than causation. AI cannot tell the difference between a stereotype and a valid inference. Human expertise is required to separate noise from valuable insights.

Moreover, predictive algorithms are not oracles that tell us the truth about the future; they tell us how likely something is to occur, based on how often it has occurred before. And that likelihood comes with a range, owing to the inherent uncertainty in statistical models.

Understanding the limitations of AI tools should help us understand where we can usefully deploy them and where we should not. It should help us realize the ways in which AI is not like electricity, or other similar forces that it’s been compared to. Yes, it is powerful; yes, it operates in many different facets of our lives; however, unlike electricity, it contains and perpetuates our human flaws (and our past flaws, at that; it doesn’t necessarily keep up with our current ones).

Machine learning is not a wise adult. It is a smart child who can process vast amounts of information but who believes everything you tell it. Like a child, AI is very literal and easily misled by data that is biased, unrepresentative or otherwise flawed. We should respect its abilities and deploy them wisely, without bowing down to imaginary powers.

Photo caption: Irina Raicu, the director of the Internet Ethics program at the Markkula Center for Applied Ethics, believes there is an urgent need to alter our perception of artificial intelligence. (Gary Reyes, staff photographer)
