San Francisco Chronicle

Algorithms may come to replace you

By David Kaufman

David Kaufman is a New York Times writer.

For five years, Israeli author and historian Yuval Noah Harari has quietly emerged as a bona fide pop-intellectual. His 2014 book “Sapiens: A Brief History of Humankind” is a sprawling account of human history from the Stone Age to the 21st century; Ridley Scott, who directed “Alien,” is co-leading its screen adaptation. Harari’s latest book, “21 Lessons for the 21st Century,” is an equally ambitious look at key issues shaping contemporary global conversations — from immigration to nationalism, climate change to artificial intelligence. Harari recently spoke about the benefits and dangers of AI and its potential to upend the ways we live, learn and work. The conversation has been edited and condensed.

Q: AI is still so new that it remains relatively unregulated. Does that worry you?

A: There is no lack of utopian scenarios in which AI emerges as a hero, but it can actually go wrong in so many ways. And this is why the only really effective form of AI regulation is global regulation. If the world gets into an AI arms race, it will almost certainly guarantee the worst possible outcome.

Q: Since AI is still so new, is there a country already winning the AI race?

A: China was really the first country to tackle AI on a national level in terms of focused, governmental thinking; they were the first to say, “We need to win this thing,” and they are certainly ahead of the United States and Europe by a few years.

Q: Have the Chinese been able to weaponize AI yet?

A: Everyone is weaponizing AI. Some countries are building autonomous weapons systems based on AI, while others are focused on disinformation or propaganda or bots. It takes different forms in different countries. In Israel, for instance, we have one of the largest laboratories for AI surveillance in the world — it’s called the Occupied Territories. In fact, one of the reasons Israel is such a leader in AI surveillance is the Israeli-Palestinian conflict.

Q: Explain this a bit further.

A: Part of why the occupation is so successful is AI surveillance technology and big data algorithms. You have major investment in AI (in Israel) because there are real-time stakes in the outcomes — it’s not just some future scenario.

Q: AI was supposed to make decision-making a whole lot easier. Has this happened?

A: AI allows you to analyze more data more efficiently and far more quickly, so it should be able to help make better decisions. But it depends on the decision. If you want to get to a major bus station, AI can help you find the easiest route. But then you have cases where someone, perhaps a rival, is trying to undermine that decision-making. For instance, when the decision is about choosing a government, there may be players who want to disrupt this process and make it more complicated than ever before.

Q: Is there a limit to this shift?

A: Well, AI is only as powerful as the metrics behind it.

Q: And who controls the metrics?

A: Humans do; metrics come from people, not machines. You define the metrics — who to marry or what college to attend — and then you let AI make the best decision possible. This works because AI has a far more realistic understanding of the world than you do, and because humans tend to make terrible decisions.

Q: But what if AI makes mistakes?

A: The goal of AI isn’t to be perfect, because you can always adjust the metrics. AI simply needs to do better than humans can do — which is usually not very hard.

Q: What remains the biggest misconception about AI?

A: People confuse intelligence with consciousness; they expect AI to have consciousness, which is a total mistake. Intelligence is the ability to solve problems; consciousness is the ability to feel things — pain, hate, love, pleasure.

Q: Can machines develop consciousness?

A: Well, there are “experts” in science-fiction films who think they can, but no — there’s no indication that computers are anywhere on the path to developing consciousness.

Q: Do we even want computers with feelings?

A: Generally, we don’t want a computer to feel; we want the computer to understand what we feel. Take medicine. People like to think they’d always prefer a human doctor rather than an AI doctor. But an AI doctor could be perfectly tailored to your exact personality and understand your emotions, maybe even better than your own mother. All without consciousness. You don’t need to have emotions to recognize the emotions of others.

Q: So what’s left that AI hasn’t touched?

A: In the short term, there’s still quite a bit. For now, most of the skills that demand a combination of the cognitive and the manual are beyond AI’s reach. Take medicine once again; if you compare a doctor with a nurse, it’s far easier for AI to replace a doctor, who basically just analyzes data for diagnoses and suggests treatments. Replacing a nurse, who injects medications and changes bandages, is far more difficult. But this will change; we are really at the beginning of AI’s full potential.

Q: So is the AI revolution almost upon us?

A: Not exactly. We won’t see this massive disruption in, say, five or 10 years — it will be more of a cascade of ever-bigger disruptions.

Q: And how will this affect the workforce?

A: The economy will have to face ever-greater disruptions in the workforce because of AI. And in the long run, no element of the job market will be 100 percent safe from AI and automation. People will need to continually reinvent themselves. This may take 50 years, but ultimately nothing is safe.
