
Facing the Scary Promise of Artificial Intelligence


was deeply troubling to many A.I. researchers at the company. Early this month, Google executives, trying to head off a worker rebellion, said they wouldn’t renew the contract next year.

A.I. research has enormous potential and enormous implications, both as an economic engine and a source of military superiority. Beijing has said it is willing to spend billions to make China the world’s leader in A.I., while the United States Defense Department is aggressively courting the tech industry for help. A new breed of autonomous weapons can’t be far away.

All sorts of deep thinkers have joined the debate, from a gathering of philosophers and scientists held along the central California coast to an annual conference hosted in Palm Springs, California, by Amazon’s chief executive, Jeff Bezos.

“You can now talk about the risks of A.I. without seeming like you are lost in science fiction,” said Allan Dafoe, a director of the governance of A.I. program at the Future of Humanity Institute, a research center at the University of Oxford that explores the risks and opportunities of advanced technology.

And the public criticism of Facebook and other tech companies over the past few months has done plenty to raise the issue of the unintended consequences of the technology created by Silicon Valley.

In April, Mr. Zuckerberg spent two days answering questions from members of the United States Congress about data privacy and Facebook’s role in the spread of misinformation before the 2016 American election. He faced a similar grilling in Europe last month.

Facebook’s recognition that it was slow to understand what was going on has led to a rare moment of self-reflection in an industry that has believed it is making the world better.

Even such influential figures as the Microsoft founder Bill Gates and the late Stephen Hawking have expressed concern about creating machines that are more intelligent than we are. Though superintelligence seems decades away, they and others have said, we should consider the consequences before it’s too late.

“The kind of systems we are creating are very powerful,” said Bart Selman, a computer science professor at Cornell University in Ithaca, New York, and a former Bell Labs researcher. “And we cannot understand their impact.”

Pacific Grove is a tiny town on the California coast. Geneticists gathered there in 1975 to discuss whether their work — gene editing — would end up harming the world.

The A.I. community held a similar event there in 2017.

The private gathering was organized by the Future of Life Institute, a think tank built to consider the risks of A.I.

The leaders of A.I. were in the room — among them Mr. LeCun, the Facebook A.I. lab boss who was at the dinner in Palo Alto and who had helped develop a neural network, one of the most important tools in artificial intelligence today. Also there were Nick Bostrom, whose 2014 book, “Superintelligence: Paths, Dangers, Strategies,” had an outsized — some would argue fear-mongering — effect on the A.I. discussion; Oren Etzioni, a former computer science professor at the University of Washington who had taken over the Allen Institute for Artificial Intelligence in Seattle; and Demis Hassabis, who heads DeepMind, an influential Google-owned A.I. research lab in London.

And so was Mr. Musk, who in 2015 had helped create an independent artificial intelligence lab, OpenAI, with an explicit goal: create superintelligence with safeguards meant to ensure it won’t get out of control. It was a message that clearly aligned him with Mr. Bostrom.

Mr. Musk said at the retreat: “We are headed toward either superintelligence or civilization ending.”

Mr. Musk was asked how society can best live alongside superintelligence. What we needed, he said, was a direct connection between our brains and our machines. A few months later, he unveiled a startup, Neuralink, to create that kind of so-called neural interface by merging computers with human brains.

There is a saying in Silicon Valley: We overestimate what can be done in three years and underestimate what can be done in 10.

On January 27, 2016, Google’s DeepMind lab unveiled a machine that could beat a professional player at the ancient board game Go. In a match played months earlier, the machine, called AlphaGo, had defeated the European champion Fan Hui — five games to none.

Even top A.I. researchers had assumed it would be another decade before a machine could solve the game. Go is complex — there are more possible board positions than atoms in the universe — and the best players win not with sheer calculation, but through intuition. Two weeks before AlphaGo was revealed, Mr. LeCun said the existence of such a machine was unlikely.

A few months later, AlphaGo beat Lee Sedol, the best Go player of the last decade. The machine made moves that baffled human experts but ultimately led to victory.

Many researchers believe the kind of self-learning technology that underpins AlphaGo provides a path to “superintelligence.” And they believe progress in this area will accelerate in the coming years.

OpenAI recently “trained” a system to play a boat racing video game to win as many game points as it could. It proceeded to win those points but did so while spinning in circles, colliding with stone walls and ramming other boats. It’s the kind of unpredictability that raises grave concerns about A.I.

Since their dinner three years ago, the debate between Mr. Zuckerberg and Mr. Musk has turned sour. Last summer, in a live Facebook video, Mr. Zuckerberg called Mr. Musk’s views on A.I. “pretty irresponsible.”

Panicking about A.I. now, so early in its development, could threaten the many benefits that come from things like self-driving cars and A.I. health care, he said.

Mr. Zuckerberg then said: “People who are naysayers and kind of try to drum up these doomsday scenarios — I just, I don’t understand it.”

Mr. Musk responded with a tweet: “I’ve talked to Mark about this. His understanding of the subject is limited.”

In his testimony before Congress, Mr. Zuckerberg explained how Facebook was going to fix the problems it helped create: by leaning on artificial intelligence. But he acknowledged that scientists haven’t exactly figured out how some types of artificial intelligence are learning.

“This is going to be a very central question for how we think about A.I. systems,” Mr. Zuckerberg said. “Right now, a lot of our A.I. systems make decisions in ways that people don’t really understand.”

Researchers are warning that A.I. systems that automatically generate realistic images and video will soon make it even harder to trust what we see online. Both DeepMind and OpenAI now operate research groups dedicated to “A.I. safety.”

Mr. Hassabis, the founder of DeepMind, still thinks Mr. Musk’s views are extreme. But he said the same about the views of Mr. Zuckerberg. The threat is not here, he said. Not yet. But Facebook’s problems are a warning.

“We need to use the downtime, when things are calm, to prepare for when things get serious in the decades to come,” Mr. Hassabis said. “The time we have now is valuable, and we need to make use of it.”

The dangers of artificial intelligence were debated at a recent event hosted by Jeff Bezos of Amazon. (JACK NICAS/THE NEW YORK TIMES)
