Pittsburgh Post-Gazette

Don’t panic yet

Robots aren’t about to take over the world

- Henry Friedlander is a rising senior at Shady Side Academy.

Might machines equipped with artificial intelligence spiral out of our control and destroy humanity? Those who worry include physicist Stephen Hawking and entrepreneur Elon Musk. They warn that superintelligent machines of the near future are sure to malfunction and will evolve on their own so rapidly that, at some point, they will have no use for us.

In one respect, of course, Messrs. Hawking and Musk are obviously right: Nothing made by human beings is flawless.

On the other side of the argument, however, are equally knowledgeable figures, such as Facebook’s Mark Zuckerberg and Andrew Ng, chief scientist at Baidu, known as China’s Google. They see the many ways AI will serve humans, such as by diagnosing and curing disease, expanding education, improving the environment, rescuing people from natural disasters, exploring space and helping the disabled (the list goes on), all without threatening our demise. Mr. Ng believes AI can be programmed with “moral dimensions.”

I’m not worried — at least not yet.

Most smart machines today are controlled by AI that is “narrow” or “weak,” programmed to perform a specific task, such as beating a human at chess, vacuuming a floor or driving a car. “Superintelligent” machines, which could learn, reason, intuit and perform complex tasks better and faster than humans, are in their infancy. As for machines that can take over the world …

“Worrying about it is like worrying about the overpopulation of Mars before colonists set foot there; we have plenty of time to figure it out,” says Mr. Ng, who believes it may take hundreds of years before AI surpasses human intelligence. Tech writer Jeff Goodell calls most robots today “as dumb as a lawnmower.”

Of course, the follies of human history provide many reasons to be concerned about the possible misuse of artificial intelligence. That’s why Demis Hassabis, co-founder of AI developer DeepMind, thinks it is important to assess whether each particular AI advance is designed to help and heal or threaten and destroy. He favors international guidelines, and many in the AI community recently signed an open letter calling for comprehensive research into safeguards to ensure that AI systems will be “robust and beneficial.”

Universities, companies, nongovernmental organizations and governmental offices of technology are establishing AI safety strategies and guidelines. MIT’s Media Lab, for instance, is organizing collaborations among computer scientists, social scientists and philosophers aimed at predicting and controlling any problems that arise with AI. Five of the world’s largest tech companies (Amazon, Facebook, IBM, Microsoft and Google) are writing ethical standards.

One potential safety measure is to require a clearly defined mission for each new AI program and to build in encrypted barriers to unauthorized use. DeepMind and researchers at the University of Oxford are developing a “kill switch” so that AI machines can be shut down without their knowing that humans are capable of doing so. “Interruptibility” code could prevent mistakes or misuse. For instance, it could be used to stop a medical robot from killing someone genetically prone to cancer in order to “cure” the disease, a military robot from killing noncombatants, or an unscrupulous hacker from creating havoc.
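The intuition behind interruptibility can be made concrete. The following is a minimal sketch in Python, purely illustrative and not drawn from DeepMind’s actual code; the toy environment, the SAFE_ACTION constant and the interrupt_requested stand-in are all hypothetical. It shows a simple learning agent whose actions a human operator can override at any time, while the learning rule itself ignores the override, so the agent never learns to resist the switch:

    import random

    N_STATES, N_ACTIONS = 5, 3
    SAFE_ACTION = 0                       # action forced while interrupted (hypothetical)
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

    # Q-table of estimated action values, one row per state
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

    def interrupt_requested():
        # Stand-in for a human operator's pause/kill switch
        return random.random() < 0.05

    def step(state, action):
        # Toy environment: random next state; action 2 earns a reward
        return random.randrange(N_STATES), (1.0 if action == 2 else 0.0)

    state = 0
    for _ in range(10000):
        # The agent chooses an action (epsilon-greedy on its estimates) ...
        if random.random() < EPSILON:
            action = random.randrange(N_ACTIONS)
        else:
            action = max(range(N_ACTIONS), key=lambda a: Q[state][a])

        # ... but a human override can replace it at any moment.
        if interrupt_requested():
            action = SAFE_ACTION

        next_state, reward = step(state, action)

        # Off-policy (Q-learning) update: the target uses the best next
        # action, not the one actually taken, so occasional overrides
        # neither reward nor punish the agent for being interrupted.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

Because the update rule never looks at why an action was chosen, the interruptions leave the learned values unbiased, which is roughly the property the DeepMind and Oxford researchers aim to guarantee.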

In short, doomsday scenarios remain far in the distance and are likely avoidable. In fact, some AI machines might be capable of making better moral decisions than humans. Ronald Arkin, an AI expert at Georgia Tech, points out that AI-powered military robots, for example, might be ethically superior to human soldiers because they would not rape, pillage or make poor judgments under stress. Machine ethics is a whole new field of research that studies how human values can be engineered into technology.

Governments all over the world certainly are well aware of the potential dangers of artificial intelligence. Efforts are underway at the United Nations to develop what essentially would be a multilateral arms-control treaty to limit the construction and deployment of autonomous killer robots. Nonproliferation treaties limiting the development of nuclear, biological and chemical weapons have an uneven history of success but without doubt have created a world with far fewer of these weapons than there otherwise would have been.

Dystopian depictions of machines ruling the planet seem overwrought, comparable to arguments in the early 1900s that planes would fall out of the sky and cars would produce nothing but carnage. That said, these dark visions do serve as an urgent warning, one that demands the implementation of rigorous ethical standards, technological safeguards and regulatory oversight.

The life- and world-changing benefits of artificial intelligence appear infinite. We should be careful, but we should not let fear shape our future.

