Toronto Star

The dark side of artificial intelligence

- R. MICHAEL WARREN R. Michael Warren is a former corporate director, Ontario deputy minister, TTC chief general manager and Canada Post CEO. r.michael.warren@gmail.com

I’m with Bill Gates, Stephen Hawking and Elon Musk. Artificial intelligence (AI) promises great benefits. But it also has a dark side. And those rushing to create robots smarter than humans seem oblivious to the consequences.

Ray Kurzweil, director of engineering at Google, predicts that by 2029, computers will be able to outsmart even the most intelligent humans. They will understand multiple languages and learn from experience. Once they can do that, we face two serious issues. First, how do we teach these creatures to tell right from wrong — in our own self-defence?

Second, robots will self-improve faster than we slow-evolving humans. That means outstripping us intellectually, with unpredictable outcomes.

Kurzweil recalls a 1999 conference of AI experts who were polled on when they thought a computer would pass the Turing test (the point at which a machine’s conversation becomes indistinguishable from a human’s).

The consensus was 100 years. And a good contingent thought it would never be done. Today, Kurzweil thinks we’re at the tipping point toward intellectually superior computers.

AI brings together a combination of mainstream technologies that are already having an impact on our everyday lives.

Computer games are a bigger industry than Hollywood.

Health-care diagnosis and targeted treatments, machine learning, public safety and security, and driverless transportation are a few of the current applications. But what about the longer-term implications? Physicist Stephen Hawking warns, “. . . the development of full artificial intelligence could spell the end of the human race. Once humans develop full AI, it will take off on its own and redesign itself at an ever-increasing rate . . . Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”

Speaking at an MIT symposium last year, Tesla CEO Elon Musk said, “I think we should be very careful about AI. If I were to guess what our greatest existential threat is, I’d say it’s probably that. With artificial intelligence, we are summoning the demon.”

Bill Gates wrote recently, “I am in the camp that is concerned about super intelligence.” Initially, he thinks machines will do a lot of work for us that’s not super challenging. A few decades later, their intelligence will evolve to the point of real concern.

They are joined by Stuart Armstrong of the Future of Humanity Institute at Oxford University. He believes machines will work at speeds inconceivable for humans. They will eventually stop communicating with us and take control of our economy, financial markets, health care and much more. He warns that robots will eventually make us redundant and could take over from their creators.

Last year, Musk, Hawking, Armstrong and other scientists and entrepreneurs signed an open letter. It acknowledges the great potential of AI, but warns that research into the rewards has to be matched with an effort to avoid its potential for serious damage.

There are those who hold less pessimistic views. Many of them are creators of advanced AI technology.

Rollo Carpenter, CEO of Cleverbot, is typical. His technology learns from past conversations. It scores high on the Turing test because it fools a large proportion of people into believing they’re talking to a human. Carpenter thinks we are a long way from full AI and that there is time to address the challenges.

Meanwhile, what’s being done to teach robots right from wrong before it’s too late? Quite a lot, actually. Many who teach machines to think agree that the more freedom machines are given, the more they will need “moral standards.”

The virtual school, Good AI, is a prime example. Its mission is to train artificial intelligence in the art of ethics: how to think, reason and act. The students are hard drives. They’re being taught to apply their knowledge to situations they’ve never faced before. A digital mentor is used to police the acquisition of values.

Other institutions are teaching robots how to behave on the battlefield. Some scientists argue robot soldiers can be made ethically superior to humans, meaning they cannot rape, pillage or burn down villages in anger. Despite these precautions, it’s clear artificial intelligence applications are advancing at a faster rate than our “moral preparedness.” If this naive condition persists, the consequences could be catastrophic.
