The Mercury News

Let’s not panic over artificial intelligence

- Larry Magid

We have a long history of “moral panics”: things we fear, whether or not we should. In most cases, these fears aren’t entirely irrational, but they rest on exaggerations or predictions that could, but probably won’t, come true, or on dangers that are not nearly as horrific as they may first appear.

Many of us remember the Y2K scare of 1999, when we were told that the power grid, ATMs and our transportation systems could come to a screeching halt at midnight on Jan. 1, 2000, because computers weren’t programmed to recognize a new century. And, yes, there were a handful of problems, but the world didn’t come to an end. There was a panic that our personal privacy was over in 1888, when Kodak introduced the first portable camera. And there are so many other examples, from killer bees to reefer madness, of things that could be somewhat dangerous but are hardly as devastating as some feared.

Now there is panic about artificial intelligence, with the worry that computers will cease being our servants and somehow morph into our overlords. This fear has inspired some great fiction, with movies like “The Terminator” and “I, Robot” as well as the all-powerful HAL from the movie “2001.”

And, while these movies remain in the realm of fiction, they do reflect a genuine concern about machines running amok, one shared not only by many in the public but also by some well-known tech experts.

Tesla and SpaceX founder Elon Musk has been one of the most vocal critics. Speaking at last month’s National Governors Association conference, Musk warned that artificial intelligence “is a fundamental existential risk for human civilization, and I don’t think people fully appreciate that,” referring to AI as “the scariest problem.”

Musk called upon the governors to consider government regulation and oversight of AI.

“I keep sounding the alarm bell,” he said. “But until people see robots going down the street killing people, they don’t know how to react.” He called AI “the rare case where I think we need to be proactive in regulation instead of reactive.”

Facebook CEO Mark Zuckerberg responded in a Facebook Live segment from his backyard in Palo Alto.

“I just think people who are naysayers and try to drum up these doomsday scenarios — I just, I don’t understand it.” He added, “It’s really negative and in some ways I actually think it is pretty irresponsible.”

Zuckerberg and, ironically, Musk both have a lot invested in AI. Anyone who’s ever driven a Tesla in Autopilot mode knows about that car’s powerful computers, which are capable of making instant life-and-death decisions while automatically steering and changing lanes as you drive on highways. Musk has said that all Teslas now being built have the hardware for full autonomous driving, which, he promises, will be unleashed when the self-driving software is ready and approved by government regulators.

Zuckerberg’s Facebook runs its own Facebook AI Research (FAIR) lab, which recently came under public scrutiny as a result of an experiment it shut down. That experiment drew attention over the past couple of weeks, with journalists and pundits misreporting what happened and why one part of it was shut down. Plus, all the hoopla about bots gone awry has distracted attention from some of the more interesting findings of the research.

In a nutshell, the purpose of the research was to find out how well computers could negotiate with each other and with people.

The panic was over the fact that the researchers stopped part of the experiment because the bots, or AI agents, wound up creating their own “language” that humans couldn’t understand. But that turned out to be partially fake news.

According to Facebook AI researcher Dhruv Batra, machines creating their own language “is a well-established sub-field of AI, with publications dating back decades.” In a Facebook post, he wrote that “agents in environments attempting to solve a task will often find intuitive ways to maximize reward” and stressed that “analyzing the reward function and changing the parameters of an experiment is NOT the same as ‘unplugging’ or ‘shutting down AI’. If that were the case, every AI researcher has been ‘shutting down AI’ every time they kill a job on a machine.”

In a published article, “Deal or no deal? Training AI bots to negotiate,” Facebook AI researchers wrote that “building machines that can hold meaningful conversations with people is challenging because it requires a bot to combine its understanding of the conversation with its knowledge of the world, and then produce a new sentence that helps it achieve its goals.”

In other words, the purpose of the experiment was to program bots that could talk with humans as well as with each other, so when researchers realized that the bots were speaking in ways that humans couldn’t understand, they simply reprogrammed them to speak English.

But what’s most interesting about this study is the finding that bots are actually better than humans at negotiating until they reach an agreement.

“While people can sometimes walk away with no deal, the model in this experiment negotiates until it achieves a successful outcome,” according to the study.

That’s because the bots were heavily rewarded for coming to an agreement, even if what they agreed upon was less than ideal. But, of course, that’s also sometimes true of human negotiations.
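
To make that reward point concrete, here is a minimal sketch in Python of the kind of scoring the researchers describe. The function name, the bonus value and the numbers are my own illustrative assumptions, not Facebook’s actual code; the only point is that when the bonus for closing any deal dwarfs the value of the deal itself, a learning agent ends up negotiating until it reaches agreement.

# Hypothetical sketch of reward shaping for a negotiating bot.
# Names and numbers are illustrative assumptions, not FAIR's actual code.
def negotiation_reward(deal_reached, value_captured, agreement_bonus=5.0):
    """Score one negotiation episode.

    value_captured: share of the items' value the bot secured (0.0 to 1.0).
    agreement_bonus: extra reward for reaching any deal at all. When it is
    large, even a poor deal scores better than walking away, so a trained
    bot keeps negotiating until it closes.
    """
    if not deal_reached:
        return 0.0  # walking away earns nothing
    return value_captured + agreement_bonus

print(negotiation_reward(True, 0.2))   # 5.2: a weak deal still beats...
print(negotiation_reward(False, 0.0))  # 0.0: ...no deal at all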

So, if bots are that good at negotiations, maybe they can also help us prevent future panics by analyzing risks and coming up with reasonable predictions and appropriate precautions. In the meantime, I’m all for having AI researchers work with ethicists and other experts to make sure that what they create benefits humankind without the risk of devastating unintended consequences.
