The Middletown Press (Middletown, CT)
WILL ROBOTS RULE THE WORLD?
New Haven author talks future of artificial intelligence
Steve Shwartz isn’t particularly worried about artificial intelligence, and if anyone should know whether or not to panic, it’s him. He’s been working in artificial intelligence since 1979, when he began a postdoctoral program at Yale. Since then, he’s been an entrepreneur, founding both AI and non-AI companies. Now, he chairs the board of one of those companies, Device42, which he founded in 2012.
And he’s written a book to ease the public’s worry about AI technologies. “Evil Robots, Killer Computers, and Other Myths” explores the future of AI, and explains how it intersects with humanity.
Shwartz chatted with Hearst Connecticut Media via email about the book, which was published earlier this month.
Sarajane Sullivan: Do you see the advancements of artificial intelligence as good or bad? It makes a lot of people feel uneasy, but why is that? Should we feel hesitant, and why or why not?
Steve Shwartz: AI is having a huge impact on society that is mostly but not all good. AI makes our lives easier by enabling us to talk to our smartphones, translate language in foreign countries and automatically label our photos. It brings a promise of self-driving cars that
may someday eliminate fatalities and provide mobility for seniors and the disabled. It is revolutionizing medicine, identifying hate speech, and stopping cyberattacks.
At the same time, AI systems are susceptible to inadvertent discrimination. Facial recognition systems incorrectly identify minorities as terrorists and criminals. AI-based decision systems make loan and hiring decisions that are often discriminatory. AI also creates privacy issues that threaten to create a 1984-style society in which people are constantly monitored. AI-based weapons are also a concern – though not as big a concern as depicted in science fiction. And AI makes it easier to create fake news, which threatens our elections (though see below for a caveat). These are serious issues; however, good progress is being made on all of them.
The biggest concern around AI is that it will take over the world, turn us into pets, or take all our jobs. Elon Musk called AI “the biggest existential threat to humanity.” This is all fiction and no one should be concerned about any of these things.
Sullivan: Is there an AI trend coming in the next few years that you feel will change the way humans live their lives? If so, what is it and how will it affect us?
Shwartz: I believe that issues like discrimination and privacy are well-understood and that we are well on the way to resolving those issues.
My biggest concern is self-driving vehicles. There is a general belief that self-driving vehicles will improve safety. While it is certainly true that self-driving vehicles have the potential to react more quickly than humans, there is also strong technical evidence that self-driving vehicles will make bad decisions that humans wouldn’t make. A bad decision made quickly can still cause an accident. Yet, governments all over the world are rushing to pave the way for this technology without appropriate safety testing and the result will be accidents and massive traffic jams.
The NHTSA (National Highway Traffic Safety Administration) has stated that it probably won’t require safety testing of self-driving capabilities. If in fact the inevitable bad decisions lead to a high rate of accidents and/or traffic jams, this policy could prove to be disastrous. My view is that we should take a step back and require safety testing.
Sullivan: What are some of the most outlandish rumors or misconceptions you’ve heard about AI technology and how do you debunk them?
Shwartz: Number one is the idea that AI systems will develop human-level intelligence and/or superhuman intelligence and take over the world. As I explain in my book, the reality is that today’s AI systems represent clever engineering but have no human-level intelligence. Moreover, the technology behind these systems cannot evolve into human-level intelligence and AI researchers have no concrete ideas of how to create human-level intelligence.
Number two is the idea that AI will take all of our jobs. If AI systems could read books and take classes, then yes, they could learn all our jobs. But this is fiction and will remain fiction. The reality is that, while AI systems will cause some job loss, the degree of job loss will be far less than that caused by conventional software technology.
Number three is the idea that AI can understand language. For example, in 2018, Microsoft claimed to have built a system that reads better than humans. The reality is that this system doesn’t understand language at all and uses some very clever engineering to outperform humans on a single “reading comprehension” test. IBM made a similar claim when its Watson DeepQA computer beat two Jeopardy! champions in 2011. Both systems are clearly detailed in technical papers that explain the clever engineering.
Number four is that AI systems can generate credible fake news. There has been a great deal of press around GPT-3 and similar AI systems that generate fake news articles. However, these systems have no knowledge of the world and the generated text is typically riddled with incorrect facts that are easily detected with a modicum of fact-checking.
Sullivan: What is one thing you wish people outside the tech field understood about AI?
Shwartz: AI systems are amazing engineering feats and do things that seem intelligent, but none of these systems have any human-like intelligence whatsoever, and AI researchers have no idea how to build human-level intelligence into computers or robots. More importantly, the amazing engineering feats of AI should not be taken as an indication that progress will lead to human-level intelligence.