THE LIMITS TO AI

THIS HAS ALL HAPPENED BEFORE...
Way back in the 1950s, at the dawn of computing, researchers assumed a machine with human-level intelligence might be possible within 10 years, 20 at the most. Here we are, 70 years later, and true AI arguably isn’t any closer to reality. So, should we file AI along with a cure for cancer and nuclear fusion as yet another technology that somehow recedes from reach with every step forward in understanding?
Certainly, the limitations of current pattern-spotting machine learning are becoming increasingly apparent. In 2015, Elon Musk, Tesla’s chief executive among his other roles, confidently predicted self-driving technology would achieve “complete autonomy” by 2018. He wasn’t alone in making that kind of bold prognostication. In 2016, Ford predicted that it would be selling “fully autonomous” commercial vehicles for ride-sharing applications by 2021.
But here we are, and driverless cars feel no closer to reality than they did a decade ago. Arguably, the challenge feels even greater. Earlier this year, John Krafcik, chief executive of Google’s driverless car subsidiary Waymo, underscored the difficulty of developing driverless technology.
“It’s an extraordinary grind, a bigger challenge than launching a rocket and putting it in orbit around the Earth because it has to be done safely over and over and over again,” Krafcik said. That’s quite a turnaround from 2018, when Waymo was planning to unleash 62,000 driverless minivans by the end of that year. At last count, Waymo’s driverless fleet numbered just 600 vehicles.
“When we thought, in 2015, that we would have a broadly available service by 2020, it wasn’t a crazy idea,” Krafcik explained. “If we’ve got one prototype, then we can get to mass production in just a couple of years, right? This was a position of—I wouldn’t say ignorance—but a lack of information and a lack of experience. We’ve become very humble over these last five years.”
Self-driving car technology is not the only area where the promise of AI has failed to deliver. Far from it. The Economist magazine reckons enthusiasm for AI across industry is stalling, partly because the technology has been overhyped, citing a survey of European AI start-ups that found 40 percent were not actually using AI at all.
More broadly, The Economist reckons AI faces two core problems. The first is, surprisingly, a lack of data. Despite our increasingly digitized lives, critical data is often incomplete. Tracking COVID-19 transmission, for instance, has proved impossible without a comprehensive record of individual movements.
The second problem is even more challenging. Existing AI typically depends on pattern spotting and does certain tasks, such as image recognition and language processing, far better than old-school software with hand-coded rules. But such systems are not “intelligent” as the term is traditionally understood. There’s no cognition or reasoning, no ability to generalize, just a narrow ability to do one thing very well. In fact, the scope of existing AI systems is so narrow that there is now doubt whether fully autonomous cars will ever be possible, such is the complexity of coping with the unpredictability of driving in the real world on real roads, and even slightly off them.