Elon Musk is painting an unrealistic picture of AI
Elon Musk has a habit of using Twitter and interviews to make big statements. On May 30, for instance, Musk told Jack Dorsey that Artificial General Intelligence (AGI) would most likely be here by 2029. And when Musk talks, people listen. But should they?
His pronouncements may cause some people to panic, especially when he sounds the alarm about what could go wrong. He once told a crowd at MIT, for example, “With Artificial Intelligence, we are summoning the demon”. What’s more, his pronouncements could distract from the real issues with a technology that is not yet ready for prime time.
The truth is that there is a missing link between today’s Artificial Intelligence (AI), which is primarily pattern recognition, and the kind of Star Trek computer-level AI that Musk is dreaming about. Yes, AI can do amazing things such as speech recognition and holding surrealistic, entertaining conversations about virtually any topic. Still, when it comes to reliability and coherence, current AI is nowhere near what it needs to be. There are no firm fixes in hand for the limitations of current AI: it creates false stereotypes, spreads misinformation and fails at everyday tasks such as human-level driving, despite years of promises. Fixing that needs to start with a realistic assessment of where we are and how far we have to go. Claims such as Musk’s are detrimental to the public understanding of one of the most important engineering challenges of our time: building an AI that is genuinely trustworthy. By painting a rosy and likely unrealistic picture, he has, in our view, led the public astray.
With so much at stake, we decided to call Musk’s claims “bullshit”.
One of us, Marcus, drafted a $100,000 bet. The bet highlights the disconnect between Musk’s claims and current reality. In the spirit of serious betting, the bet specifies five concrete conditions. To say that AGI had been achieved, the field would have to defy at least three of five pessimistic predictions that Marcus compiled in collaboration with New York University computer scientist Ernest Davis. AI must be able to:
Watch movies and tell us accurately what is going on. Who are the characters? What are their conflicts and motivations?
Read novels and reliably answer questions about plot, character, conflicts, and motivations. The key is to go beyond the literal text and show a fundamental understanding of the material.
Work as a competent cook in an arbitrary kitchen. No cookie-cutter recipes, but real creativity.
Reliably construct bug-free code of more than 10,000 lines from natural language specifications or from interactions with a non-expert user. (Gluing together code from existing libraries doesn’t count.)
Take arbitrary proofs from the mathematical literature written in natural language and convert them into a symbolic form suitable for symbolic verification.
The other of us, Wadhwa, thought it was a great bet, fair and provocative, and something that could move the field of AI forward. (Ben Goertzel, for decades one of the leaders in trying to make AGI into something real, rather than just a fantasy, felt much the same way.) So Wadhwa decided to match Marcus’s wager. Within a couple of hours, there was a flurry on Twitter, Marcus’s Substack post had close to 10,000 views, and other experts in the field soon offered their support for the wager, increasing the pool to $500,000. But not a word from Musk.
Then writer and futurist Kevin Kelly, who co-founded the Long Now Foundation, offered to host it on his website side by side with an earlier and related bet that Ray Kurzweil made with Mitch Kapor. Worldsummit.ai, the world’s leading AI Summit, has offered to host a debate. The AI community is excited. But there has still been no word from Musk.
Half a million bucks is chump change, of course, for Musk, perhaps the richest person in the world, but it is real money to us, and it symbolises something important: the value of getting public voices who hype AI’s near-term prospects to stand by their claims.
Feeding the public misinformation about the potential of AI and its likely progress may serve Tesla by distracting from the many problems it has with its self-driving software, but it doesn’t serve the public. If Musk believes what he says, he should stand up and take the bet; if not, he should own up to the reality that his pronouncements are little more than off-the-cuff hunches that even Musk realises aren’t worth the virtual paper he’s printed them on.