Artificial Intelligence cars: Really? Did anyone see The Terminator?
I’m not often given to rampant paranoia. The grassy knoll was, well, just a grassy knoll and I have never once thought of The Terminator as a documentary. Or at least, I didn’t until last week when I read an article by researcher Will Knight.
If you’ve been following the hoopla surrounding self-driving cars, you know there’s enormous interest in the computational abilities of artificial intelligence. Ford recently invested $1 billion in a month-old startup called Argo AI, mainly because its staffers are some of the best robotics engineers on the planet.
Most ominously of all, the American National Highway Traffic Safety Administration (NHTSA) recently certified Google’s AI computer controller as a “licensed” driver so that the Silicon Valley giant can send its little runabouts scurrying about autonomously without the pesky human “backup” that has so far been required every time a self-driving car tries to steer itself through traffic.
Now, understand that it’s virtually impossible to program a self-driving car for the countless situations it will encounter every day. Some problems will be mundane, such as the unexpected telephone-line repair truck illegally parked on a narrow road that stymies a self-driving car’s prohibition against crossing a solid yellow line. It could be simple human idiosyncrasy, such as the autonomous Uber car that reached a stalemate with a cyclist because it could not determine whether the rider wanted to proceed forward or backward. It could even be the downright weird, like the Google car that encountered a woman in a wheelchair chasing a duck into the street with a broom.
For the engineers creating self-driving cars, if you can’t imagine something happening, you can’t program a car to avoid it.
That’s where artificial intelligence — the ability for machines to “learn” without human intervention — is supposed to come in. Essentially, it involves imbuing a computer with algorithms such that it can learn beyond its simple programming. Artificial intelligence will allow driverless cars to recognize situations we didn’t program them for (or, in the case of old ladies in wheelchairs chasing ducks, couldn’t in a million years have imagined) and take appropriate action.
Sounds good, right? There can’t be anything even remotely conspiratorial about teaching a machine to be safer and smarter. Right?
Until you read Knight’s The Dark Secret at the Heart of AI. Essentially, Knight’s contention is that while the engineers who program these supercomputers know what their machines can do, they don’t have a clue how they do it. Yes, you read that right: According to Knight, the guys who program these computers don’t really know how their algorithms actually work. Indeed, if anything goes wrong, even the engineers who designed the system may struggle to isolate the reason for the malfunction, there being no obvious way, says the author, “to design such a system so that it could always explain why it did what it did.”
In other words, if a car directed by artificial intelligence crashes into a tree, we might never know why.
Why this should be so disconcerting is that, again according to Knight, last year chipmaker Nvidia road tested a very special autonomous car, one that didn’t rely on instructions provided by an engineer or programmer, but instead “had taught itself to drive by watching a human do it.”
As impressive as that is, says Knight, “it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions.”
As an example of AI’s ability to confound, Knight goes on to detail how an experiment at New York’s Mount Sinai Hospital called Deep Patient taught itself to predict diseases just from looking at patients’ records. The problem is, the computer went on to also predict incidents of schizophrenia, and its programmers have no idea how that was possible.
There’s the “mind-boggling” possibility, as Knight suggests, that these will be the first machines their creators don’t understand. Just as important is the matter of trust. How, for instance, do doctors justify changing the drugs someone is being prescribed when they don’t know how Deep Patient made its diagnosis?
Now, this would all be just a distraction if Knight were a half-baked conspiracy theorist. Unfortunately for those looking for some calming news at the end of this fulmination, Knight is the senior editor for artificial intelligence at the MIT Technology Review, so it’s a little hard to dismiss him as a crackpot.
But wait: like all good paranoid rants, there’s even more. To surprisingly little fanfare, Elon Musk — yes, he of the electric cars that supposedly drive themselves — recently launched Neuralink, a startup that promises to implant chips into your head so you can communicate directly with artificial intelligence.
So let me see if I’ve got this straight. To become absolutely autonomous, self-driving cars will have to learn to think for themselves. The problem then becomes that, once they become sentient, we might not be able to control them. And an automotive CEO who has already shown that he doesn’t mind using his customers as beta testers — think Autopilot and Joshua Brown — wants to put a chip in my head so that very same artificial intelligence can communicate directly with my synapses.
And, oh, we’re going ahead with all of this because we’re too lazy to push our own gas pedals and steer our own wheels.
Maybe I’m not so paranoid after all.