Montreal Gazette

Artificial Intelligence cars: Really? Did anyone see The Terminator?

- DAVID BOOTH

I’m not often given to rampant paranoia. The grassy knoll was, well, just a grassy knoll and I have never once thought of The Terminator as a documentary. Or at least, I didn’t until last week when I read an article by journalist Will Knight.

If you’ve been following the hoopla surrounding self-driving cars, you know there’s enormous interest in the computational abilities of artificial intelligence. Ford recently invested $1 billion in a month-old startup called Argo AI, mainly because its staffers are some of the best robotics engineers on the planet.

Most ominous of all, the American National Highway Traffic Safety Administration (NHTSA) recently certified Google’s AI computer controller as a “licensed” driver so that the Silicon Valley giant would be able to send its little runabouts scurrying about autonomously without the pesky human “backup” that has so far been required every time a self-driving car tries to steer itself through traffic.

Now, understand that it’s virtually impossible to program a self-driving car for the countless situations it will encounter every day. Some problems will be mundane, such as the unexpected telephone-line repair truck illegally parked on a narrow road that stymies a self-driving car’s prohibition against crossing a solid yellow line. It could be simple human idiosyncrasy, such as the autonomous Uber car that reached a stalemate with a cyclist because it could not determine whether the rider wanted to proceed forward or backward. It could even be the downright weird, like the Google car that encountered a woman in a wheelchair chasing a duck into the street with a broom.

For the engineers creating self-driving cars, if you can’t imagine something happening, you can’t program a car to avoid it.

That’s where artificial intelligence — the ability for machines to “learn” without human intervention — is supposed to come in. Essentially, it involves imbuing a computer with algorithms such that it can learn beyond its simple programming. Artificial intelligence will allow driverless cars to recognize situations we didn’t program them for (or, in the case of old ladies in wheelchairs chasing ducks, couldn’t in a million years have imagined) and take appropriate action.

Sounds good, right? There can’t be anything even remotely conspiratorial about teaching a machine to be safer and smarter. Right?

Until you read Knight’s The Dark Secret at the Heart of AI, that is. Essentially, Knight’s contention is that while the engineers who program these supercomputers know what their machines can do, they don’t have a clue how they do it. Yes, you read that right: According to Knight, the guys who program these computers don’t really know how their algorithms actually work. Indeed, if anything goes wrong, says Knight, even the engineers who designed these systems may struggle to isolate the reason for the malfunction, there being no obvious way, says the author, “to design such a system so that it could always explain why it did what it did.”

In other words, if a car directed by artificial intelligen­ce crashes into a tree, we might never know why.

What makes this so disconcerting is that, again according to Knight, last year chipmaker Nvidia road-tested a very special autonomous car, one that didn’t rely on instructions provided by an engineer or programmer, but instead “had taught itself to drive by watching a human do it.”

As impressive as that is, says Knight, “it’s also a bit unsettling, since it isn’t completely clear how the car makes its decisions.”

As an example of AI’s ability to confound, Knight goes on to detail how a Mount Sinai Hospital experiment in New York called Deep Patient taught itself to predict diseases just from looking at patients’ records. The problem is, the computer also went on to predict incidents of schizophrenia, and its programmers have no idea how that was possible.

There’s the “mind-boggling” possibility, as Knight suggests, that these will be the first machines their creators don’t understand. Just as important is the matter of trust. How, for instance, do doctors justify changing the drugs someone is being prescribed when they don’t know how Deep Patient made its diagnosis?

Now, this would all be just a distraction if Knight were a half-baked conspiracy theorist. Unfortunately for those looking for some calming news at the end of this fulmination, Knight is the senior editor for artificial intelligence at the MIT Technology Review, so it’s a little hard to dismiss him as a crackpot.

But wait, like all good paranoid rants, there’s even more. To surprisingly little fanfare, Elon Musk — yes, he of the electric cars that supposedly drive themselves — recently launched Neuralink, a startup that promises to implant chips into your head so you can communicate directly with artificial intelligence.

So let me see if I’ve got this straight. To become absolutely autonomous, self-driving cars will have to learn to think for themselves. The problem then becomes that, once they become sentient, we might not be able to control them. And an automotive CEO who has already shown that he doesn’t mind using his customers as beta testers — think Autopilot and Joshua Brown — wants to put a chip in my head so that very same artificial intelligen­ce can communicat­e directly with my synapses.

And, oh, we’re going ahead with all of this because we’re too lazy to push our own gas pedals and turn our own steering wheels.

Maybe I’m not so paranoid after all.

THE ASSOCIATED PRESS: Google’s AI computer controller has been “licensed” as a driver so the Silicon Valley giant can test its self-driving cars without human “backup.”
