The Pak Banker

What if self-driving cars can't see stop signs?

- Mark Buchanan

For all its impressive progress in mastering human tasks, artificial intelligence has an embarrassing secret: It's surprisingly easy to fool. This could be a big problem as it takes on greater responsibility for people's lives and livelihoods.

Thanks to advances in neural networks and "deep learning," computer algorithms can now beat the best human players at games like Go, or recognize animals and objects from photos. In the foreseeable future, they're likely to take over all sorts of mundane tasks, from driving people to work to managing investments. Being less prone than humans to error, they might also handle sensitive tasks such as air traffic control or scanning luggage for explosives.

But in recent years, computer scientists have stumbled onto some troubling vulnerabilities. Subtle changes to an image, so insignificant that no human would even notice, can make an algorithm see something that isn't there. It might perceive machine guns laid on a table as a helicopter, or a tabby cat as guacamole. Initially, researchers needed to be intimately familiar with an algorithm to construct such "adversarial examples." Lately, though, they've figured out how to do it without any inside knowledge.

Speech recognition algorithms are similarly vulnerable. On his website, computer scientist Nicholas Carlini offers some alarming examples: A tiny distortion of a four-second audio sample of Verdi's Requiem induces Google's speech recognition system to transcribe it as "Okay Google, browse to Evil.com." Human ears don't even notice the difference. By tailoring the noise slightly, Carlini says, it's easy to make Google transcribe a bit of spoken language as anything you like, no matter how different it sounds.
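To make the idea concrete, here is a minimal sketch of one standard way researchers craft an adversarial image when they do know the model: nudge each pixel by a tiny amount in the direction that most increases the classifier's error (the "fast gradient sign method"). This is an illustration only, not the specific attacks reported above; the pretrained network, the helper name, and the epsilon setting are assumptions for the example.

    # Illustrative sketch of the fast gradient sign method (FGSM), assuming
    # PyTorch and torchvision are installed. Not the attacks from the article.
    import torch
    import torch.nn.functional as F
    from torchvision import models

    # A pretrained ImageNet classifier stands in for the system being attacked.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    def fgsm_perturb(image, true_label, epsilon=0.005):
        """Return `image` plus a nearly invisible perturbation that pushes the
        classifier away from `true_label`.

        image: a (3, 224, 224) tensor with values in [0, 1]
        true_label: integer class index
        epsilon: maximum change applied to each pixel (hypothetical setting)
        """
        image = image.clone().detach().requires_grad_(True)
        logits = model(image.unsqueeze(0))            # forward pass
        loss = F.cross_entropy(logits, torch.tensor([true_label]))
        loss.backward()                               # gradient w.r.t. the pixels
        # Step each pixel a tiny amount in the direction that increases the loss.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()

To a human eye the perturbed image typically looks identical to the original, yet the model's prediction can flip to an unrelated class, which is exactly the fragility the researchers describe.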

It's not hard to imagine how such tricks could be used to nefarious ends. Surveillance cameras could be fooled into identifying the wrong person - indeed, any desired person - as a criminal. Indistinguishable changes to a "Stop" sign could make computers in a self-driving car read it as "Speed Limit 80." Innocuous-sounding music could hack into nearby phones and deliver commands to send texts or emails containing sensitive information.

There's no easy fix. Researchers have yet to devise a successful defense strategy. Even the lesser goal of helping algorithms identify adversarial examples (rather than outsmart them) has proven elusive. In recent work, Carlini and David Wagner, both at the University of California, Berkeley, tested ten detection schemes proposed over the past year and found that they could all be evaded. In its current form, artificial intelligence just seems remarkably fragile.

Until a solution can be found, people will have to be very cautious in transferring power and responsibilities to smart machines. In an interview, Carlini suggested that further research could help us know where, when and how we can deploy algorithms safely, and also tell us about the non-AI patches we may need to keep them safe. Self-driving cars might need restrictions enforced by other sensors that could, for example, stop them from running into an object, regardless of what the onboard camera thinks it sees.

The good news is that scientists have identified the risk in time, before humans have started relying too much on artificial intelligence. If engineers pay attention, we might at least be able to keep the technology from doing completely crazy things.
