Business Standard

Alexa & Siri can hear this command. You can’t

Researchers can now send secret audio instructions undetectable to the human ear

- CRAIG S. SMITH © 2018 The New York Times

Many people have grown accustomed to talking to their smart devices, asking them to read a text, play a song or set an alarm. But someone else might be secretly talking to them, too.

Over the past two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online, simply with music playing over the radio.

A group of students from the University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon’s Echo speaker might hear an instruction to add something to your shopping list.

“We wanted to see if we could make it even more stealthy,” said Nicholas Carlini, a fifth-year PhD student in computer security at UC Berkeley and one of the paper’s authors.

Carlini added that while there was no evidence that these techniques had left the lab, it might be only a matter of time before someone started exploiting them.

These deceptions illustrate how artificial intelligence, even as it is making great strides, can still be tricked and manipulated. Computers can be fooled into identifying an airplane as a cat just by changing a few pixels of a digital image, while researchers can make a self-driving car swerve or speed up simply by pasting small stickers on road signs and confusing the vehicle’s computer vision system.
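To make the pixel trick concrete, here is a minimal, hypothetical sketch using the fast gradient sign method, a standard adversarial-example technique from the research literature and not necessarily the exact method used in the studies described here. The model, image and label are placeholders:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Fast gradient sign method: move every pixel a tiny step in the
    direction that increases the classifier's loss, so an airplane can
    be misread as, say, a cat. `image` is a 1 x C x H x W tensor with
    values in [0, 1]; `model` returns class logits."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Each pixel changes by at most epsilon -- imperceptible to a
    # person, but often enough to flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```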

With audio attacks, the researchers are exploiting the gap between human and machine speech recognition. Speech recognition systems typically translate each sound to a letter, eventually compiling those into words and phrases. By making slight changes to audio files, researchers were able to cancel out the sound that the speech recognition system was supposed to hear and replace it with a sound that would be transcribed differently by machines while being nearly undetectable to the human ear.
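That paragraph describes the core of the attack: nudge the waveform just enough that the recognizer transcribes something else. A simplified, hypothetical sketch of such a gradient-based optimization follows; the published attacks are considerably more sophisticated, and `asr_model` and `asr_loss` here stand in for a real speech recognizer and its training loss:

```python
import torch

def hide_command(audio, target, asr_model, asr_loss, epsilon=0.002,
                 steps=1000, lr=1e-3):
    """Shape a tiny perturbation so the recognizer transcribes the
    attacker's `target` phrase while a human still hears the original
    `audio` (a waveform tensor with values in [-1, 1])."""
    delta = torch.zeros_like(audio, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        # Loss is low when the machine "hears" the target phrase,
        # not what the human listener hears.
        loss = asr_loss(asr_model(audio + delta), target)
        loss.backward()
        optimizer.step()
        # Keep the perturbation quiet enough to go unnoticed.
        delta.data.clamp_(-epsilon, epsilon)
    return (audio + delta).detach()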

The proliferation of voice-activated gadgets amplifies the implications of such tricks. Smartphones and smart speakers that use digital assistants such as Amazon’s Alexa or Apple’s Siri are set to outnumber people by 2021, according to the research firm Ovum. And more than half of all American households will have at least one smart speaker by then, according to Juniper Research.

Amazon said that it doesn’t disclose specific security measures, but it has taken steps to ensure its Echo smart speaker is secure. Google said security is an ongoing focus and that its Assistant has features to mitigate undetectable audio commands. Apple said its smart speaker, HomePod, is designed to prevent commands from doing things like unlocking doors, and it noted that iPhones and iPads must be unlocked before Siri will act on commands that access sensitive data or open apps and websites, among other measures.

Another technique, which the Chinese researchers called DolphinAttack, transmits commands at ultrasonic frequencies the human ear cannot detect and can instruct smart devices to visit malicious websites, initiate phone calls, take a picture or send text messages. While DolphinAttack has its limitations, experts warned that more powerful ultrasonic systems were possible.
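The core signal-processing idea behind an ultrasonic attack like DolphinAttack can be sketched in a few lines: amplitude-modulate an ordinary voice command onto a carrier above the range of human hearing, and rely on the nonlinearity of the target device’s microphone to demodulate it back into the audible band. A minimal sketch, assuming the command waveform is already normalized and resampled to a rate high enough to represent the carrier:

```python
import numpy as np

def ultrasonic_modulate(command, sample_rate=192_000, carrier_hz=25_000):
    """Amplitude-modulate a voice command onto an ultrasonic carrier.
    Humans hear nothing above roughly 20 kHz, but the nonlinearity of
    a typical microphone demodulates the signal back into the audible
    band, where the assistant picks up the command. `command` is a
    waveform in [-1, 1] sampled at `sample_rate`."""
    t = np.arange(len(command)) / sample_rate
    carrier = np.cos(2 * np.pi * carrier_hz * t)
    # Classic AM: the baseband voice rides on the inaudible carrier.
    return (1.0 + 0.8 * command) * carrier
```

Played through a speaker capable of ultrasonic output, such a signal is silent to bystanders, which is what makes the attack hard to notice.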

That warning was borne out in April, when researchers at the University of Illinois at Urbana-Champaign demonstrated ultrasound attacks from 25 feet away. While the commands couldn’t penetrate walls, they could control smart devices through open windows from outside a building.

Carlini said he was confident that in time he and his colleagues could mount successful adversarial attacks against any smart device system on the market.
