Houston Chronicle

As artificial intelligence grows, so does its criminal potential

By John Markoff, New York Times

Imagine receiving a phone call from your aging mother seeking your help because she has forgotten her banking password.

Except it’s not your mother. The voice on the other end of the line just sounds deceptively like her.

It is actually a computer-synthesized voice, a tour de force of artificial intelligence technology that has been crafted to make it possible for someone to masquerade via the telephone.

Such a situation is still science fiction — but just barely. It is also the future of crime.

The software components necessary to make such masking technology widely accessible are advancing rapidly. Recently, for example, DeepMind, the Alphabet subsidiary known for a program that has bested some of the top human players in the board game Go, announced that it had designed a program that “mimics any human voice and which sounds more natural than the best existing text-to-speech systems, reducing the gap with human performance by over 50 percent.”

The irony, of course, is that the computer security industry, with $75 billion in annual revenue, has started to talk about how machine learning and pattern recognition techniques will improve the woeful state of computer security. But there is a downside. “The thing people don’t get is that cybercrime is becoming automated, and it is scaling exponentially,” said Marc Goodman, a law enforcement agency adviser and the author of “Future Crimes.” He added, “This is not about Matthew Broderick hacking from his basement,” a reference to the 1983 movie “WarGames.”

‘Criminal franchise’

The alarm about malevolent use of artificial intelligence technologies was sounded earlier this year by James R. Clapper, director of national intelligence. In his annual review of security, Clapper underscored the point that while AI systems would make some things easier, they would also expand the vulnerabilities of the online world.

The growing sophistication of computer criminals can be seen in the evolution of attack tools like the widely used malicious program known as Blackshades, according to Goodman. The author of the program, a Swedish national, was convicted last year in the United States.

The system, which was sold widely in the computer underground, functioned as a “criminal franchise in a box,” Goodman said. It allowed users without technical skills to deploy computer ransomware or perform video or audio eavesdropping with a mouse click.

The next generation of these tools will add machine learning capabilities that have been pioneered by artificial intelligence researchers to improve the quality of machine vision, speech understanding, speech synthesis and natural language understanding. Some computer security researchers believe that digital criminals have been experimenting with the use of AI technologies for more than half a decade.

That can be seen in efforts to subvert the internet’s omnipresent Captcha — Completely Automated Public Turing test to tell Computers and Humans Apart — the challenge-and-response puzzle invented in 2003 by Carnegie Mellon University researchers to block automated programs from stealing online accounts.

Both “white hat” artificial intelligence researchers and “black hat” criminals have been deploying machine vision software to subvert Captchas for more than half a decade, said Stefan Savage, a computer security researcher at the University of California, San Diego. “If you don’t change your Captcha for two years, you will be owned by some machine vision algorithm,” he said.

‘Social engineering’

Surprisingly, one thing that has slowed the development of malicious AI has been the ready availability of low-cost or free human labor. Some cybercriminals, for example, have farmed out Captcha-breaking schemes to electronic sweatshops where humans decode the puzzles for a tiny fee.

So what’s next? Criminals, for starters, can piggyback on new tech developments. Voice-recognition technology like Apple’s Siri and Microsoft’s Cortana is now used extensively to interact with computers, and Amazon’s Echo voice-controlled speaker and Facebook’s Messenger chatbot platform are rapidly becoming conduits for online commerce and customer support. As is often the case, whenever a communication advancement like voice recognition goes mainstream, criminals looking to take advantage of it aren’t far behind.

“I would argue that companies that offer customer support via chatbots are unwittingly making themselves liable to social engineering,” said Brian Krebs, an investigative reporter who publishes at krebsonsecurity.com.

Associated Press file photo: Criminals can piggyback on new tech developments such as Amazon’s Echo, a home device that listens to you, answers questions and carries out tasks.
