Waterloo Region Record

The good news is AI is getting cheaper. That’s the bad news, too

- CADE METZ

SAN FRANCISCO — A Silicon Valley startup recently unveiled a drone that can set a course entirely on its own. A handy smartphone app allows the user to tell the airborne drone to follow someone. Once the drone starts tracking, its subject will find it remarkably hard to shake.

The drone is meant to be a fun gadget — sort of a flying selfie stick. But it is not unreasonable to find this automated bloodhound a little unnerving.

On Tuesday, a group of artificial intelligence researchers and policy-makers from prominent labs and think tanks in both the United States and Britain released a report that described how rapidly evolving and increasingly affordable AI technologies could be used for malicious purposes. They proposed preventive measures, including being careful with how research is shared: Don’t spread it widely until you have a good understanding of its risks.

AI experts and pundits have discussed the threats created by the technology for years, but this is among the first efforts to tackle the issue head-on. And the little tracking drone helps explain what they are worried about.

The drone, made by a company called Skydio and announced this month, costs $2,499. It was made with technological building blocks that are available to anyone: ordinary cameras, open-source software and low-cost computer chips.

In time, putting these pieces together — researchers call them dual-use technologies — will become increasingly easy and inexpensive. How hard would it be to make a similar but dangerous device?

“This stuff is getting more available in every sense,” said one of Skydio’s founders, Adam Bry. These same technologi­es are bringing a new level of autonomy to cars, warehouse robots, security cameras and a wide range of internet services.

But at times, new AI systems also exhibit strange and unexpected behaviour because the way they learn from large amounts of data is not entirely understood. That makes them vulnerable to manipulation; today’s computer vision algorithms, for example, can be fooled into seeing things that are not there.

“This becomes a problem as these systems are widely deployed,” said Miles Brundage, a research fellow at the University of Oxford’s Future of Humanity Institute and one of the report’s primary authors. “It is something the community needs to get ahead of.”

The report warns against the misuse of drones and other autonomous robots. But there may be bigger concerns in less obvious places, said Paul Scharre, another author of the report, who had helped set policy involving autonomous systems and emerging weapons technologies at the Defense Department and is now a senior fellow at the Center for a New American Security.

“Drones have really captured the imagination,” he said. “But what is harder to anticipate — and wrap our heads around — is all the less tangible ways that AI is being integrated into our lives.”

The rapid evolution of AI is creating new security holes. If a computer-vision system can be fooled into seeing things that are not there, for example, miscreants can circumvent security cameras or compromise a driverless car.

Researchers are also developing AI systems that can find and exploit security holes in all sorts of other systems, Scharre said. These systems can be used for both defence and offence.

Automated techniques will make it easier to carry out attacks that now require extensive human labour, including “spear phishing,” which involves gathering and exploiting personal data of victims. In the years to come, the report said, machines will be more adept at collecting and deploying this data on their own.

AI systems will also make it easier for bad actors to spread misinformation online, the report said.

Some believe concerns over the progress of AI are overblown. Alex Dalyac, chief executive and co-founder of a computer vision startup called Tractable, acknowledged that machine learning will soon produce fake audio and video that we humans cannot distinguish from the real thing. But he believes that other systems will also get better at identifying misinformation. Ultimately, he said, these systems will win the day.

To others, that sounds like an endless cat-and-mouse game between AI systems trying to create the fake content and those trying to identify it.

“We need to assume that there will be advances on both sides,” Scharre said.

LAURA MORTON/NEW YORK TIMES — The Skydio R1 autonomous drone flies in the company's office in Redwood City, Calif.
