The Daily Telegraph

Warning over driverless car terror attacks

Experts say AI could be exploited by criminals as technology develops at an ‘unprecedented’ rate

By James Titcomb

ARTIFICIAL intelligence could be exploited by terrorists to carry out driverless car crashes and cyber attacks, researchers at the universities of Oxford and Cambridge have warned.

A group of 26 experts, including those from Oxford’s Future of Humanity Institute, Cambridge’s Centre for the Study of Existential Risk and OpenAI, the organisation backed by the technology billionaire Elon Musk, said that malicious use of AI presented a “clear and present danger” to society.

The report warned that terrorists could use vulnerabilities in AI to crash fleets of driverless vehicles or hijack swarms of autonomous drones to launch attacks in public spaces.

Sophisticated algorithms would also be able to crawl through targets’ social media accounts before launching “phishing” email attacks to steal personal data or access sensitive company information. The authors say AI is improving at an “unprecedented rate” but that “there is growing concern about its capability to do harm”.

Although luminaries including Stephen Hawking and Mr Musk have warned of the potentially devastating effect of AI, many experts have dismissed doomsday scenarios in which machine intelligence outpaces humans, claiming this is decades away.

However, the report warns that technology companies and researchers are racing to develop software that can drive cars, alter images and understand language, and that these smaller advances could be exploited in the coming years to give criminals an edge.

“AI will alter the landscape of risk for citizens, organisations and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling and repression,” said Miles Brundage of the Future of Humanity Institute. “The full range of impacts on security is vast.”

The report warns that humans may place too much trust in AI systems such as driverless cars, despite evidence that they could be easily manipulated. For example, defacing stop signs could cause them to be ignored by computers.

The researchers say cyber-criminals and rogue states will be able to employ AI to interfere with elections by using armies of artificially intelligent “bots” and fake news to distort debate on social media. They also claim AI is making it easier to doctor videos and audio to produce fake footage of politicians and celebrities, and that facial-recognition technology could be used for mass surveillance of public spaces.

“Some of these are already occurring in limited form today, but could be scaled up or made more powerful with further technical advances,” the report says.

The report’s authors, including experts from Stanford University in California and the Electronic Frontier Foundation, a digital rights group, plan to present their findings to governments and technology companies.

They hope the findings will lead to new rules and guidelines on AI, with researchers urged to build safeguards into the technology and to consider how their work could be used by criminals.

Last month, Theresa May called AI “one of the greatest tests of leadership for our time”.
