Warning over driverless car terror attacks
Experts say AI could be exploited by criminals as technology develops at an ‘unprecedented’ rate
ARTIFICIAL intelligence could be exploited by terrorists to carry out driverless car crashes and cyber attacks, researchers at the universities of Oxford and Cambridge have warned.
A group of 26 experts, including those from Oxford’s Future of Humanity Institute, Cambridge’s Centre for the Study of Existential Risk and OpenAI, the organisation backed by the technology billionaire Elon Musk, said that malicious use of AI presented a “clear and present danger” to society.
The report warned that terrorists could use vulnerabilities in AI to crash fleets of driverless vehicles or hijack swarms of autonomous drones to launch attacks in public spaces.
Sophisticated algorithms would also be able to crawl through targets’ social media accounts before launching “phishing” email attacks to steal personal data or access sensitive company information. The authors say AI is improving at an “unprecedented rate” but that “there is growing concern about its capability to do harm”.
Although luminaries including Stephen Hawking and Mr Musk have warned of the potentially devastating effect of AI, many experts have dismissed doomsday scenarios in which machine intelligence outpaces humans, claiming this is decades away.
However, the report warns that technology companies and researchers are scrambling to press forward to develop software that can drive cars, alter images and understand language, and that these smaller advances could be exploited in the coming years to give criminals an edge.
“AI will alter the landscape of risk for citizens, organisations and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling and repression,” said Miles Brundage of the Future of Humanity Institute. “The full range of impacts on security is vast.”
The report warns that humans may place too much trust in AI systems such as driverless cars, despite evidence that they could be easily manipulated. For example, defacing stop signs could cause them to be ignored by computers.
The researchers say cyber-criminals and rogue states will be able to employ AI to interfere with elections by using armies of artificially intelligent “bots” and fake news to distort debate on social media. They also claim AI is making it easier to doctor videos and audio to produce fake footage of politicians and celebrities, and that facial-recognition technology could be used for mass surveillance of public spaces.
The report adds: “Some of these are already occurring in limited form today, but could be scaled up or made more powerful with further technical advances.”
The report’s authors, including experts from Stanford University in California and the Electronic Frontier Foundation, a digital rights group, plan to present their findings to governments and technology companies.
They hope the findings will lead to new rules and guidelines on AI, with researchers urged to build safeguards into the technology and to consider how their work could be misused by criminals.
Last month, Theresa May called AI “one of the greatest tests of leadership for our time”.