The New Zealand Herald

Highlights growing fears of hackers using AI against us


Rapid advances in artificial intelligence are raising risks that malicious users will soon exploit the technology to mount automated hacking attacks, cause driverless car crashes or turn commercial drones into targeted weapons, a new report warns.

The study, published yesterday by 25 technical and public policy researchers from Cambridge, Oxford and Yale universities along with privacy and military experts, sounded the alarm over the potential misuse of AI by rogue states, criminals and lone-wolf attackers.

The researchers said the malicious use of AI poses imminent threats to digital, physical and political security by allowing for large-scale, finely targeted, highly efficient attacks. The study focuses on plausible developments within five years.

“We all agree there are a lot of positive applications of AI,” said Miles Brundage, a research fellow at Oxford’s Future of Humanity Institute. “There was a gap in the literature around the issue of malicious use.”

Artificial intelligence, or AI, involves using computers to perform tasks normally requiring human intelligence, such as taking decisions or recognising text, speech or visual images. It is considered a powerful force for unlocking all manner of technical possibilities, but has become a focus of debate over whether the massive automation it enables could result in widespread unemployment and other social dislocations.

The 98-page paper cautions that the cost of attacks may be lowered by the use of AI to complete tasks that would otherwise require human labour and expertise. New attacks may arise that would be impractical for humans alone to develop or which exploit the vulnerabilities of AI systems themselves. It reviews a growing body of academic research about the security risks posed by AI and calls on governments and policy and technical experts to collaborate and defuse these dangers.

The researchers detail the power of AI to generate synthetic images, text and audio to impersonate others online in order to sway public opinion, noting the threat that regimes could deploy such technology.

The report makes a series of recommendations, including regulating AI as a dual-use military/commercial technology.

It also asks whether academics and others should rein in what they publish or disclose about new developments in AI until other experts in the field have had a chance to study and react to the potential dangers those developments might pose.

“We ultimately ended up with a lot more questions than answers,” Brundage said.

The paper was born of a workshop last year, and some of its predictions essentially came true while it was being written. The authors speculated AI could be used to create highly realistic fake audio and video of public officials for propaganda purposes.

Late last year, so-called “deepfake” pornographic videos began to surface online, with celebrity faces realistically melded to different bodies. “It happened in the regime of pornography rather than propaganda,” said Jack Clark, head of policy at OpenAI, the group founded by Tesla CEO Elon Musk and Silicon Valley investor Sam Altman to focus on friendly AI that benefits humanity. “But nothing about deepfakes suggests it can’t be applied to propaganda.”
