MALICIOUS INTENT
Criminals and terrorists are as excited as technologists about AI.
Tech companies are high on Artificial Intelligence at the moment. The applications being researched and developed are so potentially beneficial and transformative that it is easy to sweep the downsides under the carpet. Possible job losses from automation are discussed every day, but more catastrophic threats have yet to get the spotlight. AI, for example, can be used by criminals, terrorists, rogue states or anyone with malicious intent to wreak havoc at an unimaginable scale. How the balance between attackers and defenders will evolve is not easy to predict, but a group of 26 security experts from institutions and universities including the Future of Humanity Institute, the University of Oxford and the Centre for the Study of Existential Risk studied the landscape of threats from the potential malicious use of AI and produced a 100-page report titled ‘The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation’. It can easily be found with a search and is recommended reading for any company developing AI applications and solutions.
The experts see threats in three domains. The first is the digital domain, where AI is expected to automate cyberattacks, giving them unprecedented scale and efficiency. It will also enable new types of attacks that exploit human vulnerabilities through interfaces such as voice, leverage software vulnerabilities, and poison entire banks of data.
The second is the physical domain, where cyber-physical attacks will threaten physical security. Using AI, it will be possible to launch attacks with swarms of micro-drones, for example, and to bring autonomous systems to their knees, such as by causing self-driven vehicles to crash.
The third domain is political and is an expansion of a threat that already exists. It includes automating mass persuasion, as is thought to have happened with Russian influence activities in the US elections, and deception with fake news and fake videos. Social manipulation, which is already quite evident, will become rampant, along with privacy invasion, surveillance and the use of big data not just to understand behaviour but to influence it. The accelerated use of AI will expand existing threats, bring a twist to the typical character of ongoing threats and introduce entirely novel ones.
The group of experts behind the report strongly recommends that malicious use cases of AI be considered when applications are developed, and that policymakers work closely with technical researchers to investigate, mitigate or prevent potential catastrophes.