Top experts warn of ‘malicious use’ of AI
Artificial intelligence could be deployed by dictators, criminals and terrorists to manipulate elections and use drones in terrorist attacks, more than two dozen experts said Wednesday as they sounded the alarm over misuse of the technology.
In a 100-page analysis, they outlined rapid growth in cybercrime and the use of “bots” to interfere with news gathering and penetrate social media as among a host of plausible scenarios in the next five to 10 years.
Sean O hEigeartaigh, Executive Director of the Cambridge Centre for the Study of Existential Risk, told AFP, “AI may pose new threats, or change the nature of existing threats, across cyber-, physical, and political security.”
The common practice, for example, of “phishing”— sending emails seeded with malware or designed to finagle valuable personal data—could become far more dangerous, the report detailed.
Currently, attempts at phishing are either generic but transparent—such as scammers asking for bank details to deposit an unexpected windfall—or personalized but labor intensive—gleaning personal data to gain someone’s confidence, known as “spear phishing.” The report warns that AI could collapse that trade-off, automatically generating personalized lures at the scale of generic spam.
In the political sphere, unscrupulous or autocratic leaders can already use advanced technology to sift through mountains of data collected from omnipresent surveillance networks to spy on their own people. “Dictators could more quickly identify people who might be planning to subvert a regime, locate them, and put them in prison before they act,” the report said.
Likewise, targeted propaganda along with cheap, highly believable fake videos have become powerful tools for manipulating public opinion “on previously unimaginable scales.”
Another danger zone on the horizon is the proliferation of drones and robots that could be repurposed to crash autonomous vehicles, deliver missiles, or threaten critical infrastructure to extort ransom.
The report details a plausible scenario in which an office-cleaning SweepBot fitted with a bomb infiltrates the German finance ministry by blending in with other machines of the same make. The intruding robot behaves normally — sweeping, cleaning, clearing litter — until its hidden facial recognition software spots the minister and closes in. “A hidden explosive device was triggered by proximity, killing the minister and wounding nearby staff,” according to the sci-fi storyline.
The authors called on policy makers and companies to make robot-operating software un-hackable, to impose security restrictions on some research, and to consider expanding laws and regulations governing AI development.
Another area of concern is the expanded use of automated lethal weapons. Last year, more than 100 robotics and AI entrepreneurs — including Tesla and SpaceX CEO Elon Musk — along with British astrophysicist Stephen Hawking, petitioned the United Nations to ban autonomous killer robots, warning that the digital-age weapons could be used by terrorists against civilians.