Spies warn of increasing risk of cyber attacks and deepfakes
BRITISH spies will need artificial intelligence (AI) to counteract cyber attacks and deepfakes, a report for GCHQ has warned.
Hostile-state operatives from countries like Russia and Iran will seek to use AI to attack the UK but will not feel bound by the governance and legal frameworks spies in this country have to abide by.
Malicious software and deepfake technology are likely to be used to undermine political processes and manipulate public opinion, the report says.
Written for GCHQ, the report by the Royal United Services Institute is based on interviews with spies in the cyber agency as well as others in the UK intelligence community, including MI5 and MI6. The report warns of the dangers posed by deepfake images, whereby AI superimposes an existing piece of media, such as an image of an individual’s face, onto genuine content.
This disruptive technology was showcased in the run-up to the 2019 general election, when a deepfake video created by Future Advocacy, a research organisation, and artist Bill Posters reportedly showed Boris Johnson and Jeremy Corbyn endorsing each other for prime minister.
However, the report also says AI on its own will not be enough to counter the threat from terrorist attacks as “predictive intelligence” is more than just crunching numbers.
Rather than attempting to predict behaviour, the authors suggest efforts should instead focus on developing so-called “augmented intelligence” (AUI) systems to support human analysis.
These systems collect information from multiple sources and flag significant issues for human analysts to review.