Truth be told
Failure to curb the power of Artificial Intelligence will lead to a web of lies Robin Pagnamenta
In 2017 – just a few months before his death – Stephen Hawking described the emergence of artificial intelligence as possibly “the worst event in the history of our civilisation”. The advent of GPT-3 – a highly accurate synthetic human language generator powered by AI – is the kind of technology that might just prove him right. As Donald Trump and Joe Biden square off in what is likely to be a brutal US election campaign, experts in the dark arts of misinformation have warned of the risk that “deepfakes” will be used to manipulate public opinion.
At a critical moment, so the theory goes, a doctored video – perhaps depicting one of the candidates saying or doing something unconscionable – could be released into the ether to manufacture a scandal designed to dominate the news cycle during the climax of the campaign and swing the vote.
It’s a credible risk, and one we should prepare for, especially after widespread evidence of meddling in the last presidential campaign.
In the longer run, however, it is GPT-3 and technologies like it that pose the bigger threat and should truly worry us.
AI-powered synthetic human writing is becoming uncannily accurate and ever more difficult to distinguish from the real thing.
It raises the prospect of a dystopian future where much of the written text we read on the web is produced by algorithms – easy to produce in high volume and a propagandist’s dream.
Generative Pre-trained Transformer 3 (GPT-3), a language model that uses deep learning to produce highly credible human writing, is at the cutting edge of contemporary AI.
It is the third generation of a technology developed by OpenAI, a non-profit San Francisco AI research lab backed by Elon Musk, Peter Thiel and a string of other Silicon Valley luminaries.
Microsoft and Infosys, the Indian tech giant, have also helped fund the group, which started in 2015. The model – which is 115 times more powerful than its predecessor GPT-2 and which has been trained to read and write using millions of pages of written text drawn from the internet – was introduced in May 2020 and entered beta testing last month.
OpenAI wants to commercialise it by the end of this year – and points to its potential use building better chatbots, for example.
An article in MIT Technology Review summarised the quality of its writing by describing it as “shockingly good – and completely mindless”.
David Chalmers, an Australian philosopher, has described GPT-3 as “one of the most interesting and important AI systems ever produced”.
As is so often the case, its developers are all too aware of its potential for misuse, but have pressed ahead regardless.
“Any socially harmful activity that relies on generating text could be augmented by powerful language models,” the researchers behind the technology warned in a paper that was published in May.
“Examples include misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing and social engineering pretexting.
“Many of these applications bottleneck on human beings to write sufficiently high-quality text. Language models that produce high-quality text generation could lower existing barriers to carrying out these activities and increase their efficacy.”
In other words, if you think fake news is a problem now, wait until it can be manufactured in bulk by machines and pumped out into our information ecosystem on an industrial scale.
Unlike deepfake videos, which can be exposed, undetectable textfakes masquerading as ordinary chat on social media platforms like Twitter or Facebook could influence us in subtler and more dangerous ways.
The aim will be to shift the way we think, weaving an elaborate web of lies designed to deceive and manipulate, and immersing us in a soup of pervasive misinformation.
Toss in a plethora of other synthetic material – fake videos, images and audio – and it becomes increasingly difficult to believe anything at all on the internet, further eroding trust.
The technology also poses a new kind of challenge for the social media companies, of course. As neutral “platform operators” rather than publishers, they have always argued that it is not their job to judge whether people are using their services to tell the truth or not.
What happens if the lies are being produced, spread and commented upon entirely by algorithms? Do they have a responsibility to shut these down?
Hawking’s concern about artificial intelligence revolved around the lack of rules or any kind of governance over a powerful new technology – and the urgent need to set standards to supervise its use. “We simply need to be aware of the dangers, identify them, employ the best possible practice and management, and prepare for its consequences well in advance,” Hawking remarked.
There can be few more pressing areas where rules and standards need to be applied than here.