Richard MacManus.
Fake Facebook profiles and automated Twitter bots have been around since the beginnings of social media. One of my siblings once created a fake family member called Fred, a character lifted from the Zynga game FishVille. Cousin Fred amused us all for several months but, of course, nobody ever took him seriously.
Fast forward to 2018, and we’re living in an era where it’s sometimes impossible to tell a fake profile from a real one. A fake persona is no longer a cartoon character; it could very well be your online doppelganger.
That’s because, with today’s AI technology, it’s possible to create a believable imitation of someone using their publicly available online data. This particular AI manipulation technique is known as ‘‘automated laser phishing’’.
Most of you have probably seen what happens when a friend on Facebook gets hacked. If the hacker tries to contact you on Messenger, the language is usually awkward and grammatically incorrect. And what the hacker says bears little resemblance to anything your friend would write.
A sophisticated AI wouldn’t be so clumsy. If an AI were crafting messages using your persona, it would likely be capable of closely matching your voice and opinions.
It’s not just your online persona that can be manipulated. It’s images of you, too.
Victoria University lecturer Tom White created an image manipulation tool called SmileVector. It was the result of research he carried out for several years on the potential of generative neural net models.
In 2016, he released SmileVector as a Twitter bot that used neural nets to automatically add or remove smiles from photos.
After proving the success of SmileVector, White went to work on developing ‘‘a more controllable animation tool’’. In collaboration with Ian Loh, a master’s degree student at Victoria University, White created a tool called TopoSketch. This one used a neural network to create animations from a dataset of faces.
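For readers curious how a tool can ‘‘add a smile’’ to a photo, the usual idea is attribute-vector arithmetic in a generative model’s latent space: average the latent codes of smiling faces, subtract the average of neutral ones, and you get a direction that nudges any face toward a smile; an animation is then just a path along that direction. The sketch below illustrates the arithmetic only — the encoder, decoder, and face data are stand-ins, not White’s actual models.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 64

# Stand-in latent codes, as if produced by an encoder over face photos.
neutral_codes = rng.normal(0.0, 1.0, size=(100, latent_dim))

# Pretend dimension 0 of the latent space encodes "smile" intensity.
true_direction = np.zeros(latent_dim)
true_direction[0] = 2.0
smiling_codes = neutral_codes + true_direction

# The "smile vector": the difference of the two class means.
smile_vector = smiling_codes.mean(axis=0) - neutral_codes.mean(axis=0)

def add_smile(code, strength=1.0):
    """Move a latent code along the smile direction; a decoder would
    then render the shifted code back into an image."""
    return code + strength * smile_vector

face = neutral_codes[0]
smiled = add_smile(face, strength=1.0)

# An animation, in the spirit of TopoSketch, is a sequence of codes
# interpolated along the same direction, one per frame.
frames = [face + t * smile_vector for t in np.linspace(0.0, 1.0, 10)]
```

In a real system the interesting work is in the generative model itself; the vector arithmetic shown here is the small, almost mundane step that makes the edit controllable.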
Despite the success of his AI apps, White has mixed feelings about how the technology could be used. On the one hand, he thinks it has enormous potential.
‘‘It enables new types of creative mediums not possible before,’’ he said.
But White is also concerned about the potential negative impacts on society, such as the ability to create ‘‘convincing disinformation’’ with neural networks.
The danger is especially apparent in AI’s ability to manipulate video. There’s already a disturbing trend on the web for face-swapped celebrity porn made using the latest AI techniques. Reddit recently banned this content from its platform, ruling that it falls under the company’s restrictions on ‘‘involuntary pornography’’.
It’s also possible now to realistically manipulate video and audio together. Several experiments have been done to prove how easily someone with the right tools could, for example, create a fake video of US President Donald Trump declaring war on North Korea.
While there hasn’t yet been a case of ‘‘synthetic media’’ fooling the public on a big news story, it’s surely a matter of time, given the tools that are available on the internet. One way to combat this is for companies like Facebook and Google to use the same tools to identify fake videos and automatically exclude them from their platforms.
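At its simplest, that detection approach means training a classifier on labelled real and synthetic footage and flagging frames whose features look more like the synthetic class. The toy sketch below uses a nearest-centroid rule on made-up feature vectors purely to show the shape of the idea; real detectors learn their features from large labelled datasets rather than the stand-in numbers here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in feature vectors for video frames: "fake" frames are shifted
# to mimic the statistical artifacts synthesis tools tend to leave.
real_frames = rng.normal(0.0, 1.0, size=(200, 16))
fake_frames = rng.normal(0.8, 1.0, size=(200, 16))

# Fit the simplest possible classifier: one centroid per class.
real_centroid = real_frames.mean(axis=0)
fake_centroid = fake_frames.mean(axis=0)

def looks_fake(frame_features):
    """Flag a frame as synthetic if its features sit closer to the
    fake-class centroid than to the real-class centroid."""
    d_real = np.linalg.norm(frame_features - real_centroid)
    d_fake = np.linalg.norm(frame_features - fake_centroid)
    return d_fake < d_real

# Check the rule on fresh stand-in synthetic frames.
held_out_fake = rng.normal(0.8, 1.0, size=(50, 16))
flags = [looks_fake(f) for f in held_out_fake]
```

The hard part in practice is the arms race: as generators improve, the artifacts the detector relies on shrink, which is why platforms would need to keep retraining such systems on the newest synthesis tools.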