The National - News

Deepfakes will make it hard to believe our eyes

- JUSTIN THOMAS

Justin Thomas is a professor of psychology at Zayed University

Deepfakes, or synthetic media, are video clips that have been manipulated using artificial intelligence. Using these sophisticated machine-learning algorithms, we can essentially turn human beings into ventriloquist's dummies. With a few photographs of our intended target and an audio file, deepfake algorithms can produce ultra-realistic fake footage of people saying and doing things they never said or did. I recently watched Albert Einstein giving a lecture he never gave, and Grigori Rasputin singing a song he never sang – the Beyoncé hit Halo. But what are the psychological and social implications of this emerging technology?

Beyond entertainment and the "wow, it's so real" factor, I also experience a sense of fear and foreboding when watching these clips. The technology is capable of taking propaganda, defamation and misinformation to whole new levels. Many of the current deepfakes out there are easily identifiable as such, or clearly labelled. However, I can imagine a time when they won't be – perhaps that time has already arrived. Deepfaking is only in its infancy. It's easy to envision second- and third-generation software that will produce material that is even more believable.

Facebook co-founder Mark Zuckerberg was recently the target of a deepfake, which depicted him as sinister and megalomaniacal. The puppet masters, the team behind this state-of-the-art fake footage, had him declare to the camera: "Whoever controls the data, controls the future". In a slightly lower-tech incident earlier this year, a doctored video of Nancy Pelosi, speaker of the United States House of Representatives, went viral on social media. The manipulated footage made her look and sound drunk, highlighting the threat that this technology can pose to people's reputations.

Humiliation is a powerful and painful emotion, and deepfakes can be used to embarrass, harass and even blackmail their targets. It is becoming ever easier to make these videos convincingly realistic. The technology won't be limited to targeting celebrities – personal deepfakes are already here.

In 1968, Andy Warhol predicted that "in the future, everyone will be world-famous for 15 minutes". It now seems that many of us will be infamous too.

Malicious defamation, being falsely made to look bad, can have negative consequences for our mental health. Even after footage has been established as fake, the victims of reputational attacks may be stigmatised and psychologically scarred, indefinitely.

The rise of the deepfake is also likely to erode trust in the media. If we can't trust our own eyes and ears, then what can we trust? Philosophers talk about an "epistemological crisis", the idea that we no longer know which sources of knowledge are sound. Deepfakes will only deepen this feeling, undermining the certainty of our own senses, leaving a cloud of doubt over much of the information we consume. In my imagined worst-case scenario, society becomes a dystopia of distrust, in which paranoia is the norm, and nobody is really sure about anything any more.

Another likely consequence of deepfakes is that they will become a handy defence for people who legitimately get caught out saying or doing things they later regret. Claiming that embarrassing or even incriminating footage of you is a deepfake will become a well-worn escape route. Similarly, many of us will dismiss as deepfakes anything that displeases us, while taking a far less critical view of footage that aligns with our current worldviews and preferred narratives.

With the 2020 US election on the horizon, concerns about deepfakes being used to manipulate public opinion and influence electoral outcomes are mounting. Last month the US Congress introduced a deepfake bill. A collaboration between computer scientists, disinformation specialists and human rights advocates, it proposes urgent and decisive action to curb the proliferation of malicious deepfakes. One of the proposed measures is to require that the software used to create deepfakes automatically add a watermark, alerting viewers to their inauthenticity. Another step is for social media platforms to be more proactive in detecting and removing deepfakes. A third measure proposes punishments, fines and jail time for those who create and disseminate malicious deepfakes.

Google CEO Sundar Pichai believes that AI will have a more significant impact on humanity than the discovery of fire. We get light and warmth and cooked food from fire, but fire also kills. Deepfakes are to AI what the flamethrower is to fire – a dangerous and powerfully destructive weapon when placed in the wrong hands.

A doctored video of Nancy Pelosi, speaking with a slurred voice, went viral across the internet
