Business Day

We need e-guardian angels to fight deepfakes

- Izabella Kaminska, The Financial Times, 2020

A new generation of highly persuasive deepfakes designed to manipulate and confuse the public is worming its way through the internet. We may think we are invulnerable, but the sophistication of this new breed is likely to catch out even the savviest of us — in fact, it has probably already done so.

Today’s deepfakes are more than just Twitter accounts controlled by robots or manipulated videos of real people in the public eye. They are being designed to pass as unremarkable ordinary people, or journalists.

Take the case of Oliver Taylor, a coffee-loving politics junkie who writes freelance editorials for The Jerusalem Post and The Times of Israel. Or so the world thought until a Reuters article in July noted that, despite his ample online footprint and convincing profile picture, “Taylor” does not exist.

It is not clear who was behind the fake persona, masquerading as a real person. The technology to generate deepfakes is now so accessible and cheap that he could as easily have been generated by a hostile nation state as by a teenage prankster in a basement.

His mission was seemingly to dupe editors into printing stories that promoted his agenda and built credibility for his profile. He was only exposed after an academic he had accused of being a terrorist sympathiser followed up on a hunch that something didn’t feel right and began to make inquiries. But for that, “Taylor” could still be going about his business unmasked.

His exposure is probably only the tip of the iceberg. In today’s online environment, especially now that working-from-home arrangements mean new professional relationships are being forged exclusively online, we can no longer be sure who we are dealing with.

Some deepfakes could be masking human agents. But others, more worryingly, may be powered by artificial intelligence programs crafted to take advantage of the personal data we shed online to pinpoint our vulnerabilities, befriend us, and then manipulate us into doing their bidding. They are sinister precisely because they know us, and our weaknesses, so well.

We lack defences against these programs because much of the information that powers them is already out there, being used by private sector algorithms for marketing and advertising. While privacy legislation helps, it cannot protect us entirely because so much of the information that makes us vulnerable is voluntarily given up.

The US is particularly vulnerable due to laws that make domestic counter-efforts legally contentious — specifically covert ones that might influence political processes, public opinion, policies or media. Rand Corporation’s Rand Waltzman says you have to go back to the 1980s and president Ronald Reagan’s Active Measures Working Group to find the last official US counter-propaganda programme.

That group was disbanded after the collapse of the Soviet Union. Its final report nonetheless warned that “as long as states and groups interested in manipulating world opinion, limiting US government actions, or generating opposition to US policies and interests continue to use these techniques, there will be a need … to systematically monitor, analyse and counter them”.

Counterintelligence efforts by social media platforms or independent verifiers, meanwhile, can only go so far. Many online personas, especially those on platforms offering background legitimacy, such as LinkedIn, will have been cultivated for years to make them look legitimate.

LinkedIn understands the problem. Between January and June last year, the company’s artificial intelligence algorithms intercepted 19.5-million fake accounts at the registration stage alone. Another 2-million were intercepted after registration, with an additional 67,000 intercepted following reports from other members. How many are getting through these filters, however, is impossible to say.

This is why protecting the vulnerable online requires active measures by trusted democratic states that are committed to human rights.

This means the deployment of data-mining techniques to flag our own online vulnerabilities to us. Think of it as the deployment of trusted digital guardian angels, operating overtly and in plain sight.

Failing that, the only fallback is to hire independent white-hat hacker groups, often made up of former intelligence or military operatives who are already masters of digital disguise: a version of television’s The A-Team. Their slogan went: “If you have a problem … if no-one else can help … and if you can find them … maybe you can hire them”. But don’t hold your breath.

Reality check: Most concerning about deepfakes is that they may be powered by artificial intelligence programs. Picture: 123RF/Semisatch
