New Zealand Listener

Contagious & deadly

From Myanmar to Brazil, falsehoods spread insidiously and take a tragic toll.

- Gavin Ellis is a weekly media commentator on RNZ National’s Nine to Noon. He attended a recent workshop on disinformation in Taipei as a guest of the Taiwan Foundation for Democracy and the American Institute in Taiwan.

INDIA

WhatsApp, the Facebook-owned encrypted messaging service, was held accountable by the Indian Government for 33 deaths in mob violence associated with false stories of child abduction.

MEXICO

It was also blamed for the spread of a similar falsehood in Mexico that led to two men being burnt to death by a mob. The following day, a mob pulled a man and a woman from their truck in a rural area and beat and burnt them, despite the pair’s pleas of innocence. The man died at the scene, and the woman in hospital.

BRAZIL

A mass yellow fever immunisation campaign in Brazil has been compromised by disinformation, including one post – shared 300,000 times – claiming that side effects of the vaccine (used for decades with no serious issues) had killed a teenage girl. A total of 1257 confirmed cases and 394 deaths from yellow fever were reported in Brazil between July 2017 and June last year.

MYANMAR

Facebook accounts run by Myanmar military personnel targeted the Rohingya Muslim minority. Human-rights groups blame the anti-Rohingya disinformation campaign for inciting murders, rapes and the largest forced human migration in recent history. More than 700,000 people have fled Rakhine state and a UN mission estimated 10,000 Rohingya have died.

of getting to the root of the disinformation problem.

Although Facebook, Google and Twitter have moved to remove some of the false accounts used to plant disinformation – Twitter confirmed last June that it was conducting a mega purge and eliminating a million fake and suspicious accounts a day – the British government’s response to the Commons report in October was that the social-media companies were not doing enough.

Facebook and its kind earn little sympathy. They deserve to be accused of a form of third-party complicity by failing to build into their systems the checks and balances to prevent their misuse. However, it may be possible to grant them a thimbleful of understanding because the inherent characteristics of disinformation – and the fact that its form is changing so fast – mean its detection and suppression are becoming more difficult.

The false stories produced by bogus news sites and promulgated through Facebook and Twitter before the US presidential election now appear rather crude. Hindsight does provide us with insight but, even as they surfaced, there were ways in which the falsehoods could be outed. Some, after all, were a little too obvious: the Pope’s endorsement of presidential candidate Trump was a falsehood too far.

Newsrooms were provided with services that might be crudely but accurately described as bullshit detectors. Online services such as TinEye were developed to determine whether images were real or doctored – by doing a form of reverse engineering and checking back through image search engines such as Google to find full or partial matches. Services such as Storyful were set up to do verification checks on trending stories using tried-and-true journalistic techniques. Verification may be as simple as checking with the people mentioned in the story to ascertain whether they had actually said or done what was being attributed to them.
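The underlying trick of image matching is simple enough to sketch in a few lines of code. What follows is an illustration of the general idea only – not TinEye’s actual method – using the open-source Pillow and imagehash libraries, with invented file names.

```python
# Compare a suspect image against a set of known originals using perceptual
# hashing. A small hash distance suggests the "new" photo is a re-used or
# lightly edited copy of an archive image. Illustrative only; not TinEye's method.
from PIL import Image
import imagehash

def closest_known_original(suspect_path, known_paths, max_distance=8):
    """Return (path, distance) of the nearest known image, or None if no near-match."""
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    best = None
    for path in known_paths:
        distance = suspect_hash - imagehash.phash(Image.open(path))  # Hamming distance
        if distance <= max_distance and (best is None or distance < best[1]):
            best = (path, distance)
    return best

# Hypothetical usage: a distance of only a few bits flags a probable recycled photo.
# print(closest_known_original("viral_photo.jpg", ["archive/protest_2015.jpg"]))
```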

Most of the falsehoods that were produced in the run-up to the US election, and during European elections and the Brexit referendum, were detected and debunked. The British government’s response to the Commons report noted that it had not seen evidence of the successful use of disinformation by foreign actors, including Russia, to influence UK democratic processes. It did not, however, define “successful”.

Disinformation aimed at the UK may not have led to mass shifts of opinion, but it seldom seeks to achieve what Adolf Hitler thought possible in Mein Kampf: that, by repetition and a clear understanding of psychology, you could prove to the masses that a square was in fact a circle. Rather, modern disinformation seeks to discredit right angles among people already predisposed towards circles. This is not preaching to the converted, although they, too, will be willing recipients. It is a sophisticated targeting of what Indiana University researchers have identified as three different types of bias that are susceptible to manipulation.

Giovanni Luca Ciampaglia and Filippo Menczer, of the university’s observatory on social media, have developed tools to show people how cognitive, social and machine bias can aid the spread of disinformation. Cognitive bias emerges from the way the brain copes with information overload to prioritise some ideas over others.

“We have found that steep competition for users’ limited attention means that some ideas go viral despite their low quality – even when people prefer to share high-quality content,” they wrote. They added that the emotional connotations of a headline were a strong driver.

They found that when people connect directly with their peers via social media, the social biases that guided how they chose their friends also influenced the information they chose to see. Those biases were also a significant factor in favourably evaluating information from within their own “echo chamber”. And these preferences are fed by the machine – the algorithms that determine what people see online. The internet is not simply a system of highways on which we may choose to drive. It is an organism that mines data to build a profile of every driver and passenger on the system and to feed on their wants and preferences. At the very least, its users are in semi-autonomous vehicles and, at worst, they have no control over the car whatsoever.

“These personalisation technologies are designed to select only the most engaging and relevant content for each individual user,” Ciampaglia and Menczer said. “But in doing so, it may end up reinforcing the cognitive and social biases of users, thus making them even more vulnerable to manipulation.”

Data is accessed by disinformation sources, and algorithms used to identify groups (say, people predisposed to circles) into which disinformation can be seeded and sent on its merry viral way. Algorithmic selection ensures the message reaches the “right people”. Disinformation campaigns may then be fed by social botnets that allow massive proliferation of messages. These automated social-media accounts create and move huge amounts of material that unsuspecting users believe is legitimate. This is how 45,000 tweets were created in the final two days of the Brexit campaign. A high percentage of the Twitter accounts eliminated after the US presidential election were operated by bots, and the FBI found new bot accounts were created before the recent midterm elections. The bureau believed many emanated from St Petersburg.

Milking bias is all the easier when the disinformation meets four criteria. According to Ben Nimmo, of the digital forensic lab at think tank Atlantic Council, a successful fake story has emotional appeal, a veneer of authority, an effective insertion point into the online space and an amplification network such as Twitter or Facebook.

And disinformation has something else working for it: we humans seem to prefer fake over fact when it is presented in ways that trigger those biases. The Massachusetts Institute of Technology has studied rumour cascades. These are rumour-spreading patterns that have a single origin and an unbroken chain of retweeting or reposting. The study found that falsehood reached far more people than the truth. Whereas the truth rarely spread to more than 1000 people, the top 1% of false-news cascades routinely spread to between 1000 and 100,000 people. It took the truth about six times as long as falsehood to reach 1500 people. In other words, we now have scientific proof that Jonathan Swift was right: “Falsehood flies, and truth comes limping after it.”
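A cascade of this kind – a single origin and an unbroken chain of resharing – can be pictured as a tree, and its reach and depth measured with a simple traversal. The sketch below uses invented reshare data purely to show how such measurements are made; it is not the MIT team’s code.

```python
# Measure the size (accounts reached) and depth (hops from the origin) of a
# reshare cascade, given a mapping from each post to the posts that reshared it.
# The example data is invented for illustration.
from collections import deque

def cascade_stats(reshares, root):
    """Breadth-first walk of the cascade tree: returns (size, depth)."""
    size, depth = 0, 0
    queue = deque([(root, 0)])
    while queue:
        node, level = queue.popleft()
        size += 1
        depth = max(depth, level)
        for child in reshares.get(node, []):
            queue.append((child, level + 1))
    return size, depth

# The original post "t0" is reshared by "t1" and "t2", and "t1" in turn by "t3".
reshares = {"t0": ["t1", "t2"], "t1": ["t3"]}
print(cascade_stats(reshares, "t0"))  # (4, 2): four accounts reached, two hops deep
```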

QUANTUM LEAP FORWARD

To date, most disinformation has been believed by those who want to believe it. Others have rejected it because they do not want to believe it, and because its content has been relatively easy to discredit. That, however, is about to change.

Artificial intelligence and machine learning did reasonably credible service in creating and spreading disinformation, but there were telltale signs that these messages were created by bots. Tweets could be checked on Botometer, a joint project of the Network Science Institute and the Centre for Complex Networks and Systems Research at Indiana University, which used about 1200 features to characterise the suspected account’s profile, friends, social network structure, activity patterns, language and sentiment. Facebook posts often contained stilted, formulaic language. The use of bots to share disinformation was harder to detect.
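To give a flavour of feature-based bot detection – and only a flavour, since Botometer’s real model draws on roughly 1200 features and machine learning – here is a deliberately crude score built from a handful of account traits, with invented thresholds and weights.

```python
# An illustrative (and deliberately crude) bot-likelihood score built from a
# few account features. Not Botometer's actual model; every threshold and
# weight here is invented for illustration only.
from dataclasses import dataclass

@dataclass
class AccountFeatures:
    tweets_per_day: float
    followers: int
    following: int
    account_age_days: int
    default_profile_image: bool

def naive_bot_score(a: AccountFeatures) -> float:
    """Return a rough 0-1 score; higher means more bot-like."""
    score = 0.0
    if a.tweets_per_day > 100:                   # relentless posting rate
        score += 0.35
    if a.following > 10 * max(a.followers, 1):   # follows far more than it is followed
        score += 0.25
    if a.account_age_days < 30:                  # very young account
        score += 0.2
    if a.default_profile_image:                  # never personalised the profile
        score += 0.2
    return min(score, 1.0)

print(naive_bot_score(AccountFeatures(240, 12, 4100, 9, True)))  # 1.0, highly suspicious
```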

But now, artificial intelligence has allowed disinformation to take a quantum leap forward to the point where it is no longer possible to tell whether what you see and hear is real. The threat these so-called deep fakes pose is so serious that the US Department of Defense has tasked one of its agencies with finding ways of detecting fake video and audio.

The threat stems from software that takes alarmingly small amounts of authentic material – as little as 3.7 seconds of audio and 300-2000 images from a short video clip – to create a visual message in which words have, quite literally, been put in someone else’s mouth. Facial and body movements will be indistinguishable from the real thing.

In August, a team led by the Max Planck Institute for Informatics in Germany revealed a system called Deep Video Portraits. In contrast to existing approaches restricted to manipulations of facial expressions only, it was the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. In the first test of the technology, the team showed real and manipulated videos of Vladimir Putin, Theresa May and Barack Obama to two groups from North America and Europe. More than half thought the manipulated videos were real and 65% thought the altered images of Putin were authentic. Perhaps as a symptom of a post-truth age, only 80% thought the real videos were authentic.

In parallel with the German-led research, the University of Washington has been perfecting lip-sync software that allows a third party to script what a person will say in a deep-fake video. Somewhat naively, the American researchers see “a range of important practical applications”, including allowing hearing-impaired people to lipread on their smartphones and providing new ways for Hollywood to seek box-office success.

One of the more worrying aspects of such research is the speed with which it is perfecting the software to create flawless fakes. The Defense Department’s Advanced Research Projects Agency has spent US$68 million but has so far found only limited ways to detect deep fakes. Matt Turek, head of the agency’s media forensics project, said in an ABC News interview – carried on the agency’s Facebook page that defensively labelled it “a real piece” – that deep-fake detection was a “bit of a cat-and-mouse game”.

“A lot of times there are some indicators that you can see, particularly if you are trained or used to looking at them. But it is going to get more and more challenging over time … We are looking at sophisticated indicators of manipulation, from low-level information about the pixels to metadata associated with the imagery, the physical information that is present in the images or media and then comparing it [to] information that we know about the outside world.”
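The metadata end of that spectrum is the easiest to illustrate. The sketch below, which is not the agency’s tooling, simply pulls a few EXIF fields from an image so they can be checked against the claimed circumstances; the file name is hypothetical.

```python
# Dump EXIF fields worth cross-checking against the claimed story: camera make
# and model, capture time, and any editing software recorded. Real forensic
# tools go much deeper, down to pixel-level statistics.
from PIL import Image, ExifTags

def summarise_exif(path):
    exif = Image.open(path).getexif()
    named = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    for key in ("Make", "Model", "DateTime", "Software"):
        print(f"{key}: {named.get(key, '<missing>')}")

# summarise_exif("suspect_frame.jpg")  # hypothetical file
```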

One indicator identified by the agency was blinking. In many of the fakes it examined, manipulated images of people did not blink in a natural way. However, the German research is rapidly overcoming that anomaly. Its programme transfers not only eye movement but authentic blinking rates from the source to the deep fake. And it hasn’t finished. The research paper concludes: “We see our approach as a step towards highly realistic synthesis of full-frame video content under control of meaningful parameters. We hope that it will inspire future research in this very challenging field.”
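Blink analysis itself is conceptually straightforward: track the landmark points around each eye from frame to frame and watch for the ratio of the eye’s height to its width collapsing. The sketch below shows the standard eye-aspect-ratio calculation – a generic technique, not the agency’s or the Max Planck team’s code – and assumes landmark coordinates supplied by a face-landmark detector such as dlib.

```python
# Eye aspect ratio (EAR): drops sharply when the eye closes. Counting such
# drops over a clip gives a crude blink rate; an unnaturally low count is one
# possible deep-fake tell. Landmark points would come from a detector like dlib.
import math

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmark points around one eye, in the usual dlib order."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_threshold=0.2):
    """Count each dip below the threshold as one blink."""
    blinks, closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_threshold and not closed:
            blinks, closed = blinks + 1, True
        elif ear >= closed_threshold:
            closed = False
    return blinks
```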

That research will almost certainly inspire ever more realistic deep fakes that may rob us of one of our most basic assumptions: that – in combination – we can believe our own eyes and ears. When we see a video of Obama, we expect it to be a captured version of what American philosopher John Searle calls “direct realism”: the camera as a surrogate for our own eyes. Perhaps it is inevitable that we are even less equipped to question the validity of a machine-created moving image than we are an AI-driven chatbot that can mimic human responses in text.

Yes, the word of the year for 2019 will be disinformation. You just may not recognise it when you see it.


An explosion in Kobani, Syria, where false tweets have been used to discredit White Helmets humanitarian workers (inset).

Google co-founder Larry Page and, far right, Facebook CEO Mark Zuckerberg.

A Time magazine cover critical of Taiwan’s President Tsai was later revealed as a fake.

Fact or fantasy? Vladimir Putin, Barack Obama and Theresa May.
