The News Herald (Willoughby, OH)

Disinformation could sway election

- Paul M. Barrett, New York University

In 2016, Russian operatives used Facebook, Twitter and YouTube to sow division among American voters and boost Donald Trump's presidential campaign. Their tool was "disinformation": false or misleading content intended to deceive or promote discord. Now, with the first presidential primary vote only five months away, the public should be aware of the sources and types of online disinformation likely to surface in the 2020 election.

First, the Russians will be back. Don't be reassured by the notorious Russian Internet Research Agency's relatively negligible presence during last year's midterm elections. The agency may have been keeping its powder dry in anticipation of the 2020 presidential race. It also helped that U.S. Cyber Command, an arm of the military, reportedly blocked the agency's internet access for a few days right before the election in November 2018.

And there's more to fear than just the Russians. I'm the author of a new report on disinformation and the 2020 election published by the New York University Stern Center for Business and Human Rights. In the report, I predict that the Russians won't be alone in spreading disinformation in 2020. Their most likely imitator will be Iran, especially if hostility between Tehran and Washington continues to mount. In May, acting on a tip from the cybersecurity firm FireEye, Facebook took down nearly 100 Iran-related accounts, pages and groups. The Iranian network had used fake American identities to espouse both conservative and liberal political views, while also promoting extremely divisive anti-Saudi, anti-Israel and pro-Palestinian themes.

While foreign election interference has dominated discussion of disinformation, most intentionally false content targeting U.S. social media is generated by domestic sources. I believe that will continue to be the case in 2020. President Trump often uses Twitter to circulate conspiracy theories and cast his foes as corrupt.
One story line he pushes is that Facebook, Twitter and Google are colluding with Democrats to undermine him. Introducing a right-wing "social media summit" at the White House in July, he tweeted about the "tremendous dishonesty, bias, discrimination, and suppression practiced by certain companies."

Supporters of Democrats also have trafficked in disinformation. In December 2017, a group of liberal activists created fake Facebook pages designed to mislead conservative voters in a U.S. Senate race in Alabama. Matt Osborne, who has acknowledged being involved in the Alabama scheme, told me that in 2020, "you're going to see a movement toward [political spending from undisclosed sources] on digital campaigns in the closing days of the race." He suggests there could be an effort to discourage Republicans from voting with "an image of a red wave with a triumphal statement that imbues them with a sense of inevitable victory: 'No need to bother voting. Trump has got it in the bag.'"

Also likely to surface next year: "deepfake" videos. This technique produces highly convincing – but false – images and audio. In a recent letter to the CEOs of Facebook, Google and Twitter, House Intelligence Committee Chairman Adam Schiff, a California Democrat, wrote: "A timely, convincing deepfake video of a candidate" that goes viral on a platform "could hijack a race – and even alter the course of history. … The consequences for our democracy could be devastating."

Instagram could be a vehicle for deepfakes. Owned by Facebook, the photo and video platform played a much bigger role in Russia's manipulation of the 2016 U.S. election than most people realize, and it could be exploited again in 2020. The Russian Internet Research Agency enjoyed more user engagement on Instagram than it did on any other platform, according to a December 2018 report commissioned by the Senate Intelligence Committee.
"Instagram is likely to be a key battleground on an ongoing basis," the report added.

The social media companies are responding to the problem of disinformation by improving their artificial intelligence filters and hiring thousands of additional employees devoted to safety and security. "The companies are getting much better at detection and removal of fake accounts," Dipayan Ghosh, co-director of the Harvard Kennedy School's Platform Accountability Project, told me.

But the companies do not completely remove much of the content they pinpoint as false; they merely reduce how often it appears for users, and sometimes post a message noting that it’s false.

In my view, provably false material should be eliminated from feeds and recommendations, with a copy retained in a cordoned-off archive available for research purposes to scholars, journalists and others.

Another problem is that responsibility for content decisions now tends to be scattered among different teams within each of the social media companies. Our report recommends that, to streamline and centralize, each company hire a senior official who reports to the CEO and is responsible for overseeing the fight against disinformation. Such executives could marshal resources more easily within each company and coordinate efforts across companies more effectively.

Finally, the platforms could also cooperate more than they currently do to stamp out disinformation. They have collaborated effectively to root out child pornography and terrorist incitement. I believe they now have a collective responsibility to rid the coming election of as much disinformation as possible. An electorate that has been fed lies about candidates and issues can't make informed decisions. Votes will be based on falsehoods. And that means the future of American democracy – in 2020 and beyond – depends on dealing effectively with disinformation.
