Khaleej Times

Only human editors, not bots, can fight fake news

Social media firms are driven by a data-first mentality because it is profitable, but what about the ethical issues?

- Laurent Belsie

The day after the 2016 presidential election, Facebook CEO Mark Zuckerberg was asked whether social media had contributed to Donald Trump’s win.

“A pretty crazy idea,” he responded at the time. But after months of internal sleuthing by media organisations, congressional investigations, and Facebook itself, the idea doesn’t look so far-fetched.

“Calling that crazy was dismissive and I regret it,” Zuckerberg wrote in a Facebook post recently. “We will do our part to defend against nation states attempting to spread misinformation and subvert elections. We’ll keep working to ensure the integrity of free and fair elections around the world, and to ensure our community is a platform for all ideas and a force for good in democracy.”

In fact, Facebook is planning stricter scrutiny of advertisements before they are published.

It is a startling turnabout. After years of defending themselves as communications networks whose sole aim is to foster dialogue, social media companies like Facebook and Twitter are under increasing pressure to take responsibility for the content they carry. Search-engine giant Google is under similar pressure to reform after it, too, promoted fake news stories, including extreme right-wing posts misidentifying the Las Vegas shooter and calling him a left-winger.

The proliferation of fake news is forcing these companies to rethink their role in society, their reliance on cheap algorithms rather than expensive employees, and their engineer-driven, data-dependent culture in an era when they are increasingly curating and delivering news.

“This is definitely a crisis moment for them,” says Cliff Lampe, a professor and social media expert in the School of Information at the University of Michigan in Ann Arbor. “They’re just trying to do their business. What they don’t understand is that in the huge panoply of humankind, people are going to try to manipulate that business for their own ends.”

It’s clear that Facebook was aware that something was afoot with fake campaign stories as early as June 2016, when it detected a Russian espionage operation on its network and alerted the FBI, according to a Washington Post report. More hints of Russian activity popped up in the following weeks. Facebook’s lengthy internal investigations have hit some paydirt after the firm decided to narrow its search rather than try to be comprehensive.

Facebook recently handed over to congressional investigators more than 3,000 ads that ran between 2015 and 2017 linked to the Internet Research Agency, a Russian social media trolling group.

Some of the ads are drawing particular interest because they targeted pivotal voting groups in Michigan and Wisconsin, where Trump won narrowly. Investigators will probe whether the Trump campaign played any role in helping the Russians target those ads.

But experts suspect the company has only scratched the surface. And the problem stretches beyond Facebook.

During the Republican primaries, Ron Nehring noticed something odd about his Twitter feed. The campaign spokesman for presidential hopeful Sen. Ted Cruz could go on cable television and bash any of Cruz’s rivals without any social media blowback. But when he criticised Trump, his Twitter account would be deluged by a torrent of negative and “extremely hysterical” tweets.

“The tone was always extremely hysterical, not something that I would see from typical conservative activists,” he said at a Heritage Foundation event this week.

It is tempting to say that Russia simply manipulated right-wing social media to support Trump’s candidacy. The reality is stranger than that. While a preponderance of the fake posts promoted Trump or criticised his Democratic opponent, Hillary Clinton, on websites crafted to attract right-wing voters, some of them also appeared on sites catering to left-wing causes, such as Black Lives Matter, and religious ones, such as United Muslims of America.

The reach and speed of social media networks make it easy for these ideas to spread before they can be debunked. Facebook claims to have two billion users, or nearly a third of humanity. During the last three months of the presidential election, the top 20 fake election news stories on Facebook generated more shares, reactions, and comments than the top 20 pieces from major news outlets, such as The New York Times and The Washington Post, according to a BuzzFeed News analysis. Among the most popular fake news stories, one said Clinton sold weapons to Daesh and another claimed the pope endorsed Trump.

And the meddling continues. Part of the challenge lies in these digital giants’ reliance on algorithms to make complex news decisions. Computer programmes are cheaper than real-life editors. They also offer political cover.

Facebook has used human editors in the past. But after Gizmodo, citing former employees, reported that Facebook’s news curators routinely suppressed conservative stories from users’ trending topics, Zuckerberg met with conservative leaders and moved back to algorithms.

But the algorithms are far from neutral. Until exposed by reporters, they allowed advertisers to exclude minorities from seeing ads and, until last month, to target “Jew-haters.” A more subtle and endemic problem is that the algorithms are geared to support social media’s business model, which is to generate traffic and engagement.

Another challenge is that even as social networks become mainstream purveyors of news, they’re still largely run by engineers who rely on data rather than editorial judgment to choose newsworthy content. That data-first mentality powers profits because it gives customers exactly what they want. But if they want fake news that supports their worldview, is it ethical to give it to them?

Last month, Zuckerberg pledged to “make political advertising more transparent” on Facebook, including identifying who pays for each political ad (as TV and newspapers already do) and ending the practice of excluding certain groups from seeing ads.
