Imperial Valley Press

Vegas shooting shows social media can’t cull fake news from facts in a crisis


Many Americans have spent months stewing about “fake news” and how social media can pump falsehoods and mean-spirited myths into everyday life. Yet the giants of modern information dissemination — Facebook, Google, YouTube and Twitter, for starters — were slow to address — and, in Facebook’s case, downright dismissive of — the idea that they were to blame for mushrooming fictitious or inflammatory posts. They did so even as it became clear some posts were part of covert Russian attempts to divide Americans in the 2016 presidential campaign and in other political skirmishes.

It was only Saturday, while marking the end of Yom Kippur, the Jewish holy day of atonement, that Facebook founder Mark Zuckerberg apologized for how his company was used: “For the ways my work was used to divide people rather than bring us together, I ask for forgiveness and I will work to do better,” he wrote on Facebook.

The next night, the deadliest mass shooting in modern U.S. history happened in Las Vegas, and once again, users of the tech platforms were sharing untruths and malign speculation. As BuzzFeed’s deputy global news director noted on Twitter, Google’s “top stories” results at one point featured posts from the notorious 4chan forum speculating inaccurately about the identity of the Mandalay Bay shooter. A reporter for The New York Times documented how Facebook’s Trending Stories highlighted news from Sputnik, a Russian propaganda site, and featured a false post asserting the FBI blamed the slaughter on Muslim terrorists. At the same time, a “Las Vegas Shooting/Massacre” Facebook group sprang up and quickly grew to more than 5,000 members after the killings; it was run by Jonathan Lee Riches, a serial harasser with a criminal background and a history of farcical lawsuits, as The Atlantic pointed out.

All of this raises some doubt about whether Google and Facebook — among the richest and most successful companies in global history — can create foolproof algorithms that instantly evaluate what content is worth promoting and what content is best ignored in a time of crisis. It also raises questions about whether the two companies, which have spent vast sums on artificial intelligence research, can develop reliable, smart AI to protect the public from being manipulated and incited.

Facebook, for now, appears to recognize how much better it needs to do. Recode reported Sunday that Facebook plans to hire 1,000 more people to review and consider removing ads, to bar ads that show even “subtle expressions of violence,” to “require more thorough documentation” from those who want to buy political ads on the platform and to eventually make it easy to see all ads on a Facebook page, not just ones targeting certain users.

More traditional media outlets are also moving to quash fake news. On Tuesday, the News Media Alliance began the second phase of a national campaign to emphasize the value of “real news produced by trusted news organizations” that rely on “high-quality, investigative journalists.”

Ultimately, news literacy matters. Algorithms and better corporate monitoring of social media content will never be enough, and everyone needs to develop the tools for evaluating what content is credible, what is junk and what absolutely needs confirmation before being shared. Consider the source. Check the URL. See who else is reporting it. Read it before sharing. Still unsure? Ask a friend — or verify it online. The internet is still good for that.
