The Morning Call

The best way to combat misinformation on social media

- By Lucas Rentschler and Will Rinehart. Lucas Rentschler is the director of the Experimental Economics Lab at the Center for Growth and Opportunity at Utah State University. Will Rinehart is a senior research fellow at the Center for Growth and Opportunity.

On Tuesday, the Supreme Court held oral arguments for a case that could substantially alter the internet. While the case was specifically focused on who should be held liable for automatic recommendations, the justices will ultimately decide how platforms manage content moderation and misinformation.

While there are a lot of unanswered questions in content moderation, a paper we just published is one of the few rigorous studies of fact-checking that could settle some of them. It offers two striking insights.

First, we find that crowdsourcing fact-checks, much as Twitter’s Birdwatch program does, is incredibly effective. Overall, it does a better job than relying on the platform alone to police content.

Second, we find that platforms should focus moderation efforts on policing content rather than policing individuals. In other words, suspending accounts probably does more harm than good.

Nearly everyone agrees that misinformation is a problem. Upward of 95% of people cite it as a challenge when accessing news or other information. But there is little agreement about the right mix of policies that can provide needed context without needlessly censoring content.

Rightly, everyone is concerned that platforms are picking and choosing which content to flag. Previous work from our lab has shown that social media platforms have a vested interest in policing misinformation. If companies want to promote user engagement and connections, they need to address misinformation.

On top of this, measuring how misinformation affects real-world events is a tough empirical challenge for researchers. The Experimental Economics Lab, which is a part of the Center for Growth and Opportunity at Utah State University, was set up specifically to understand these tangled questions.

This study, which is part of a larger research program on misinformation, was set up to evaluate fact-checking policies in a controlled laboratory experiment.

Importantly, the decisions made by participants affect the amount of money they earn, so misinformation has real consequences. The study was also structured to allow people to interact with others over multiple rounds via a messaging system that replicates a platform. While no study is perfect, ours comes as close as possible to approximating real-world decision-making on platforms.

Three kinds of fact-checking scenarios were tested. In the first, individuals could fact-check information shared by other group members, but they had to pay a small fee to do so. The second scenario placed fact-checking in the hands of the platform, with checks applied at random. Finally, we tested a combination of the two: both individual and platform fact-checking.

There were two consequences of posting misinformation. If misinformation was identified, it was flagged so that participants knew. In addition, users who were found to have posted misinformation were automatically fact-checked in the following round.

The results are remarkable.

It is widely assumed that peer-to-peer monitoring, especially when users must pay to fact-check content, would lead to bad outcomes. To the contrary, we find that this approach yields better outcomes than relying on the platform alone. We also find that adding platform moderation to this peer-to-peer approach has only a small additional benefit.

Platforms would do well to leverage this pro-social behavior because it does not require them to evaluate posts, it provides more objective fact-checking, and it is transparent.

In other words, social media users can be relied upon for fact-checking.

Even more important, the research suggests that added scrutiny for users who post misinformation does not really deter them. At the same time, it can lower user engagement. Given this, platforms are likely to be better off focusing their efforts on individual posts rather than trying to identify bad actors and ban them from the platform.

In total, our results provide support for Twitter’s current approach to content moderation. Birdwatch’s decentralized and user-provided evaluations are effective. And the decision to be extremely judicious in banning accounts that have posted misinformation is the right move.

It seems that Elon Musk and Jack Dorsey have the right idea.

PATRICK SEMANSKY/AP: The Supreme Court held oral arguments Tuesday for a case that could substantially alter the internet. While the case was specifically focused on who should be held liable for automatic recommendations, the justices will ultimately be deciding how platforms manage content moderation and misinformation.
