Twitter will let users report misinformation for first time
Twitter Inc. is adding an option for users to report misinformation to the company, but says the expanded ability to flag tweets won’t necessarily lead to more fact-checking or labels on problematic posts.
The test, available only in a few markets, will let users notify the company about alleged misinformation in the same way they can alert Twitter to spam or abuse. But the social media company, which doesn’t have a robust fact-checking operation, won’t review the legitimacy of each identified tweet or respond to users with updates as it does with other types of reports.
Instead, Twitter will use the reports as a way to study misinformation on the platform and identify trends or problem areas to focus on, a spokeswoman said. Twitter only fact-checks tweets in select categories, like elections and COVID-19, but users can alert the company to any misinformation. Twitter may add more categories to its fact-checking operation based on the results of the test, which will run in the U.S., Australia and South Korea.
“We may not take action on and cannot respond to each report in the experiment,” the company tweeted Tuesday from one of its corporate accounts. “But your input will help us identify trends so that we can improve the speed and scale of our broader misinformation work.”
Social media companies have been under fire for failing to stop misinformation from spreading, especially about issues such as COVID-19 and the vaccines to fight it. Twitter’s misinformation efforts are more limited than those of competitors, like Facebook Inc.
Unlike Facebook, which relies on an army of outside fact-checkers, Twitter uses its internal Trust and Safety team to review tweets, and it usually flags only the most egregious or highest-profile offenders.