San Francisco Chronicle

Bots spread election disinformation

By Cade Metz

Be aware: Fake Twitter accounts will very likely sow disinformation in the few remaining days before Tuesday’s election.

This week, researchers at the University of Southern California released a new study that identified thousands of automated accounts, or bots, on Twitter posting information related to President Trump, Joe Biden and their campaigns. The study examined over 240 million election-related tweets from June through September.

Many of these bots, the study said, spread falsehoods related to the coronavirus and far-right conspiracy theories such as QAnon and “pizzagate.” The study said that bots accounted for 20% of all tweets involving these political conspiracy theories.

“These bots are an integral part of the discussion” on social media, said Emilio Ferrara, the University of Southern California professor who led the study.

A spokesman for Twitter, the San Francisco company, questioned the study’s methods.

“Research that uses only publicly available data is deeply flawed by design and often makes egregiously reductive claims based on these limited signals,” the Twitter spokesman said. “We continue to confront a changing threat landscape.”

Social media companies such as Twitter and Menlo Park’s Facebook have long worked to remove this kind of activity, which has been used by groups trying to foment discord in past elections in the United States and abroad.

And the University of Southern California study showed that about two-thirds of the conspiracy-spreading bots it identified were no longer active by the middle of September.

In some cases, bots exhibit suspicious behavior. They might “follow” an unusually large number of other accounts — a number nearly as large as the number of accounts following them — or their user names will include random digits.

But identifying bots with the naked eye is far from an exact science. And researchers say that automated accounts have grown more sophisticated in recent months. Typically, they say, bots are driven by a mix of automated software and human operators, who work to orchestrate and vary the behavior of the fake accounts to avoid detection.

Some bots show signs of automation — like only retweeting rather than tweeting new material or posting very frequently — but it can be difficult to definitively prove that accounts are inauthentic, researchers say. An automated account may stop tweeting at night, for example, as if there is a person behind it who is sleeping.
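The telltale signs described above — a follower-to-following ratio near 1, random digits in the user name, retweet-heavy or unusually frequent posting — can be combined into a simple score. The sketch below is purely illustrative: the function, its field names and its thresholds are hypothetical, and it is not the USC study’s method or any real detection system.

```python
import re

def bot_score(account):
    """Count suspicious signals for a hypothetical account dict with keys:
    name, followers, following, tweets_per_day, retweet_fraction."""
    score = 0
    # Follows nearly as many accounts as follow it back.
    if account["following"] >= 0.9 * max(account["followers"], 1):
        score += 1
    # User name ends in a long run of random-looking digits.
    if re.search(r"\d{4,}$", account["name"]):
        score += 1
    # Posts far more often than a typical human account (arbitrary cutoff).
    if account["tweets_per_day"] > 50:
        score += 1
    # Almost exclusively retweets rather than writing new material.
    if account["retweet_fraction"] > 0.9:
        score += 1
    return score

suspect = {"name": "patriot19428375", "followers": 1200,
           "following": 1150, "tweets_per_day": 140,
           "retweet_fraction": 0.97}
print(bot_score(suspect))  # all four signals fire -> 4
```

As the researchers caution, heuristics like these suggest automation but cannot prove it; real bot operators deliberately vary behavior to stay under such thresholds.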

“You can clearly see they are automated,” said Pik-Mai Hui, an Indiana University researcher who has helped build a new set of tools designed to track these bots in real time. “But they are operated in a way that makes it very difficult to say with complete certainty.”

These bots are operating on both sides of the political spectrum, according to the study from the University of Southern California. But right-leaning bots outnumbered their left-leaning counterparts by a ratio of 4-to-1 in the study, and the right-leaning bots were more than 12 times more likely to spread false conspiracy theories.

The study indicates that 13% of all accounts tweeting about conspiracy theories are automated, and because they tweet at a higher rate, they are sending a much larger proportion of the overall material.

“This is the most concerning part,” Ferrara said. “They are increasing the effect of the echo chamber.”

Photo: Liz Hafalia / The Chronicle. San Francisco’s Twitter says it is constantly fighting phony accounts. “We continue to confront a changing threat landscape,” a spokesman said.
