Texarkana Gazette

Tech Tuesday: Social media manipulation reportedly reaches U.S. Senate

By Erika Kinetz, Associated Press. Associated Press writer David Klepper in Providence, Rhode Island, contributed to this report.

BRUSSELS — The conversation taking place around two U.S. senators' verified social media accounts remained vulnerable to manipulation through artificially inflated shares and likes from fake users, even amid heightened scrutiny in the run-up to the U.S. presidential election, an investigation by the NATO Strategic Communications Centre of Excellence found.

Researchers from the center, a NATO-accredited research group based in Riga, Latvia, paid three Russian companies 300 euros ($368) to buy 337,768 fake likes, views and shares of posts on Facebook, Instagram, Twitter, YouTube and TikTok, including content from verified accounts of Sens. Chuck Grassley and Chris Murphy.

Grassley’s office confirmed that the Republican from Iowa participated in the experiment. Murphy, a Connecticut Democrat, said in a statement that he agreed to participate because it’s important to understand how vulnerable even verified accounts are.

“We’ve seen how easy it is for foreign adversaries to use social media as a tool to manipulate election campaigns and stoke political unrest,” Murphy said. “It’s clear that social media companies are not doing enough to combat misinformation and paid manipulation on their own platforms and more needs to be done to prevent abuse.”

In an age when much public debate has moved online, widespread social media manipulation not only distorts commercial markets, it is also a threat to national security, NATO StratCom director Janis Sarts told The Associated Press.

“These kinds of inauthentic accounts are being hired to trick the algorithm into thinking this is very popular information and thus make divisive things seem more popular and get them to more people. That in turn deepens divisions and thus weakens us as a society,” he explained.

More than 98% of the fake engagements remained active after four weeks, researchers found, and 97% of the accounts they reported for inauthentic activity were still active five days later.

NATO StratCom conducted a similar exercise in 2019 with the accounts of European officials. Compared with that test, researchers found that Twitter is now taking down inauthentic content faster and Facebook has made it harder to create fake accounts, pushing manipulators to use real people instead of bots, which is more costly and less scalable.

“We’ve spent years strengthening our detection systems against fake engagement with a focus on stopping the accounts that have the potential to cause the most harm,” a Facebook company spokesperson said in an email.

But YouTube and Facebook-owned Instagram remain vulnerable, researchers said, and TikTok appeared “defenseless.”

“The level of resources they spend matters a lot to how vulnerable they are,” said Sebastian Bay, the lead author of the report. “It means you are unequally protected across social media platforms. It makes the case for regulation stronger. It’s as if you had cars with and without seatbelts.”

Researchers said that for the purposes of this experiment they promoted apolitical content, including pictures of dogs and food, to avoid actual impact during the U.S. election season.

Ben Scott, executive director of Reset.tech, a London-based initiative that works to combat digital threats to democracy, said the investigation showed how easy it is to manipulate political communication and how little platforms have done to fix long-standing problems.

“What’s most galling is the simplicity of manipulation,” he said. “Basic democratic principles of how societies make decisions get corrupted if you have organized manipulation that is this widespread and this easy to do.”

Twitter said it proactively tackles platform manipulation and works to mitigate it at scale.

“This is an evolving challenge and this study reflects the immense effort that Twitter has made to improve the health of the public conversation,” Yoel Roth, Twitter’s head of site integrity, said in an email.

YouTube said it has put in place safeguards to root out inauthentic activity on its site, and noted that more than 2 million videos were removed from the site in the third quarter of 2020 for violating its spam policies.

“We’ll continue to deal with attempts to abuse our systems and share relevant information with industry partners,” the company said in a statement.

TikTok said it has zero tolerance toward inauthentic behavior on its platform and that it removes content or accounts that promote spam or fake engagement, impersonation or misleading information that may cause harm.

“We’re also investing in third-party testing, automated technology, and comprehensive policies to get ahead of the ever-evolving tactics of people and organizations who aim to mislead others,” a company spokesperson said in an email.
