Los Angeles Times

Social media algorithms get Senate scrutiny

By Anna Edgerton and Ilya Banares. Edgerton and Banares write for Bloomberg.

Executives from Facebook Inc., Twitter Inc. and Alphabet Inc.’s YouTube were pressed by lawmakers Tuesday on how user content is shared and highlighted on their platforms through algorithms that one senator said can be misused, “driving us into poisonous echo chambers.”

Sen. Ben Sasse, the top Republican on the Senate Judiciary Committee’s panel on Privacy, Technology and the Law, made the comment as members examined algorithms — the lines of software code that determine how user-generated information is displayed and who gets to see it.

“Algorithms, like almost any technologies that are new, have costs and benefits” and can be abused, said Sasse, who is from Nebraska.

The hearing took place as Congress is considering how to overhaul Section 230, a provision of the 1996 communications law that protects internet companies from liability for user content. One House proposal would make social media platforms responsible for the way content is shared and amplified through algorithms.

“I plan to use this hearing as an opportunity to learn about how these companies’ algorithms work, what steps may have been taken to reduce algorithmic amplification that is harmful and what can be done better,” said Delaware Sen. Chris Coons, a Democrat and the subcommittee’s chairman, as he opened the hearing.

Illinois Sen. Richard J. Durbin, the Democratic chairman of the full Judiciary Committee, urged social media companies to do more to remove harmful content, citing the Jan. 6 attack on the U.S. Capitol. He said domestic extremists organized and shared disinformation on some of the platforms represented at Tuesday’s hearing.

Monika Bickert, Facebook’s vice president for content policy, testified that its tools make the platform’s algorithm more transparent, so users can see why certain posts appear on their news feeds.

“It is not in our interest financially or reputationally” to push people toward extremist content, Bickert said.

Lauren Culbertson, Twitter’s head of U.S. public policy, highlighted the positive uses for algorithms and machine learning, especially the ability to recognize harmful content to review and remove. She said in her opening statement that Twitter is committed to studying the unintended consequences of algorithms and to giving users more choice over how algorithms shape their experience.

“As members of Congress and other policymakers debate the future of internet regulation, they should closely consider the ways technology, algorithms, and machine learning make Twitter a safer place for the public conversation and enhance the global experience with the internet at large,” Culbertson said.

Alexandra Veitch, YouTube’s director of government affairs and public policy for the Americas and emerging markets, said the service uses an automated process to detect videos that violate the company’s policies, and algorithms can be used to promote trusted sources and minimize content that’s questionable. She described YouTube as “not just a hobby, but a business” for people who create and share videos on the platform.

But Tristan Harris, cofounder and president of the Center for Humane Technology, dismissed the testimony by the company executives, saying “it’s almost like having the heads of Exxon, BP and Shell asking about what are you doing to responsibly stop climate change.”

Harris, a former design ethicist at Google, said “their business model is to create a society that is addicted, outraged, polarized, performative and disinformed. That’s just the fundamentals of how it works.”

The role that algorithms play in sharing information — and disinformation — has taken on renewed importance as people turn to social media to learn and comment on issues such as COVID-19 vaccines, protests over police killings and election security. As Durbin indicated, the platforms have been under increased scrutiny since supporters of former President Trump amplified disinformation ahead of the Jan. 6 attack.

Trump was suspended by Facebook, Twitter and YouTube for comments that the companies said could lead to violence. Facebook’s Oversight Board is reviewing the decision, while YouTube has left open the possibility of reversing the suspension. Twitter said its ban of Trump’s account is permanent.

But Sen. Charles E. Grassley of Iowa, the top Republican on the full Senate Judiciary Committee, voiced the frequent GOP complaint that social media platforms censor conservatives. He described the companies as monopolies that don’t face the competition that would make them more responsible with user information.

“We must look at the power and control that a handful of companies have over speech,” Grassley said.

Facebook has been advocating for updated internet regulation, including new privacy rules. It also has called for election protection measures and an overhaul of Section 230 to require more transparency, reporting requirements and best-practice guidelines for larger companies.
