The Union Democrat

Facebook’s algorithms increasingly in the sights of US lawmakers

By ANNA EDGERTON, Bloomberg News

U.S. lawmakers investigating how Facebook Inc. and other online platforms shape users' worldviews are considering new rules for the artificial intelligence programs blamed for spreading malicious content.

This legislative push has taken on more urgency since a whistleblower disclosed thousands of pages of internal documents showing that Facebook employees knew the company's algorithms, which prioritize growth and engagement, were driving people to more divisive and harmful content.

Every automated action on the internet — from ranking content and displaying search results to offering recommendations or showing ads — is controlled by computer code written by engineers. Some of these algorithms take simple inputs like words or video quality to show certain outputs, while others use artificial intelligence to learn more about people and user-generated content, resulting in more sophisticated sorting.
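The distinction the paragraph draws can be sketched in a few lines of code. This is a toy illustration only, not any platform's actual system: the post fields and the hand-set engagement weights are hypothetical, standing in for signals that a real platform's machine-learning model would learn.

```python
# Illustrative only: a toy content ranker, not any platform's actual code.
# A "simple input" ranker sorts by one observable signal (here, timestamp);
# an engagement-optimizing ranker scores posts by predicted interaction.

posts = [
    {"id": 1, "ts": 100, "likes": 5,   "comments": 1},
    {"id": 2, "ts": 300, "likes": 50,  "comments": 20},
    {"id": 3, "ts": 200, "likes": 500, "comments": 2},
]

def rank_chronological(posts):
    # Simple rule: newest first, no learning involved.
    return sorted(posts, key=lambda p: p["ts"], reverse=True)

def rank_by_engagement(posts, w_likes=1.0, w_comments=5.0):
    # Hypothetical engagement score; in practice the weights would be
    # learned by a model, not hand-set.
    def score(p):
        return w_likes * p["likes"] + w_comments * p["comments"]
    return sorted(posts, key=score, reverse=True)

print([p["id"] for p in rank_chronological(posts)])  # [2, 3, 1]
print([p["id"] for p in rank_by_engagement(posts)])  # [3, 2, 1]
```

The two rankers surface different posts from the same pool, which is the editorial choice lawmakers are scrutinizing.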

Both Republicans and Democrats agree there should be some accountability for tech companies, even though Section 230 of the 1996 Communications Decency Act provides broad legal immunity for online platforms.

While there has been some consensus around updated privacy rules and tech-focused antitrust bills, two weeklong recesses next month and fiscal deadlines looming in December mean there is precious little time for concrete action this year.

After lawmakers wrestled with how to write laws allowing or prohibiting certain kinds of speech, an approach that risks running afoul of the First Amendment, regulating automated algorithms has emerged as a possible strategy.

“The algorithms driving powerful social media platforms are black boxes, making it difficult for the public and policy makers to conduct oversight and ensure companies' compliance, even with their own policies,” Sen. Ed Markey, D-Mass., told Bloomberg. He introduced a bill in May that he said would “help pull back the curtain on Big Tech, enact strict prohibitions on harmful algorithms, and prioritize justice for communities who have long been discriminated against as we work toward platform accountability.”

Several senators touted their own algorithm-focused bills while questioning Frances Haugen, the Facebook whistleblower, when she appeared before Congress earlier this month. While Haugen didn't endorse any specific piece of legislation, she did say the best way to regulate online platforms like Facebook is to focus on systemic solutions, especially transparency and accountability for the machine-learning architecture that powers some of the world's biggest and most influential companies.

Sen. Richard Blumenthal, D-Conn., who as chair of the Senate consumer protection subcommittee has led the congressional investigation of Haugen's allegations, last week invited Facebook Chief Executive Officer Mark Zuckerberg to testify before Congress. Blumenthal, in a statement Monday, identified the machine-learning structure of the company's platform as a danger not only to users, but also to democracy.

“Facebook is obviously unable to police itself as its powerful algorithms drive deeply harmful content to children and fuel hate,” Blumenthal said. “This resoundingly adds to the drumbeat of calls for reform, rules to protect teens, and real transparency and accountability from Facebook and its Big Tech peers.”

While Haugen's revelations add to the bipartisan anger directed at big tech companies, asking a gridlocked Congress to regulate a technically complex and fast-moving industry is a tall order. Lawmakers have been discussing potential bills to chip away at Section 230, especially since the issue rocketed to national prominence last year, when former President Donald Trump vetoed an unrelated defense bill amid demands that the legal shield be repealed. Congress overrode the veto.

But no one proposal has emerged as a front-runner, and several groups of activists — even those advocating for new tech regulation — have pointed out that government regulation of speech risks silencing already marginalized voices.

Hence the focus on algorithms. Proponents of this approach allow that platforms shouldn't be liable for user-generated content, but argue that they bear responsibility for how their systems are designed to amplify certain kinds of information.

Facebook's own employees recognized this editorial responsibility, according to the internal documents that Haugen shared with Congress and the Securities and Exchange Commission. In one 2019 report, a Facebook employee laments how “hate speech, divisive political speech, and misinformation on Facebook and the family of apps are affecting societies around the world.”

“We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform,” the document says, describing the objectives designed for the algorithms. “The mechanics of our platform are not neutral.”

A new bill from Rep. Frank Pallone, D-N.J., chair of the House committee responsible for science and technology, would revoke Section 230 protections for any online platform that uses algorithms to amplify or recommend dangerous content.

“Designing personalized algorithms that promote extremism, disinformation, and harmful content is a conscious choice,” Pallone said in announcing the bill. “And platforms should have to answer for it.”

Markey's bill, which was also introduced in the House by Rep. Doris Matsui, D-Calif., takes a different approach by not addressing Section 230 and focusing just on new requirements for how companies use algorithms. Their bill would set safety guardrails for these automated processes and require more transparency for consumers and federal regulators.

There is another proposal that takes a lighter touch but has bipartisan support in the Senate. This bill, from Sen. John Thune, R-S.D., Blumenthal and others, would require online platforms to allow users to turn off the “filter bubble” created by an algorithm so they could see more chronologically ordered content.

“The simple solution of the filter bubble really is to give consumers the option, give them the choice, give them the freedom to opt out of an algorithm-manipulated platform,” Thune said in an interview.
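The opt-out Thune describes can be sketched as a simple feed toggle. This is one reading of the proposal, not text from the bill: the function and field names below are hypothetical, and a real platform's scoring function would be a learned model rather than a passed-in lambda.

```python
# Sketch of a "filter bubble" opt-out, as described in the
# Thune/Blumenthal proposal. All names here are hypothetical.

def build_feed(posts, algorithmic_feed_enabled, score_fn):
    """Rank posts with the platform's scoring function, unless the
    user has opted out, in which case fall back to newest-first."""
    if algorithmic_feed_enabled:
        return sorted(posts, key=score_fn, reverse=True)
    return sorted(posts, key=lambda p: p["ts"], reverse=True)

posts = [
    {"id": "a", "ts": 1, "predicted_engagement": 0.9},
    {"id": "b", "ts": 2, "predicted_engagement": 0.1},
]

score = lambda p: p["predicted_engagement"]
ranked = build_feed(posts, True, score)   # algorithmic feed on
chrono = build_feed(posts, False, score)  # user opted out

print([p["id"] for p in ranked])  # ['a', 'b']
print([p["id"] for p in chrono])  # ['b', 'a']
```

The point of the design is that the opt-out bypasses the scoring model entirely, rather than merely reweighting it.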

Haugen, speaking to the U.K. Parliament on Monday, urged policymakers around the world to act quickly to regulate the artificial intelligence of the algorithms that underpin online platforms.

“We have a slight window of time to regain people's control over AI,” Haugen said. “We have to take advantage of this moment.”

TNS — Former Facebook employee Frances Haugen testifies during a Senate Committee on Commerce, Science, and Transportation hearing entitled “Protecting Kids Online: Testimony from a Facebook Whistleblower” on Capitol Hill, Oct. 5, 2021, in Washington, D.C.
