The Guardian (USA)

How Facebook shot itself in the foot in its Elizabeth Warren spat

- Ellen Goodman and Karen Kornbluh

Over the weekend, Facebook likened itself to a broadcaster – inadvertently asking to be regulated. This was in the third round of an argument with Elizabeth Warren over the company's choice to run Donald Trump's $1m advertising campaign containing lies about Joe Biden. Facebook had a stated policy of not running deceptive ads, but changed it right before the ad ran – just for politicians' messages. Warren took aim at the practice by headlining her own Facebook ad with the cheeky claim that Facebook CEO Mark Zuckerberg had just endorsed Trump, arguing that choosing to profit from lies amounts to an endorsement of a particular kind of candidate.

Facebook took the highly unusual step of tweeting a public response to Warren by name, comparing itself to a local broadcaster who is required by law to carry political ads, and even citing Federal Communications Commission rules as a rationale. One wonders if any DC lawyers took a look at that argument.

Because, of course, giving federal candidates "reasonable access" to air political ads is only one of many "public interest" requirements imposed on broadcasters – from charging candidates the same price and disclosing ad reach to kids' programming and ownership caps. Broadcasters have fiduciary obligations to the public, not just to shareholders, on account of their control over information flow.

We have floated the idea that digital platforms' gatekeeping power entails responsibilities to the public and that self-regulation appears inadequate. We did not expect Facebook to make our arguments for us.

Facebook seems to concede that it – like broadcasters – exercises gatekeeping control over attention, advertising dollars, and political debate, and therefore has a fiduciary responsibility of some kind. But the platform wants to cherry-pick only the permissive aspects of regulation: carry political ads, but don't moderate them for disinformation. What Facebook fails to acknowledge is that it isn't neutral. It is favoring candidates who smear their opponents and amplify baseless conspiracies. It's not just that the platform takes these ads; its algorithmic design juices their circulation by advantaging the incendiary over the informative to increase engagement.

There's a more serious risk in platforms operating without obligations. What's to prevent a platform from demoting one candidate's ad and promoting another's? The Facebook-Warren dispute presents this danger. In a leaked address to employees two weeks ago, Zuckerberg explicitly called Warren an "existential threat" to the company and promised Facebook would take a Warren administration "to the mat" if it tried to enforce antitrust laws against the platform. Of course, there are no rules or accountability mechanisms to stop the company from trying to take her out now, so that there never is a Warren administration.

Harvard's Jonathan Zittrain has described Facebook's power to conduct "digital gerrymandering", favoring some candidates or political parties over others. The company could theoretically depress circulation of ads or unpaid content favorable to a candidate; charge her opponents less for ads; or target reminders to vote at her opponents' likely base rather than hers. Our understanding of Russia's interference in the 2016 election is only possible because the Senate Intelligence Committee forced the platforms to hand over data.

Of course, candidates could try to combat any platform bias through their own digital astroturf campaigns – by buying even more ads, building audiences for content sites and affinity groups designed to spread viral outrage, and renting networks of bots and trolls. Competition over who can create the most virality in an algorithmic system that promotes conspiracy and rage will create the kind of disinformation arms race that can only further weaken democracy.

A regulatory regime that takes platform gatekeeping power seriously would entail clear principles to protect the public interest online, especially in the form of transparency, user safety and control, and platform accountability. Political ads and bots would be more clearly labeled, and their funding and reach disclosed; after-action reports would be available to researchers and the government. Platforms would need to comply with online versions of discrimination and harassment laws, adopt a code of conduct for hate speech, and grant users control of their own newsfeeds and data. Platforms would also need user consent to exploit personal data to micro-target content or run experiments, and maybe even pay a tax for certain data practices. There might even be a public network alternative and funding for a PBS for the internet.

American lawmakers of both parties have long recognized the danger that an information chokehold poses to democratic self-government. That recognition led to public rules ranging from the Radio Act of 1912 to the Communications Act of 1934 to the 1967 Public Broadcasting Act. Those rules have forced broadcasters to air political ads even when negative or false. But they have also required broadcasters to operate with transparency, concern for the public, and some degree of accountability. We should expect as much from our new media gatekeepers.

Ellen Goodman is a professor at Rutgers Law School, where she is the co-director and co-founder of the Rutgers Institute for Information Policy and Law.

Karen Kornbluh is a senior fellow and director of the Digital Innovation and Democracy Initiative at the German Marshall Fund of the US.

'American lawmakers of both parties have long recognized the danger that an information chokehold poses to democratic self-government.' Photograph: Gérard Julien/AFP via Getty Images
