Facebook and Twitter dodge a 2016 repeat
Since 2016, when Russian hackers and WikiLeaks injected stolen emails from the Hillary Clinton campaign into the closing weeks of the presidential race, politicians and pundits have called on tech companies to do more to fight the threat of foreign interference.
This week, less than a month from another election, we saw what “doing more” looks like.
Early Wednesday, the New York Post published a splashy front-page article about supposedly incriminating photos and emails found on a laptop belonging to Hunter Biden, son of Joe Biden. To many Democrats, the unsubstantiated article — which included a bizarre set of details involving a Delaware computer repair shop, the FBI and Rudy Giuliani, the president’s personal lawyer — smelled suspiciously like the result of a hack-and-leak operation.
To be clear, there is no evidence tying the Post’s report to a foreign disinformation campaign. Many questions remain about how the paper obtained the emails and whether they were authentic. Even so, the social media companies were taking no chances.
Within hours, Twitter banned all links to the Post’s article and locked the accounts of people, including some journalists and the White House press secretary, Kayleigh McEnany, who tweeted it. The company said it made the move because the article contained images showing private personal information and because it viewed the article as a violation of its rules against distributing hacked material.
Facebook took a less nuclear approach. It said that it would reduce the visibility of the article on its service until it could be fact-checked by a third party, a policy it has applied to other sensitive posts. (The move did not seem to damage the article’s prospects; by Wednesday night, stories about Hunter Biden’s emails were among the most engaged posts on Facebook.)
Both decisions angered a chorus of Republicans, who called for Facebook and Twitter to be sued, stripped of their legal protections, or forced to account for their choices. Sen. Josh Hawley, R-Mo., called in a tweet for Twitter and Facebook to be subpoenaed by Congress to testify about censorship, accusing them of trying to “hijack American democracy by censoring the news & controlling the expression of Americans.”
A few caveats: There is still a lot we don’t know about the Post article. We don’t know if the emails it describes are authentic, fake or some combination of both, or if the events they purport to describe actually happened. Biden’s campaign denied the central claims in the article, and a Biden campaign surrogate lashed out against the Post on Wednesday, calling the article “Russian disinformation.”
Even if the emails are authentic, we don’t know how they were obtained or how they ended up in the possession of Giuliani, who has been spearheading efforts to paint Biden and his family as corrupt. The owner of the Delaware computer shop who reportedly turned over the laptop to investigators gave several conflicting accounts to reporters about the laptop’s chain of custody Wednesday.
Critics on all sides can quibble with the decisions these companies made or how they communicated them. Even Jack Dorsey, Twitter’s chief executive, said the company had mishandled the original explanation for the ban.
But the truth is less salacious than a Silicon Valley election-rigging attempt. Since 2016, lawmakers, researchers and journalists have pressured these companies to take more and faster action to prevent false or misleading information from spreading on their services. The companies have also created new policies governing the distribution of hacked material, in order to prevent a repeat of 2016’s debacle.
It’s true that banning links to a story published by a 200-year-old American newspaper — albeit one that is now a Rupert Murdoch-owned tabloid — is a more dramatic step than cutting off WikiLeaks or some lesser-known misinformation purveyor. Still, it’s clear that what Facebook and Twitter were actually trying to prevent was not free expression, but a bad actor using their services as a conduit for a damaging cyberattack or misinformation.
These decisions get made quickly, in the heat of the moment, and it’s possible that more contemplation and debate would produce more satisfying choices. But time is a luxury these platforms don’t always have. In the past, they have been slow to label or remove dangerous misinformation about COVID-19, mail-in voting and more, and have only taken action after the bad posts have gone viral, defeating the purpose.
That left the companies with three options, none of them great. Option A: They could treat the Post’s article as part of a hack-and-leak operation and risk a backlash if it turned out to be more innocent. Option B: They could limit the article’s reach, allowing it to stay up but choosing not to amplify it until more facts emerged. Or, Option C: They could do nothing and risk getting played again by a foreign actor seeking to disrupt an American election.
Twitter chose Option A. Facebook chose Option B. Given the pressures they have been under for the past four years, it’s no surprise that neither company chose Option C. (Although YouTube, which made no public statement about the Post’s story, seems to be keeping its head down and hoping the controversy passes.)
Almost as soon as the companies made those decisions, Republican officials began using the actions as an example of Silicon Valley censorship run amok. On Wednesday, several prominent Republicans, including President Trump, repeated their calls for Congress to repeal Section 230 of the Communications Decency Act, a law that shields tech platforms from many lawsuits over user-generated content.
That leaves the companies in a precarious spot. They are criticized when they allow misinformation to spread. They are also criticized when they try to prevent it.
Perhaps the strangest idea to emerge in the past couple of days, though, is that these services are only now beginning to exert control over what we see. Rep. Doug Collins, R-Ga., made this point in a letter to Mark Zuckerberg, chief executive of Facebook, in which he derided the social network for using “its monopoly to control what news Americans have access to.”
The truth, of course, is that tech services have been controlling our information diets for years, whether we realized it or not. Their decisions were often buried in obscure “community standards” updates or hidden in tweaks to the black-box algorithms that govern which posts users see. But make no mistake: These apps have never been neutral, hands-off conduits for news and information. Their leaders have always been editors masquerading as engineers.
What’s happening now is simply that, as these companies move to rid their services of bad behavior, their influence is being made more visible. Rather than letting their algorithms run amok (which is an editorial choice in itself), they’re making high-stakes decisions about flammable political misinformation in full public view, with human decision makers who can be debated and held accountable for their choices. That’s a positive step for transparency and accountability, even if it feels like censorship to those who are used to getting their way.
After years of inaction, Facebook and Twitter are finally starting to clean up their messes. And in the process, they’re enraging the powerful people who have thrived under the old system.