Targeting ‘malinformation,’ inconvenient truths
LAST month, I noted that the Centers for Disease Control and Prevention had repeatedly exaggerated the scientific evidence supporting face mask mandates during the COVID-19 pandemic. Facebook attached a warning to that column, which it said was “missing context” and “could mislead people.”
According to an alliance of social media platforms, government-funded organizations and federal officials that journalist Michael Shellenberger calls the “censorship-industrial complex,” I had committed “malinformation.” Unlike “disinformation,” which is intentionally misleading, or “misinformation,” which is erroneous, “malinformation” is true but inconvenient.
As illustrated by internal Twitter communications that journalist Matt Taibbi highlighted last week, malinformation can include emails from government officials that undermine their credibility and “true content which might promote vaccine hesitancy.”
The latter category encompasses accurate reports of “breakthrough infections,” accounts of “true vaccine side effects,” objections to vaccine mandates, criticism of politicians and citations of peer-reviewed research on natural immunity.
Disinformation and misinformation have always been contested categories, defined by the fallible and frequently subjective judgments of public officials and other government-endorsed experts. But malinformation is even more in the eye of the beholder because it is defined not by its alleged inaccuracy but by its perceived threat to public health or democracy, which often amounts to nothing more than questioning expert wisdom.
Taibbi’s revelations focused on the Virality Project, which the taxpayer-subsidized Stanford Internet Observatory launched in 2020. Although Renée DiResta, the SIO’s research manager, concedes that “misinformation is ultimately speech,” meaning the government cannot directly suppress it, she says the threat it poses requires “that social media platforms, independent researchers and the government work together as partners in the fight.”
That sort of collaboration raises obvious free speech concerns. If platforms such as Twitter and Facebook were independently making these assessments, their editorial discretion would be protected by the First Amendment. But the picture looks different when government officials publicly and privately chastise social media companies for not doing enough to suppress speech they view as dangerous.
In a federal lawsuit filed last year, the attorneys general of Missouri and Louisiana, joined by scientists who ran afoul of the ever-expanding crusade against disinformation, misinformation and malinformation, argue that such pressure violates the First Amendment. This week, Terry A. Doughty, a federal judge in Louisiana, allowed that lawsuit to proceed, saying the plaintiffs had adequately alleged “significant encouragement and coercion that converts the otherwise private conduct of censorship on social media platforms into state action.”
Whatever the ultimate outcome of that case, Congress can take steps to discourage censorship by proxy. Shellenberger argues that it should stop funding groups such as the SIO and “mandate instant reporting of all communications between government officials and contractors with social media executives relating to content moderation.”
The interference that Shellenberger describes should not be a partisan issue. It should trouble anyone who prefers open inquiry and debate to government manipulation of online speech.