The Denver Post

Tech platforms must move against anti-vaxxers

By Molly Roberts

Vaccines don’t kill, but insisting otherwise can. Facebook, Google and Twitter know that — which is why, as measles outbreaks send children to intensive-care units across the country, they have all decided to do something about it.

“Do something!” is exactly what people around the world have been saying to social media sites that, until recently, refused to accept responsibility for what happened on their platforms. That attitude is changing, but what “something” means is still up in the air. Do what, exactly, to whom? And will it help?

The most prominent platforms already ban hate speech and incitements to violence, at least in theory. They remain more reluctant, however, to remove or limit falsehoods. They’re not the arbiters of truth, they say, and they have always looked to free speech as a lodestar. Policies differ from platform to platform. But firms should take aggressive action when there’s a high likelihood of real-world harm.

It’s not a perfect metric. Neither is anything else. Even amid the messiness, though, agitating against life-saving inoculations falls cleanly on the wrong side of the line. Health officials studying the resurgence of a disease that was supposed to have been eliminated in this country almost two decades ago have made it clear: The outbreak of misinformation online is facilitating literal outbreaks of disease.

Companies apparently agree — to a point. Platforms could remove all anti-vax material, but so far they won’t, citing either squeamishness about policing belief or a desire not to stop parents from having conversations about so personal a decision. They could remove people, pages or groups that systematically promote anti-vax material, but they won’t do that, either. That leaves seeking to limit the reach of false messages.

Facebook has announced it is down-ranking anti-vaxxer groups and pages in users’ news feeds and in searches, as well as cutting them out entirely from recommendations and predictions and getting rid of their advertisements. Its subsidiary Instagram has blocked hashtags such as #vaccinescauseautism or #vaccineskill.

YouTube, which is owned by Google, has stopped anti-vaccination channels from running ads, and says hoaxes will appear less often in its “up next” module. When viewers do watch those videos, they’ll also see “information panels” with corrective context. Twitter has created a tool that pulls up a handy link to a government website offering facts about vaccination for anyone who searches for the subject, and it won’t auto-suggest terms that tend to lure people toward the inaccurate.

The remedies that focus on searches seek to fill what’s known as a “data void” — a sort of digital black hole that sucks curious consumers into the realm of the factless.

But these approaches can fall short. Search “#vaccines” on Instagram, leaving the whole “cause autism” or “kill” thing out, and the first accounts to show up are conspiratorial, with names such as “vaccines_uncovered” and “vaccines_revealed.” Believe it or not, they aren’t dedicated to touting the benefits of polio shots.

You can’t fill a data void with more emptiness, so approaches that don’t also surface enough authoritative sources to replace the junk have a fatal flaw. Twitter’s pop-ups help solve that half of the problem by linking to a government site, but the platform leaves alone the anti-vax content that appears right below.

Even if platforms try to push down untrustworthy sources and prop up reliable ones, algorithms miss things. They’re even more likely to miss when their targets dodge. The anti-vaccine community, which prefers the term “vaccine hesitant,” is no stranger to language games. Shifting rhetoric to talk about “doubts” or the need for parents to “decide” for themselves can skirt automatic filters. Hoaxers can also evade policies about what counts as a lie, leaving the humans who set the rules flummoxed over where to draw their lines.

Maybe some combination of strategies, over time, will spare some children the misery of measles or tetanus or whooping cough. Or maybe platforms will eventually have to supplement all that reach-limiting with some speech-limiting, too, at least for the most dangerous actors, many of whom prey on vulnerable communities. Maybe the answer is more fundamental, and sites will have to alter the incentives, from engagement algorithms to likes to follower counts, that reward extremism and sensationalism. Maybe, and it’s likely, they will have to do all these things at once.

The Internet didn’t create vaccine denialism, just as it didn’t create the other maladies platforms are now being asked to moderate away. It did, however, help the hoax go viral. The Web was meant to empower everyone, and now those who oversee it are trying to — have to — take some of that power away. Doing something isn’t as easy as it sounds, but controlling this outbreak can at least offer lessons about how to handle the next one.

Molly Roberts writes about technology and society for The Washington Post’s Opinions.
