Social media companies must curtail misinformation
About 500 hours of video are uploaded to YouTube every minute. The online video-sharing platform houses more than 800 million videos and is the second most visited site in the world, with 2.5 billion monthly active users.
Given the deluge of content flooding the site every day, one would surmise that YouTube must have an army of people guarding against the spread of misinformation — especially in the wake of the Jan. 6, 2021, insurrection that was fueled by lies on social media.
Well, not exactly. Following recent cutbacks, just one person at YouTube is in charge of misinformation policy worldwide, according to a recent report in The New York Times.
YouTube is owned by Google. The cutbacks were part of a broader reduction by Alphabet, Google’s parent company, which shed 12,000 jobs in an effort to boost profits that already stood at around $60 billion last year.
YouTube is not the only social media company easing some of the already limited safeguards put in place following the Russian disinformation campaign that helped elect Donald Trump in 2016.
Meta, which owns Facebook, Instagram and WhatsApp, slashed 11,000 jobs last fall and is reportedly preparing more layoffs.
Those cuts came as Facebook, which made $23 billion last year, quietly reduced its efforts to thwart foreign interference and voting misinformation before the November midterm elections.
Twitter made even deeper cuts, laying off 50% of its employees days before the midterm elections in November, including staff charged with preventing the spread of misinformation. Additional layoffs hit the company’s trust and safety team in January.
Political misinformation is not the only content misleading and dividing the public. Twitter recklessly ended its ban on COVID-19 misinformation, a move that will likely lead to more needless deaths.
Hate speech has also exploded on Twitter since Elon Musk purchased the company for $44 billion in October.
To be sure, the First Amendment makes it difficult to regulate social media companies. But doing nothing is not the answer. The rise of artificial intelligence powering sophisticated chatbots such as ChatGPT, along with deepfake technology, will worsen the spread of fake news, further threatening democracy. Policymakers must soon strike a balance between protecting First Amendment rights and regulating social media.
Meanwhile, the European Union is pushing forward with its own landmark regulation, the Digital Services Act. The measure takes effect next year and places substantial content moderation requirements on social media companies to limit false information, hate speech and extremism.
The spread of misinformation and disinformation is a growing threat to civil society. Social media companies can’t ignore their responsibility.