The Philadelphia Inquirer on how social media companies must curtail the spread of misinformation:
About 500 hours of video is uploaded to YouTube every minute. The online video-sharing platform houses more than 800 million videos and is the second most visited site in the world, with 2.5 billion active monthly users.
Given the deluge of content flooding the site every day, one would surmise that YouTube must have an army of people guarding against the spread of misinformation — especially in the wake of the Jan. 6, 2021, insurrection that was fueled by lies on social media.
Well, not exactly.
Following recent cutbacks, there is just one person in charge of misinformation policy worldwide, according to a recent report in the New York Times. This is alarming, since fact-checking organizations have said YouTube is a major pipeline for the spread of disinformation and misinformation.
YouTube is owned by Google. The cutbacks were part of a broader reduction by Alphabet, Google’s parent company, which shed 12,000 jobs in an effort to boost profits that totaled roughly $60 billion last year.
YouTube is not the only social media company easing some of the already limited safeguards put in place following the Russian disinformation campaign that helped elect Donald Trump in 2016.
Meta, which owns Facebook, Instagram and WhatsApp, slashed 11,000 jobs last fall and is reportedly preparing more layoffs.
Those cuts came as Facebook, which made $23 billion last year, quietly reduced its efforts to thwart foreign interference and voting misinformation before the November midterm elections.
Facebook also shut down an examination into how lies are amplified in political ads on the social media site and indefinitely banned a team of New York University researchers from the site.
Twitter implemented even deeper cuts, laying off 50% of its employees days before the midterm election in November. The cuts included employees in charge of preventing the spread of misinformation. Additional layoffs in the so-called trust and safety team occurred in January.
It’s not just the spread of political misinformation that is misleading and dividing the public. Twitter recklessly ended its ban on COVID-19 misinformation, which will likely lead to more needless deaths.
Hate speech has also exploded on Twitter since Elon Musk purchased the company for $44 billion in October.
In the weeks after Musk took control of Twitter, antisemitic posts jumped more than 61%. Slurs against Black people soared by more than 200%, while slurs against gay men increased by 58%. The hate spewed online has been linked to an increase in violence toward people of color and immigrants around the world.
But Musk says he is a free speech absolutist — except when it impacts him. The billionaire temporarily suspended the accounts of several journalists and blocked others who rebuked him on Twitter. He also fired employees at SpaceX, one of his other companies, who criticized him.
More to the point, Musk fails to understand that freedom of speech is not absolute. As much as this board supports and cherishes the First Amendment, there are rules and regulations surrounding what can be said.
For example, you can’t harass or violate the rights of others. Just ask Alex Jones. The conspiracy theorist and Infowars founder was ordered to pay nearly $1 billion in damages to the families of eight victims of the Sandy Hook Elementary School shooting for his repeated lies that the massacre was a hoax.
To be sure, the First Amendment makes it difficult to regulate social media companies. But doing nothing is not the answer. The rise of artificial intelligence behind sophisticated chatbots such as ChatGPT and deepfake technology will worsen the spread of fake news, further threatening democracy. Policymakers must soon strike a balance between the First Amendment and regulating social media.
Texas and Florida have already muddied the regulation debate by passing laws that would upend the already limited content moderation efforts by social media companies and make the internet an even bigger free-for-all. The U.S. Supreme Court put off deciding whether to take up the cases, leaving the state laws in limbo for now.
Meanwhile, the European Union is pushing forward with its own landmark regulations called the Digital Services Act. The measure takes effect next year and aims to place substantial content moderation requirements on social media companies to limit false information, hate speech and extremism.
The spread of misinformation and disinformation is a growing threat to civil society. Social media companies can’t ignore their responsibility.