The Phoenix

Can social media be fixed? Let’s start with transparency

By Lata Nott, Columnist. Lata Nott is executive director of the First Amendment Center of the Freedom Forum Institute.

This week, executives from Twitter, Facebook and Google testified before Congress. Again. This was the third congressional hearing this year where the internet giants were grilled on their content policies, their privacy and security practices and their role in democracy.

It’s been a rough couple of years for social media platforms. They’ve come under fire for so many different things it can be hard to remember all of them.

To recap: For enabling Russian propagandists to influence our presidential election and terrorist organizations to find new recruits. For allowing fake news stories to go viral. For exacerbating political polarization by trapping their users in “filter bubbles.” For giving hate mongers and conspiracy theorists a platform to reach a wider audience.

For filtering or down-ranking conservative viewpoints. For collecting private user data and selling it to the highest bidder. For siphoning profits away from struggling local news organizations.

The social media platforms are taking various actions to mitigate these problems. But every potential solution seems to bring forth another unanticipated consequence. YouTube is currently trying to debunk conspiracy videos on its site by displaying links to more accurate information right alongside them — but there’s concern that the presence of a link to an authoritative source will make a video seem more legitimate, even if the text and link directly contradict the video.

Twitter CEO Jack Dorsey has expressed a desire to break up his users’ filter bubbles by injecting alternative viewpoints into their feeds. But new research suggests that exposing people to opposing political views may actually cause them to double down on their own — ironically, increasing political polarization.

Facebook instituted a system for users to flag questionable news stories for review by its fact-checkers — but soon ran into the problem that users would falsely report stories as “fake news” if they disagreed with the premise of the story, or just wanted to target the specific publisher.

Some doubt the sincerity behind these efforts. As former Reddit CEO Ellen Pao says, “Social media companies and the leaders who run them are rewarded for focusing on reach and engagement, not for positive impact or for protecting subsets of users from harm.” In other words, what’s good for a company’s bottom line and what’s good for society as a whole are often at odds with each other.

It’s no wonder that the government is looking to step into the fray. If the numerous congressional hearings don’t make that clear, a proposed plan to regulate social media platforms that leaked from Senator Mark Warner’s office last month ought to. Just last week, President Trump announced that he wanted to take action against Google and Twitter for allegedly not displaying conservative media in his search results.

It’s unlikely that the president would be able to do much about that, just as it’s unlikely that Congress would be able to force Facebook to, say, ban all fake news stories from its platform. Twitter, Facebook and Google are all private companies, and the First Amendment prohibits government officials from limiting or compelling the speech of private actors.

So what can the government do? It can encourage (and, if necessary, regulate) these companies to be more transparent.

It’s shocking how little we know about the algorithms, content moderation practices and internal policies that control what information we receive and how we communicate with one another. It’s reckless that we only become aware of these things when something catastrophic happens.

