Houston Chronicle

What Google and Facebook must do about content problems

Vivek Wadhwa says social media giants are reaping huge profits, but are forsaking their responsibility to curb hateful and harmful material.

- Wadhwa is a Distinguished Fellow and professor at Carnegie Mellon University Engineering at Silicon Valley and a director of research at the Center for Entrepreneurship and Research Commercialization at Duke. He wrote this for the Washington Post.

Google could lose as much as $750 million because of a boycott by advertisers, according to Nomura Research. Companies are protesting the placement of their ads next to extremist and hateful content. An even worse offender is Facebook, which has enabled the propagation of fake news that may have influenced the outcome of the U.S. elections. The two companies have reaped massive profits from the spread of misinformation, yet they have claimed both ignorance of how their technology is misused and an inability to control it.

It wasn’t supposed to be this way. Social media was developed with the promise of spreading democracy, community and freedom, not ignorance, bigotry and hatred. Connecting billions of people and allowing them to share knowledge and ideas, it could have enabled them to achieve equality and justice; to expose what is wrong and crowd-solve global problems. Instead, it has become a tool that lets technology companies mine data to sell to marketers, politicians and special-interest groups, who in turn use it to spread disinformation. It has created echo chambers in which people with similar views reinforce their ignorance and bias. And the loss of control over user data has now affected not just the economic lives of Americans but also the political messages they receive on platforms such as Facebook.

Part of the problem is that a handful of large technology companies have become small oligopolies in connectivity and information; they are reaping incredible profits but forsaking the responsibilities that come with the power they have gained. Facebook, for example, has become a media company with more power and influence than the Washington Post and the New York Times. More than 65 percent of its users — 44 percent of U.S. adults — get their news through its platform. Yet it claims not to be a publisher and to bear no responsibility for what appears on its platform or for what is done with its marketing data.

In light of the backlash, Facebook and Google have acknowledged the problem and pledged to do something about it. In a blog post, Facebook chief executive Mark Zuckerberg detailed plans to build a safe, informed, civically engaged and inclusive community that fulfills the benevolent promise of social media. But he says that Facebook can’t possibly review the billions of posts that are made on its platform every day, and that solving the problem will require artificial intelligence — which is “technically difficult” and will take years of research and development.

We can’t wait years. And it isn’t that this industry is powerless when it comes to controlling the misuse of its platforms; there is simply insufficient motivation. Go back a few years, for example, to when our mailboxes were getting flooded with spam. Tech companies created filters, blacklists and many other defenses, and virtually eliminated it. When marketers learned how to game Google’s page-ranking system by creating multitudes of websites that linked to one another, Google updated its algorithms to penalize the offenders. When it comes to making money, the tech industry always seems to be able to find a way — and it doesn’t take years.

Trolling is also a common problem on Twitter, with millions of automated bots available for hire. You can openly purchase fake accounts and fake followers, and have other accounts spread marketing content as well as misinformation and hate. The company has the technology to disable these accounts, but it doesn’t, possibly because doing so would hurt its stock price.

There is no way of turning back these technologies; what we need is for their owners to steer them in a more positive direction. The problems of fake news and the spread of disinformation can be remedied by opening up social networks and vetting news in a more effective manner; by using technology and imagination to solve the problems that technology has created through lack of imagination.

In his book, “Whose Global Village?,” Ramesh Srinivasan explains that digital technologies are not neutral but are socially constructed: created by people within organizations, who in turn approach the design process on the basis of a set of values and presumptions. The platforms that have come to dominate our experience of the internet, Google and Facebook, are for-profit companies, not democratic institutions. As they become the face of journalism and public information, they must be held accountable for their effects.

Srinivasan points out that invisible algorithms determine the content that social-media networks curate and present to us; they decide what is important. These algorithms take input from the people we associate with on social media — which is what produces the echo chambers — but much more is done in secret. What we do know is that they tend to confirm our existing biases, and those of our existing networks. Yet as users we know almost nothing about the choices that went into these personalization algorithms, and we are not given much of an alternative.

Srinivasan argues for a few important choices:

First, we can ask social-media companies to make transparent and comprehensible the filters and choices that go into the most important algorithms that shape interactivity. This does not mean having to publish proprietary software code, but rather giving users an explanation of how the content they view is selected. Facebook can explain whether content is chosen because of location, number of common friends, or similarity in posts. Google can tell us what factors lead to the results we see in a search and provide a method to change their order.

Second, we must provide users with the opportunity to choose between different types of information, whether that be news shared by people beyond their social networks or options for filtering their feeds. Such filters would allow users to determine what parts of the world they’d like to see information from and the range of political opinions they choose to be exposed to.

Third, we can return to a practice that long characterized the Web: open-ended browsing and surfing. Social-media companies can develop tools that allow news credibility to be visualized, enabling users to browse content within and beyond their immediate social network. Facebook could make posts available from users who are not in the user’s friend network, or provide the user with tools to browse the networks of others, with their permission. It could even develop interfaces that allow users to look across posts from multiple perspectives, places, and cultures in relation to a given topic.

The bigger issue is that we need to develop political literacy in our educational and social systems. This entails viewing no piece of information, whether presented on social media or through a traditional news outlet, as infallible, but instead learning to scrutinize each story’s framing, the agenda it serves, and the integrity and transparency of its sources.

In other words, as a society, we need to up our own game.

Photo: Loic Venance / AFP / Getty Images. Tech giants like Google must be held accountable for their effects.
