Business Standard

The actual problem with social media

Algorithms used by Google and others have a “downward” bias

- MARK BUCHANAN

Donald Trump’s claim that Google’s search engine is biased against conservatives would be more interesting if it didn’t rest on the crazy view that mainstream outlets such as the Associated Press, Reuters, sports channel ESPN or Business Insider are hotbeds of radicalized left-wing politics. Still, Republicans will generate plenty of hot air over the matter even if Google is not present during Congressional hearings Wednesday on how tech companies manage data flows.

Unfortunately, the hearings, which will include representatives from Facebook and Twitter, almost certainly won’t examine a much more important issue: how the business model of these three tech giants may have a lot to do with the rise of hatred, violence and political polarisation, not only in the United States but worldwide. An alarming possibility is that these companies’ automated algorithms, which analyse human behaviour to boost user engagement, have learned that they perform best by setting us against one another.

There are, of course, many worrying things about the internet and social media. In his book The Shallows, journalist Nicholas Carr warned that their use was changing our brains. We’re losing our ability to focus deeply on a single task, and becoming more adept at handling fragments of information and switching frequently between shallow tasks. Technology is having a profound effect on how we read. The skimming reader fails to grasp the complexity of arguments, and has little time to form creative thoughts of his or her own.

Far worse is how the technology may be acting as a vast optimised engine of social degradation. That’s the argument of a recent book by former tech engineer Jaron Lanier, known for his early work on virtual reality. In the current business model, Google, Twitter and Facebook offer free services and use them to gather immense quantities of user data. The companies’ algorithms then use that data to help advertisers feed users optimised stimuli to modify their behaviour (encouraging them to buy stuff, for example). We’re so used to this model that, aside from sporadic privacy concerns, we see it as almost natural.

This model may also be a natural route to disaster. Facebook, for example, makes money by helping advertisers target messages, including lies and conspiracies, to the people most likely to be persuaded. The algorithms looking for the best ways to engage users have no conscience, and will simply exploit anything that works. Lanier believes the algorithms have learned that we’re more energised when we’re made to feel negative emotions, such as hatred, suspicion or rage.

“Social media is biased not to the left or the right,” as he puts it, “but downward,” toward an explosive amplification of negativity in human affairs.

Lanier doesn’t support this argument with hard data, but plenty of other research makes the hypothesis sound all too plausible. A United Nations report concluded that the spread of rumours on Facebook and other social media was crucial in sparking genocidal violence against the Rohingya in Myanmar.

The link seems to be quite general, as suggested by another recent study linking Facebook usage to outbreaks of violence against immigrants across Germany. In the data, a one-standard-deviation rise in per-person Facebook use above the national average corresponded to a 50 per cent increase in the number of attacks on refugees.

At least in part, these may be the tragic human consequences of mechanical algorithms relentlessly acting to exploit a truth they’ve discovered.

What can be done? There’s no reason the advertising-based model needs to remain dominant, especially if we realise the immense damage it’s causing. An alternative would be to give up our free services (Gmail, Facebook, Twitter) and pay for them directly. If social media companies made money from their users, instead of from third parties aiming to prey on those users, they would be more likely to serve users’ needs. Making that happen will take concerted pressure from governments and from users alike, since the companies profit so handsomely from the current setup.

Get rid of the advertising model, Lanier notes, and anyone will still be completely free to pay to see poisonous propaganda. It’s just that no one will be able to pay in secret to have poison directed at someone else. That would make a big difference.
