Arab News

The new dilemma for Google and Facebook

- JOHN LLOYD

Is it possible to curb hate speech while protecting free speech? The tech giants will have to try, or take a hit to their profits.

In a flurry of confident pronouncements within an hour of last week’s massacre at a Las Vegas country music festival, conservative commentators linked the gunman, Stephen Paddock, to liberal or Islamist influences.

Rush Limbaugh, the doyen of right-wing talk radio, credited Daesh with being Paddock’s ideological home, arguing that it was disguised by the liberal media because “for the American left, there is no such thing as militant Islamic terrorism.” Pat Robertson, the socially conservative activist and televangelist, said the shooting stemmed from the news media’s and liberal protesters’ “profound disrespect for our president” and other institutions.

On the other side of the American culture war, a CBS vice president and legal counsel, Hayley Geftman-Gold, said she was “not even sympathetic” to the victims because “country music fans often are Republican gun toters.” Unlike her right-wing opposites, she suffered for her opinion: she was fired.

Should any of these comments be the concern of the state? The general opinion, especially in the US, is that governments should stay out of it. For Washington, the anger or distress such remarks may cause must be endured in deference to the near-absolute right of free speech protected by the First Amendment to the Constitution.

But should we tolerate such verbal brutality? Do people have to suffer distress because of the voiced prejudices of others who often — as Limbaugh does — make a rich living from their display? There’s a growing faction saying no, and it has reached, at least in Europe, the stage of state action. The EU Justice Commissioner, Vera Jourova, has told social media giants such as Facebook and Twitter that they must eliminate both hate speech and fake news, or face legislation criminalizing them for not doing so. That’s a sweeping statement: unpacking what it might mean in practice takes us deep into an area that should be marked with signs saying: “Danger! Free speech in peril!”

Fake news is not the same as hate speech, but it can also be used to inflame social tensions. In Italy, the anti-trust chief Giovanni Pitruzzella has said that EU countries should create government-appointed bodies to remove fake news and even fine the media for violations. But how is fake news to be distinguished, by either artificial or human intelligence, from true news? It’s a delicate operation, since much news striving to be “true” contains false information, and much fake news has the ring of truth and would take careful investigation to disprove.

In Germany, a new law targeting hate speech on digital platforms came into force this month. Called, challengingly, the Netzwerkdurchsetzungsgesetz, NetzDG for short, it commands that Facebook and Twitter take down “blatantly illegal” hate speech within 24 hours or, if the offending material is less obviously illegal, within a week, on pain of a fine of up to 50 million euros. The problem with it, critics claim, is that it is imprecise about what constitutes hate speech. It merely points to the passage in the German Criminal Code that declares the “defamation of religions, religious and ideological associations” illegal. What is defamation? When is one person’s unbearable insult another’s opinion?

Lisa Feldman Barrett, professor of psychology at Northeastern University, argues that “there is a difference between permitting a culture of casual brutality and entertaining an opinion you strongly oppose. The former is a danger to a civil society (and to our health); the latter is the lifeblood of democracy.” Speech of the first kind, which “bullies and torments,” is “from the perspective of our brain cells … literally a form of violence.”

Put that way, it appears obvious: the speech that harms should be criminalized, and in parts of Europe it is being so. Facebook, Twitter and Google are now under increasing state and public pressure to stop hosting material that causes not just distress but, apparently, real damage to the brain. UK Prime Minister Theresa May spoke out at the UN last month, calling on the tech companies to go much farther and faster in combating the dangerous messages they carry.

At a meeting with Google staff in London last week, I was told that the concerns of governments and the public were registered, and reform was on the way.

When I quoted the view of Fiyaz Mughal, head of the anti-extremist British advocacy organization Faith Matters, that tech companies were “not dealing with the problem” because their “bottom line is money,” I was assured this was not so. The default of the communications behemoths to absolutism in free speech has been replaced, it was said, by a finer-grained examination of cause and effect, and of what could reasonably be done to address concerns.

It’s true that to juggle the demands of free speech and security is now one of the largest ethical and practical problems facing democratic states — and the tech corporations. And it’s also true that even if Mughal is right that the companies’ first care is the bottom line — for which corporation is that not true? — the large fines now being prepared for failing to reform would be a powerful incentive to change.

Yet in the course of this complex balancing act, between security and liberty, profit and regulation, there is the danger of substantial damage to the freedoms of speech and the news media which democracies have been able to safeguard for most of the past 70 years.

Liberals have a tricky task ahead, to address two different publics: one alarmed by hate speech and militant messages, the other by measures to stop them. Confusingly, these two publics are sometimes one.

John Lloyd co-founded the Reuters Institute for the Study of Journalism at the University of Oxford, where he is senior research fellow. — Reuters
