Kuwait Times

US tech giants may find their future shaped by Europe


LONDON: Silicon Valley is a uniquely American creation, the product of an entrepreneurial spirit and no-holds-barred capitalism that now drives many aspects of modern life.

But the likes of Facebook, Google and Apple are increasingly facing an uncomfortable truth: it is Europe’s culture of tougher oversight of companies, not America’s laissez-faire attitude, that could soon rule their industry as governments seek to combat fake news and prevent extremists from using the internet to fan the flames of hatred.

While the US has largely relied on market forces to regulate content in a country where free speech is revered, European officials have shown they are willing to act. Germany recently passed a law imposing fines of up to 50 million euros ($59 million) on websites that don’t remove hate speech within 24 hours. British Prime Minister Theresa May wants companies to take down extremist material within two hours. And across the EU, Google has for years been obliged to remove search results if there is a legitimate complaint about the content’s veracity or relevance.

“I anticipate the EU will be where many of these issues get played out,” said Sarah T. Roberts, a professor of information studies at UCLA who has studied efforts to monitor and vet internet content. Objectionable content “is the biggest problem going forward. It’s no longer acceptable for the firms to say that they can’t do anything about it.”

How closely to manage the massive amounts of content on the internet has become a pressing question in the US since it was revealed that Russian agencies took out thousands of ads on social media during the presidential campaign, reaching some 10 million people on Facebook alone. That comes on top of existing concerns about preventing extremist attacks. This month, three men were arrested after allegedly using smartphone messaging apps to plot attacks on the New York City subway and Times Square from their homes in Canada, Pakistan and the Philippines. The plot was thwarted by an undercover officer, not technology.

In some ways it goes to a question of identity. Social media companies see themselves not as publishers but as platforms for other people to share information, and have traditionally been cautious about taking down material.

Global forum

But the pressure is on to act. Facebook, Google, Twitter and YouTube in June created the Global Internet Forum to Combat Terrorism, which says it is committed to developing new content detection technology, helping smaller companies combat extremism and promoting “counter-speech,” content meant to blunt the impact of extremist material. Proponents of counter-speech argue that rather than trying to take down every Islamic State group post, internet companies and governments should do more to promote content that actively refutes extremist propaganda. This approach will unmask the extremist message of hate and violence in the “marketplace of ideas,” they argue, though critics see it as just another form of propaganda.

Facebook has recently published details of its counterterrorism strategy for the first time. These include using artificial intelligence to prevent extremist images and videos from being uploaded and algorithms to find and disable accounts linked to pages known to support extremist movements. The company also plans to increase the staff dedicated to reviewing complaints of objectionable material by more than 60 percent, to some 8,000 worldwide. “We want Facebook to be a hostile place for terrorists,” Monika Bickert, director of global policy management, and Brian Fishman, counterterrorism policy manager, said in a statement. “The challenge for online communities is the same as it is for real world communities - to get better at spotting the early signals before it’s too late.”

But Roberts argues the companies have been slow to react and are now playing catch-up. The technology needed to detect and remove dangerous posts hasn’t kept up with the threat, experts say. Removing such material still requires judgment, and artificial intelligence is not yet good enough to tell the difference, for example, between an article about the so-called Islamic State and posts from the group itself.

In other words, taking down much of this material still needs human input, said Frank Pasquale, an expert in information law and changing technology at the University of Maryland. Acknowledging that is difficult for companies that were built by pushing the boundaries of technology.

“They don’t like to admit how primitive their technologies are; it defeats their whole narrative that they can save the world,” Pasquale said. “You kill off the golden goose if you cast doubt over the power of their algorithms.” — AP

Photo caption: In this file photo, the Facebook logo is displayed on an iPad in Philadelphia. — AP
