U.S. tech giants run into Europe’s oversight
Silicon Valley is a uniquely American creation, the product of an entrepreneurial spirit and no-holds-barred capitalism that now drives many aspects of modern life.
But the likes of Facebook, Google and Apple are facing an uncomfortable truth: it is Europe’s culture of tougher corporate oversight, not America’s laissez-faire attitude, that could soon rule their industry as governments seek to combat fake news and prevent extremists from using the internet to spread hatred.
While the U.S. has largely relied on market forces to regulate content in a country where free speech is revered, European officials have shown they are willing to act. Germany recently passed a law imposing fines of up to $59 million on websites that don’t remove hate speech within 24 hours.
How closely to manage the massive amounts of content on the internet has become a pressing question in the U.S. since it was revealed that Russian agencies took out thousands of ads on social media during the presidential campaign, reaching some 10 million people on Facebook alone. That comes on top of the existing concerns about preventing extremist attacks.
In some ways it goes to a question of identity. Social media companies see themselves not as publishers but as platforms for other people to share information, and have traditionally been cautious about taking down material.
But the pressure is on to act. Facebook, Google, Twitter and YouTube in June created the Global Internet Forum to Combat Terrorism, which says it is committed to developing new content detection technology, helping smaller companies combat extremism and promoting “counter-speech,” content meant to blunt the impact of extremist material.
Proponents of counter-speech argue that rather than trying to take down every Islamic State group post, internet companies and governments should do more to promote content that actively refutes extremist propaganda.
The technology needed to detect and remove dangerous posts simply hasn’t kept up with the threat, experts say. Removing such material still requires judgment, and artificial intelligence is not yet good enough to tell the difference between, for example, an article about the Islamic State group and posts from the group itself.
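To see why this distinction is hard to automate, consider a toy sketch of the simplest possible approach. Everything here is hypothetical and invented for illustration (the watch list, the function, the sample texts): a naive keyword filter flags any text mentioning watched terms, so it cannot separate a news report about the Islamic State group from the group’s own propaganda, since both contain the same words.

```python
# Hypothetical illustration: a naive keyword filter flags both a news
# article and a propaganda post, because both mention the same terms.
# Real systems are far more sophisticated, but the article's point
# stands: the final judgment call still falls to a human.
FLAGGED_TERMS = {"islamic state", "caliphate"}

def naive_flag(text: str) -> bool:
    """Return True if the text mentions any watched term."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

news_article = "Analysts report the Islamic State group lost territory this year."
propaganda = "Join the caliphate and fight for the Islamic State."

print(naive_flag(news_article))  # True -- legitimate journalism, flagged anyway
print(naive_flag(propaganda))    # True -- actual extremist content
```

Both texts trip the filter, which is precisely the gap the experts describe: telling the two apart requires understanding intent and context, not just vocabulary.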
Taking down much of this material still needs human input, said Frank Pasquale, an expert in information law and changing technology at the University of Maryland. Acknowledging that is difficult for companies that were built by pushing the boundaries of technology.
“They don’t like to admit how primitive their technologies are; it defeats their whole narrative that they can save the world,” Pasquale said.
Employing enough people to fill in where their algorithms leave off would be a massive task given the volume of material posted on social media sites every day.
Siva Vaidhyanathan, director of the Center for Media and Citizenship at the University of Virginia, said he believes that moderating content is ultimately impossible because you can’t create a system that works for everyone from Saudi Arabia to Sweden.
“The problem is the very idea of the social media system — it is ungovernable,” he said. “Facebook is designed as if we are nice to each other. And we’re not.”