Tech CEOs dodge issues by warning audiences about AI
LONDON/BRUSSELS: Technology’s most influential leaders have a new message: It’s not us you need to worry about—it’s artificial intelligence (AI).
Two years ago big tech embarked on a repentance tour to Davos in response to criticism about the companies’ role in issues such as election interference by Russia-backed groups; spreading misinformation; the distribution of extremist content; antitrust violations; and tax avoidance. Uber Technologies Inc.’s new chief even asked to be regulated.
These problems haven’t gone away—last year tech’s issues were overshadowed by the world’s—but this time executives warned audiences that it is artificial intelligence that must be regulated, rather than the companies themselves.
“AI is one of the most profound things we’re working on as humanity. It’s more profound than fire or electricity,” Alphabet Inc. chief executive officer Sundar Pichai said in an interview at the World Economic Forum in Switzerland on Wednesday.
Comparing it to international discussions on climate change, he said, “You can’t get safety by having one country or a set of countries working on it. You need a global framework.”
The call for standardized rules on artificial intelligence was echoed by Microsoft Corp. CEO Satya Nadella and IBM CEO Ginni Rometty.
“I think the U.S. and China and the EU having a set of principles that governs what this technology can mean in our societies and the world at large is more in need than it was over the last 30 years,” Nadella said.
It’s an easy argument to make. Letting companies dictate their own ethics around artificial intelligence has led to employee protests. Google notably decided in 2018, after a backlash, to withdraw from Project Maven, a secret government program that used the technology to analyze images from military drones. Researchers agree.
“We should not put companies in a position of having to decide between ethical principles and bottom line,” said Stefan Heumann, co-director of think tank Stiftung Neue Verantwortung in Berlin. “Instead our political institutions need to set and enforce the rules regarding artificial intelligence.”
The current wave of artificial intelligence angst is also timely. In a few weeks the EU is set to unveil its plans to legislate the technology, which could include new legally binding requirements for AI developers in “high-risk sectors,” such as health care and transport, according to an early draft obtained by Bloomberg. The new rules could require companies to be transparent about how they build their systems.
Warning the business elite about the dangers of AI has meant little time has been spent at Davos on recurring problems, notably a series of revelations about how much privacy users are sacrificing to use tech products. Amazon.com Inc. workers were found to be listening in to people’s conversations via their Alexa digital assistants, Bloomberg reported last year, leading EU regulators to look at more ways to police the technology.
In July, Facebook Inc. agreed to pay US regulators $5 billion to resolve the Cambridge Analytica data scandal.
And in September Google’s YouTube settled claims that it violated US rules banning data collection on children under 13.