Our AI won’t be used for harm
Pichai unveils principles similar to Isaac Asimov’s
After thousands of Google employees protested a company contract with the Pentagon, CEO Sundar Pichai has unveiled principles declaring that the company will not develop or pursue harmful uses of artificial intelligence.
Pichai said Google will continue to work with governments and militaries but will not “design or deploy AI” for use in weapons or surveillance.
“We recognize that such powerful technology raises equally powerful questions about its use,” he said in a blog post Thursday. “How AI is developed and used will have a significant impact on society for many years to come.”
Last month, some of the company’s employees resigned over Project Maven, a Google contract with the Pentagon that involves drone analysis, Gizmodo reported. That led the company to back off from the project, with Google Cloud CEO Diane Greene reportedly telling employees the company would not renew that contract after it expires next year. The contract could eventually have been worth up to $250 million a year, according to the Intercept, which saw internal emails.
In April, more than 3,000 Google employees wrote a letter to Pichai, the New York Times reported. The letter began with: “We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”
Now, Pichai has outlined seven principles for Google and AI:
• Be socially beneficial.
• Avoid creating or reinforcing unfair bias.
• Be built and tested for safety.
• Be accountable to people.
• Incorporate privacy design principles.
• Uphold high standards of scientific excellence.
• Be made available for uses that accord with these principles.
The principles might bring to mind sci-fi legend Isaac Asimov’s “Three Laws of Robotics,” which boil down to the idea that robots should not harm humans and should protect them.
But can Google realistically stick to its now-public principles?
Irina Raicu, director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University, pointed out that Pichai also said, “Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications.”
“In other words, the company acknowledges that some AI developed for one purpose may in fact be re-purposed in unintended ways, even by the military,” she said Friday. “This is the reality faced by any developers of what are usually called dual-use technologies. A related question, then, is what is the ongoing responsibility of a technology’s developer once its products are released into the world.”