GOOGLE PLEDGES NOT TO USE A.I. FOR WEAPONS, SURVEILLANCE PURPOSES
SAN FRANCISCO — Google is pledging that it will not use artificial intelligence in applications related to weapons, in surveillance that violates international norms, or in ways that go against human rights. It planted its ethical flag on the use of AI just days after confirming it would not renew a contract with the U.S. military to use its AI technology to analyze drone footage.
The principles, spelled out by Google CEO Sundar Pichai in a blog post Thursday, commit the company to building AI applications that are “socially beneficial,” that avoid creating or reinforcing bias and that are accountable to people.
The search giant had been formulating a patchwork of policies around these ethical questions for years, but finally put them in writing. Aside from making the principles public, Pichai didn’t specify how Google or its parent Alphabet would be accountable for conforming to them. He also said Google would continue working with governments and the military on noncombat applications involving such things as veterans’ health care and search and rescue.
“This approach is consistent with the values laid out in our original founders’ letter back in 2004,” Pichai wrote, citing the document in which Larry Page and Sergey Brin set out their vision for the company to “organize the world’s information and make it universally accessible and useful.”
Pichai said the latest principles help the company take a long-term perspective “even if it means making short-term trade-offs.”
The document, which also enshrines “relevant explanations” of how AI systems work, lays the groundwork for the rollout of Duplex, a human-sounding digital concierge that was shown off booking appointments with human receptionists at a Google developers conference in May.
Some ethicists were concerned that call recipients could be duped into thinking the robot was human. Google has said Duplex will identify itself so that wouldn’t happen.
Other companies leading the race to develop AI are also grappling with ethical issues — including Apple, Amazon, Facebook, IBM and Microsoft, which have formed a group with Google called the Partnership on AI.
Making sure the public is involved in the conversations is important, said Terah Lyons, director of the partnership.
At an MIT technology conference on Tuesday, Microsoft President Brad Smith even welcomed government regulation, saying something “as fundamentally impactful” as AI shouldn’t be left to developers or the private sector on its own.