The Mercury News

Our AI won’t be used for harm

Pichai unveils principles similar to Isaac Asimov’s

By Levi Sumagaysay lsumagaysay@bayareanewsgroup.com

After thousands of Google employees protested a company contract with the Pentagon, CEO Sundar Pichai has unveiled principles declaring that the company will not develop or pursue harmful uses of artificial intelligence.

Pichai said Google will continue to work with governments and militaries but will not “design or deploy AI” for use in weapons or surveillance.

“We recognize that such powerful technology raises equally powerful questions about its use,” he said in a blog post Thursday. “How AI is developed and used will have a significant impact on society for many years to come.”

Last month, some of the company’s employees resigned over Project Maven, a Google contract with the Pentagon that involves drone analysis, Gizmodo reported. That led the company to back away from the project, with Google Cloud CEO Diane Greene reportedly telling employees the company would not renew the contract after it expires next year. The contract could eventually have been worth up to $250 million a year, according to the Intercept, which saw internal emails.

In April, more than 3,000 Google employees wrote a letter to Pichai, the New York Times reported. The letter began with: “We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractor­s will ever build warfare technology.”

Now, Pichai has outlined seven principles for Google and AI:

• Be socially beneficial.

• Avoid creating or reinforcing unfair bias.

• Be built and tested for safety.

• Be accountable to people.

• Incorporate privacy design principles.

• Uphold high standards of scientific excellence.

• Be made available for uses that accord with these principles.

The principles might bring to mind sci-fi legend Isaac Asimov’s “Three Laws of Robotics,” which boil down to this: robots shouldn’t harm humans; they should protect them.

But can Google realistica­lly stick to its now-public principles?

Irina Raicu, a director at the Markkula Center for Applied Ethics at Santa Clara University, pointed out that Pichai also said, “Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications.”

“In other words, the company acknowledges that some AI developed for one purpose may in fact be re-purposed in unintended ways, even by the military,” she said Friday. “This is the reality faced by any developers of what are usually called dual-use technologies. A related question, then, is what is the ongoing responsibility of a technology’s developer once its products are released into the world.”

