Google pledges not to use AI for weapons, sur­veil­lance

Manteca Bulletin - Dollars & Sense

MOUNTAIN VIEW (AP) — Google pledged Thurs­day that it will not use ar­ti­fi­cial in­tel­li­gence in ap­pli­ca­tions re­lated to weapons, in sur­veil­lance that vi­o­lates in­ter­na­tional norms, or in ways that con­tra­vene hu­man rights. It planted its eth­i­cal flag on the use of AI just days after con­firm­ing it would not re­new a con­tract with the U.S. mil­i­tary to use its AI tech­nol­ogy to an­a­lyze drone footage.

The prin­ci­ples, spelled out by Google CEO Sundar Pichai in a blog post, com­mit the com­pany to build­ing AI ap­pli­ca­tions that are “so­cially ben­e­fi­cial,” that avoid cre­at­ing or re­in­forc­ing bias and that are ac­count­able to peo­ple.

The search gi­ant had been for­mu­lat­ing a patch­work of poli­cies around these eth­i­cal ques­tions for years, but has now fi­nally put them in writ­ing. Aside from mak­ing the prin­ci­ples pub­lic, Pichai didn’t spec­ify how Google or its par­ent, Al­pha­bet, would be held ac­count­able for con­form­ing to them. He also said Google would con­tinue work­ing with gov­ern­ments and the mil­i­tary on non­com­bat ap­pli­ca­tions in­volv­ing such things as vet­er­ans’ health care and search and res­cue.

“This ap­proach is con­sis­tent with the val­ues laid out in our orig­i­nal founders’ let­ter back in 2004,” Pichai wrote, cit­ing the doc­u­ment in which Larry Page and Sergey Brin set out their vi­sion for the com­pany to “or­ga­nize the world’s in­for­ma­tion and make it uni­ver­sally ac­ces­si­ble and use­ful.”

Pichai said the lat­est prin­ci­ples help the com­pany take a long-term per­spec­tive “even if it means mak­ing short-term trade-offs.”

The doc­u­ment, which also en­shrines “rel­e­vant ex­pla­na­tions” of how AI sys­tems work, lays the ground­work for the roll­out of Du­plex, a hu­man-sound­ing dig­i­tal concierge that was shown off book­ing ap­point­ments with hu­man re­cep­tion­ists at a Google de­vel­op­ers con­fer­ence in May.

Some ethi­cists were con­cerned that call re­cip­i­ents could be duped into think­ing the robot was hu­man. Google has said Du­plex will iden­tify it­self so that wouldn’t hap­pen.

Other com­pa­nies lead­ing the race to de­velop AI are also grap­pling with eth­i­cal is­sues — in­clud­ing Ap­ple, Ama­zon, Face­book, IBM and Mi­crosoft, which have formed a group with Google called the Part­ner­ship on AI.

Mak­ing sure the pub­lic is in­volved in these con­ver­sa­tions is im­por­tant, said Terah Lyons, ex­ec­u­tive di­rec­tor of the part­ner­ship.

At an MIT tech­nol­ogy con­fer­ence on Tues­day, Mi­crosoft Pres­i­dent Brad Smith even wel­comed gov­ern­ment reg­u­la­tion, say­ing some­thing “as fun­da­men­tally im­pact­ful” as AI shouldn’t be left to de­vel­op­ers or the pri­vate sec­tor on its own.
