Google worker rebellion against military project grows

Daily Sabah (Turkey) - Business

An internal petition calling for Google to stay out of “the business of war” was gaining support Tuesday, with some workers reportedly quitting to protest a collaboration with the U.S. military.

About 4,000 Google employees were said to have signed a petition that began circulating about three months ago urging the internet giant to refrain from using artificial intelligence to make U.S. military drones better at recognizing what they are monitoring.

Tech news website Gizmodo reported this week that about a dozen Google employees are quitting in an ethical stand.

The California-based company did not immediately respond to inquiries about what was referred to as Project Maven, which reportedly uses machine learning and engineering talent to distinguish people and objects in drone videos for the Defense Department.

“We believe that Google should not be in the business of war,” the petition reads, according to copies posted online.

“Therefore, we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

The Electronic Frontier Foundation (EFF), an internet rights group, and the International Committee for Robot Arms Control (ICRAC) were among those weighing in with support.

While reports indicated that artificial intelligence findings would be reviewed by human analysts, the technology could pave the way for automated targeting systems on armed drones, ICRAC reasoned in an open letter of support to Google employees against the project.

“As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems,” ICRAC said in the letter.

“We are then just a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control.”

Google has gone on the record saying that its work to improve machines’ ability to recognize objects is not for offensive uses, but published documents show a “murkier” picture, the EFF’s Cindy Cohn and Peter Eckersley said in an online post last month.

“If our reading of the public record is correct, systems that Google is supporting or building would flag people or objects seen by drones for human review, and in some cases this would lead to subsequent missile strikes on those people or objects,” said Cohn and Eckersley.

“Those are hefty ethical stakes, even with humans in the loop further along the ‘kill chain.’”

The EFF and others welcomed internal Google debate, stressing the need for moral and ethical frameworks regarding the use of artificial intelligence in weaponry.

“The use of AI in weapons systems is a crucially important topic and one that deserves an international public discussion and likely some international agreements to ensure global safety,” Cohn and Eckersley said.

“Companies like Google, as well as their counterparts around the world, must consider the consequences and demand real accountability and standards of behavior from the military agencies that seek their expertise, and from themselves.”
