The Star Malaysia - Star2

Google worker rebellion against military project grows


AN INTERNAL petition calling for Google to stay out of “the business of war” is gaining support, with some workers reportedly quitting to protest a collaboration with the US military.

About 4,000 Google employees were said to have signed a petition that began circulating about three months ago urging the Internet giant to refrain from using artificial intelligence to make US military drones better at recognising what they are monitoring.

Tech news website Gizmodo reported that about a dozen Google employees are quitting in an ethical stand.

The California-based company did not immediately respond to inquiries about what was referred to as Project Maven, which reportedly uses machine learning and engineering talent to distinguish people and objects in drone videos for the US Defence Department.

“We believe that Google should not be in the business of war,” the petition reads, according to copies posted online.

“Therefore, we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

Killer drones

The Electronic Frontier Foundation (EFF), an Internet rights group, and the International Committee for Robot Arms Control (ICRAC) were among those that have weighed in with support.

While reports indicated that artificial intelligence findings would be reviewed by human analysts, the technology could pave the way for automated targeting systems on armed drones, ICRAC reasoned in an open letter of support to Google employees against the project.

“As military commanders come to see the object recognition algorithms as reliable, it will be tempting to attenuate or even remove human review and oversight for these systems,” ICRAC said in the letter.

“We are then just a short step away from authorising autonomous drones to kill automatically, without human supervision or meaningful human control.”

Google has gone on the record saying that its work to improve machines’ ability to recognise objects is not for offensive uses, but published documents show a “murkier” picture, the EFF’s Cindy Cohn and Peter Eckersley said in an online post last month.

“If our reading of the public record is correct, systems that Google is supporting or building would flag people or objects seen by drones for human review, and in some cases this would lead to subsequent missile strikes on those people or objects,” said Cohn and Eckersley.

“Those are hefty ethical stakes, even with humans in the loop further along the ‘kill chain’.”

The EFF and others welcomed internal Google debate, stressing the need for moral and ethical frameworks regarding the use of artificial intelligence in weaponry.

“The use of AI in weapons systems is a crucially important topic and one that deserves an international public discussion and likely some international agreements to ensure global safety,” Cohn and Eckersley said.

“Companies like Google, as well as their counterparts around the world, must consider the consequences and demand real accountability and standards of behaviour from the military agencies that seek their expertise – and from themselves.”

Google will be using its AI to make US military drones better at distinguishing people and objects. — Filepic
