Arab Times

Google says won’t use AI for weapons

Drones catch boats


SAN FRANCISCO, June 9, (Agencies): Google announced Thursday it would not use artificial intelligence for weapons or to “cause or directly facilitate injury to people,” as it unveiled a set of principles for the technologies.

Chief executive Sundar Pichai, in a blog post outlining the company’s artificial intelligence policies, noted that even though Google won’t use AI for weapons, “we will continue our work with governments and the military in many other areas” such as cybersecurity, training, or search and rescue. The news comes with Google facing an uproar from employees and others over a contract with the US military, which the California tech giant said last week would not be renewed.

Pichai set out seven principles for Google’s application of artificial intelligence, or advanced computing that can simulate intelligent human behavior.

He said Google is using AI “to help people tackle urgent problems” such as predicting wildfires, helping farmers, diagnosing disease or preventing blindness.

“We recognize that such powerful technology raises equally powerful questions about its use,” Pichai said in the blog.

“How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right.”

The chief executive said Google’s AI programs would be designed for applications that are “socially beneficial” and “avoid creating or reinforcing unfair bias.”

He said the principles also called for AI applications to be “built and tested for safety,” to be “accountable to people” and to “incorporate privacy design principles.”

Google will avoid the use of any technologies “that cause or are likely to cause overall harm,” Pichai wrote.

That means steering clear of “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people” and systems “that gather or use information for surveillance violating internationally accepted norms.”


Google also will ban the use of any technologies “whose purpose contravenes widely accepted principles of international law and human rights,” Pichai said.

Some initial reaction to the announcement was positive.

The Electronic Frontier Foundation, which had led opposition to Google’s Project Maven contract with the Pentagon, called the news “a big win for ethical AI principles.”

“Congratulations to the Googlers and others who have worked hard to persuade the company to cancel its work on Project Maven,” EFF said on Twitter.

Ryan Calo, a University of Washington law professor and fellow at the Stanford Center for Internet & Society, tweeted, “Google’s AI ethics principles owe more to (English philosopher Jeremy) Bentham and the positivists than (German philosopher) Kant. Nevertheless, a good start.”

Calo added, “The clear statement that they won’t facilitate violence or totalitarian surveillance is meaningful.”

The move comes amid growing concerns that automated or robotic systems could be misused and spin out of control, leading to chaos. At the same time, Google has faced criticism that it has drifted away from its original founders’ motto of “don’t be evil.”

Several technology firms have already agreed to the general principles of using artificial intelligence for good, but Google appeared to offer a more precise set of standards.

The company, already a member of the Partnership on Artificial Intelligence, which includes dozens of tech firms committed to AI principles, had faced criticism for the contract with the Pentagon on Project Maven, which uses machine learning and engineering talent to distinguish people and objects in drone videos.

Faced with a petition signed by thousands of employees and criticism from outside the company, Google indicated the $10 million contract would not be renewed, according to media reports.

But Google is believed to be competing against other tech giants such as Amazon and Microsoft for lucrative “cloud computing” contracts with the US government, including for military and intelligen­ce agencies.

LONDON: Drones guided by artificial intelligence to catch boats netting fish where they shouldn’t were among the winners of a marine protection award on Friday and could soon be deployed to fight illegal fishing, organisers said.

The award-winning project aims to help authorities hunt down illegal fishing boats using drones fitted with cameras that can monitor large swathes of water autonomously.

Illegal fishing and overfishing deplete fish stocks worldwide, causing billions of dollars in losses a year and threatening the livelihoods of rural coastal communities, according to the United Nations.

The National Geographic Society awarded the project, co-developed by Morocco-based company ATLAN Space, and two other innovations $150,000 each to implement their plans as it marked World Oceans Day on Friday.

The aircraft can cover a range of up to 700 km (435 miles) and use artificial intelligence (AI) to guide themselves in search of fishing vessels, said ATLAN Space’s founder, Badr Idrissi.

“Once (the drone) detects something, it goes there and identifies what it’s seeing,” Idrissi told the Thomson Reuters Foundation by phone.

Idrissi said the technology, which is to be piloted in the Seychelles later this year, was more effective than traditional sea patrols and allowed coastguards to save money and time.

From satellites tracking trawlers on the high seas to computer algorithms identifying illegal behaviours, new technologies are increasingly coming to the aid of coastguards worldwide.

