The Palm Beach Post

Pentagon contract creates crisis for Google

- Scott Shane, Cade Metz and Daisuke Wakabayashi

WASHINGTON — Fei-Fei Li is among the brightest stars in the burgeoning field of artificial intelligence, somehow managing to hold down two demanding jobs simultaneously: head of Stanford University’s AI lab and chief scientist for AI at Google Cloud, one of the search giant’s most promising enterprises.

Yet last September, when nervous company officials discussed how to speak publicly about Google’s first major AI contract with the Pentagon, Li strongly advised shunning those two potent letters.

“Avoid at ALL COSTS any mention or implication of AI,” she wrote in an email to colleagues reviewed by The New York Times. “Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google.”

Li’s concern about the implications of military contracts for Google has proved prescient. The company’s relationship with the Defense Department since it won a share of the contract for the Maven program, which uses artificial intelligence to interpret video images and could be used to improve the targeting of drone strikes, has touched off an existential crisis, according to emails and documents reviewed by The Times as well as interviews with about a dozen current and former Google employees.

It has fractured Google’s workforce, fueled heated staff meetings and internal exchanges, and prompted some employees to resign. The dispute has caused grief for some senior Google officials, including Li, as they try to straddle the gap between scientists with deep moral objections and salespeople salivating over defense contracts.

The advertising model behind Google’s spectacular growth has provoked criticism that it invades web users’ privacy and supports dubious websites, including those peddling false news. Now the company’s path to future growth, via cloud-computing services, has divided the company over its stand on weaponry. To proceed with big defense contracts could drive away brainy experts in artificial intelligence; to reject such work would deprive it of a potentially huge business.

The internal debate over Maven, viewed by both supporters and opponents as opening the door to much bigger defense contracts, generated a petition signed by about 4,000 employees who demanded “a clear policy stating that neither Google nor its contractors will ever build warfare technology.”

Executives at DeepMind, an AI pioneer based in London that Google acquired in 2014, have said they are completely opposed to military and surveillance work, and employees at the lab have protested the contract. The acquisition agreement between the two companies said DeepMind technology would never be used for military or surveillance purposes.

About a dozen Google employees have resigned over the issue, which was first reported by Gizmodo. One departing engineer petitioned to rename a conference room after Clara Immerwahr, a German chemist who killed herself in 1915 after protesting the use of science in warfare. And “Do the Right Thing” stickers have appeared in Google’s New York City offices, according to company emails viewed by The Times.

Those emails and other internal documents, shared by an employee who opposes Pentagon contracts, show that at least some Google executives anticipated the dissent and negative publicity. But other employees, noting that rivals like Microsoft and Amazon were enthusiastically pursuing lucrative Pentagon work, concluded that such projects were crucial to the company’s growth and nothing to be ashamed of.

Many tech companies have sought military business without roiling their workforces. But Google’s roots and self-image are different.

“We have kind of a mantra of ‘don’t be evil,’ which is to do the best things that we know how for our users, for our customers and for everyone,” Larry Page told Peter Jennings in 2004, when ABC News named Page and his Google co-founder, Sergey Brin, “People of the Year.”

The clash inside Google was sparked by the possibility that the Maven work might be used for lethal drone targeting. And the discussion is made more urgent by the fact that artificial intelligence, one of Google’s strengths, is expected to play an increasingly central role in warfare.

Jim Mattis, the defense secretary, made a much-publicized visit to Google in August — shortly after stopping in at Amazon — and called for closer cooperation with tech companies.

“I see many of the greatest advances out here on the West Coast in private industry,” he said.

Li’s comments were part of an email exchange started by Scott Frohman, Google’s head of defense and intelligence sales. Under the header “Communications/PR Request — URGENT,” Frohman noted that the Maven contract award was imminent and asked for direction on the “burning question” of how to present it to the public.

A number of colleagues weighed in, but generally they deferred to Li, who was born in China, immigrated to New Jersey with her parents as a 16-year-old who spoke no English, and has since climbed to the top of the tech world.

Li said in the email that the final decision would be made by her boss, Diane Greene, chief executive of Google Cloud. But Li thought the company should publicize its share of the Maven contract as “a big win for GCP,” Google Cloud Platform.

She also advised being “super careful” in framing the project, noting that she had been speaking publicly on the theme of “Humanistic AI,” a topic she would address in a March op-ed for The Times.

“I don’t know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry,” she wrote in the email.

Asked about her September email, Li issued a statement: “I believe in human-centered AI to benefit people in positive and benevolent ways. It is deeply against my principles to work on any project that I think is to weaponize AI.”

As it turned out, the company did not publicize Maven. The company’s work as a subcontractor came to public attention only when employees opposed to it began protesting on Google’s robust internal communications platforms.

The company promised employees it would produce a set of principles to guide its choices in the ethical minefield of defense and intelligence contracting. Google told The Times on Tuesday that the new artificial intelligence principles under development precluded the use of AI in weaponry. But it was unclear how such a prohibition would be applied in practice.

At a companywide meeting last Thursday, Sundar Pichai, the chief executive, said Google wanted to come up with guidelines that “stood the test of time,” employees said. Employees say they expect the principles to be announced inside Google in the next few weeks.

The polarized debate about Google and the military may leave out some nuances. Better analysis of drone imagery could reduce civilian casualties by improving operators’ ability to find and recognize terrorists. The Defense Department will hardly abandon its advance into artificial intelligen­ce if Google bows out. And military experts say China and other developed countries are already investing heavily in AI for defense.

But skilled technologists who chose Google for its embrace of benign and altruistic goals are appalled that their employer could eventually be associated with more efficient ways to kill.

Google’s unusual culture is reflected in its company message boards and internal social media platforms, which encourage employees to speak out on everything from Google’s cafeteria food to its diversity initiatives. But even within this free-expression workplace, longtime employees said, the Maven project has roiled Google beyond anything in recent memory.

When news of the deal leaked out internally, Greene spoke at the weekly companywide TGIF meeting. She explained that the system was not for lethal purposes and that it was a relatively small deal worth “only” $9 million, according to two people familiar with the meeting.

That did little to tamp down the anger, and Google, according to the invitation email, decided to hold a discussion on April 11 representing a “spectrum of viewpoints” involving Greene; Meredith Whittaker, a Google AI researcher who is a leader in the anti-Maven movement; and Vint Cerf, a Google vice president who is considered one of the fathers of the internet for his pioneering technology work at the Defense Department.

Because there was so much interest, the group debated the topic three times over one day for Google employees watching on video in different regions around the world.

According to employees who watched the discussion, Greene held firm that Maven was not using AI for offensive purposes, while Whittaker argued that it was hard to draw a line on how the technology would be used.

Last Thursday, Brin, the company’s co-founder, responded to a question at a companywide meeting about Google’s work on Maven. According to two Google employees, Brin said he understood the controversy and had discussed the matter extensively with Page and Pichai. However, he said he thought that it was better for peace if the world’s militaries were intertwined with international organizations like Google rather than working solely with nationalistic defense contractors.

Google and its parent company, Alphabet, employ many of the world’s top AI researchers. Some researchers work inside an AI lab called Google Brain in Mountain View, California, and others are spread across separate groups, including the cloud computing business overseen by Greene, who is also an Alphabet board member.

Many of these researchers have recently arrived from the world of academia, and some retain professorships. They include Geoff Hinton, a Briton who helps oversee the Brain lab in Toronto and has been open about his reluctance to work for the U.S. government. In the late 1980s, Hinton left the United States for Canada in part because he was reluctant to take funding from the Department of Defense.

Jeff Dean, one of Google’s longest-serving and most revered employees, who now oversees all AI work at the company, said at a conference for developers this month that he had signed a letter opposing the use of machine learning for autonomous weapons, which would identify targets and fire without a human pulling the trigger.

DeepMind, the London AI lab, is widely considered to be the most important collection of AI talent in the world. It now operates as a separate Alphabet company, though the lines between Google and DeepMind are blurred.

DeepMind’s founders have long warned about the dangers of AI systems. At least one of the lab’s founders, Mustafa Suleyman, has been involved in policy discussions involving Project Maven with the Google leadership, including Pichai, according to a person familiar with the discussions.

Certainly, any chance that Google could move quietly into defense work with no public attention is gone. Nor has Li’s hope to keep AI out of the debate proved realistic.

“We can steer the conversation about cloud,” Aileen Black, a Google executive in Washington, cautioned Li in the September exchange, “but this is an AI specific award.” She added, “I think we need to get ahead of this before it gets framed for us.”

MINH UONG / NEW YORK TIMES
Google’s relationship with the Defense Department since it won a share of the contract for the Maven program — which could be used to improve the targeting of drone strikes — has fractured Google’s workforce, fueled heated staff meetings and internal exchanges, and prompted some employees to resign.
