San Francisco Chronicle

Pentagon asks for tech firms’ AI assistance

By Cade Metz

There is little doubt that the Defense Department needs help from Silicon Valley’s biggest companies as it pursues work on artificial intelligence. The question is whether the people who work at those companies are willing to cooperate.

Robert Work, a former deputy secretary of defense, announced last week that he is teaming up with the Center for a New American Security, an influential Washington think tank that specializes in national security, to create a task force of former government officials, academics and representatives from private industry. Their goal is to explore how the federal government should embrace AI technology and work better with big tech companies and other organizations.

There is a growing sense of urgency to the question of what the United States is doing in artificial intelligence. China has vowed to become the world’s leader in AI by 2030, committing billions of dollars to the effort. Like many other officials from government and industry, Work believes the United States risks falling behind.

“The question is how should the United States respond to this challenge?” he said. “This is a Sputnik moment.”

The military and intelligence communities have long played a big role in the technology industry and had close ties with many of Silicon Valley’s early tech giants. David Packard, Hewlett-Packard’s co-founder, even served as the deputy secretary of defense under President Richard Nixon.

But those relations have soured in recent years — at least with the rank and file of some better-known companies. In 2013, documents leaked by former defense contractor Edward Snowden revealed the breadth of spying on Americans by intelligence services, including monitoring the users of several large internet companies.

Two years ago, that antagonism grew worse after the FBI demanded that Apple create special software to help it gain access to a locked iPhone that had belonged to a gunman involved in a mass shooting in San Bernardino.

“In the wake of Edward Snowden, there has been a lot of concern over what it would mean for Silicon Valley companies to work with the national security community,” said Gregory Allen, an adjunct fellow with the Center for a New American Security. “These companies are — understandably — very cautious about these relationships.”

The Pentagon needs help on AI from Silicon Valley because that’s where the talent is. The tech industry’s biggest companies have been hoarding AI expertise, sometimes offering multimillion-dollar pay packages that the government could never hope to match.

Work was the driving force behind the creation of Project Maven, the Defense Department’s sweeping effort to embrace artificial intelligence. His new task force will include Terah Lyons, executive director of the Partnership on AI, an industry group that includes many of Silicon Valley’s biggest companies.

Work will lead the 18-member task force with Andrew Moore, the dean of computer science at Carnegie Mellon University. Moore has warned that too much of the country’s computer science talent is going to work at America’s largest Internet companies.

With tech companies gobbling up all that talent, who will train the next generation of AI experts? Who will lead government efforts?

“Even if the U.S. does have the best AI companies, it is not clear they are going to be involved in national security in a substantive way,” Allen said.

Google illustrates the challenges that big Internet companies face in working more closely with the Pentagon. Google’s former executive chairman, Eric Schmidt, who is still a member of the board of directors of its parent company, Alphabet, also leads the Defense Innovation Board, a federal advisory committee that recommends closer collaboration with industry on AI technologies.

This month two news outlets revealed that the Defense Department had been working with Google in developing AI technology that can analyze aerial footage captured by drones. The effort was part of Project Maven, led by Work. Some employees were angered that the company was contributing to military work.

Google runs two of the best AI research labs in the world — Google Brain in California and DeepMind in London.

Top researchers inside both Google AI labs have expressed concern over the use of AI by the military. When Google acquired DeepMind, the company agreed to set up an internal board that would help ensure that the lab’s technology was used in an ethical way. And one of the lab’s founders, Demis Hassabis, has explicitly said its AI would not be used for military purposes.

Google acknowledged in a statement that the military use of AI “raises valid concerns” and said it is working on policies around the use of its machine learning technologies.

Among AI researchers and other technologists, there is widespread fear that today’s machine learning techniques could put too much power in dangerous hands. A recent report from prominent labs and think tanks in both the United States and Britain detailed the risks, including issues with weapons and surveillance equipment.

Google said it was working with the Defense Department to build technology for “nonoffensive uses only.” And Work said the government explored many technologies that did not involve “lethal force.” But it is unclear where Google and other top internet companies will draw the line.

“This is a conversation we have to have,” Work said.

Photo: Paul Chinn / The Chronicle 2017. Eric Schmidt leads the Defense Innovation Board, which works with industry on artificial intelligence.
