Fast Company

Google on the Brain

The tech behemoth’s world-shaking quest for AI supremacy.


The human brain is a funny thing. Certain memories can stick with us forever: the birth of a child, a car crash, an election day. But we only store some details—the color of the hospital delivery room or the smell of the polling station—while others fade, such as the face of the nurse when that child was born, or what we were wearing during that accident. For Google CEO Sundar Pichai, the day he watched AI rise out of a lab is one he’ll remember forever.

“This was 2012, in a room with a small team, and there were just a few of us,” he tells me. An engineer named Jeff Dean, a legendary programmer at Google who helped build its search engine, had been working on a new project and wanted Pichai to have a look. “Anytime Jeff wants to update you on something, you just get excited by it,” he says.

Pichai doesn’t recall exactly which building he was in when Dean presented his work, though odd details of that day have stuck with him. He remembers standing, rather than sitting, and someone joking about an HR snafu that had designated the newly hired Geoffrey Hinton—the “Father of Deep Learning,” an AI researcher for four decades, and, later, a Turing Award winner—as an intern.

The future CEO of Google was an SVP at the time, running Chrome and Apps, and he hadn’t been thinking about AI. No one at Google was, really, not in a significant way. Yes, Google cofounders Larry Page and Sergey Brin had stated publicly 12 years prior that artificial intelligence would transform the company: “The ideal search engine is smart,” Page told Online magazine in May 2000. “It has to understand your query, and it has to understand all the documents, and that’s clearly AI.” But at Google and elsewhere, machine learning had been delivering meager results for decades, despite grand promises.

Now, though, powerful forces were stirring inside Google’s servers. For a little more than a year, Dean, Andrew Ng, and their colleagues had been building a massive network of interconnected computers, linked together in ways modeled on the human brain. The team had engineered 16,000 processors in 1,000 computers, which—combined—were capable of making 1 billion connections. This was unprecedented for a computer system, though still far from a human brain’s capacity of more than 100 trillion connections.

To test how this massive neural net processed data, the engineers had run a deceptively simple experiment. For three days straight, they had fed the machine a diet of millions of random images from videos on YouTube, which Google had acquired in 2006. They gave it no other instructions, waiting to see what it would do if left on its own. What they learned was that a computer brain bingeing on YouTube is not so different from a human’s. In a remote part of the computer’s memory, Dean and his peers discovered that it had spontaneously generated a blurry, overpixelated image of one thing it had seen repeatedly over the course of 72 hours: a cat.

This was a machine teaching itself to think.
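
For the technically curious: what the team ran is known today as unsupervised learning. The network was given no labels, only raw frames, and left to find structure on its own. Below is a minimal sketch of that idea in modern TensorFlow, a tiny autoencoder that learns to compress and reconstruct unlabeled images. The team’s actual system was vastly larger and built on different infrastructure, so treat this purely as an illustration of the principle, not a reconstruction of their code.

```python
import tensorflow as tf

# A toy autoencoder: it squeezes each unlabeled frame through a small
# hidden layer and tries to reconstruct it. Nothing tells the network
# what a "cat" is; recurring patterns simply become features it must
# learn in order to rebuild what it sees.
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(64, 64, 3)),
    tf.keras.layers.Dense(256, activation="relu"),        # learned features
    tf.keras.layers.Dense(64 * 64 * 3, activation="sigmoid"),
    tf.keras.layers.Reshape((64, 64, 3)),
])
autoencoder.compile(optimizer="adam", loss="mse")

# Stand-in for millions of real video stills, scaled to [0, 1].
frames = tf.random.uniform((128, 64, 64, 3))

# The input is also the target -- that is what makes it unsupervised.
autoencoder.fit(frames, frames, epochs=3)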

The day he watched this kind of intelligence emerge from Google’s servers for the first time, Pichai remembers feeling a shift in his thinking, a sense of premonition. “This thing was going to scale up and maybe reveal the way the universe works,” he says. “This will be the most important thing we work on as humanity.”

The rise of AI inside Google resembles a journey billions of us are on collectively, hurtling into a digital future that few of us fully understand—and that we can’t opt out of. One dominated in large part by Google. Few other companies (let alone governments) on the planet have the ability or ambition to advance computerized thought. Google operates more products with at least 1 billion users each than any other tech company on earth: Android, Chrome, Drive, Gmail, the Google Play Store, Maps, Photos, Search, and YouTube. Unless you live in China, if you have an internet connection, you almost certainly rely on Google to augment some parts of your brain.

Shortly after Pichai took over as CEO, in 2015, he set out to remake Google as an “AI first” company. It already had several research-oriented AI divisions, including Google Brain and DeepMind (which it acquired in 2014), and Pichai focused on turning all that intelligence about intelligence into new and better Google products. Gmail’s Smart Compose, introduced in May 2018, is already suggesting more than 2 billion characters in email drafts each week. Google Translate can re-create your own voice in a language you don’t speak. And Duplex, Google’s AI-powered personal assistant, can book appointments or reservations for you by phone, using a voice that sounds so human that recipients of the calls didn’t realize they were talking to a robot—until public complaints forced the company to have Duplex identify itself.

The full reach of Google’s AI influence stretches far beyond the company’s own offerings. Outside developers—at startups and big corporations alike—now use Google’s AI tools for everything from training smart satellites to monitor changes to the earth’s surface to rooting out abusive language on Twitter (well, it’s trying). Millions of devices now use Google AI, and this is just the beginning. Google is also on the verge of achieving what’s known as quantum supremacy. This new breed of computer will be able to crack certain problems a million or more times faster than conventional machines. We are about to enter the rocket age of computing.

Used for good, artificial intelligence has the potential to help society. It may find cures to deadly diseases (Google execs say that its intelligent machines have demonstrated the ability to detect lung cancer a full year earlier than human doctors), feed the hungry, and even heal the climate. A paper posted in June to arXiv, the preprint server run by Cornell University, by several leading AI researchers (including ones affiliated with Google) identified a number of ways machine learning can address climate change, from accelerating the development of solar fuels to radically optimizing energy usage.

Used for ill, AI has the potential to empower tyrants, crush human rights, and destroy democracy, freedom, and privacy. The American Civil Liberties Union issued a report in June titled “The Dawn of Robot Surveillance” that warned how millions of surveillance cameras (such as those sold by Google) already installed across the United States could employ AI to enable government monitoring and control of citizens. This is already happening in parts of China. A lawsuit filed that same month accuses Google of using AI in hospitals to violate patients’ privacy.

Every powerful advance in human history has been used for both good and evil. The printing press enabled the spread of Thomas Paine’s “Common Sense” but also Adolf Hitler’s fascist manifesto “Mein Kampf.” With AI, however, there’s an extra dimension to this predicament: The printing press doesn’t choose the type it sets. AI, when it achieves its full potential, will be able to do just that.

Now is the time to ask questions. “Think about the kinds of thoughts you wish people had inventing fire, starting the industrial revolution, or [developing] atomic power,” says Greg Brockman, cofounder of OpenAI, a startup focused on building artificial general intelligence that received a $1 billion investment from Microsoft in July.

Parties on both the political left and right argue that Google is too big and needs to be broken up. Would a fragmented Google democratize AI? Or, as leaders at the company warn, would it hand AI supremacy to the Chinese government, which has stated its intention to take the lead? President Xi Jinping has committed more than $150 billion toward the goal of becoming the world’s AI leader by 2030.

Inside Google, dueling factions are competing over the future of AI. Thousands of employees are in revolt against their leaders, trying to stop the tech they’re building from being used to help governments spy or wage war. How Google decides to develop and deploy its AI may very well determine whether the technology will ultimately help or harm humanity. “Once you build these [AI] systems, they can be deployed across the whole world,” explains Reid Hoffman, the LinkedIn cofounder and VC who’s on the board of the Institute for Human-Centered Artificial Intelligence at Stanford University. “That means anything [their creators] get right or wrong will have a correspondingly massive-scale impact.”

“In the beginning, the neural network is untrained,” says Jeff Dean one glorious spring evening in Mountain View, California. He is standing under a palm tree just outside the Shoreline Amphitheatre, where Google is hosting a party to celebrate the opening day of I/O, its annual technology showcase.

This event is where Google reveals to developers—and the rest of the world—where it is heading next. Dean, in a mauve-gray polo, jeans, sneakers, and a backpack double-strapped to his shoulders, is one of the headliners. “It’s like meeting Bono,” gushes one Korean software programmer who rushed over to take a selfie with Dean after he spoke at an event earlier in the day. “Jeff is God,” another tells me solemnly, almost surprised that I don’t already know this. Around Google, Dean is often compared to Chuck Norris, the action star known for his kung fu moves and taking on multiple assailants at once.

“Oh, that looks good! I’ll have one of those,” Dean says with a grin as a waiter stops by with a tray of vegan tapioca pudding cups. Leaning against a tree, he speaks about neural networks the way Laird Hamilton might describe surfing the Teahupo’o break. His eyes light up and his hands move in sweeping gestures. “Okay, so here are the layers of the network,” he says, grabbing the tree and using the grizzled trunk to explain how the neurons of a computer brain interconnect. He looks intently at the tree, as though he sees something hidden inside it.

Last year, Pichai named Dean head of Google AI, meaning that he’s responsible for what the company will invest in and build—a role he earned in part by scaling the YouTube neural net experiment into a new framework for training Google’s machines to think on a massive scale. That system started as an internal project called DistBelief, which many teams, including Android, Maps, and YouTube, began using to make their products smarter.

But by the summer of 2014, as DistBelief grew inside Google, Dean started to see that it had flaws. It had not been designed to adapt to technological shifts such as the rise of GPUs (the computer chips that process graphics) or the emergence of speech as a highly complex data set. Also, DistBelief was not initially designed to be open source, which limited its growth. So Dean made a bold decision: build a new version that would be open to all. In November 2015, Pichai introduced TensorFlow, DistBelief’s successor, in one of his first big announcements as CEO.


It’s impossible to overstate the significance of opening TensorFlow to developers outside of Google. “People couldn’t wait to get their hands on it,” says Ian Bratt, director of machine learning at Arm, one of the world’s largest designers of computer chips. Today, Twitter is using it to build bots that monitor conversations, rank tweets, and entice people to spend more time in their feeds. Airbus is training satellites to be able to examine nearly any part of the earth’s surface, down to within a few feet. Students in New Delhi have transformed mobile devices into air-quality monitors. This past spring, Google released early versions of TensorFlow 2.0, which makes its AI even more accessible to inexperienced developers. The ultimate goal is to make creating AI apps as easy as building a website.
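
To see how low the barrier has become, consider roughly the “hello world” a newcomer to TensorFlow 2.0 writes: a complete program that loads a standard handwritten-digit dataset, trains a small network, and tests it, in about a dozen lines. (This is the generic beginner pattern, not code from any of the projects mentioned above.)

```python
import tensorflow as tf

# Load MNIST, a standard handwritten-digit dataset, and scale pixels to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)   # train
model.evaluate(x_test, y_test)          # measure accuracy on unseen digits
```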

TensorFlow has now been downloaded approximately 41 million times. Millions of devices—cars, drones, satellites, laptops, phones—use it to learn, think, reason, and create. An internal company document shows a chart tracking the usage of TensorFlow inside Google (which, by extension, tracks machine learning projects): It’s up by 5,000% since 2015.

Tech insiders, though, point out that if TensorFlow is a gift to developers, it may also be a Trojan horse. “I am worried that they are trying to be the gatekeepers of AI,” says an ex-Google engineer, who asked not to be named because his current work depends on access to Google’s platform. At present, TensorFlow has just one main competitor, Facebook’s PyTorch, which is popular among academics. That gives Google a lot of control over the foundational layer of AI, and the company could tie TensorFlow’s availability to other Google imperatives. “Look at what [Google’s] done with Android,” this person continues. Last year, European Union regulators levied a $5 billion fine on the company for requiring electronics manufacturers to preinstall Google apps on devices running its mobile operating system. Google is appealing, but it faces further investigations into its competitive practices in both Europe and India.

By helping AI proliferate, Google has created demand for new tools and products that it can sell. One example is Tensor Processing Units (TPUs), integrated circuits designed to accelerate applications built on TensorFlow. If developers need more power for their TensorFlow apps—and they usually do—they can pay Google for time and space on these chips running in Google data centers.
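
In practice, pointing a TensorFlow program at Google’s rented hardware is a small code change, which is part of the sales pitch. The sketch below shows the general shape in the TensorFlow 2 era; “my-tpu” is a placeholder for a Cloud TPU node the developer has provisioned and is paying for, and exact API names have shifted between releases.

```python
import tensorflow as tf

# Connect to a Cloud TPU node. "my-tpu" is a placeholder for a TPU the
# developer has provisioned in Google Cloud.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.experimental.TPUStrategy(resolver)

# Any model built inside this scope has its variables and training steps
# replicated across the TPU's cores; the rest of the program is unchanged.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```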

TensorFlow’s success has won over the skeptics within Google’s leadership. “Everybody knew that AI didn’t work,” Sergey Brin recalled to an interviewer at the World Economic Forum in 2017. “People tried it, they tried neural nets, and none of it worked.” Even when Dean and his team started making progress, Brin was dismissive. “Jeff Dean would periodically come up to me and say, ‘Look, the computer made a picture of a cat,’ and I said, ‘Okay, that’s very nice, Jeff,’ ” he said. But he had to admit that AI was “the most significant development in computing in my lifetime.”

Stage 4 of the Shoreline Amphitheatre fits 526 people, and every seat is taken. It’s the second day of I/O, and Jen Gennai, Google’s head of responsible innovation, is hosting a session on “Writing the Playbook for Fair and Ethical Artificial Intelligence and Machine Learning.” She tells the crowd: “We’ve identified four areas that are our red lines, technologies that we will not pursue. We will not build or deploy weapons. We will also not deploy technologies that we feel violate international human rights.” (The company also pledges to eschew technologies that cause “overall harm” and “gather or use information for surveillance, violating internationally accepted norms.”) She and two other Google executives go on to explain how the company now incorporates its AI principles into everything it builds, and that Google has a comprehensive plan for tackling everything from rooting out biases in its algorithms to forecasting the unintended consequences of AI.

After the talk, a small group of developers from different companies mingles, dissatisfied. “I don’t feel like we got enough,” observes one, an employee of a large international corporation that uses TensorFlow and frequently partners with Google. “They are telling us, ‘Don’t worry about it. We got this.’ We all know they don’t ‘got this.’ ”

These developers have every right to be skeptical. Google’s rhetoric has often contrasted with its actions, but the stakes are higher with artificial intelligence. Gizmodo was first to report, in March 2018, that the company had a Pentagon contract to supply AI for analyzing drone footage, dubbed Project Maven. After Google employees protested for three months, Pichai announced that the contract would not be renewed. Shortly thereafter, another project came to light: Dragonfly, a search engine for Chinese users designed to be as powerful and ubiquitous as the one reportedly used for 94% of U.S. searches, except that it would also comply with China’s censorship rules, which ban content on some topics related to human rights, democracy, freedom of speech, and civil disobedience. Dragonfly would also link users’ phone numbers to their searches. After employees protested for another four months, and activists attempted to enlist Amnesty International and Google shareholders in the fight, Google backed down, saying it wouldn’t launch the search engine.

During that turmoil, a Google engineer confronted Dean directly about whether the company would continue working with oppressive

“It’s a bit counterintuitive,” says Google CEO Sundar Pichai, “but I think AI gives us a chance to enhance privacy.”

(Continued on page 92)

