Building an artificial brain

Paul Allen’s $500 million quest to dissect the mind and code a new one

The Washington Post Sunday - Business - By Ariana Eunjung Cha

Paul Allen has been waiting for the emergence of intelligent machines for a very long time. As a young boy, Allen spent much of his time in the library reading science-fiction novels in which robots manage our homes, perform surgery and fly around saving lives like superheroes. In his imagination, these beings would live among us, serving as our advisers, companions and friends.

Now 62 and worth an estimated $17.7 billion, the Microsoft co-founder is using his wealth to back two separate philanthropic research efforts at the intersection of neuroscience and artificial intelligence that he hopes will hasten that future.

The first project is to build an artificial brain from scratch that can pass a high school science test. It sounds simple enough, but trying to teach a machine not only to respond but also to reason is one of the hardest software-engineering endeavors attempted — far more complex than building his former company’s breakthrough Windows operating system, said to have 50 million lines of code.

The second project aims to understand intelligence by coming at it from the opposite direction — by starting with nature and deconstructing and analyzing the pieces. It’s an attempt to reverse-engineer the human brain by slicing it up — literally — modeling it and running simulations.

“Imagine being able to take a clean sheet of paper and replicate all the amazing things the human brain does,” Allen said in an interview.

He persuaded University of Washington AI researcher Oren Etzioni to lead the brain-building team and Caltech neuroscientist Christof Koch to lead the brain-deconstruction team. For them and the small army of other PhD scientists working for Allen, the quest to understand the brain and human intelligence has parallels in the early 1900s, when men first began to ponder how to build a machine that could fly.

There were those who believed the best way would be to simulate birds, while there were others, like the Wright brothers, who were building machines that looked very different from species that could fly in nature. And it wasn’t clear back then which approach would get humanity into the skies first.

Whether they create something reflected in nature or invent something entirely novel, the mission is the same: conquering the final frontier of the human body — the brain — to enable people to live longer, better lives and to answer fundamental questions about humans’ place in the universe.

“We are starting with biology. But first you have to figure out how you represent that knowledge in a software database,” Allen said. “I wish I could say our understanding of the brain could inform that, but we’re probably a decade away from that. Our understanding of the brain is so elemental at this point that we don’t know how language works in the brain.”

In the Hollywood version of the approaching era of artificial intelligence, the machines will be so sleek and sophisticated and alluring that humans will fall in love with them. The 21st-century reality is a little more boring.

At its most basic level, artificial intelligence is an area of computer science in which coders design programs to enable machines to act intelligently, in the ways that humans do. Today’s AI programs can adjust the temperature in your home or your driving route to work based on your patterns and traffic conditions. They can tell you someone stole your credit card to make a charge in a strange city, or who has the best odds of winning tonight’s soccer match.

In medicine, artificial intelligence algorithms are already being used to do things such as predicting manic episodes in those suffering mental illness; pinpointing dangerous hot spots of asthma on maps; guessing which cancer treatments might give you a better chance at living longer based on your genetic makeup and medical history; and finding connections between things such as weather, traffic and your health.

But when it comes to general knowledge, scientists have struggled to create a technology that can do as well as a 4-year-old human on a standard IQ test. Although today’s computers are great at storing knowledge, retrieving it and finding patterns, they are often still stumped by a simple question: “Why?”

So while Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana — despite their maddening quirks — do a pretty good job of reminding you what’s on your calendar, you’d probably fire them within a week if you put them up against a real person.

That will almost certainly change in the coming years as billions of dollars in Silicon Valley investments lead to the development of more sophisticated algorithms and upgrades in memory storage and processing power.

The most exciting — and disconcerting — developments in the field may be in predictive analytics, which aims to make an informed guess about the future. Although it’s currently mostly being used in retail to figure out who is more likely to buy, say, a certain sweater, there are also test programs that attempt to figure out who might be more likely to get a certain disease or even commit a crime.

Google, which acquired AI company DeepMind in 2014 for an estimated $400 million, has been secretive about its plans in the field, but the company has said its goal is to “solve intelligence.” One of its first real-world applications could be to help self-driving cars become better aware of their environments. Facebook chief executive Mark Zuckerberg says his social network, which has opened three different AI labs, plans to build machines “that are better than humans at our primary senses: vision, listening, etc.”

All of this may one day be possible. But is it a good idea?

Advances in science often have made people uneasy, even angry, going back to Copernicus, who placed the sun — not the Earth — at the center of the universe. Artificial intelligence is particularly sensitive, because the brain and its ability to reason are what make us human.

In May 2014, cosmologist Stephen Hawking caused a stir when he warned that intelligent computers could be the downfall of humanity and “potentially our worst mistake in history.” Elon Musk — the billionaire philanthropist who helped found SpaceX, Tesla Motors and PayPal — in October 2014 lamented that a program whose function is to get rid of e-mail spam may determine “the best way of getting rid of spam is getting rid of humans.” He wasn’t joking.

Allen and Etzioni say that they also have thought a lot about how AI might change the world and that they respectfully disagree with the doomsayers. The technology will not exterminate but empower, they say, making humans more inventive and helping solve huge global problems such as climate change.

“There are people who say, ‘I don’t care about the ethics of it all. I’m a technologist.’ We are the opposite of that. We think about the impact of this kind of technology on society all the time,” said Etzioni, who is chief executive of the Allen Institute for Artificial Intelligence, “and what we see is a very positive impact.”

Koch is more hesitant. “Runaway machine intelligence is something we need to think about more,” said Koch, president and chief science officer of the Allen Institute for Brain Science. “Clearly, we can’t say let’s not develop any more AI. That’s never going to happen. But we need to figure out what are the imagined dangers and what are the real ones and how to minimize them.”

Allen’s vision is to create an AI machine that would be like a smart assistant, rather than an independent being, “answering questions and clarifying things for you and so forth.” But he admits he has wondered whether it will one day be possible for that assistant or its descendants to evolve into something more.

“It’s a very deep question,” Allen said. “Nobody really knows what it would take to create something that is self-aware or has a personality. I guess I could imagine a day when perhaps, if we can understand how it works in the human brain, which is unbelievably complicated, it could be possible. But that is a long, long ways away.”

Human brains

Made up of 100 billion neurons, each one connected to as many as 10,000 others, the human brain is the most complex biological system in existence. When you see, hear, touch, taste or think, neurons fire with an electrochemical signal that travels across the synapses between neurons, where information is exchanged.

Somewhere within this snarl are patterns and connections that make people who they are — their memories, preferences, habits, skills and emotions.

Building on the work that Allen accelerated through his philanthropy, governments around the world have launched their own brain initiatives in recent years. The European Commission’s Human Brain Project, which began in 2013 with about $61 million in initial funding, aims to create an artificial model of the human brain within a decade. President Obama announced the United States’ own BRAIN (Brain Research through Advancing Innovative Neurotechnologies) effort in 2013 to great fanfare, comparing it to the Human Genome Project that led to the current genetic revolution. BRAIN was launched with initial funding of $110 million.

Some futurists even believe that the brain, not the body, may be the key to immortality — that at some point we’ll be able to download our brains to a computer or another body and live on long after the bodies we were born in have decayed.

Allen’s own interest in the brain began with his love of tinkering.

He always has been interested in how things were put together, from steam engines to phones, and as he grew older he became fascinated with the brain.

“Computers are really basically computing elements and a lot of memory,” he said. “They are pretty easy to understand, as compared to the brain, which was designed by evolution.”

But it wasn’t until his mother, Faye, a former elementary school teacher, became ill with Alzheimer’s that Allen’s brain philanthropy took shape.

Allen was very close to her and was devastated when she began to regularly exhibit symptoms in 2003.

“It deepened all my motivations to want to bring forward research about the functions of the brain so that we can create treatments for the different pathologies that can develop. ... They are horrific to watch progress,” he said.

Within months, he had founded the Allen Institute for Brain Science and seeded it with $100 million. But he didn’t want to just replicate what was being done at university and government labs.

“He wanted to do a different brand of science, tackle bigger questions,” said Allan Jones, who was involved in the founding of the institute and is now its chief executive. Allen’s marching orders were simple: Figure out “how information is coded in the brain.”

Allen, who has committed a total of nearly $500 million to the institute, thought that gathering great minds under one roof, all focused on the same goal, could accelerate the process of discovery.

“Our whole approach is to do science on an industrial scale and trying to do things exhaustively and not just focus on one path,” Allen said.

Allen’s “big science” strategy has attracted some of the world’s top talent — and significantly increased their salaries — including a number of tenured professors at the peak of their careers, such as R. Clay Reid, a neurobiologist who left Harvard Medical School in 2012 to continue his work on how vision works in the brain.

“The brain is the hardest puzzle I can think of, and never before has such a large group been directed to reverse-engineer how it works,” he said.

The Allen Institute also has pioneered a number of other approaches uncommon in biology research.

First, the brain institute started with data, not a hypothesis. Not just ordinary big data but exabytes of it — billions of gigabytes, the scale of global Internet traffic in a month — detailing the growth, white matter and connections of every gene expressed in the brain. Researchers spent their first few years painstakingly slicing donor brains into thousands of microthin anatomical cross sections that were then analyzed and mapped.

Then, it took a page from the open-source movement, which advocates making software code transparent and free, and it made all of its data publicly available, inviting anyone to scrutinize and build upon it.

By 2006, the institute’s scientists had created the most comprehensive three-dimensional map of how the mouse brain is wired and released that atlas to the public, as promised. By 2010, they had mapped the human brain. Since then, researchers around the world have built on their work; the mouse brain paper alone has been cited by more than 1,800 peer-reviewed scientific articles.

Now many of the institute’s 265 employees are turning to more tangible problems, studying autism, schizophrenia, traumatic brain injury and glioblastoma, a rare but particularly aggressive type of brain tumor, as well as projects to understand the nature of vision.

Artificial brains

All along, Allen has been backing parallel projects in artificial brains.

He wondered whether it might be possible to encode books — especially textbooks — into a computer brain to create a foundation upon which a machine could be a digital Aristotle, using a higher level of knowledge to interact with humans.

“I wasn’t aiming to solve the mystery of human consciousness,” he explained in his 2011 memoir. “I simply wanted to advance the field of artificial intelligence so that computers could do what they do best (organize and analyze information) to help people do what they do best, those inspired leaps of intuition that fuel original ideas and breakthroughs.”

That idea grew into the Allen Institute for Artificial Intelligence (or AI2, as it is called by its employees), which opened its doors on Jan. 1, 2014, and currently has 43 employees — a number of them recruited from places like Google and Amazon. Allen hasn’t publicly announced the exact amount of his investment, but Etzioni said it is in the tens of millions of dollars and is growing.

Over the past year, Etzioni and his team have created Aristo. The institute’s first digital entity now is being trained to pass the New York State Regents high school biology exam.

Not only do the engineers have to figure out how to represent memory, but they have to give this entity the ability to parse natural language and make complex inferences. It’s not as easy as it sounds. “It’s paradoxical that things that are hard for people are easy for the computer, and things that are hard for the computer any child can understand,” Etzioni said. For example, he said, computers have a difficult time understanding simple sentences such as “People breathe air.” A computer might wonder: Does this apply to dead people? What about people holding their breath? All the time? Is air one thing? Is it made up of a single molecule? And so on. The data that Aristo possesses doesn’t add up to the wisdom an elementary school child has accumulated about breathing.

Another test question would require an AI program to interpret this narrative: “The ball crashed through the table. It was made of styrofoam.” A human might grumble about pronoun-antecedent ambiguity but still quickly conclude that the second sentence described the table. Now if the second sentence were changed to “It was made of steel,” the human would conclude it described the ball. But that type of logic requires a large amount of “common sense” background knowledge — about materials like styrofoam, steel and wood and how they work, furniture, how balls roll and so forth — which has to be explicitly taught to computers.

So far, Aristo has passed the first-, second- and third-grade biology tests and is working his way through the fourth. The last time Aristo took this test, a few months ago, the grade was about a C. Or, more precisely, 73.5 percent.

Etzioni says that’s pretty good — for a computer. Sounding like a glowing parent, he said, “We’re very proud he has started to make measurable progress.”

But he estimates that Aristo needs at least one more year to get an A on fourth-grade biology, mostly because the team needs to figure out image recognition and visual processing so that the computer can interpret the diagrams.

Five more years to pass the eighth-grade test.

After that, who knows?


The artificial intelligence researchers and their counterparts in brain science are in a kind of race, Allen says, and their work one day will converge — although to what end he’s not sure.

Koch, who leads the team that is reverse-engineering the brain, explained that for Allen, understanding the brain is about cracking a code.

“He’s fascinated by how codes work. What codes are used to process information in the cerebral cortex? Is the code different in a mouse versus a human? It’s the same for programming code. He wants to know, ‘Can you program intelligence in an artificial way?’ ” Koch said.

The implications of this work are incredibly complex, and Hawking and Musk — who in January announced he would donate $10 million to fund researchers who are “working to mitigate existential risks facing humanity” — are hardly the only ones calling for researchers to slow down and think about the consequences of superintelligent machines.

“There’s a huge debate right now about whether simulating the human brain is necessary to get the kind of AI we want or simulating the human brain would be the equivalent of reproducing the brain. Nobody knows exactly what this means,” said Jonathan Moreno, a bioethicist at the University of Pennsylvania.

Eric Horvitz, director of Microsoft Research’s main lab in Redmond, Wash., and a past president of the Association for the Advancement of Artificial Intelligence, stepped into the debate in December by announcing he would fund a major research project on the potential effects of AI on society.

Led by Stanford University historians, the study would run for 100 years. The first report is scheduled to be completed in 2015, and subsequent ones will be published every five years, containing updates on technological progress and recommendations and guidelines about the law, economics, privacy and other issues.

“A number of years back we were hearing complaints about AI as a failure. Now that we’re seeing more successes — a presence of machine intelligence in products and services — we’re hearing some anxieties coming out that maybe the progress has been too good,” said Horvitz, who sits on the board of AI2.

He said he hopes the study will help trigger thoughtful discussion, draft guidelines and help redirect the focus in the field back to the short term, where he believes the programs can do a lot of good. He cites being able to minimize hospital errors, help make sense of scientific publications and improve car safety as worthy and achievable goals. He also said it’s critically important to think about the implications of AI for democracy, freedom and other important values in the most basic blueprints for the machines.

“If we could design them from the ground up to be supporters of their creators, they could become very strong advocates of human beings and work on their behalf,” Horvitz said.

But could those beings ever become self-aware?

Koch, the expert on the subject, isn’t sure.

On the one hand, he believes consciousness is a property of natural systems: “The job of the stomach is digestion, the heart to pump blood. Is the job of the brain consciousness?”

“In principle, once I replicate this piece of highly organized matter I should be able to get all the properties associated with it,” he said. But he said scientists and philosophers aren’t in agreement about what is the right way to do this, under what circumstances and whether it should be done at all.

Two iconic works of science fiction of the 1950s address that question in an ominous way. In Isaac Asimov’s “The Last Question,” humans ask a supercomputer how to save the world until they are gone. Only the machine is left when it comes up with the answer, and in the end it commands, “Let there be light . . .” In Fredric Brown’s “Answer,” a “super calculator” made up of all the machines on 96 billion planets is asked: “Is there a God?” Its answer: “Yes, now there is a God.”

“I don’t think we’re building a god by any means,” Etzioni said. “We’re building something on science. The computer is an assistant — not someone you ask, ‘Solve cancer and get back to me.’

“I think it’s going to be something very sophisticated with vast amounts of information, but I still think of it very much as a tool.”



TOP: DiJon Hill, an electrophysiologist, helps prepare mouse brain cells for research at the Allen Institute for Brain Science in Seattle. ABOVE: At a morning meeting held every day at the Allen Institute for Artificial Intelligence, engineers, researchers and other staff members meet to update each other on their work, which involves trying to create a brain from scratch.



Mark Schaake, a software engineer, left, and Sam Skjonsberg, a front-end engineer, work at the Allen Institute for Artificial Intelligence.
