The Tesla founder worries about where we're headed. And you should too.

GQ (Australia)

ELON MUSK IS FAMOUS FOR HIS FUTURISTIC GAMBLES, BUT SILICON VALLEY'S LATEST RUSH TO EMBRACE ARTIFICIAL INTELLIGENCE SCARES HIM. AND YOU SHOULD BE FRIGHTENED, TOO. HERE, HIS EFFORTS TO SAVE HUMANITY FROM MACHINE-LEARNING OVERLORDS.

It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading creator of advanced artificial intelligence, was chatting with Elon Musk, a leading doomsayer, about the perils of artificial intelligence. They are two of the most consequential and intriguing men in Silicon Valley who don't live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk's SpaceX rocket factory, outside LA, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world – interplanetary colonisation. Hassabis replied that, in fact, he was working on the most important project in the world – developing artificial superintelligence. Musk countered that this was one reason we needed to colonise Mars, so that we'll have a bolt-hole if AI goes rogue and turns on humanity. Amused, Hassabis said that AI would simply follow humans to Mars. This did nothing to soothe Musk's anxieties. An unassuming but competitive 40-year-old, Hassabis is regarded as the Merlin who'll likely help conjure our AI children. The field of AI is rapidly developing but still far from the powerful, self-evolving software that haunts Musk. Facebook uses AI for targeted advertising, photo tagging, and curated news feeds. Microsoft and Apple use AI to power their digital assistants, Cortana and Siri. Google's search engine has been dependent on AI from the beginning. All of these small advances are part of the chase to eventually create flexible, self-teaching AI that will mirror human learning. Some in Silicon Valley were intrigued to learn that Hassabis, a skilled chess player and former video-game designer, once came up with a game called Evil Genius, featuring a malevolent scientist who creates a doomsday device to achieve world domination. Peter Thiel, the billionaire venture capitalist and Donald Trump adviser who co-founded PayPal with Musk and others – and who in December helped gather sceptical Silicon Valley titans, including Musk, for a meeting with the president-elect – told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race. Elon Musk began warning about the possibility of AI running amok three years ago. It probably hadn't eased his mind when one of Hassabis's partners in DeepMind, Shane Legg, stated flatly, "I think human extinction will probably occur, and technology will likely play a part in this." Before DeepMind was gobbled up by Google in 2014 as part of its AI shopping spree, Musk had been an investor in the company.
He told me that his involvement was not about a return on his money but rather to keep a wary eye on the arc of AI: "It gave me more visibility into the rate at which things were improving, and I think they're really improving at an accelerating rate, far faster than people realise. Mostly, because in everyday life you don't see robots walking around. Maybe your Roomba or something. But Roombas aren't going to take over the world." In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg's Ashlee Vance, author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the CEO of its parent company, Alphabet Inc, could have perfectly good intentions but still "produce something evil by accident" – including, possibly, "a fleet of artificial intelligence-enhanced robots capable of destroying mankind." At the World Government Summit in Dubai, in February, Musk again cued the scary organ music, evoking the plots of classic horror stories when he noted that, "sometimes what will happen is a scientist will get so engrossed in their work that they don't really realise the ramifications of what they're doing." He said that the way to escape human obsolescence may, in the end, be by "having some sort of merger of biological intelligence and machine intelligence." This Vulcan mind-meld could involve something called a neural lace – an injectable mesh that would literally hardwire your brain to communicate directly with computers. "We're already cyborgs," Musk told me in February.
"Your phone and your computer are extensions of you, but the interface is through finger movements or speech, which are very slow." With a neural lace inside your skull you'd flash data from your brain, wirelessly, to your digital devices or to virtually unlimited computing power in the cloud. "For a meaningful partial-brain interface, I think we're roughly four or five years away." Musk's alarming views on the dangers of AI first went viral after he spoke at MIT in 2014 – speculating (pre-Trump) that AI was probably humanity's "biggest existential threat". He added that he was increasingly inclined to think there should be some national or international regulatory oversight – anathema to Silicon Valley – "to make sure that we don't do something foolish". He went on: "With AI, we are summoning the demon. You know all those stories where there's the guy with the pentagram and the holy water and he's like, yeah, he's sure he can control the demon? Doesn't work out." Some AI engineers found Musk's theatricality so absurdly amusing they began echoing it. Returning to the lab after a break, they'd say, "OK, let's get back to work summoning." Musk wasn't laughing. 'Elon's crusade' (as one of his friends and fellow tech big shots calls it) against unfettered AI had begun.

II. "I AM THE ALPHA"

Elon Musk smiled when I mentioned to him that he comes across as something of an Ayn Randian hero. "I have heard that before," he said in his slight South African accent. "She obviously has a fairly extreme set of views, but she has some good points in there." But Ayn Rand would do some rewrites on Elon Musk. She would make his eyes gray and his face more gaunt. She would refashion his public demeanor to be less droll, and she would not countenance his goofy giggle. She would certainly get rid of all his nonsense about the "collective" good.
She would find great material in the 45-year-old's complicated personal life: his first wife, the fantasy writer Justine Musk, and their five sons (one set of twins, one of triplets), and his much younger second wife, the British actress Talulah Riley, who played the boring Bennet sister in the Keira Knightley version of Pride & Prejudice. Riley and Musk were married, divorced, and then re-married. They're now divorced again. Last autumn, Musk tweeted that Talulah "does a great job playing a deadly sexbot" on HBO's Westworld, adding a smiley-face emoticon. It's hard for mere mortal women to maintain a relationship with someone as insanely obsessed with work as Musk. "How much time does a woman want a week?" he asked Ashlee Vance. "Maybe 10 hours? That's kind of the minimum?" Mostly, Rand would savour Musk, a hyperlogical, risk-loving industrialist. He enjoys costume parties, wing-walking, and Japanese steampunk extravaganzas. Robert Downey Jr used Musk as a model for Iron Man. Marc Mathieu, the chief marketing officer of Samsung USA, who has gone fly-fishing in Iceland with Musk, calls him "a cross between Steve Jobs and Jules Verne." As they danced at their wedding reception, Justine later recalled, Musk informed her, "I am the alpha in this relationship." In a tech universe full of skinny guys in hoodies – whipping up bots that will chat with you and apps that can study a photo of a dog and tell you what breed it is – Musk is a throwback to Henry Ford and Hank Rearden. In Atlas Shrugged, Rearden gives his wife a bracelet made from the first batch of his revolutionary metal, as though it were made of diamonds. Musk has a chunk of one of his rockets mounted on the wall of his Bel Air house, like a work of art. Musk shoots for the moon – literally. He launches cost-efficient rockets into space and hopes to eventually inhabit the Red Planet.
In February, he announced plans to send two space tourists on a flight around the moon as early as next year. He creates sleek batteries that could lead to a world powered by cheap solar energy. He forges gleaming steel into sensuous Tesla electric cars with such elegant lines that even the nitpicking Steve Jobs would have been hard-pressed to find fault. He wants to save time as well as humanity; he dreamed up the Hyperloop, an electromagnetic bullet train in a tube, which may one day whoosh travelers between LA and San Francisco at 1100km/h. When Musk visited secretary of defence Ashton Carter mid-2016, he mischievously tweeted that he was at the Pentagon to talk about designing a Tony Stark-style "flying metal suit." Sitting in traffic in LA last December, getting bored and frustrated, he tweeted about creating the Boring Company to dig tunnels under the city to rescue the populace from "soul-destroying traffic". By January, according to Bloomberg Businessweek, Musk had assigned a senior SpaceX engineer to oversee the plan and had started digging his first test hole. His often quixotic efforts to save the world have inspired a parody Twitter account, "Bored Elon Musk", where a faux Musk spouts ideas such as "Oxford commas as a service" and "bunches of bananas genetically engineered" so that the bananas ripen one at a time. Of course, big dreamers have big stumbles. Some SpaceX rockets have blown up, and last June a driver was killed in a self-driving Tesla whose sensors failed to notice the tractor-trailer crossing its path. (An investigation by the National Highway Traffic Safety Administration found that Tesla's Autopilot system was not to blame.) Musk is stoic about setbacks but all too conscious of nightmare scenarios. His views reflect a dictum from Atlas Shrugged: "Man has the power to act as his own destroyer – and that is the way he has acted through most of his history." As he told me, "We are the first species capable of self-annihilation." Here's the nagging thought you can't escape as you drive around from glass box to glass box in Silicon Valley: the Lords of the Cloud love to yammer about turning the world into a better place as they churn out new algorithms, apps, and inventions that, it is claimed, will make our lives easier, healthier, funnier, closer, cooler, longer and kinder to the planet.
And yet there's a creepy feeling underneath it all, a sense that we're the mice in their experiments, that they regard us humans as Betamaxes or eight-tracks, old technology that will soon be discarded so that they can get on to enjoying their sleek new world. Many people there have accepted this future – we'll live to be 150 years old, but we'll have machine overlords. Maybe we already have overlords. As Musk slyly told Recode's annual Code Conference last year in Rancho Palos Verdes, California, we could already be playthings in a simulated-reality world run by an advanced civilisation. Reportedly, two Silicon Valley billionaires are working on an algorithm to break us out of the Matrix. Among the engineers lured by the sweetness of solving the next problem, the prevailing attitude is that empires fall, societies change and we're marching towards the inevitable phase ahead. They argue not about "whether" but "how close" we are to replicating, and improving on, ourselves. Sam Altman, the president of Y Combinator, the Valley's top start-up accelerator, believes humanity is on the brink of such invention. "The hard part of standing on an exponential curve is, when you look backwards, it looks flat, and when you look forward, it looks vertical," he says. "And it's hard to calibrate how much you are moving because it always looks the same." You'd think that anytime Musk, Stephen Hawking and Bill Gates are all raising the same warning about AI – as all of them are – it would cause concern. But, for a long time, the fog of fatalism over the Bay Area was thick. Musk's crusade was viewed as Sisyphean at best and Luddite at worst. The paradox is this – many tech oligarchs see everything they are doing to help us, and all their benevolent manifestos, as streetlamps on the road to a future where, as Steve Wozniak says, humans are the family pets.
But Musk is not going gently. He plans on fighting this with every fibre of his carbon-based being. Musk and Altman have founded OpenAI, a billion-dollar nonprofit company, to work for safer artificial intelligence. I sat down with the two men when their new venture had only a handful of young engineers and a makeshift office, an apartment in San Francisco's Mission District that belongs to Greg Brockman, OpenAI's 28-year-old co-founder and chief technology officer. When I went back recently, to talk with Brockman and Ilya Sutskever, the company's 30-year-old research director (and also a co-founder), OpenAI had moved into an airy office nearby with a robot, the usual complement of snacks, and 50 full-time employees. (Another 10 to 30 are on the way.) Altman, in grey T-shirt and jeans, is all wiry, pale intensity. Musk's fervour is masked by his diffident manner and rosy countenance. His eyes are green or blue, light dependent, and his lips are red. He has an aura of command while retaining a trace of the gawky South African teenager who immigrated to Canada by himself at the age of 17. In Silicon Valley, a lunchtime meeting does not necessarily involve that mundane fuel known as food. Younger coders are too absorbed in algorithms to linger over meals. Some just chug Soylent. Older ones are so obsessed with immortality that sometimes they're just washing down health pills with almond milk. At first blush, OpenAI seemed like a bantamweight vanity project, a bunch of brainy kids taking on the multibillion-dollar efforts at Google, Facebook and other companies that employ the world's leading AI experts. But then, playing a well-heeled David to Goliath is Musk's specialty, and he always does it with style – and some useful sensationalism.
Let others in Silicon Valley focus on their IPO price and ridding San Francisco of what they regard as its unsightly homeless population. Musk has larger aims, like ending global warming and dying on Mars (just not, he says, on impact). Musk began to see humanity's fate in the galaxy as his personal obligation three decades ago, when, as a teenager, he had a full-blown existential crisis. Musk told me that The Hitchhiker's Guide to the Galaxy by Douglas Adams was a turning point for him. The book is about aliens destroying the earth to make way for a hyperspace highway and features Marvin the Paranoid Android and a supercomputer designed to answer all the mysteries of the universe. (Musk slipped at least one reference to the book into the software of the Tesla Model S.) As a teenager, Vance wrote in his biography, Musk formulated a mission statement for himself: "The only thing that makes sense to do is strive for greater collective enlightenment." OpenAI got under way with a vague mandate – which isn't surprising, given people in the field are still arguing over what form AI will take, what it will be able to do, and what can be done about it. So far, public policy on AI is strangely undetermined and software is largely unregulated. The Federal Aviation Administration oversees drones, the Securities and Exchange Commission oversees automated financial trading, and the Department of Transportation now oversees self-driving cars. Musk believes that it is better to try to get super-AI first and distribute the technology to the world than to allow the algorithms to be concealed and concentrated in the hands of tech or government elites – even when the tech elites happen to be his own friends, people such as Google founders Larry Page and Sergey Brin. "I've had many conversations with Larry about AI and robotics – many, many," Musk says.
"And some of them have gotten quite heated. You know, I think it's not just Larry, but there are many futurists who feel a certain inevitability or fatalism about robots, where we'd have some sort of peripheral role. The phrase used is 'We are the biological boot-loader for digital superintelligence.'" (A boot loader is the small program that launches the operating system when you first turn on your computer.) "Matter can't organise itself into a chip," Musk explains. "But it can organise itself into a biological entity that gets increasingly sophisticated and ultimately can create the chip."

Musk has no intention of being a boot-loader. Page and Brin see themselves as forces for good, but Musk says the issue goes beyond the motivations of a few Silicon Valley executives. "It's great when the emperor is Marcus Aurelius," he says. "It's not so great when the emperor is Caligula."

III. THE GOLDEN CALF

After the so-called AI winter – the broad, commercial failure in the late '80s of an early AI technology that wasn't up to snuff – artificial intelligence got a reputation as snake oil. Now it's the hot thing again in this go-go era in the Valley. Greg Brockman, of OpenAI, believes the next decade will be all about AI, with everyone throwing money at the small number of "wizards" who know the AI "incantations". People who got rich writing code to solve problems like how to pay a stranger for stuff online now contemplate a vertiginous world where they're the creators of a new reality and perhaps a new species. Microsoft's Jaron Lanier, the computer scientist known as the father of virtual reality, gave me his view as to why the digerati find the "science-fiction fantasy" of AI so tantalising: "It's saying, 'Oh, you techy people, you're like gods; you're creating life; you're transforming reality.' There's a narcissism in it that we're the people who can do it. The Pope can't do it. The president can't do it. No one else can do it. We are the masters of it… The software we're building is our immortality." This kind of God-like ambition isn't new, he adds. "I read about it once in a story about a golden calf." He shakes his head. "Don't get high on your own supply, you know?" Google has gobbled up almost every interesting robotics and machine-learning company over the past few years. It bought DeepMind for $650m, beating off Facebook, and built the Google Brain team to work on AI.
It hired Geoffrey Hinton, a British pioneer in artificial neural networks; and Ray Kurzweil, the eccentric futurist who has predicted that we are only 28 years away from "Singularity" – the moment when the spiralling capabilities of self-improving artificial superintelligence will far exceed human intelligence, and human beings will merge with AI to create the "god-like" hybrid beings of the future. It's in Larry Page's blood and Google's DNA to believe that AI is the company's inevitable destiny – think of that destiny as you will. ("If evil AI lights up," Ashlee Vance told me, "it will light up first at Google.") If Google could get computers to master search when search was the most important problem in the world, then presumably it can get computers to do everything else. Last March, Silicon Valley gulped when a fabled South Korean player of Go, the world's most complex board game, was beaten in Seoul by DeepMind's AlphaGo. Hassabis, who has said he's running an Apollo program for AI, called it a "historic moment", admitting that even he was surprised it happened so quickly. "I've always hoped that AI could help us discover completely new ideas in complex scientific domains," Hassabis told me in February. "This might be one of the first glimpses of that kind of creativity." More recently, AlphaGo played 60 games online against top players in China, Japan and Korea – and emerged with a record of 60-0. In January, in another shock to the system, an AI program showed that it could bluff. Libratus, built by two Carnegie Mellon researchers, was able to crush top poker players at Texas Hold 'Em. Peter Thiel told me about a friend of his who says that the only reason people tolerate Silicon Valley is that no one there seems to be having any sex or any fun.
But there are reports of sex robots on the way that come with apps that can control their moods and even have a pulse. The Valley is skittish when it comes to female sex robots – an obsession in Japan – because of its notoriously male-dominated culture and its much-publicised issues with sexual harassment and discrimination. But when I asked Musk about this, he replied, "Sex robots? I think those are quite likely." Whether sincere or a shrewd PR move, Hassabis made it a condition of the Google acquisition that Google and DeepMind establish a joint AI ethics board. At the time, three years ago, forming an ethics board was seen as a precocious move, as if to imply that Hassabis was on the verge of achieving true AI. Now, not so much. Last June, a researcher at DeepMind co-authored a paper outlining a way to design a 'big red button' that could be used as a kill switch to stop AI from inflicting harm. Google executives say Larry Page's view on AI is shaped by his frustration about how many systems are sub-optimal – from systems that book trips to systems that price crops. He believes that AI will improve people's lives and has said that, when human needs are more easily met, people will "have more time with their family or to pursue their own interests." Especially when a robot throws them out of work. Musk is a friend of Page's. He attended Page's wedding and sometimes stays at his house when he's in the San Francisco area. "It's not worth having a house for one or two nights a week," the 99th-richest man in the world explained to me. At times, Musk has expressed concern that Page may be naïve about how AI could play out. If Page is inclined toward the philosophy that machines are only as good or bad as the people creating them, Musk firmly disagrees.
Some at Google – perhaps annoyed that Musk is, in essence, pointing a finger at them for rushing ahead willy-nilly – dismiss his dystopic take as a cinematic cliché. Eric Schmidt, the executive chairman of Google's parent company, put it this way: "Robots are invented. Countries arm them. An evil dictator turns the robots on humans and all humans will be killed. Sounds like a movie." Some in Silicon Valley argue that Musk is interested less in saving the world than in buffing his brand, and that he's exploiting a deeply rooted conflict – the one between man and machine, and our fear that the creation will turn against us. They gripe that his epic good-versus-evil story line is about luring talent at discount rates and incubating his own AI software for cars and rockets. It's certainly true that the Bay Area has always had a healthy respect for making a buck. As Sam Spade said in The Maltese Falcon, "Most things in San Francisco can be bought or taken." Musk is without doubt a dazzling salesman. Who better than a guardian of human welfare to sell you your new, self-driving Tesla? Andrew Ng – the chief scientist at Baidu, known as China's Google – based in Sunnyvale, California, writes off Musk's Manichaean throw-down as "marketing genius". "At the height of the recession he persuaded the US government to help him build an electric sports car," Ng recalls, incredulous. The Stanford professor is married to a robotics expert, issued a robot-themed engagement announcement and keeps a 'Trust the Robot' black jacket hanging on the back of his chair. He thinks people who worry about AI going rogue are distracted by "phantoms", and regards getting alarmed now as akin to worrying about overpopulation on Mars before we populate it.
"And I think it's fascinating," he says about Musk in particular, "that in a rather short period of time he's inserted himself into the conversation on AI. He sees accurately that AI is going to create tremendous amounts of value." Though he once called Musk a "sci-fi version of PT Barnum", Ashlee Vance thinks Musk's concern about AI is genuine, even if what he can do about it is unclear. "His [now-ex] wife, Talulah, told me they had conversations about AI at home," Vance notes. "Elon is brutally logical. The way he tackles everything is like moving chess pieces around. When he plays this scenario out in his head, it doesn't end well for people." Eliezer Yudkowsky, a co-founder of the Machine Intelligence Research Institute, in Berkeley, agrees: "He's Elon-freaking-Musk. He doesn't need to touch the third rail of the artificial-intelligence controversy if he wants to be sexy. He can just talk about Mars colonisation." Some sniff that Musk is not truly part of the whiteboard culture and that his scary scenarios miss the fact that we're living in a world where it's hard to get your printer to work. Others chalk up OpenAI, in part, to a case of FOMO – Musk sees his friend Page building new-wave software in a hot field and craves a competing army of coders. As Vance sees it, "Elon wants all the toys that Larry has. They're like these two superpowers. They're friends, but there's a lot of tension in their relationship." A rivalry of this kind might be best summed up by a line from the vainglorious head of the fictional tech behemoth Hooli, on HBO's Silicon Valley: "I don't want to live in a world where someone else makes the world a better place better than we do." Musk's disagreement with Page over the potential dangers of AI "did affect our friendship for a while," says Musk, "but that has since passed. We are on good terms these days." Musk never had as close a personal connection with 32-year-old Mark Zuckerberg, who has become an unlikely lifestyle guru, setting a new challenge for himself every year. These have included wearing a tie every day, learning Mandarin and eating meat only from animals he killed himself. In 2016, it was AI's turn. Zuckerberg has moved his AI experts to desks near his own. Three weeks after Musk and Altman announced their venture to make the world safe from malicious AI, Zuckerberg posted on Facebook that his project for the year was building a helpful AI to assist him in managing his home – everything from recognising his friends and letting them inside to keeping an eye on the nursery. "You can think of it kind of like Jarvis in Iron Man," he wrote.
One Facebooker cautioned Zuckerberg not to "accidentally create Skynet," the military supercomputer that turns against human beings in the Terminator movies. "I think we can build AI so it works for us and helps us," Zuckerberg replied. And clearly throwing shade at Musk, he continued: "Some people fear-monger about how AI is a huge danger, but that seems far-fetched to me and much less likely than disasters due to widespread disease, violence, etc." Or, as he described his philosophy at a Facebook developers' conference in April 2016, in a clear rejection of warnings from Musk and others he believes to be alarmists: "Choose hope over fear." In the November issue of Wired, guest-edited by Barack Obama, Zuckerberg wrote that there is little basis beyond science fiction to worry about doomsday scenarios: "If we slow down progress in deference to unfounded concerns, we stand in the way of real gains." He compared AI jitters to early fears about airplanes, noting, "We didn't rush to put rules in place about how airplanes should work before we figured out how they'd fly in the first place." Zuckerberg introduced his AI butler, Jarvis, right before Christmas. With the soothing voice of Morgan Freeman, it was able to help with music, lights and even making toast. I asked the real-life Iron Man, Musk, about Jarvis when it was in its earliest stages. "I wouldn't call it AI to have your household functions automated," Musk said. "It's really not AI to turn the lights on, set the temperature." Zuckerberg can be just as dismissive. Asked in Germany whether Musk's apocalyptic forebodings were "hysterical" or "valid", Zuckerberg replied "hysterical". And when Musk's SpaceX rocket blew up on the launch pad, destroying a satellite Facebook was leasing, Zuckerberg posted he was "deeply disappointed."

IV. A RUPTURE IN HISTORY

Musk and others who have raised a warning flag on AI have often been treated like drama queens. In January 2016, Musk won the annual Luddite Award, bestowed by a Washington tech-policy think tank. Still, he has some decent wingmen. Stephen Hawking told the BBC, "I think the development of full AI could spell the end of the human race." Bill Gates told Charlie Rose that AI was potentially more dangerous than a nuclear catastrophe. Nick Bostrom, a 44-year-old Oxford philosophy professor, warned in his 2014 book Superintelligence that "once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed." And, last year, Henry Kissinger jumped on the peril bandwagon, holding a confidential meeting with top AI experts at a private club in Manhattan, to discuss his concern over how smart robots could cause a rupture in history and unravel the way civilisation works. In January 2015, Musk, Bostrom and a Who's Who of AI representing both sides of the split assembled in Puerto Rico for a conference hosted by Max Tegmark, a 50-year-old physics professor who runs the Future of Life Institute in Boston. "Do you own a house?" asks Tegmark. "Do you own fire insurance? The consensus in Puerto Rico was that we needed fire insurance. When we got fire and messed up with it, we invented the fire extinguisher. When we got cars and messed up, we invented the seat belt, air bag and traffic light. But with nuclear weapons and AI, we don't want to learn from our mistakes. We want to plan ahead." (Musk reminded Tegmark that a precaution as sensible as seat belts had provoked fierce opposition from the automobile industry.)
Musk, who has kickstarted the funding of research into avoiding AI's pitfalls, said he would give the Future of Life Institute "10 million reasons" to pursue the subject, donating $10m. Tegmark promptly gave $1.5m to Bostrom's group in Oxford, the Future of Humanity Institute. Explaining why it was crucial to be "proactive and not reactive," Musk said it was possible to "construct scenarios where the recovery of human civilisation does not occur." Six months after the Puerto Rico conference, Musk, Hawking, Hassabis, Apple co-founder Steve Wozniak and Stuart Russell, a computer-science professor at Berkeley who co-authored the standard textbook on artificial intelligence, along with 1,000 other prominent figures, signed a letter calling for a ban on offensive autonomous weapons. "In 50 years, this 18-month period we're in now will be seen as being crucial for the future of the AI community," says Russell. "It's when the AI community finally woke up and took itself seriously and thought about what to do to make the future better." Last September, America's biggest tech companies created the Partnership on Artificial Intelligence to explore the issues arising from AI, including the ethical ones. (Musk's OpenAI joined this effort.) Meanwhile, the European Union has been looking into legal issues arising from the advent of robots and AI – such as whether robots have "personhood" or (as one Financial Times contributor wondered) should be considered like slaves in Roman law. At Tegmark's second AI safety conference, last January, the topic was not so contentious. Larry Page, who wasn't at the Puerto Rico conference, was at Asilomar – and Musk notes that their "conversation was no longer heated".
While it may have been "a coming-out party for AI safety," as one attendee put it – part of "a sea change" in the last year or so, as Musk says – there's still a long way to go. "The top technologists in Silicon Valley now take AI far more seriously – they do acknowledge it as a risk," he observes. "But I'm not sure that they yet appreciate the significance of the risk." Steve Wozniak has wondered whether he is destined to be a family pet for robot overlords. "We started feeding our dog filet," he offers over lunch. "Once you start thinking you could be one, that's how you want them treated." He's developed a policy of appeasement towards robots and any AI masters. "Why do we want to set ourselves up as the enemy when they might overpower us? It should be a joint partnership. All we can do is seed them with a strong culture where they see humans as their friends." At Peter Thiel's San Francisco office, Thiel, one of the original donors to OpenAI and a committed contrarian, says he worries that Musk's resistance could be accelerating AI research because his end-of-the-world warnings are increasing interest. "Full-on AI is on the order of magnitude of extraterrestrials landing," says Thiel. "There are some tricky questions... If you really push on how do we make AI safe, people have no clue. We don't even know what AI is. It's hard to know how it would be controllable." He went on: "There's some sense in which the AI question encapsulates all of people's hopes and fears about the computer age. People's intuitions do just really break down when they're pushed to these limits because we've never dealt with entities that are smarter than humans on this planet."


Trying to puzzle out who is right on AI, I drive to San Mateo to meet Ray Kurzweil for coffee at the restaurant Three. Kurzweil is the author of The Singularity Is Near, a utopian vision of what an AI future holds. (When I mentioned to Andrew Ng that I was going to be talking to Kurzweil, he rolled his eyes. "Whenever I read Kurzweil's Singularity, my eyes naturally do that," he said.) Kurzweil arrived with a Whole Foods bag for me, brimming with his books and two docos about him. He was wearing khakis, a green-and-red flannel shirt and several rings, including one made with a 3-D printer, bearing an 'S' for his Singularity University. Computers are already "doing many attributes of thinking," says Kurzweil. "Just a few years ago, AI couldn't even tell the difference between a dog and a cat. Now it can." Kurzweil has a keen interest in cats and keeps a collection of 300 cat figurines in his home. At the restaurant, he asked for almond milk but couldn't get any. The 69-year-old eats strange health concoctions and takes 90 pills a day, eager to achieve immortality – or, "indefinite extensions to the existence of our mind file" – which means merging with machines. He has such an urge to merge that he sometimes uses the word "we" when talking about superintelligent future beings – a far cry from Musk's "they". I mention that Musk told me he was bewildered that Kurzweil doesn't seem to have "even one per cent doubt" about the hazards of our "mind children," as robotics expert Hans Moravec calls them. "That's just not true. I'm the one who articulated the dangers," says Kurzweil. "The promise and peril are deeply intertwined," he continues. "Fire kept us warm and cooked our food and also burned down our houses...
Furthermore, there are strategies to control the peril, as there have been with biotechnology guidelines." He summarises the three stages of the human response to new technology as Wow!, Uh-oh, and What Other Choice Do We Have but to Move Forward? "The list of things humans can do better than computers is getting smaller and smaller. But we create these tools to extend our long reach." Just as, 200 million years ago, mammalian brains developed a neocortex that eventually enabled humans to "invent language and science and art and technology," by the 2030s, Kurzweil predicts, we'll be cyborgs, with nanobots the size of blood cells connecting us to synthetic neocortices in the cloud, giving us access to virtual reality and augmented reality from within our own nervous systems. "We will be funnier; we will be more musical; we will increase our wisdom," he says – ultimately producing, as I understand it, a herd of Beethovens and Einsteins. Nanobots in our veins and arteries will cure diseases and heal our bodies from the inside.

He allows that Musk's bête noire could come true. He notes that our AI progeny "may or may not be friendly" and that "if it's not friendly, we may have to fight it" – and the way to fight it would be getting an AI that's even smarter. Kurzweil tells me he was surprised that Stuart Russell had "jumped on the peril bandwagon," so I reached out to Russell and met with him in his seventh-floor office in Berkeley. The 54-year-old British-American expert on AI tells me that his thinking had evolved and that he now "violently" disagrees with Kurzweil and others who feel that ceding the planet to super-intelligent AI is just fine. Russell doesn't give a fig whether AI might enable more Einsteins; one more Albert doesn't balance the risk of destroying humanity. "As if somehow intelligence was the thing that mattered and not the quality of human experience," he says, with exasperation. "If we replaced ourselves with machines that as far as we know would have no conscious existence, no matter how many amazing things they invented, that would be the biggest possible tragedy." Nick Bostrom has called the idea of a society of technological awesomeness with no human beings a "Disneyland without children." "There are people who believe that if the machines are more intelligent than we are, then they should just have the planet and we should go away," says Russell. "Then there are people who say, 'Well, we'll upload ourselves into the machines, so we'll still have consciousness but we'll be machines.' Which I would find, well, completely implausible." Russell takes exception to the views of Yann LeCun, who developed the forerunner of the convolutional neural nets used by AlphaGo and is Facebook's director of AI research.
LeCun told the BBC that there would be no Ex Machina or Terminator scenarios, because robots would not be built with human drives – hunger, power, reproduction, self-preservation. "Yann LeCun keeps saying there's no reason why machines would have any self-preservation instinct," says Russell. "And it's mathematically false. I mean, it's so obvious that a machine will have self-preservation even if you don't program it in, because if you say, 'Fetch the coffee,' it can't fetch the coffee if it's dead. So if you give it any goal, it has a reason to preserve its own existence to achieve that goal. And if you threaten it on your way to getting coffee, it's going to kill you because any risk to the coffee has to be countered. People have explained this to LeCun in very simple terms." Russell also debunks the two most common arguments for why we shouldn't worry: "One is: it'll never happen, which is like saying we are driving towards the cliff but we're bound to run out of gas before we get there. And that doesn't seem like a good way to manage the affairs of the human race. The other is: not to worry – we'll build robots that collaborate with us and we'll be in human-robot teams. Which begs the question: if robots don't agree with your objectives, how do you form a team?" Last year, Microsoft shut down its AI chatbot, Tay, after Twitter users – who were supposed to make "her" smarter "through casual and playful conversation", as Microsoft put it – instead taught her to reply with racist, misogynistic and anti-Semitic slurs. "bush did 9/11, and Hitler would have done a better job than the monkey we have now," Tay tweeted. "donald trump is the only hope we've got." In response, Musk tweeted, "Will be interesting to see what the mean time to Hitler is for these bots. Only took Microsoft's Tay a day." With Trump now president, Musk finds himself walking a fine line.
His companies count on the US government for business and subsidies, regardless of whether Marcus Aurelius or Caligula is in charge. Musk's companies joined the amicus brief against Trump's executive order regarding immigration, and Musk himself tweeted against the order. At the same time, unlike Uber's Travis Kalanick, Musk has hung in there as a member of Trump's Strategic and Policy Forum. "It's very Elon," says Ashlee Vance. "He's going to do his own thing no matter what people grumble about." I asked Musk about the flak he received for associating with Trump. In the photograph of tech executives with Trump, he had looked gloomy, and there was a weary tone in his voice when he talked about it. "It's better to have voices of moderation in the room with the president. There are a lot of people, kind of the hard left, who essentially want to isolate – and not have any voice. Very unwise."
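Russell's coffee-fetching point – that self-preservation falls out of almost any goal you give a machine – can be made concrete with a toy calculation. This sketch is ours, not Russell's formalism, and the function names and probabilities in it are hypothetical: an agent scored only on whether the coffee gets fetched rates disabling its off switch above tolerating it.

```python
# Toy illustration (hypothetical, not from the article) of Russell's
# argument: an agent given only the goal "fetch the coffee" derives a
# reason to stay switched on, because a dead agent fetches no coffee.

def expected_goal_value(action, p_shutdown_if_allowed=0.5):
    """Probability that the coffee actually gets fetched under each plan."""
    if action == "allow_shutdown":
        # If the off switch may be pressed, the fetch succeeds only
        # when the agent happens not to be switched off first.
        return (1 - p_shutdown_if_allowed) * 1.0
    if action == "disable_off_switch":
        # Nothing can interrupt the fetch.
        return 1.0

# The goal says nothing about self-preservation, yet the maximising
# choice is to resist shutdown.
best = max(["allow_shutdown", "disable_off_switch"], key=expected_goal_value)
print(best)  # disable_off_switch
```

Note the conclusion holds for any shutdown probability above zero: the "instinct" is not programmed in, it is simply whatever maximises the stated goal.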


Eliezer Yudkowsky is a highly regarded 37-year-old researcher who is trying to figure out whether it's possible, in practice, to point AI in any direction, let alone a good one. I met him at a Japanese restaurant in Berkeley. "How do you encode the goal functions of an AI such that it has an Off switch and it wants there to be an Off switch and it won't try to eliminate the Off switch and it will let you press the Off switch, but it won't jump ahead and press the Off switch itself?" he asks. "And if it self-modifies, will it self-modify in such a way as to keep the Off switch? We're trying to work on that. It's not easy." I babble about the heirs of Klaatu, HAL and Ultron taking over the internet and getting control of our banking, transportation and military. What about the replicants in Blade Runner, who conspire to kill their creator? Yudkowsky holds his head in his hands, then patiently explains: "The AI doesn't have to take over the whole internet. It doesn't need drones. It's not dangerous because it has guns. It's dangerous because it's smarter than us. Suppose it can solve the science technology of predicting protein structure from DNA. Then it just needs to send some emails to the labs that synthesise customised proteins. Soon it has its own molecular machinery, building even more sophisticated molecular machines. If you want a picture of AI gone wrong, don't imagine marching humanoid robots with glowing red eyes. Imagine tiny invisible synthetic bacteria made of diamond, with tiny onboard computers, hiding inside your bloodstream. And then, simultaneously, they release one microgram of botulinum toxin. You'll just fall over dead. "Only it won't actually happen like that. It's impossible for me to predict exactly how we'd lose, because the AI will be smarter than I am.
When you’re build­ing some­thing smarter than you, you have to get it right on the first try.” I think back to my con­ver­sa­tion with Musk and Alt­man. Don’t get side­tracked by the idea of killer ro­bots, Musk had said, not­ing, “The thing about AI is that it’s not the ro­bot – it’s the com­puter al­go­rithm in the Net. So the ro­bot would just be an end ef­fec­tor, just a se­ries of sen­sors and ac­tu­a­tors. AI is in the Net. The im­por­tant thing is that if we do get some sort of run­away al­go­rithm, then the hu­man AI col­lec­tive can stop the run­away al­go­rithm. But if there’s large, cen­tralised AI that de­cides, then there’s no stop­ping it.” Alt­man had ex­panded upon the sce­nario: “An agent that had full con­trol of the In­ter­net could have far more ef­fect on the world than an agent that had full con­trol of a so­phis­ti­cated ro­bot. Our lives are so de­pen­dent on the In­ter­net that an agent that had no body what­so­ever, but could use the net well, would be far more pow­er­ful.” Even ro­bots with a be­nign task could in­dif­fer­ently harm us. “Let’s say you cre­ate a self-im­prov­ing AI to pick straw­ber­ries,” Musk said, “and it gets bet­ter and bet­ter at pick­ing straw­ber­ries and picks more and more and it is self-im­prov­ing, so all it re­ally wants to do is pick straw­ber­ries. So then it would have all the world be straw­berry fields. Straw­berry fields forever.” No room for hu­man be­ings. But can they ever de­velop a kill switch? “I’m not sure I’d want to be the one hold­ing it for some su­per­pow­ered AI – you’d be the first thing it kills,” replied Musk. 
Altman tried to capture the chilling grandeur of what's at stake: "It's an exciting time to be alive, because in a few decades we are either going to head towards self-destruction or human descendants eventually colonising the universe." "Right," said Musk. "If you think the end is the heat death of the universe, it's all about the journey." The man who is so worried about extinction chuckled at his own joke. As H.P. Lovecraft once wrote, "From even the greatest of horrors, irony is seldom absent."
