Beyond AlphaGo, what’s next for AI?

HWM (Malaysia) - THINK - May 2016

“Yes, now there is a God.” That was the supercomputer’s answer when asked if there was a God in ‘Answer,’ Fredric Brown’s 1954 short story about AI, before striking down a fearful man with a bolt of lightning.

AI is a very real threat, according to some of the world’s sharpest minds. In his book Superintelligence: Paths, Dangers, Strategies, Nick Bostrom, a philosopher at Oxford University, warns that true AI might lead to humanity’s extinction.

Leading figures like Elon Musk and Sam Altman concur. Last year, the duo were among the Silicon Valley investors who created OpenAI, a billion-dollar non-profit organization dedicated to open-sourcing AI, and to ensuring that we don’t end up birthing a monster that swallows us whole.

The end of the path

Musk has called AI our “biggest existential threat”, even tweeting that it could potentially be “more dangerous than nukes”. If that sounds alarmist to you, consider the fact that Stephen Hawking, a brilliant theoretical physicist, once told the BBC, “The development of full AI could spell the end of the human race.” These people are hardly Luddites resisting technological change. Instead, they represent some of the brightest minds today who are actively committed to advancing AI, but who also fear that the world is not doing enough to contain the potential risk.

Bostrom, Musk and Hawking are all referring to a speculative event known as the technological singularity. The singularity posits a future where AI is able to improve itself without human help, and rapidly exceeds human intellectual capacity by orders of magnitude, to the point where the resulting superintelligence is beyond our ability to even understand.

AlphaGo gave us a taste of this mysterious superintelligence in its games with Lee Se-dol. During their second Go match, AlphaGo made a move that made no sense to Lee, or to any of the human experts watching the game. Move 37 was so unprecedented that Lee had to leave the match room for 15 minutes to think of a response. And AlphaGo won.

The worry, then, is that an all-powerful intellect would be beyond our understanding, and thus our control, entirely. After all, history isn’t exactly rife with examples of more powerful beings being subservient to weaker ones. Or humanity could succumb to a flawed AI that is supremely intelligent, yet so constrained by its faulty programming that it does exactly what we ask it to, without considering what it is that we really want.

Consider Nick Bostrom’s ‘paperclip maximizer,’ a thought experiment that asks you to imagine a superintelligent AI whose goal is to maximize the number of paperclips in its collection. If taken to its logical conclusion, this entity might just convert most of the matter in the universe – which includes us – into paperclips.
