Now they are the champions

Cosmos

Self-taught artificial intelligence draws closer to game perfection.

HAD YOU ASKED ANY serious chess player on 5 December 2017 what the strongest commercially available chess software on the market was, most likely you would have heard names like Houdini, Komodo or Stockfish. The correct answer happened to be Stockfish, but all three programs certainly play chess better than any human, including current world champion Magnus Carlsen.

On 6 December that all changed. DeepMind, a British company now owned by Google that specialises in artificial intelligence, published a paper detailing the explosive entrance of a new champion in the computer chess arena. According to DeepMind, its AlphaZero neural network was taught only the rules of chess, then allowed to play against itself for a mere four hours. With that, AlphaZero had learned enough to obliterate Stockfish. In a 100-game match, AlphaZero scored 28 wins and 72 draws without a single loss, a staggering achievement even for advanced AI.

Traditional chess engines have long depended on massive opening theory ‘books’ and endgame ‘tablebases’ that the software consults at appropriate points during a game. Middlegame decisions are made using a process known as a search tree, looking ahead to see millions of possible candidate moves and then numerically evaluating and ranking them. The criteria an engine uses to decide its best move in a given position are programmed into the software by humans.
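The search-tree idea described above can be sketched in a few lines of Python. This is an illustrative toy only, not any real engine's code: the `PickNumbers` game and its interface are invented for the example, and real chess engines add alpha-beta pruning and far richer hand-tuned evaluation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PickNumbers:
    """Hypothetical toy game: each move adds a number to a running
    total; higher totals favour the maximizing player."""
    total: int = 0
    turns: int = 2

    def moves(self):
        return [1, 2, 3] if self.turns > 0 else []

    def apply(self, m):
        return PickNumbers(self.total + m, self.turns - 1)

    def evaluate(self):
        return self.total  # the human-programmed scoring criteria

def minimax(game, depth, maximizing):
    """Score `game` by searching `depth` plies ahead: the maximizer
    assumes the opponent will always reply with the worst move for it."""
    if depth == 0 or not game.moves():
        return game.evaluate()
    scores = [minimax(game.apply(m), depth - 1, not maximizing)
              for m in game.moves()]
    return max(scores) if maximizing else min(scores)

def best_move(game, depth):
    """Rank every candidate move by its searched score; return the best."""
    return max(game.moves(),
               key=lambda m: minimax(game.apply(m), depth - 1, False))
```

In this toy, `best_move(PickNumbers(), 2)` picks 3: the maximizer takes the largest number, expecting the minimizer to answer with the smallest.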

AlphaZero used neither opening databases nor endgame tables, and nothing about the game was pre-programmed. It simply ‘taught’ itself chess. In a few hours, playing through (presumably) millions of games against itself, the AI remembered its successes as well as its failures, continuously updating its knowledge of the game.

While DeepMind hasn’t released enough information to fully calculate AlphaZero’s chess-playing strength, it appears to be vastly superior to anything carbon-based. Chess prowess is measured using the Elo rating system. A beginner who has just learned the rules might have an Elo rating of 400 to 700. A player with a few months’ experience could play at about 1,000. An expert player is rated 1,800 to 2,000. Grandmasters are 2,500 and higher, with the top players in the world rated 2,700 to 2,800. The best ratings ever achieved by a human are in the 2,880 range. Stockfish was estimated to be in the 3,300 range, as it routinely trounced all human opponents with ease. AlphaZero, when finally assessed properly, could well be in the 4,000 range.
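The Elo system behind these numbers has a simple core: a player's expected score is a logistic function of the rating gap, with a 400-point gap meaning roughly 10-to-1 expected odds. A minimal sketch of the standard formula (the engine-versus-human comparison is illustrative, not DeepMind's methodology):

```python
def expected_score(rating_a, rating_b):
    """Expected score (between 0 and 1) for player A against player B
    under the standard Elo formula."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# A 3,300-rated engine against a 2,880-rated human:
# expected_score(3300, 2880) is about 0.92 per game.
```

At equal ratings the expected score is exactly 0.5; the 420-point gap in the example is why a top engine "routinely trounces" even the best humans.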

Chess isn’t the first ancient strategy game DeepMind has turned upside down. In 2016 its AlphaGo program defeated the reigning world Go champion, Lee Sedol. AI experts had previously predicted that a program capable of beating a 9-dan (the highest possible ranking) Go professional was at least a decade away.

When Go supremacy was wrested away from human beings, it joined an ever-growing list of strategy games now played better by computers.

In the chess world, Garry Kasparov famously lost a match under normal chess time controls to IBM’s Deep Blue in 1997. Backgammon software was playing at or near world-champion level as far back as the late 1980s. Checkers, or 8x8 draughts, fell to the machines in 1995 when the University of Alberta’s Chinook program defeated then world champion Don Lafferty. Chinook would go on to ‘solve’ checkers in 2007 by proving the game would always end in a draw with perfect play from both sides.

As recently as last year, a poker-playing program specialising in heads-up no-limit hold ’em, called Libratus, soundly defeated a team of four world-class hold ’em experts during a multi-day tournament in which more than 120,000 hands were dealt. A slightly simpler version of the game, limit hold ’em, had been solved two years before (again by researchers at the University of Alberta).

Other solved board games include Connect Four, in which the first player can always force a win. Othello is, technically, not yet solved, but proper play by both sides will almost certainly result in a draw.

Chess and Go, due to the complexity of the two games, are not expected to be fully solved for years to come. The prediction for chess is a draw with perfect play, although some experts claim a win for white (with its first-move advantage) may be inevitable. Go is still too complex for any meaningful guesses as to a solved state.

At least we humans still have table tennis, right? Well, we did.

At the 2018 Consumer Electronics Show in Las Vegas, Japanese technology company Omron unveiled Forpheus, a table-tennis robot using advanced cameras and artificial intelligence to track and return any ball hit its way. By interpreting body language, Forpheus could even predict when its opponent intended to ‘smash’ the ball back over the net. I heard no reports of it losing a single game.

