The Artificial Intelligence revolution: Is it fact or fiction?

Norway-Asia Business Review - HENRI VIIRALT

Over the past few years, artificial intelligence (AI) has been consistently in the headlines of major news outlets as it slowly but surely keeps permeating various industries and manoeuvring into our daily lives. Hailed as the next technological frontier, it is seen as something that can fundamentally alter the way we perceive and interact with technology, but how much of what is written is based on hype and how much of it is actually rooted in reality?

AI is no novel concept. In the 1950s, a group of scientists united with a common goal: to build machines as intelligent as humans. Since its inception, AI has been a multidisciplinary field, encompassing computer vision, speech processing, robotics and machine learning – the process by which an algorithm sifts through large sets of data to uncover patterns and predict phenomena, with no human guidance.

“From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself,” writes Mr Will Knight in an article for the MIT Technology Review.
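Mr Knight's description of a program that "generates its own algorithm" can be made concrete with a short sketch. The example below is purely illustrative and not drawn from any system mentioned in this article; it assumes the open-source scikit-learn library and uses made-up data. Rather than hand-coding a rule, the programmer hands the model labelled examples and a desired output, and it fits a decision rule of its own.

# Illustrative sketch only: learning a rule from example data instead of
# hand-coding it. Assumes the open-source scikit-learn library; the data is made up.
from sklearn.linear_model import LogisticRegression

# Example data: [hours of daylight, temperature in C] -> 1 for a "summer day", 0 otherwise.
X = [[6, 2], [7, 5], [8, 8], [16, 22], [17, 25], [18, 28]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)  # the decision rule is derived from the examples, not written by hand

print(model.predict([[15, 20]]))  # expected: [1] for this toy data

The division of labour is the point: the programmer supplies the examples and the desired outputs, and the learning algorithm produces the rule.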

The field of AI remained at the fringes of the scientific community until the computerisation era that transformed nearly all industries and brought with it an emergence of large data sets. This, in turn, inspired the rise of ever more powerful machine learning techniques, such as the artificial neural network, which resembles an interconnected group of nodes, mimicking the vast network of neurons found in a functional, biological brain.
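As a rough, hypothetical illustration of that "interconnected group of nodes" (not the networks behind any system discussed here), the sketch below uses the numpy library to pass a signal through a tiny network: each weight matrix encodes the connections between one layer of nodes and the next.

# Illustrative sketch of an artificial neural network as interconnected nodes.
# Assumes the numpy library; the weights are random, so the output is not meaningful.
import numpy as np

rng = np.random.default_rng(0)

# 4 input nodes -> 3 hidden nodes -> 1 output node.
W1 = rng.normal(size=(4, 3))  # connections from input nodes to hidden nodes
W2 = rng.normal(size=(3, 1))  # connections from hidden nodes to the output node

def forward(x):
    hidden = np.tanh(x @ W1)                 # each hidden node combines all the inputs
    return 1 / (1 + np.exp(-(hidden @ W2)))  # output node squashed to a 0..1 score

print(forward(np.array([0.2, -0.5, 1.0, 0.3])))

In a real system, training would repeatedly adjust those weight matrices so that the network's outputs match the example data; deep networks simply stack many more layers of nodes.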

“It was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond,” writes Mr Knight.

One of the most widely reported stories of late demonstrating the potential of AI has been Google’s AlphaGo – an AI developed to take on the ancient Chinese game of Go, arguably the most demanding strategy game in existence – which bested Go’s top-ranked human player, Ke Jie, 3-0, in a series hosted in China this May.

It took AlphaGo a mere year and a half to topple the grandest of grandmasters – something even its creators didn’t believe would happen for another 5-10 years – and it did it by tirelessly playing game after game against itself, all the while analysing and optimising its strategy.

AlphaGo usually does this under strict time limits, with seconds or milliseconds slotted for each move, although it has also played games that unfolded over several hours, much like professional matches played by its human counterparts.

“These are beautiful games, with moves no one has seen,” said Fan Hui, the European Go champion who helped train AlphaGo, at a press conference after the event in China.

After AlphaGo’s victory over Ke Jie, DeepMind CEO Demis Hassabis announced the AI’s retirement from Go to tackle new challenges.

“The research team behind AlphaGo will now throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials,” Mr Hassabis wrote in a statement on the company’s website.

Deep learning has already been successfully deployed in image captioning, voice recognition, and language translation, and there is hope that the same techniques could eventually be applied to diagnosing deadly diseases, making high-level trading decisions, and other complex tasks. However, there are significant challenges ahead before that becomes a reality.

In 2015, a research group at Mount Sinai Hospital in New York decided to use deep learning to process patient data that could be used to predict the development of diseases. The project, dubbed Deep Patient, involved extracting electronic health records from a data warehouse and aggregating them by patient. The data included structured elements – lab tests, medications, and procedures – as well as clinical notes and demographic data on age, gender, and race.

Deep Patient was trained – the process of providing data for the algorithm to build better, less erroneous models, e.g. studying millions of images to distinguish cats from dogs in the case of image classification – on data from around 700,000 individuals.
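For readers curious what such training looks like in practice, the sketch below is a hypothetical stand-in rather than the Deep Patient pipeline: it assumes the scikit-learn library and synthetic data in place of real images or health records, holds a portion of the data back, and shows the held-out error typically shrinking as the model is given more examples.

# Illustrative sketch only: "training" as fitting a model on labelled examples and
# checking how erroneous it is on held-out data. Synthetic data stands in for the
# real images or patient records mentioned in the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

for n in (50, 500, 4000):  # train on progressively more examples
    model = RandomForestClassifier(random_state=0).fit(X_train[:n], y_train[:n])
    accuracy = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n} examples -> held-out accuracy {accuracy:.2f}")

Deep Patient's 700,000 patient records play the same role as the examples here, only at far greater scale and with a deep neural network instead of this simple stand-in model.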

Without any expert instruction, Deep Patient was able to uncover hidden patterns in the hospital data to predict, with pinpoint accuracy, when a patient was likely to develop cancer or to detect the onset of psychiatric disorders such as schizophrenia – a feat that is notoriously difficult, even for trained physicians.

The real kicker is this: it isn’t clear how Deep Patient arrives at its diagnoses. The inner workings of such a machine learning system are inherently opaque, even to its creators, since a learned model cannot be debugged and inspected the way handwritten code can.

“It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”

According to Mr Knight, there’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach.

“This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve advertisements or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behaviour,” writes Mr Knight.

AI systems are currently developing much faster than anyone could have predicted even five years ago, and we simply don’t know what their true potential is yet. But one thing is certain – it would be irresponsible to scale AI technologies to the point where we hand over decision-making power on truly complex issues to an AI before we develop ways for these systems to become more accountable and understandable.

Ironically, in our quest to build these algorithms to mimic how our brains function, it becomes questionable whether an AI will ever be able to explain its reasoning in detail – much like its human creators, who often cannot either.

“Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Jeff Clune, one of the foremost AI scientists and an assistant professor at the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”
