Rise of the Machines

Artificial intelligence has already permeated many sectors of society, and there is no stopping this technology from completely revolutionising the world.

Text: Gary Muir | Image © istockphoto.com

No longer relegated to the realm of science fiction, artificial intelligence (AI) has become an accepted part of the real world. The technology has been seen doing a plethora of impressive things, from reading lips better than experts to playing – and winning – poker tournaments against skilled human opponents. A rather more far-fetched scenario has even been discussed recently: microscopic nanomachines injected into people's bloodstreams, searching for and eradicating disease and repairing cells using AI.

Jeff Dean, a senior fellow at Google and the technical genius behind no fewer than five generations of Google's crawling and indexing system, among numerous other technological marvels, believes the idea of nanomachines is perfectly plausible. Dean is currently working on several AI projects together with a team of Google engineers. As far as the AI revolution goes, Google is among those leading the charge. Indeed, the tech giant has big plans for this technology.

AI has very real and indeed valuable applications in the medical field. The hope is that machine learning – where AI systems learn largely by themselves, with minimal human coaching – might make preventative medicine a realistic prospect in the developing world, where qualified and experienced doctors are in short supply. Lily Peng, a physician and researcher at Google Research with a background in nanoscience and bioengineering, has developed an AI system able to diagnose diabetic retinopathy – a leading cause of vision loss among diabetics. In developing countries, where ophthalmologists are particularly few and far between, such technology would be life-changing. Peng has also researched the use of such technology in diagnosing breast cancer: machines studying mammograms would be able to highlight areas where they suspect cancer, allowing doctors to make faster diagnoses and take immediate action regarding treatment.

Machine learning is being applied ever more widely. Consider, for example, that Google has used machine learning to automatically build captions for more than one billion YouTube videos – in 10 languages, no less. Or that a Japanese baby-food manufacturer is testing machine learning to visually inspect diced vegetables for discolouration and other warning signs. Meanwhile, in New Zealand, Victor Anton, a doctoral researcher at Victoria University, has tried using machine learning to identify native bird calls. Another intriguing use of machine learning comes from Storyfit, which is using AI to examine movie scripts to identify gender bias, predict content marketability, improve discovery, and drive sales for publishers and studios.

Despite the progress made in machine learning and AI in recent years, and the plentiful benefits the technology presents, many experts argue that these systems are still inferior to humans at tasks such as interacting with the physical world and perceiving natural signals.

To be even somewhat intelligent, machines need to mimic the ways that humans learn and understand – a process that begins organically at birth. Much of the time, humans learn without supervision or outside intervention. Babies, for instance, learn to navigate the world by absorbing the abundant information to which they are exposed, processing it continuously and learning along the way. It is a natural process that requires no training – it simply happens. For machines, the situation is different: they learn from the top down, rather than from the bottom up as humans do.
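By way of contrast, the "bottom-up", unsupervised style of learning can be sketched in a few lines of code. In the following illustrative Python example (the data points and the choice of two clusters are invented for this sketch, not drawn from any system named in this article), a clustering algorithm is handed raw, unlabelled points and discovers groupings on its own – loosely analogous to a learner organising experience without a teacher.

```python
# A minimal sketch of unsupervised learning: k-means clustering in pure
# Python. No labels are provided; the algorithm finds structure by itself.
# The data points and the choice of k=2 are illustrative assumptions.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def kmeans(points, k, iterations=20):
    # Start from k randomly chosen points as the initial cluster centres.
    centres = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centre.
        clusters = [[] for _ in range(k)]
        for x, y in points:
            nearest = min(range(k),
                          key=lambda i: (x - centres[i][0]) ** 2 +
                                        (y - centres[i][1]) ** 2)
            clusters[nearest].append((x, y))
        # Update step: move each centre to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centres[i] = (sum(p[0] for p in cluster) / len(cluster),
                              sum(p[1] for p in cluster) / len(cluster))
    return centres, clusters

# Two loose groups of 2-D points; the algorithm is never told which is which.
data = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9),
        (5.0, 5.1), (5.2, 4.8), (4.9, 5.3)]
centres, clusters = kmeans(data, k=2)
print("discovered centres:", centres)
```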

Igal Raichelgauz, founder of Israeli company Cortica, which relies heavily on AI technology for autonomous platforms as part of its business offering, says that AI systems are simply powerful computing machines with misleading titles. Their top-down approach to learning prevents them from doing anything on their own, he says. In a top-down approach, the system first undergoes training: its algorithm develops by observing vast numbers of labelled data sets until it can successfully extrapolate knowledge for itself. Deep-learning machines use layered algorithms to process data at many levels of abstraction. Raichelgauz argues that such a reliance on training makes these machines complex, but not intelligent. For AI to reach human-level intelligence, it must excel at the same fundamental tasks that humans mastered thousands of years ago, such as visual understanding and intelligently navigating the physical world. Only once AI has been developed to mimic human processes, Raichelgauz says, might we see it surpass human intelligence.
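To see what "training on labelled data" means in practice, here is a minimal, self-contained sketch of the top-down approach described above. It is not Cortica's or Google's actual system: the network size, learning rate, and toy XOR dataset are all illustrative assumptions. The two layers each form a level of abstraction, and the network learns only because a human has supplied the correct answer for every example.

```python
# A toy "top-down" learner: a two-layer neural network trained on labelled
# examples by gradient descent. All hyperparameters are illustrative.
import math, random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Labelled training set (XOR): inputs paired with teacher-supplied answers.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

HIDDEN = 4
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
b2 = 0.0
lr = 0.5

for epoch in range(10000):
    for x, target in data:
        # Forward pass: two layers, two levels of abstraction.
        h = [sigmoid(sum(w1[i][j] * x[j] for j in range(2)) + b1[i])
             for i in range(HIDDEN)]
        out = sigmoid(sum(w2[i] * h[i] for i in range(HIDDEN)) + b2)
        # Backward pass: nudge every weight to shrink the error
        # between the network's output and the human-provided label.
        d_out = (out - target) * out * (1 - out)
        for i in range(HIDDEN):
            d_h = d_out * w2[i] * h[i] * (1 - h[i])
            w2[i] -= lr * d_out * h[i]
            for j in range(2):
                w1[i][j] -= lr * d_h * x[j]
            b1[i] -= lr * d_h
        b2 -= lr * d_out

# After training, the network can only answer questions of the kind
# its labelled data covered - the limitation Raichelgauz points to.
for x, target in data:
    h = [sigmoid(sum(w1[i][j] * x[j] for j in range(2)) + b1[i])
         for i in range(HIDDEN)]
    out = sigmoid(sum(w2[i] * h[i] for i in range(HIDDEN)) + b2)
    print(x, "->", round(out, 2), "(label:", target, ")")
```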

Like it or not, AI is here to stay, and its myriad applications have far more potential for good than for harm. As for those who fear that intelligent machines may rise up to destroy their creators, scientists from across the world are working on a range of philosophical rules and norms. Proposals in place call for a safety switch, or "big red button", which enables programmers to stop "bad" behaviour. The question at the forefront of the debate is: Who determines which behaviours are bad, and who gets to stop them?
