What’s the Difference Between AI, Machine Learning and Deep Learning?

AI, deep learning and machine learning are often mistaken for each other. Take a look at how they differ in this article.

By Swapneel Mehta. The author has worked at Microsoft Research, CERN and at startups in AI and cyber security. He is an open source enthusiast who enjoys spending time organising software development workshops for school and college students.

From being dismissed as science fiction to becoming an integral part of multiple, wildly popular movie series, especially the one starring Arnold Schwarzenegger, artificial intelligence has been a part of our lives for longer than we realise. The idea of machines that can think has widely been attributed to a British mathematician and WWII code-breaker, Alan Turing. In fact, the Turing Test, often used for benchmarking the ‘intelligence’ in artificial intelligence, is an interesting process in which an AI has to convince a human, through a conversation, that it is not a robot. A number of other tests have been developed to verify how evolved an AI is, including Goertzel’s Coffee Test and Nilsson’s Employment Test, which compare a robot’s performance with a human’s across different tasks.

As a field, AI has probably seen the most ups and downs over the past 50 years. On the one hand, it is hailed as the frontier of the next technological revolution, while on the other, it is viewed with fear, since it is believed to have the potential to surpass human intelligence and hence achieve world domination! However, most scientists agree that we are in the nascent stages of developing AI that is capable of such feats, and research continues unfettered by these fears.

Applications of AI

Back in the early days, the goal of researchers was to construct complex machines capable of exhibiting some semblance of human intelligence, a concept we now term ‘general intelligence’. While it has been a popular concept in movies and in science fiction, we are a long way from developing it for real.

Specialised applications of AI, however, allow us to use image classification and facial recognition, as well as smart personal assistants such as Siri and Alexa. These usually leverage multiple algorithms to provide this functionality to the end user, but may broadly be classified as AI.

Machine learning (ML)

Machine learning is a subset of practices commonly aggregated under AI techniques. The term was originally used to describe the process of leveraging algorithms to parse data, build models that could learn from it, and ultimately make predictions using these learnt parameters. It encompassed various strategies including decision trees, clustering, regression and Bayesian approaches that didn’t quite achieve the ultimate goal of ‘general intelligence’.
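To make that ‘parse data, build a model, predict’ loop concrete, here is a minimal sketch using scikit-learn’s decision tree, one of the classical techniques mentioned above. The tiny dataset and its column meanings are invented purely for illustration.

from sklearn.tree import DecisionTreeClassifier

# Each observation is [hours_studied, hours_slept]; the label is 1 for pass, 0 for fail
# (a made-up toy dataset, used only to show the fit/predict workflow).
X = [[8, 7], [1, 4], [6, 8], [2, 3], [7, 6], [0, 5]]
y = [1, 0, 1, 0, 1, 0]

model = DecisionTreeClassifier()   # build a model...
model.fit(X, y)                    # ...learn its parameters from the data...
print(model.predict([[5, 7]]))     # ...and predict the label of a new observation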

While it began as a small part of AI, burgeoning interest has propelled ML to the forefront of research, and it is now used across domains. Growing hardware support, as well as improvements in algorithms, especially for pattern recognition, has made ML accessible to a much larger audience, leading to wider adoption.

Applications of ML

Initially, the primary applications of ML were limited to the fields of computer vision and pattern recognition. This was prior to the stellar success and accuracy it enjoys today. Back then, ML seemed a pretty tame field, with its scope limited to education and academics.

Today, we use ML without even being aware of how dependent we are on it for our daily activities. From Google’s search team trying to replace the PageRank algorithm with an improved ML algorithm named RankBrain, to Facebook automatically suggesting friends to tag in a picture, we are surrounded by use cases for ML algorithms.

Deep learning (DL)

A key ML approach that remained dormant for a few decades was artificial neural networks. This eventually gained wide acceptance when improved processing capabilities became available. A neural network simulates the activity of a brain’s neurons in a layered fashion, and the propagation of data occurs in a similar manner, enabling machines to learn more about a given set of observations and make accurate predictions.
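As a rough illustration of that layered propagation, the sketch below pushes a single observation through two layers using plain NumPy. The weights are random here; in a real network they would be learnt from the observations.

import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, bias):
    # every layer weighs its inputs, adds a bias and applies a non-linearity
    return np.tanh(inputs @ weights + bias)

x = rng.normal(size=(1, 4))                                          # one observation with 4 features
hidden = layer(x, rng.normal(size=(4, 8)), rng.normal(size=8))       # hidden layer of 8 neurons
output = layer(hidden, rng.normal(size=(8, 1)), rng.normal(size=1))  # single output neuron
print(output)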

These neural networks, which had until recently been ignored save for a few researchers led by Geoffrey Hinton, have today demonstrated exceptional potential for handling large volumes of data and enhancing the practical applications of machine learning. The accuracy of these models allows reliable services to be offered to end users, since false positives have been almost entirely eliminated.

Applications of DL

DL has large scale business applications because of its capacity to learn from millions of observations at once. Although computationally intensive, it is still the preferred alternative because of its unparalleled accuracy. This encompasses a number of image recognition applications that conventionally relied on computer vision practices until the emergence of DL. Autonomous vehicles and recommendation systems (such as those used by Netflix and Amazon) are among the most popular applications of DL algorithms.

Comparing AI, ML and DL

Comparing the techniques: The term AI was defined at the Dartmouth Conference (1956) as follows: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” It is a broad definition that covers use cases ranging from a game-playing bot to a voice recognition system such as Siri, as well as converting text to speech and vice versa. AI is conventionally thought to have three categories:

Narrow AI, specialised for a specific task

Artificial general intelligence (AGI), which can simulate human thinking

Super-intelligent AI, which implies a point where AI surpasses human intelligence entirely

ML is a subset of AI that seems to represent its most successful business use cases. It entails learning from data in order to make informed decisions at a later point, and enables AI to be applied to a broad spectrum of problems. ML allows systems to make their own decisions following a learning process that trains the system towards a goal. A number of tools have emerged that give a wider audience access to the power of ML algorithms, including Python libraries such as scikit-learn, frameworks such as MLlib for Apache Spark, software such as RapidMiner, and so on.
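With scikit-learn, one of the Python libraries named above, that ‘train towards a goal, then decide’ process typically looks like the sketch below; the bundled iris dataset and logistic regression are chosen only for illustration.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Load a small bundled dataset and hold back a quarter of it for evaluation
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=200)
clf.fit(X_train, y_train)             # the learning process that trains the system
print(clf.score(X_test, y_test))      # how well its decisions hold up on unseen data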

A further sub-division and subset of AI is DL, which harnesses the power of deep neural networks to train models on large data sets and make accurate predictions in the fields of image, face and voice recognition, among others. The favourable trade-off between training time and error rate makes it a lucrative option for many businesses to switch their core practices to DL or to integrate these algorithms into their systems.
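As a hedged sketch of what such a deep network might look like in code (assuming TensorFlow/Keras is installed; the layer sizes and the single training epoch are arbitrary choices for illustration), a small image recognition model on the MNIST digit dataset could be built as follows.

import tensorflow as tf

# Load the MNIST handwritten-digit images and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Stack layers: flatten each 28x28 image, pass it through a hidden layer,
# and output one probability per digit class
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)      # training is the computationally intensive step
print(model.evaluate(x_test, y_test))      # accuracy on images the model has never seen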

Classifying applications: The boundaries that distinguish the applications of AI, ML and DL are very fuzzy. However, since each has a demarcated scope, it is possible to identify which subset a specific application belongs to. Usually, we classify personal assistants and other bots that aid with specialised tasks, such as playing games, as AI, owing to their broader nature. These include search capabilities, filtering and short-listing, voice recognition and text-to-speech conversion bundled into a single agent.

Practices that fall into a narrower category, such as those involving Big Data analytics, data mining, pattern recognition and the like, are placed under the spectrum of ML algorithms. Typically, these involve systems that ‘learn’ from data and apply that learning to a specialised task.

Finally, applications in a niche category, where a large corpus of text or image data is used to train a model on graphics processing units (GPUs), involve the use of DL algorithms. These often include specialised image and video recognition tasks applied to broader uses such as autonomous driving and navigation.

Figure 1: Conventional understanding of AI [Image credit: Geeky-Gadgets]

Figure 2: Machine learning workflow [Image credit: TomBone’s Computer Vision Blog]

Figure 3: Artificial neural networks [Image credit: Shutterstock]

Figure 4: A comparison of techniques [Image credit: NVIDIA]

Figure 5: Applications of artificial intelligence [Image credit: The Telegraph, UK]

Figure 6: Deep learning for identifying dogs [Image credit: Datamation]
