The Story Behind the All-pervasive AI

The invention of the digital computer has been one of the defining moments of the modern era. It all started with building a machine that could follow commands to the letter. We have come a long way since then. This article talks about a few of the technological milestones along the way.

OpenSource For You

In 1950, Alan Turing posed the question, ‘Can machines think?’ in his paper titled ‘Computing Machinery and Intelligence’, and the world has never been the same since. The general consensus is that this was the first step into the world of artificial intelligence (AI). It was in this paper that Turing proposed his now-famous Turing Test, also known as the Imitation Game (there is now a popular movie by that title). The term ‘artificial intelligence’, however, was yet to be coined and widely used.

Time rolled by, and the language called C was invented at Bell Labs between 1969 and 1973. This led to a new kind of revolution: we could now give machines a step-by-step list of instructions, which they would faithfully carry out. This was also the period during which the Internet was born and nurtured. These events led the programming profession to evolve into what it is today.

The task of a programmer is to understand a real-world situation, define the inputs to a program (along with the program itself) and then write that program out in some programming language. As long as you can write down a list of instructions in the sequence in which tasks need to be performed, a computer can follow those instructions. Years before C, in the mid-1950s, John McCarthy had coined the term ‘artificial intelligence’ and gone on to create the Lisp language, a different kind of programming language altogether. Readers who have the time could read up more about this language.
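The step-by-step style of programming described above can be sketched in a few lines. This toy example (not from the article, and in Python rather than C for brevity) spells out every action in order, and the machine follows each one literally:

```python
# Conventional programming: the human writes the exact steps,
# and the computer executes them faithfully, in sequence.

def average(numbers):
    total = 0                         # step 1: start a running total
    for n in numbers:                 # step 2: add each number in turn
        total += n
    return total / len(numbers)       # step 3: divide by the count

result = average([2, 4, 6, 8])        # -> 5.0
```

The machine has no notion of what an ‘average’ is; it simply carries out the listed instructions to the letter.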

Soon, people began investigating the kinds of problems that programming could solve. Problems that needed intelligent decisions to arrive at a solution came to be known as AI problems. The field grew, incorporating functions like search, planning, pattern recognition, classification, causal inference and so on. It was thus that AI came to be a field of study, whose implementation on digital computers would accomplish tasks that were considered ‘intelligent’. The evolution of this branch of technology was amazing; it was programmability and reliability that put humans into space (apart from other tech developments that played major roles).

The problem arose when the task to be accomplished by a machine was something humans did, but which did not seem to follow a step-by-step procedure. Take sight, for example: a normal human being is able to detect objects, identify them and locate them, all without ever being ‘taught’ how to ‘see’. This challenge of getting machines to see has evolved into the field of computer vision. Tasks whose steps to completion were not obvious were difficult for machines to perform, because of the nature of programming: in languages like C, programmers needed to break a task down into a series of sequential steps before they could write instructions for the computer to follow. Machine learning (ML) was a big break from conventional programming. You no longer needed to know the steps to solve a problem; all you needed were examples of the task being done, and the system would develop its own steps. This was amazing! As long as you could find the right inputs to feed to the system, it would discover a way to produce the target outputs.
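A minimal sketch of that shift, using the classic perceptron (all data and names here are illustrative). Instead of writing a rule for ‘is this point above the line y = x?’, we only supply labelled examples, and the learning rule finds its own weights:

```python
# Machine learning in miniature: no explicit rule is programmed.
# The perceptron adjusts its weights from labelled examples alone.

# labelled examples: (x, y) -> 1 if the point lies above y = x, else 0
examples = [((0.0, 1.0), 1), ((1.0, 2.0), 1), ((2.0, 3.5), 1),
            ((1.0, 0.0), 0), ((2.0, 1.0), 0), ((3.5, 2.0), 0)]

w = [0.0, 0.0]   # weights, to be learned from the data
b = 0.0          # bias term

def predict(point):
    s = w[0] * point[0] + w[1] * point[1] + b
    return 1 if s > 0 else 0

# classic perceptron update: nudge the weights whenever a
# training example is misclassified
for _ in range(20):
    for (x, y), label in examples:
        error = label - predict((x, y))
        w[0] += 0.1 * error * x
        w[1] += 0.1 * error * y
        b += 0.1 * error

accuracy = sum(predict(p) == t for p, t in examples) / len(examples)
```

The ‘program’ that separates the two classes is never written by a human; it emerges as the learned weights, exactly the break from conventional programming the paragraph describes.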

This model was applied in many places over time. Rule-based systems became popular in medical applications. Bayes’ Theorem dominated the Internet sales business. Support Vector Machines were beautiful constructs that worked like magic, and Hidden Markov Models proved very useful for stock markets and other time-series problems. All in all, the world benefited from these developments.
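Bayes’ Theorem, mentioned above, updates a prior belief with evidence: P(A|B) = P(B|A)·P(A)/P(B). A minimal sketch in the Internet-sales setting the paragraph alludes to (all the probabilities below are made-up numbers for illustration):

```python
# Bayes' theorem: P(buy | click) = P(click | buy) * P(buy) / P(click)
# All figures here are hypothetical, purely for illustration.

p_buy = 0.02                 # prior: 2% of visitors buy
p_click_given_buy = 0.90     # buyers almost always clicked the ad
p_click_given_not = 0.10     # non-buyers click 10% of the time

# law of total probability: overall chance of a click
p_click = (p_click_given_buy * p_buy
           + p_click_given_not * (1 - p_buy))

# posterior: belief that a visitor who clicked will buy
p_buy_given_click = p_click_given_buy * p_buy / p_click
```

A click lifts the estimate from a 2 per cent prior to roughly 15 per cent, which is why this simple rule was so valuable for deciding whom to target.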

There was still a problem though: the advances were few and far between. Take, for example, the task of recognising objects in images. Before ML, all efforts went into developing a better sequence of steps for recognising objects. When AlexNet burst onto the scene in 2012, the best-performing algorithms had top-5 error rates of approximately 26 per cent; AlexNet brought this down to almost 16 per cent, a major leap forward. Object recognition in images has since reached superhuman performance levels. What changed was that instead of asking a computer to follow a given list of steps (a program), the computer was asked to find its own steps (a neural network, an SVM or a random forest is, in a sense, a program) after being shown many examples of what it was supposed to do. The remaining problem was finding the right set of inputs.

Continuing our discussion of the image recognition task: people were feeding a bunch of features to classifiers like SVMs and logistic regression models, but these features were generated by humans and were not good enough for the task at hand. Feature extractors like SIFT, HOG and Canny edges were developed to work around this, but even these were not good enough. What AlexNet introduced was the ability to learn the correct representations for a task from the most basic input available, namely, the pixels. This was deep learning: the ability to build representations and use them to build other representations. Deep learning is not limited to neural networks, as many people believe; deep SVMs (the arc-cosine kernel) have been developed, along with deep random forests (gcForest). For any task in which you want to employ deep learning, first ask yourself whether there is a low-level input you can provide. For language-based tasks, it is words; for images, it is pixels; for audio, it is the raw signal; and so on.
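To make the hand-crafted-feature idea concrete: both HOG and Canny start from image gradients, a recipe a human designed. This toy (pure Python, illustrative only) computes gradient magnitude on a tiny ‘image’; a deep network would instead learn what to extract from the raw pixels:

```python
# A hand-crafted feature of the kind SIFT/HOG/Canny build on:
# gradient magnitude, computed by a fixed, human-designed rule.

# tiny 5x3 'image': a dark left half and a bright right half
image = [
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
    [0, 0, 0, 9, 9],
]

def gradient_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            gx = img[i][j + 1] - img[i][j - 1]   # horizontal change
            gy = img[i + 1][j] - img[i - 1][j]   # vertical change
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

edges = gradient_magnitude(image)
# the vertical boundary between the two halves shows up as
# large values in the middle columns of `edges`
```

The recipe is fixed in advance, no matter what the task is; the point of deep learning is precisely that such representations are learned from pixels for the task at hand instead of being hand-designed.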

There are still many misconceptions about these fields in the public mind, especially because of misinterpretations by the popular media. One of the main reasons is that reporters typically either do not read the technical papers they report on, or fail to understand them completely. This leads to unfortunate rumours like Facebook’s AI scare (the ‘Deal or No Deal’ paper). Moreover, we only hear of the major breakthroughs from the news channels, and not of the slow build-up towards them. Sometimes, the people claiming to be ‘experts’ on the subject and brought in to discuss issues on news channels have not kept up with developments in the field, and they cover up their outdated knowledge with useless romanticisation of the things being discussed. This further hampers the public’s ability to grasp the true developments in the field of AI.

Most professionals in the AI field have started out by working with the various tools even before they have had a chance to understand and learn the algorithms behind AI or ML. This has led to the learning of misleading concepts and, in many cases, outright wrong practices. The Indian industry suffers from a classic case of resume-building in this field: people who have used the tools once or twice claim to be proficient, whereas it takes much more than that to master even the basics. There is no doubt about the advantages AI, ML and DL bring to the table. What is in doubt is our ability to train people who can use these technologies well.

As of this writing, the best place to start is Coursera’s AI/ML courses; most, if not all, of the content there is world class. If that does not slake your thirst, MIT OpenCourseWare on YouTube is also a wonderful place to learn. Then there is our very own NPTEL, available on YouTube, which offers courses on AI. All things considered, if one gets the opportunity to learn about AI from the people who invented it, one must grab it with both hands.
