An Introduction to Deep (Machine) Learning

For years, humans have tried to get computers to replicate the thinking processes of the human brain. To a limited extent, this has become possible using deep learning and deep neural networks. This article provides an introduction to deep learning and machine learning.

OpenSource For You

When surfing the Net or browsing social media, you must have wondered how pop-ups of things that interest you appear automatically. There is, in fact, a lot happening behind the scenes: computations and algorithms run in the background to find and display things that interest you, based on your search history. And this is where deep learning comes in.

Deep learning is one of the hottest topics nowadays. A quick Google search shows how much is happening in this field, and it is getting better every day, as one can gauge from headlines such as ‘Artificial intelligence: Google’s AlphaGo beats Go master Lee Se-dol’.

In this article, we will look at the basics needed to understand deep learning networks, and at how to practically implement a three-layer network.


It all started with machine learning – a process by which we humans wanted to train machines to learn as we do. Deep learning is one way of moving machine learning closer to its original goal – artificial intelligence.

As we are dealing with computers here, the inputs are data such as images, sound and text. Problem statements include image recognition, speech recognition, and so on. We will focus on the image recognition problem here.


When humans invented computers, scientists started working on machine learning by defining the properties of objects. For instance, the image of a cup was defined as a cylindrical object and a semi-circular object placed close to each other. But the universe contains a vast number of objects, many of them with similar properties, so expertise was needed in each field to define the properties of the objects. This approach does not scale: its complexity inevitably increases with the number of objects.

This triggered new approaches to machine learning, whereby machines became capable of learning by themselves, which in turn led to deep learning.


This is a new area of research, and many architectures have been proposed so far. These include:

1. Deep neural networks

2. Deep belief networks

3. Convolutional neural networks

4. Convolutional deep belief networks

5. Large memory storage and retrieval (LAMSTAR) neural networks

6. Deep stacking networks

Deep neural networks (DNNs)

Let us now look at how deep neural networks work.

The word ‘neural’ in DNN comes from biology. The soul of these networks is the way the biological nervous system works, so let’s take a brief look at how two biological neurons communicate.

There are three main parts in a biological neuron, as shown in Figure 1.

1. Dendrite: This receives input to the neuron from other neurons.

2. Axon: This passes information from one neuron to another.

3. Synaptic connection: This acts as the connection between two neurons. If the strength of the received signal is higher than some threshold, it activates the next neuron.

Neuron types

Let us try to express human decisions and biological neural networks mathematically, so that computers can work with them.

Let’s suppose that you want to go from city A to city B. Prior to making the journey, there are three factors that will influence your travel decision. These are:

a. Whether the weather (x1) is good (represented by 1) or bad (represented by 0), with a weight of w1

b. Whether your leave (x2) is approved (represented by 1) or not (represented by 0), with a weight of w2

c. Whether transport (x3) is available (represented by 1) or not (represented by 0), with a weight of w3

And you will decide as follows: irrespective of whether your leave is approved or transport is available, you will go if the weather is good. This problem statement can be drawn as shown in Figure 2.

According to the figure, if the sum of the products of the inputs (xi) and their respective weights (wi) is greater than some threshold (T), then you will go (1); otherwise, you will not (0). In other words:

output = 1 if w1x1 + w2x2 + w3x3 > T, else 0 …(1)

As your inputs and outputs are fixed, you have to choose weights and a threshold that satisfy the equation.

For example, let us choose w1=6, w2=2, w3=2 and T=5. With these values, equation (1) makes the correct decision: if your leave is not approved (0) and transport is not available (0) but the weather is good (1), the weighted sum is 6×1 + 2×0 + 2×0 = 6, which is greater than 5, so you go.

Similarly, you can check the other conditions as well.
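The worked example above can be sketched in a few lines of Python. The function name `decide` is mine, not from the article; the weights w1=6, w2=2, w3=2 and threshold T=5 are the values chosen in the text:

```python
def decide(x, w=(6, 2, 2), T=5):
    """x = (weather, leave, transport), each 0 or 1.

    Returns 1 (go) if the weighted sum of inputs exceeds the
    threshold T, else 0 (don't go) -- the rule in equation (1).
    """
    total = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if total > T else 0

# Good weather alone is enough to go, whatever the other inputs:
print(decide((1, 0, 0)))  # 1  (6 > 5)
# Leave approved and transport available, but bad weather:
print(decide((0, 1, 1)))  # 0  (4 <= 5)
```

Running through all eight input combinations confirms that only the cases with good weather (x1=1) produce the output 1, matching the decision rule stated above.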

It is easy to manually calculate these weights and thresholds for small decision-making problems, but as complexity increases we need to find other ways – and this is where mathematics and algorithms come in.

The function f(y) represented in Figure 3 produces output in terms of 0 and 1. With a step function, a small change in input near the threshold produces a large change in output, which can cause problems in many cases. Hence, we need to define a function f(y) such that small changes in the input produce small changes in the output – this is the Sigmoid function.

Based on these two behaviours, two neuron types are defined, as shown in Figure 3.
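The contrast between the two neuron types can be seen numerically. This is a minimal sketch using only Python's standard `math` module; the function names `step` and `sigmoid` are mine:

```python
import math

def step(y, T=0.0):
    """Step neuron: output jumps abruptly from 0 to 1 at the threshold."""
    return 1 if y > T else 0

def sigmoid(y):
    """Sigmoid neuron: output varies smoothly between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-y))

# A tiny change in input flips the step output completely...
print(step(-0.01), step(0.01))            # 0 1
# ...while the sigmoid output barely moves away from 0.5:
print(sigmoid(-0.01), sigmoid(0.01))      # both close to 0.5
```

This smoothness is what makes sigmoid neurons suitable for learning: small adjustments to the weights produce small, predictable changes in the output.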

Figure 1: Biological neural network (Source: wiki/Biological_neural_network)

Figure 2: Simple human decision in mathematical form

Figure 3: Neuron types
