OpenSource For You

An Introduction to Deep (Machine) Learning

For years, humans have tried to get computers to replicate the thinking processes of the human brain. To a limited extent, this has become possible through deep learning and deep neural networks. This article provides an introduction to deep (machine) learning.


When surfing the Net or browsing social media, you must have wondered how pop-ups of things that interest you appear automatically. A lot happens behind the scenes: computations and algorithms run in the background to find and display items that match your interests, based on your search history. And this is where deep learning begins.

Deep learning is one of the hottest topics nowadays. A quick Google search shows how much is happening in this field and how quickly it is improving, as one can gauge from reports such as ‘Artificial intelligence: Google’s AlphaGo beats Go master Lee Se-dol’ at www.bbc.com.

In this article, we will look at the basics needed to understand such networks and at how to practically implement a three-layer network for deep learning.

Definition

It all started with machine learning, the process by which we humans try to train machines to learn as we do. Deep learning is one of the ways of moving machine learning closer to its original goal: artificial intelligence.

As we are dealing with computers here, the inputs are data such as images, sound and text. Typical problem statements include image recognition, speech recognition, and so on. We will focus on the image recognition problem here.

History

When humans invented computers, scientists started working on machine learning by defining the properties of objects. For instance, the image of a cup was defined as cylindrical and semi-circular objects placed close to each other. But in this universe there are many objects, and many of them have similar properties, so expertise was needed in each field to define the properties of the objects. This approach is clearly impractical, as its complexity would undoubtedly increase with an increase in the number of objects.

This triggered new ways of machine learning whereby machines became capable of learning by themselves, which in turn led to deep learning.

Architecture

This is a new area of research and there have been many architectures proposed till now. These are:

1. Deep neural networks

2. Deep belief networks

3. Convolutional neural networks

4. Convolutional deep belief networks

5. Large memory storage and retrieval (LAMSTAR) neural networks

6. Deep stacking networks

Deep neural networks (DNNs)

Let us now look at how deep neural networks work.

The word ‘neural’ in DNN comes from biology; at heart, these networks are inspired by the way the biological nervous system works. So, let’s take a brief look at how two biological neurons communicate.

There are three main parts in a biological neuron as shown in Figure 1.

1. Dendrite: This acts as input to the neuron from another neuron.

2. Axon: This passes information from one neuron to another.

3. Synaptic connection: This acts as a connection between two neurons. If the strength of the received signal is higher than some threshold, it activates another neuron.

Neuron types

Let us try to express human decisions and biological neural networks mathematically so that computers can comprehend them.

Let's suppose that you want to go from city A to city B. Prior to making the journey, there are three factors that will influence your travel decision. These are:

a. Whether the weather (x1) is good (represented by 1) or bad (represented by 0); this factor has a weight of w1

b. Whether your leave (x2) is approved (represented by 1) or not (represented by 0); this factor has a weight of w2

c. Whether transport (x3) is available (represented by 1) or not (represented by 0); this factor has a weight of w3

And you will decide as follows: irrespective of whether your leave is approved and whether transport is available, you will go if the weather is good. This problem statement can be drawn as shown in Figure 2.

According to the figure, if the sum of the products of the inputs (xi) and their respective weights (wi) is greater than some threshold (T), then you will go (1), else you will not (0).
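
In symbols, the decision rule described above (referred to as equation (1) below) can be written as follows; the exact notation used in Figure 2 may differ slightly:

Output = 1 (go) if x1w1 + x2w2 + x3w3 > T; otherwise, output = 0 (do not go)    ...(1)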

As your inputs and outputs are fixed, you have to choose the weights and the threshold so that equation (1) produces the correct decision.

For example, let us choose w1=6, w2=2, w3=2 and T=5. With these values, equation (1) gives the correct decision: if your leave is not approved (0) and transport is not available (0) but the weather is good (1), the weighted sum is 6, which is greater than 5, so you go.

Similarly, you can check the other conditions as well, as the short sketch below does.
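
The example weights and threshold can be verified with a few lines of Python. This is only a minimal sketch based on the description above; the function name decide() is chosen here purely for illustration:

# A simple threshold neuron using the example values w1=6, w2=2, w3=2, T=5.
def decide(x1, x2, x3, w1=6, w2=2, w3=2, threshold=5):
    # Return 1 (go) if the weighted sum of the inputs exceeds the threshold, else 0 (do not go).
    weighted_sum = x1 * w1 + x2 * w2 + x3 * w3
    return 1 if weighted_sum > threshold else 0

# Check every combination of weather (x1), leave (x2) and transport (x3).
for weather in (0, 1):
    for leave in (0, 1):
        for transport in (0, 1):
            print(weather, leave, transport, '->', decide(weather, leave, transport))

Running this prints 1 for every combination in which the weather is good and 0 otherwise, which matches the decision stated earlier.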

It is easy to manually calculate these weights and thresholds for small decision-making problems, but as complexity increases we need to find other ways, and this is where mathematics and algorithms come in.

f(y), the function represented in Figure 3, produces an output of either 0 or 1. With such a function, a small change in the input near the threshold produces a large change in the output; this behaviour is represented by a step function and can cause problems in many cases. Hence, we need to define a function f(y) such that small changes in the input produce only small changes in the output; this behaviour is represented by the sigmoid function.

Depending upon these two behaviours, two neuron types are defined, as shown in Figure 3.
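
The difference between the two neuron types can be seen from a small Python sketch; the threshold of 0 and the sample inputs below are arbitrary choices for illustration:

import math

# Step (perceptron-style) activation: the output jumps from 0 to 1 at the threshold.
def step(y, threshold=0.0):
    return 1 if y > threshold else 0

# Sigmoid activation: the output changes smoothly between 0 and 1.
def sigmoid(y):
    return 1.0 / (1.0 + math.exp(-y))

# Compare the two activations for inputs just below and just above the threshold.
for y in (-0.1, -0.01, 0.01, 0.1):
    print(y, step(y), round(sigmoid(y), 3))

A small change in y around 0 flips the step output from 0 to 1, while the sigmoid output changes only slightly, which is exactly the property described above.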

Figure 1: Biological neural network (Source: https://en.wikipedia.org/wiki/Biological_neural_network)

Figure 2: Simple human decision in mathematical form

Figure 3: Neuron types
