OpenSource For You

Understanding How a Neural Network Works Using R

In the simplest of terms, a neural network is a computer system modelled on the human nervous system, and it is widely used in machine learning. R is an open source programming language that is mostly used by statisticians and data miners.

- By: Dipankar Ray. The author is a member of IEEE and IET, with more than 20 years of experience in open source versions of UNIX operating systems and Sun Solaris. He is presently working on data analysis and machine learning using neural networks.

The neural network is the most widely used machine learning technology available today. Its algorithm mimics the functioning of the human brain to train a computational network to identify the inherent patterns of the data under investigation. There are several variations of this computational network for processing data, but the most common is the feedforward-backpropagation configuration. Many tools are available for its implementation, but most of them are expensive and proprietary. There are at least 30 different open source neural network packages available, and among them R, with its rich set of neural network packages, is well ahead.

R provides this machine learning environment on a strong programming platform, which not only supplies the supporting computational paradigm but also offers enormous flexibility in related data processing. The open source version of R and the supporting neural network packages are very easy to install and comparatively simple to learn. In this article, I will demonstrate machine learning with a neural network that solves quadratic equation problems. I have chosen a simple problem as the example, to help you learn machine learning concepts and understand the training procedure of a neural network. Machine learning is widely used in many areas, ranging from the diagnosis of diseases to weather forecasting. You can also experiment with any novel example that you feel would be interesting to solve using a neural network.

Quadratic equations

The example I have chosen shows how to train a neural network model to solve a set of quadratic equations. The general form of a quadratic equation is ax² + bx + c = 0, and its roots are given by x = (−b ± √(b² − 4ac)) / (2a). At the outset, let us consider several sets of coefficients a, b and c and calculate the corresponding roots r1 and r2.

The coefficients are preprocessed to eliminate degenerate (linear) equations, where a = 0, and equations with a negative discriminant, i.e., where b² − 4ac < 0, since these have no real roots. A neural network is then trained with the remaining data sets. The coefficients and roots are numerical vectors, and they are converted to data frames for further operations.

The example training data set consists of three coefficient vectors with 10 values each:

aa <- c(1, 1, -3, 1, -5, 2, 2, 2, 1, 1)
bb <- c(5, -3, -1, 10, 7, -7, 1, 1, -4, -25)
cc <- c(6, -10, -1, 24, 9, 3, -4, 4, -21, 156)

Data preprocessing

To discard equations with zero as the coefficient of x², use the following code:

k <- which(aa != 0)   # keep only equations with a non-zero x² coefficient
aa <- aa[k]
bb <- bb[k]
cc <- cc[k]

To accept only those coefficients for which the discriminant is zero or more, use the code given below:

disc <- (bb*bb - 4*aa*cc)
k <- which(disc >= 0)
aa <- aa[k]
bb <- bb[k]
cc <- cc[k]

# a, b, c vectors are converted to data frames
a <- as.data.frame(aa)
b <- as.data.frame(bb)
c <- as.data.frame(cc)

Calculate the roots of the valid equations using the conventional formula, for training and for verification of the machine's results at a later stage.

r1 <- (-b + sqrt(b*b - 4*a*c))/(2*a)   # r1 and r2 are the roots of each equation
r2 <- (-b - sqrt(b*b - 4*a*c))/(2*a)

After getting all the coefficients and roots of the equations, concatenate them column-wise to form the input-output data set of the neural network.

trainingdata <- cbind(a, b, c, r1, r2)
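Before training, it can be worth a quick sanity check on the assembled data frame. A minimal sketch using base R, assuming the objects built above (the column names are tidied up in a later step):

# Quick look at the training data
dim(trainingdata)    # number of valid equations x 5 columns
head(trainingdata)   # three coefficients followed by the two computed roots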

Since this is a simple problem, the network is configured with three nodes in the input layer, one hidden layer with seven nodes, and a two-node output layer.

The R function neuralnet() requires the input-output data in a proper format. The formula specification is somewhat tricky and requires attention. The left-hand side of the formula consists of the two roots and the right-hand side includes the three coefficients a, b and c; the inclusion of each variable is represented by a + sign.

colnames(trainingdata) <- c("a", "b", "c", "r1", "r2")

library(neuralnet)   # provides the neuralnet() and compute() functions used below

net.quadroot <- neuralnet(r1 + r2 ~ a + b + c, trainingdata, hidden=7, threshold=0.0001)

An arbitrary performance threshold of 0.0001 (10⁻⁴) is taken here, and this can be adjusted as per requirements.
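If training does not converge within the default number of steps, the threshold, the maximum number of steps and the hidden-layer layout can all be adjusted through neuralnet() arguments. A minimal sketch, assuming the same trainingdata; the net.alt name, the two-layer hidden = c(5, 3) layout and the stepmax value are illustrative choices, not the settings used in this article:

# Illustrative alternative settings for experimentation
net.alt <- neuralnet(r1 + r2 ~ a + b + c, trainingdata,
                     hidden = c(5, 3),    # two hidden layers with 5 and 3 nodes
                     threshold = 0.001,   # looser stopping threshold
                     stepmax = 2e5)       # allow more training steps before giving up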

The configuration of the just-constructed model, along with its training data, can be visualised with the plot() function.

# Plot the neural network
plot(net.quadroot)

Now it is time to verify the neural net with a set of unknown data. Arbitrarily take a few sets of values of the three coefficients for the corresponding quadratic equations, and arrange them as data frames as shown below.

x1 <- c(1, 1, 1, -2, 1, 2)
x2 <- c(5, 4, -2, -1, 9, 1)
x3 <- c(6, 5, -8, -2, 22, -3)

Since there is no zero coefficient of x² in these test sets, we only need to keep those coefficients for which the discriminant is zero or more.

disc <- (x2*x2 - 4*x1*x3)
k <- which(disc >= 0)
x1 <- x1[k]
x2 <- x2[k]
x3 <- x3[k]
y1 <- as.data.frame(x1)
y2 <- as.data.frame(x2)
y3 <- as.data.frame(x3)

The values are then fed to the just-configured neural model net.quadroot to predict their roots. The predicted roots are collected in net.results$net.result and can be displayed with the print() function.

testdata <- cbind(y1, y2, y3)
net.results <- compute(net.quadroot, testdata)

# Let's see the results
print(net.results$net.result)

Now, how does one verify the results? To do this, let us compute the roots using the conventional root calculation formula, and verify the results by comparing the predicted values with them.

Calculate the roots and concatenat­e them into a data frame.

calr1 <- (-y2 + sqrt(y2*y2 - 4*y1*y3))/(2*y1)
calr2 <- (-y2 - sqrt(y2*y2 - 4*y1*y3))/(2*y1)

# Calculated roots using the formula
r <- cbind(calr1, calr2)

Then combine the test data, its calculated roots and the predicted roots into a data frame for a clean tabular display for verification (Table 1).

# Combine inputs, expected roots and predicted roots
comboutput <- cbind(testdata, r, net.results$net.result)

# Put some appropriate column headings
colnames(comboutput) <- c("a", "b", "c", "r1", "r2", "pre-r1", "pre-r2")
print(comboutput)
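If you prefer a numerical check in addition to the visual comparison, a quick sketch along the following lines, assuming the r and net.results objects created above, shows how far the predictions stray from the calculated roots:

# Largest absolute difference between predicted and calculated roots
pred <- net.results$net.result
err <- abs(as.matrix(r) - pred)
max(err)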

It is clear from the above output that our neural network has learnt appropriately and produces almost correct results. You may need to run the neural network several times with the given parameters to achieve a good result. But if you are lucky, you may get the right result in the first attempt itself!
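Because the network weights are initialised randomly, each training run can settle on a slightly different solution. One way to make a run repeatable is to fix the random seed before training. A minimal sketch, assuming the same trainingdata; the seed value is an arbitrary choice:

# Fix the random seed so the training run is repeatable
set.seed(42)
net.quadroot <- neuralnet(r1 + r2 ~ a + b + c, trainingdata, hidden=7, threshold=0.0001)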

Figure 1: Neural network model of the quadratic equation solver (inputs a, b and c; one hidden layer H1 to H7; outputs r1 and r2)
