Deep learning


The ANN has a layered approach. It has one input layer and one output layer, and there may be one or more hidden layers. When the constructed network becomes deep (with a higher number of hidden layers), it is called a deep network or deep learning network. The popularity of deep learning can be understood from the fact that leading IT companies like Google, Facebook, Microsoft and Baidu have invested in it in domains like speech, image and behaviour modelling.

Deep learning has various characteristics, as listed below:

- In the traditional approach, the features for machine learning or shallow learning have to be identified by humans, whereas in deep learning the features are not human constructed.
- Deep learning involves an ANN with a greater number of layers.
- Deep learning handles end-to-end compositional models.
- A hierarchy of representations is handled for different kinds of data. For example, with speech, the hierarchical representation is: Audio -> Band -> Phone -> Word. For vision, it is: Pixel -> Motif -> Part -> Object.

There are plenty of resources available on the Web to understand deep learning; however, this abundance can itself become a problem of plenty. The lecture delivered by Dr Andrew Ng at the GPU Technology Conference 2015 is one of the best to start with, in order to get a clear understanding of deep learning and its potential applications (http://www.ustream.tv/recorded/60113824).
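To make the layered structure described above concrete, here is a minimal sketch of a deep feed-forward network in Python. The use of NumPy, the layer sizes, and the sigmoid activation are illustrative assumptions, not taken from the article; the point is simply that stacking several hidden layers between the input and output layers is what makes the network "deep".

    # Minimal sketch of a deep feed-forward ANN: one input layer, several
    # hidden layers and one output layer. All sizes are illustrative.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Layer sizes: 4 inputs -> three hidden layers of 8 units -> 2 outputs
    layer_sizes = [4, 8, 8, 8, 2]

    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((m, n)) * 0.1
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(x):
        """Propagate an input vector through every layer of the network."""
        activation = x
        for w, b in zip(weights, biases):
            activation = sigmoid(activation @ w + b)
        return activation

    sample = rng.standard_normal(4)   # a dummy input vector
    print(forward(sample))            # output of the final layer

In a real application the weights would of course be learned from data (for example by backpropagation) rather than left at random values, and the learned hidden layers are what provide the automatically discovered features that the article contrasts with hand-crafted features in shallow learning.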
