Ten Popular Tools and Frameworks for Artificial Intelligence

This article highlights ten tools and frameworks that feature on the ‘hot list’ for artificial intelligence. A short description along with features and links is given for each tool or framework.

Open Source For You

Let’s go on an exciting journey, discovering exactly why the following tools and frameworks are ranked so high.

1) TensorFlow: An open source software library for machine intelligence

TensorFlow is an open source software library that was originally developed by researchers and engineers working on the Google Brain Team. TensorFlow is used for numerical computation with data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server or mobile device, with a single API.
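The data flow idea can be sketched in plain Python: nodes apply operations, and edges carry values between them. This is a hedged toy analogy of the concept, not TensorFlow's actual API.

```python
# A toy data flow graph: nodes are operations, edges carry values (tensors).
# Illustrative sketch of the concept only -- not TensorFlow code.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # function applied at this node
        self.inputs = inputs  # edges: upstream nodes feeding this one

    def evaluate(self):
        # Recursively evaluate upstream nodes, then apply this node's op.
        return self.op(*(n.evaluate() for n in self.inputs))

def constant(value):
    # A source node with no inputs.
    return Node(lambda: value)

# Build the graph for (2 + 3) * 4, then evaluate it.
a, b, c = constant(2.0), constant(3.0), constant(4.0)
add = Node(lambda x, y: x + y, a, b)
mul = Node(lambda x, y: x * y, add, c)

print(mul.evaluate())  # 20.0
```

In TensorFlow proper, the graph is built once and then executed efficiently on CPUs or GPUs; this sketch only mirrors the node-and-edge structure.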

TensorFlow provides multiple APIs. The lowest level API—TensorFlow Core—provides you with complete programming control. The higher-level APIs are built on top of TensorFlow Core and are typically easier to learn and use than TensorFlow Core. In addition, the higher-level APIs make repetitive tasks easier and more consistent between different users. A high-level API like tf.estimator helps you manage data sets, estimators, training and inference.

The central unit of data in TensorFlow is the tensor, which consists of a set of primitive values shaped into an array of any number of dimensions. A tensor’s rank is its number of dimensions.
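As an illustration of rank, the nesting depth of a plain Python list mirrors a tensor's number of dimensions. This is a hedged plain-Python sketch, not TensorFlow code.

```python
# Rank = a tensor's number of dimensions.
# Plain-Python sketch: nesting depth of a list mirrors tensor rank.

def rank(value):
    """Count nesting depth: a scalar has rank 0, a vector rank 1, and so on."""
    r = 0
    while isinstance(value, list):
        value = value[0]
        r += 1
    return r

print(rank(3.0))                       # 0: a scalar
print(rank([1.0, 2.0, 3.0]))           # 1: a vector with shape [3]
print(rank([[1.0, 2.0], [3.0, 4.0]]))  # 2: a matrix with shape [2, 2]
```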

A few Google applications using TensorFlow are listed below.

RankBrain: A large-scale deployment of deep neural nets for search ranking on www.google.com.

Inception image classification model: This is a baseline model, the result of ongoing research into highly accurate computer vision models, starting with the model that won the 2014 ImageNet image classification challenge.

SmartReply: A deep LSTM model to automatically generate email responses.

Massive multi-task networks for drug discovery: A deep neural network model for identifying promising drug candidates – built by Google in association with Stanford University.

On-device computer vision for OCR: An on-device computer vision model for optical character recognition to enable real-time translation.

Useful links

TensorFlow home: https://www.tensorflow.org

GitHub: https://github.com/tensorflow

Getting started: https://www.tensorflow.org/get_started/get_started

2) Apache SystemML: An optimal workplace for machine learning using Big Data

SystemML is the machine learning technology created at IBM. It ranks among the top-level projects at the Apache Software Foundation. It’s a flexible, scalable machine learning system.

Important characteristics:

1. Algorithm customisability via R-like and Python-like languages

2. Multiple execution modes, including Spark MLContext, Spark Batch, Hadoop Batch, Standalone and JMLC (Java Machine Learning Connector)

3. Automatic optimisation based on data and cluster characteristics to ensure both efficiency and scalability

SystemML is considered the SQL for machine learning. The latest version (1.0.0) of SystemML supports Java 8+, Scala 2.11+, Python 2.7/3.5+, Hadoop 2.6+ and Spark 2.1+.

It can be run on top of Apache Spark, where it automatically scales your data, line by line, determining whether your code should be run on the driver or an Apache Spark cluster. Future SystemML developments include additional deep learning with GPU capabilities, such as importing and running neural network architectures and pre-trained models for training.

Java Machine Learning Connector (JMLC) for SystemML

The Java Machine Learning Connector (JMLC) API is a programmatic interface for interacting with SystemML in an embedded fashion. The primary purpose of JMLC is that of a scoring API, whereby your scoring function is expressed using SystemML’s DML (Declarative Machine Learning) language. In addition to scoring, embedded SystemML can be used for tasks such as unsupervised learning (like clustering) in the context of a larger application running on a single machine.

Useful links

SystemML home: https://systemml.apache.org/

GitHub: https://github.com/apache/systemml

3) Caffe: A deep learning framework made with expression, speed and modularity in mind

The Caffe project was initiated by Yangqing Jia during the course of his Ph.D at UC Berkeley, and later developed further by Berkeley AI Research (BAIR) and community contributors. It mostly focuses on convolutional networks for computer vision applications. Caffe is a solid, popular choice for computer vision-related tasks, and you can download many successful models made by Caffe users from the Caffe Model Zoo (link below) for out-of-the-box use.

Caffe’s advantages

1) Expressive architecture encourages application and innovation. Models and optimisation are defined by configuration without hard coding. Users can switch between CPU and GPU by setting a single flag to train on a GPU machine, and then deploy to commodity clusters or mobile devices.

2) Extensible code fosters active development. In Caffe’s first year, it was forked by over 1,000 developers and had many significant changes contributed back.

3) Speed makes Caffe perfect for research experiments and industry deployment. Caffe can process over 60 million images per day with a single NVIDIA K40 GPU.

4) Community: Caffe already powers academic research projects, startup prototypes, and even large-scale industrial applications in vision, speech and multimedia.

Useful links

Caffe home: http://caffe.berkeleyvision.org/

GitHub: https://github.com/BVLC/caffe

Caffe user group: https://groups.google.com/forum/#!forum/caffe-users

Tutorial presentation of the framework and a full-day crash course: https://docs.google.com/presentation/d/1UeKXVgRvvxg9OUdh_UiC5G71UMscNPlvArsWER41PsU/edit#slide=id.p

Caffe Model Zoo: https://github.com/BVLC/caffe/wiki/ModelZoo

4) Apache Mahout: A distributed linear algebra framework and mathematically expressive Scala DSL

Mahout is designed to let mathematicians, statisticians and data scientists quickly implement their own algorithms. Apache Spark is the recommended out-of-the-box distributed back-end, and Mahout can be extended to other distributed back-ends. Its features include the following:

- A mathematically expressive Scala DSL
- Support for multiple distributed back-ends (including Apache Spark)
- Modular native solvers for CPU, GPU and CUDA acceleration

Apache Mahout currently implements collaborative filtering (CF), clustering and categorisation.

Features and applications

- Taste CF: Taste is an open source project for CF (collaborative filtering) started by Sean Owen on SourceForge and donated to Mahout in 2008
- Several MapReduce-enabled clustering implementations, including k-Means, fuzzy k-Means, Canopy, Dirichlet and Mean-Shift
- Distributed Naive Bayes and Complementary Naive Bayes classification implementations
- Distributed fitness function capabilities for evolutionary programming
- Matrix and vector libraries
- Examples of all the above algorithms
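The k-Means iteration that Mahout distributes over MapReduce can be sketched in miniature on a single machine. The following is a hedged plain-Python sketch with made-up data; Mahout runs a distributed version of this same assign-and-update loop.

```python
# Minimal single-machine k-Means sketch (illustration only; Mahout runs a
# distributed version of the same iteration over MapReduce/Spark).

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids

# Two obvious 1-D clusters, one around 1 and one around 10.
data = [0.9, 1.0, 1.1, 9.9, 10.0, 10.1]
print(kmeans(data, centroids=[0.0, 5.0]))  # roughly [1.0, 10.0]
```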

Useful links

Mahout home: http://mahout.apache.org/

GitHub: https://github.com/apache/mahout

An introduction to Mahout by Grant Ingersoll: https://www.ibm.com/developerworks/library/j-mahout/

5) OpenNN: An open source class library written in C++ to implement neural networks

OpenNN (Open Neural Networks Library) was formerly known as Flood and is based on the Ph.D thesis of R. Lopez, called ‘Neural Networks for Variational Problems in Engineering’, at the Technical University of Catalonia, 2008.

OpenNN implements data mining methods as a bundle of functions. These can be embedded in other software tools using an application programming interface (API) for the interaction between the software tool and the predictive analytics tasks.

The main advantage of OpenNN is its high performance. It is developed in C++ for better memory management and higher processing speed. It implements CPU parallelisation by means of OpenMP and GPU acceleration with CUDA.

The package comes with unit testing, many examples and extensive documentation. It provides an effective framework for the research and development of neural network algorithms and applications. Neural Designer is a professional predictive analytics tool that uses OpenNN, which means that the neural engine of Neural Designer has been built using OpenNN.

OpenNN has been designed to learn from both data sets and mathematical models.

Data sets:

- Function regression
- Pattern recognition
- Time series prediction

Mathematical models:

- Optimal control
- Optimal shape design

Data sets and mathematical models:

- Inverse problems
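Function regression, the first task in the list, can be illustrated with a tiny least-squares line fit in plain Python. This is a hedged sketch of the task itself, not OpenNN code; a neural network generalises this idea to nonlinear functions.

```python
# Function regression in miniature: fit y = a*x + b by least squares.
# Plain-Python sketch of the task OpenNN addresses with neural networks.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form least-squares estimates for slope and intercept.
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Noiseless data drawn from y = 2x + 1, so the fit should recover (2, 1).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]
print(fit_line(xs, ys))  # close to (2.0, 1.0)
```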

Useful links

OpenNN home: http://www.opennn.net/

OpenNN Artelnics GitHub: https://github.com/Artelnics/OpenNN

Neural Designer: https://neuraldesigner.com/

6) Torch: An open source machine learning library, a scientific computing framework, and a scripting language based on the Lua programming language

Torch provides a wide range of algorithms for deep machine learning. It uses the scripting language LuaJIT, and an underlying C/CUDA implementation. The core package of Torch is torch. It provides a flexible N-dimensional array or tensor, which supports basic routines for indexing, slicing, transposing, type-casting, resizing, sharing storage and cloning. The nn package is used for building neural networks.


Torch’s core features include the following:

- A powerful N-dimensional array
- Lots of routines for indexing, slicing and transposing
- An amazing interface to C, via LuaJIT
- Linear algebra routines
- Neural network and energy-based models
- Numeric optimisation routines
- Fast and efficient GPU support
- Embeddable, with ports to iOS and Android back-ends
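The indexing, slicing and transposing routines listed above behave much like NumPy's. The following is a rough Python analogy of what torch's tensor offers, not Torch/Lua code.

```python
import numpy as np

# NumPy analogy for torch's N-dimensional tensor routines
# (illustration only; Torch itself is scripted in Lua, not Python).

t = np.arange(6).reshape(2, 3)    # a rank-2 tensor with shape (2, 3)

row = t[0]           # indexing: first row -> [0, 1, 2]
col = t[:, 1]        # slicing: second column -> [1, 4]
tt = t.T             # transposing: shape becomes (3, 2)
flat = t.reshape(6)  # resizing: view the same storage as a vector

print(t.shape, tt.shape, list(col))  # (2, 3) (3, 2) [1, 4]
```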

Torch is used by the Facebook AI Research Group, IBM, Yandex and the Idiap Research Institute. It has been extended for use on Android and iOS. It has been used to build hardware implementations for data flows like those found in neural networks. Facebook has released a set of extension modules as open source software.

PyTorch is an open source machine learning library for Python, used for applications such as natural language processing. It is primarily developed by Facebook’s artificial intelligence research group, and Uber’s Pyro software for probabilistic programming has been built upon it.

Useful links

Torch home: http://torch.ch/

GitHub: https://github.com/torch

7) Neuroph: An object-oriented neural network framework written in Java

Neuroph can be used to create and train neural networks in Java programs. It provides a Java class library as well as a GUI tool called easyNeurons for creating and training neural networks. Neuroph is a lightweight Java neural network, as well as a framework to develop common neural network architectures. It contains a well-designed, open source Java library with a small number of basic classes that correspond to basic NN concepts. It also has a nice GUI neural network editor to quickly create Java neural network components. It has been released as open source under the Apache 2.0 licence.

