OpenSource For You

What’s the Difference between AI, Machine Learning and Deep Learning?

AI, deep learning and machine learning are often mistaken for each other. Take a look at how they differ in this interesting article.

- By: Swapneel Mehta. The author has worked at Microsoft Research, CERN and at startups in AI and cyber security. He is an open source enthusiast who enjoys spending time organising software development workshops for school and college students.

From being dismissed as science fiction to becoming an integral part of multiple, wildly popular movie series, especially the one starring Arnold Schwarzenegger, artificial intelligence has been a part of our lives for longer than we realise. The idea of machines that can think has widely been attributed to a British mathematician and WWII code-breaker, Alan Turing. In fact, the Turing Test, often used for benchmarking the ‘intelligence’ in artificial intelligence, is an interesting process in which an AI must convince a human interrogator, through conversation, that it too is human. A number of other tests have been developed to gauge how evolved an AI is, including Goertzel’s Coffee Test and Nilsson’s Employment Test, which compare a robot’s performance on different human tasks.

As a field, AI has probably seen the most ups and downs over the past 50 years. On the one hand it is hailed as the frontier of the next technological revolution, while on the other, it is viewed with fear, since it is believed to have the potential to surpass human intelligence and hence achieve world domination! However, most scientists agree that we are in the nascent stages of developing AI that is capable of such feats, and research continues unfettered by the fears.

Applications of AI

Back in the early days, the goal of researchers was to construct complex machines capable of exhibiting some semblance of human intelligence, a concept we now term ‘general intelligence’. While it has been a popular concept in movies and in science fiction, we are a long way from developing it for real.

Specialised applications of AI, however, allow us to use image classification and facial recognition as well as smart personal assistants such as Siri and Alexa. These usually leverage multiple algorithms to provide this functionality to the end user, but may broadly be classified as AI.

Machine learning (ML)

Machine learning is a subset of practices commonly aggregated under AI techniques. The term was originally used to describe the process of leveraging algorithms to parse data, build models that could learn from it, and ultimately make predictions using these learnt parameters. It encompassed various strategies including decision trees, clustering, regression, and Bayesian approaches that didn’t quite achieve the ultimate goal of ‘general intelligence’.
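The ‘parse data, build a model, make predictions’ loop described above can be sketched with the simplest of these strategies, linear regression. The toy data set below is invented purely for illustration:

```python
# A minimal sketch of the machine learning loop: learn parameters
# from data, then use them to predict unseen values.

def fit_linear(xs, ys):
    """Learn slope and intercept that minimise squared error (least squares)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def predict(model, x):
    """Apply the learnt parameters to a new observation."""
    slope, intercept = model
    return slope * x + intercept

# Train on a toy data set, then predict an unseen point.
model = fit_linear([1, 2, 3, 4], [2, 4, 6, 8])
print(predict(model, 5))  # the learnt parameters give 10.0
```

Decision trees, clustering and Bayesian methods follow the same pattern: a training step that extracts parameters from data, and a prediction step that applies them.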

While it began as a small part of AI, burgeoning interest has propelled ML to the forefront of research and it is now used across domains. Growing hardware support as well as improvements in algorithms, especially pattern recognition, has led to ML being accessible to a much larger audience, leading to wider adoption.

Applications of ML

Initially, the primary applications of ML were limited to the field of computer vision and pattern recognition. This was prior to the stellar success and accuracy it enjoys today. Back then, ML seemed a pretty tame field, with its scope limited to education and academics.

Today we use ML without even being aware of how dependent we are on it for our daily activities. From Google’s search team trying to replace the PageRank algorithm with an improved ML algorithm named RankBrain, to Facebook automatically suggesting friends to tag in a picture, we are surrounded by use cases for ML algorithms.

Deep learning (DL)

A key ML approach that remained dormant for a few decades was artificial neural networks. This eventually gained wide acceptance when improved processing capabilities became available. A neural network simulates the activity of a brain’s neurons in a layered fashion, and the propagation of data occurs in a similar manner, enabling machines to learn more about a given set of observations and make accurate predictions.
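The layered propagation described above can be sketched in a few lines of plain Python. The weights and inputs here are arbitrary placeholders, not learnt values; a real network would tune them during training:

```python
import math

# Illustrative only: a two-layer network with hard-coded weights,
# showing how data flows through layers of 'neurons'.

def sigmoid(x):
    """Squash a weighted sum into the range (0, 1), like a firing rate."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One dense layer: each neuron takes a weighted sum of all inputs,
    adds its bias, and applies the activation function."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two inputs -> hidden layer of two neurons -> a single output neuron.
hidden = layer([0.5, -0.2],
               weights=[[0.1, 0.8], [-0.4, 0.3]],
               biases=[0.0, 0.1])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
print(output)  # a single value between 0 and 1
```

Stacking many such layers, and learning the weights from data rather than fixing them by hand, is what makes a network ‘deep’.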

These neural networks, which had until recently been ignored save for a few researchers led by Geoffrey Hinton, have today demonstrated exceptional potential for handling large volumes of data and enhancing the practical applications of machine learning. The accuracy of these models allows reliable services to be offered to end users, since false positives are greatly reduced.

Applications of DL

DL has large scale business applications because of its capacity to learn from millions of observations at once. Although computationally intensive, it is still the preferred alternative because of its unparalleled accuracy. This encompasses a number of image recognition applications that conventionally relied on computer vision practices until the emergence of DL. Autonomous vehicles and recommendation systems (such as those used by Netflix and Amazon) are among the most popular applications of DL algorithms.

Comparing AI, ML and DL

Comparing the techniques: The term AI was defined at the Dartmouth Conference (1956) as follows: “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” It is a broad definition that covers use cases ranging from a game-playing bot to the voice recognition system within Siri, as well as converting text to speech and vice versa. AI is conventionally thought to have three categories:

- Narrow AI, specialised for a specific task

- Artificial general intelligence (AGI), which can simulate human thinking

- Super-intelligent AI, which implies a point where AI surpasses human intelligence entirely

ML is a subset of AI that represents its most successful business use cases. It entails learning from data in order to make informed decisions at a later point, and enables AI to be applied to a broad spectrum of problems. ML allows systems to make their own decisions following a learning process that trains the system towards a goal. A number of tools have emerged that give a wider audience access to the power of ML algorithms, including Python libraries such as scikit-learn, frameworks such as MLlib for Apache Spark, software such as RapidMiner, and so on.

A further subdivision and subset of AI is DL, which harnesses the power of deep neural networks to train models on large data sets and make accurate predictions in the fields of image, face and voice recognition, among others. The favourable trade-off between training time and prediction error makes it a lucrative option for many businesses to switch their core practices to DL or integrate these algorithms into their systems.
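What ‘training’ means here can be sketched with gradient descent on a single weight; a deep network adjusts millions of weights by the same principle. The toy data below follows y = 3x and is chosen purely for illustration:

```python
# A hedged sketch of training by gradient descent: repeatedly nudge a
# parameter in the direction that reduces the squared prediction error.

def train(samples, lr=0.01, epochs=200):
    """Fit a single weight w so that w * x approximates y."""
    w = 0.0  # start from an arbitrary initial weight
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y        # how far off the prediction is
            w -= lr * 2 * error * x  # gradient step on the squared error
    return w

# The samples follow y = 3x, so training should drive w towards 3.
w = train([(1, 3), (2, 6), (3, 9)])
print(round(w, 3))  # converges near 3.0
```

Deep learning frameworks automate exactly this loop, computing the gradient for every weight in every layer at once, which is why training is so computationally intensive and why GPUs help.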

Classifying applications: The boundaries that distinguish the applications of AI, ML and DL are fuzzy. However, since each has a demarcated scope, it is possible to identify which subset a specific application belongs to. Usually, we classify personal assistants and other forms of bots that aid with specialised tasks, such as playing games, as AI due to their broader nature. These include the applications of search capabilities, filtering and short-listing, voice recognition and text-to-speech conversion bundled into an agent.

Practices that fall into a narrower category, such as those involving Big Data analytics and data mining, pattern recognition and the like, are placed under the spectrum of ML algorithms. Typically, these involve systems that ‘learn’ from data and apply that learning to a specialised task.

Finally, applications in a niche category, those that use a large corpus of text or image data to train a model on graphics processing units (GPUs), involve DL algorithms. These often include specialised image and video recognition tasks applied to broader uses, such as autonomous driving and navigation.

Figure 1: Conventional understanding of AI [Image credit: Geeky-Gadgets]
Figure 2: Machine learning workflow [Image credit: TomBone’s Computer Vision Blog]
Figure 3: Artificial neural networks [Image credit: Shutterstock]
Figure 4: A comparison of techniques [Image credit: NVIDIA]
Figure 5: Applications of artificial intelligence [Image credit: The Telegraph, UK]
Figure 6: Deep learning for identifying dogs [Image credit: Datamation]
