The different branches of Artificial Intelligence
Artificial intelligence encompasses a broad set of computer science techniques for perception, logic and learning. One branch of AI is machine learning – programs whose performance improves over time and with more input data. Deep learning is among the most promising approaches to machine learning. It uses algorithms based on neural networks – a way to connect inputs and outputs, modelled loosely on how we think the brain works – that find the best way to solve problems by themselves, rather than having the solution specified by the programmer or scientist. Training is how deep learning applications are “programmed” – feeding them input and tuning them. Inference is how they run, performing analysis or making decisions.
Artificial Intelligence
Intel Fellow Pradeep Dubey calls artificial intelligence “a simple vision where computers become indistinguishable from humans.” It has also been defined as simply “making sense of data,” which very much reflects how companies are using AI today. In general, AI is an umbrella term for a range of computer algorithms and approaches that allow machines to sense, reason, act and adapt like humans do – or in ways beyond our abilities. The human-like capabilities include things like apps that recognise your face in photos, robots that can navigate hotels and factory floors, and devices capable of having (somewhat) natural conversations with you.
The beyond-human functions could include identifying potentially dangerous storms before they form, predicting equipment failures before they happen, or detecting malware – tasks that are difficult, or impossible, for people to perform.
Work in AI dates back to at least the 1950s, followed since by several boom-and-bust cycles of research and investment. There are four big reasons that we’re in a new AI spring today: more compute, more data, better algorithms and broad investment.
Machine Learning
AI encompasses a whole set of different computing methods, a major subset of which is called “machine learning.” As Intel’s Dubey explains it, machine learning “is a program where performance improves over time,” and that also gets better with more data input. In other words, the machine gets smarter, and the more it “studies,” the smarter it gets. A more formal definition of machine learning used at Intel is: “the construction and study of algorithms that can learn from data to make predictions or decisions.” Wired magazine even declared “the end of code” in describing how machine learning is changing programming.
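That definition – a program that learns from data and improves with more of it – can be illustrated with a toy sketch (the data and model here are made up for illustration, not from the article): a one-parameter model fitted by least squares, whose estimate of a true slope gets closer to the right answer as it sees more samples.

```python
import random

def fit_slope(samples):
    """Least-squares fit of y = w * x (no intercept): w = sum(x*y) / sum(x*x)."""
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    return sxy / sxx

random.seed(0)
true_w = 2.0  # the "answer" the learner should recover (made up for this sketch)
data = [(x, true_w * x + random.gauss(0, 0.5))          # noisy observations of y = 2x
        for x in (random.uniform(-1, 1) for _ in range(1000))]

# The program "gets smarter" with more data: the estimate approaches 2.0.
for n in (10, 100, 1000):
    print(n, "samples -> estimated slope:", round(fit_slope(data[:n]), 3))
```

The same pattern – performance measured against data, improving with more of it – is what the formal Intel definition describes, just at a much larger scale.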
Using machine learning, a major eye hospital in China was able to improve detection of potential causes of blindness from the 70 to 80 per cent typical for clinicians to 93 per cent.
Neural Networks and Deep Learning
Neural networks and deep learning are very closely related and often used interchangeably, but there is a distinction. Most simply, deep learning is a specific method of machine learning, and it’s based primarily on the use of neural networks.
“In traditional supervised machine learning, systems require an expert to use his or her domain knowledge to specify the information (called features) in the input data that will best lead to a well-trained system,” wrote a team of Intel AI engineers and data scientists in a recent blog. Deep learning is different: the network learns useful features directly from the raw data, rather than relying on hand-engineered ones.
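To make that contrast concrete, here is a hypothetical sketch of the traditional approach the Intel team describes: an expert hand-picks the feature (here, invented for illustration, the density of exclamation marks in a message) and the learner only fits a threshold on that one feature.

```python
# Traditional supervised learning, sketched: the expert supplies the feature;
# the algorithm merely learns a decision threshold from labelled examples.

def feature(text):
    # Expert-chosen feature: fraction of characters that are '!'.
    return text.count("!") / max(len(text), 1)

# Tiny made-up training set: label 1 = "shouty" message, 0 = normal.
train = [("great deal!!! buy now!!!", 1), ("see you at lunch", 0),
         ("WIN!!! click here!!!", 1), ("meeting moved to 3pm", 0)]

# "Learning" here is just picking the threshold that best separates the labels.
best_t, best_acc = 0.0, 0.0
for t in sorted(feature(text) for text, _ in train):
    acc = sum((feature(x) > t) == bool(y) for x, y in train) / len(train)
    if acc > best_acc:
        best_t, best_acc = t, acc

def is_shouty(text):
    return feature(text) > best_t
```

In deep learning, by contrast, nothing like `feature` is written by hand; the network discovers its own internal features during training.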
Training and Inference
Training is the part of machine learning in which you build your algorithm, shaping it with data to do what you want it to do. “Training is the process by which our system finds patterns in data,” wrote the Intel AI team. “During training, we pass data through the neural network, error-correct after each sample and iterate until the best network parametrisation is achieved.”
Inference is the act or process of deriving logical conclusions from premises known or assumed to be true. In the software analogy, training is writing the program, while inference is using it. “Inference is the process of using the trained model to make predictions about data we have not previously seen,” wrote those savvy Intel folks.
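The training and inference steps described above can be sketched with a single artificial “neuron” – a minimal logistic model on made-up data (an illustrative sketch, not Intel’s code): training passes samples through, error-corrects after each one, and iterates; inference then applies the trained parameters to data never seen in training.

```python
import math
import random

random.seed(1)

# Made-up dataset: label is 1 when x1 + x2 > 0, else 0.
def make_data(n):
    pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
    return [((x1, x2), 1.0 if x1 + x2 > 0 else 0.0) for x1, x2 in pts]

train_set, test_set = make_data(200), make_data(50)

w, b, lr = [0.0, 0.0], 0.0, 0.5

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid "neuron"

# Training: pass data through, error-correct after each sample, iterate.
for epoch in range(20):
    for x, y in train_set:
        err = predict(x) - y            # how wrong was this prediction?
        w[0] -= lr * err * x[0]         # nudge the parameters to reduce the error
        w[1] -= lr * err * x[1]
        b -= lr * err

# Inference: use the trained model on data it has not previously seen.
accuracy = sum((predict(x) > 0.5) == (y == 1.0) for x, y in test_set) / len(test_set)
```

Real deep learning networks have millions of parameters rather than three, but the train-then-infer shape is the same: the expensive, iterative error-correction happens once up front, and the trained model is then run cheaply on new inputs.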