The Artificial Intelligence revolution: Is it fact or fiction?
Over the past few years, artificial intelligence (AI) has been consistently in the headlines of major news outlets as it slowly but surely keeps permeating various industries and manoeuvring into our daily lives. Hailed as the next technological frontier, it is seen as something that can fundamentally alter the way we perceive and interact with technology, but how much of what is written is based on hype and how much of it is actually rooted in reality?
AI is no novel concept. In the 1950s, a group of scientists united around a common goal: to build machines as intelligent as humans. Since its inception, AI has been a multidisciplinary field, encompassing computer vision, speech processing, robotics and machine learning – the process by which an algorithm sifts through large sets of data to uncover patterns and predict phenomena, with little direct human guidance.
“From the outset, there were two schools of thought regarding how understandable, or explainable, AI ought to be. Many thought it made the most sense to build machines that reasoned according to rules and logic, making their inner workings transparent to anyone who cared to examine some code. Others felt that intelligence would more easily emerge if machines took inspiration from biology, and learned by observing and experiencing. This meant turning computer programming on its head. Instead of a programmer writing the commands to solve a problem, the program generates its own algorithm based on example data and a desired output. The machine-learning techniques that would later evolve into today’s most powerful AI systems followed the latter path: the machine essentially programs itself,” writes Mr Will Knight in an article for the MIT Technology Review.
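To make that distinction concrete, here is a minimal sketch – not taken from Mr Knight’s article, and built around an invented spam-filter scenario with made-up features and data – contrasting a hand-written rule with a model that derives its own decision rule from example data and a desired output, written in Python with scikit-learn.

```python
# Illustrative only: the spam-filter task, features and data are invented.
from sklearn.linear_model import LogisticRegression

# School one: hand-written rules -- the logic is explicit and open to inspection.
def rule_based_filter(num_links, has_free_offer):
    # Flag a message as spam if it has many links or advertises a "free" offer.
    return num_links > 3 or has_free_offer

# School two: machine learning -- the program derives its own decision rule
# from example data (features) and the desired output (labels).
X = [[0, 0], [1, 0], [5, 1], [7, 1], [2, 0], [6, 0]]  # [number of links, "free" offer?]
y = [0, 0, 1, 1, 0, 1]                                # 0 = legitimate, 1 = spam
model = LogisticRegression().fit(X, y)

print(rule_based_filter(5, True))   # rule written by a person
print(model.predict([[5, 1]]))      # rule inferred from the labelled examples
```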
The field of AI remained at the fringes of the scientific community until the computerisation era that transformed nearly all industries and brought with it an abundance of large data sets. This, in turn, inspired the rise of ever more powerful machine learning techniques, such as the artificial neural network – an interconnected group of nodes that mimics the vast web of neurons found in a biological brain.
“It was not until the start of this decade, after several clever tweaks and refinements, that very large—or “deep”—neural networks demonstrated dramatic improvements in automated perception. Deep learning is responsible for today’s explosion of AI. It has given computers extraordinary powers, like the ability to recognize spoken words almost as well as a person could, a skill too complex to code into the machine by hand. Deep learning has transformed computer vision and dramatically improved machine translation. It is now being used to guide all sorts of key decisions in medicine, finance, manufacturing—and beyond,” writes Mr. Knight.
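As a rough illustration of that “interconnected group of nodes”, the sketch below – a toy example with arbitrary layer sizes and random, untrained weights, not any production system – shows how each layer of nodes combines weighted inputs and passes the result on, and how stacking several such layers is what makes a network “deep”.

```python
# Toy sketch of an artificial neural network layer; sizes and weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each output node sums its weighted inputs and applies a non-linearity,
    # loosely mimicking how a neuron aggregates signals from other neurons.
    return np.maximum(0.0, inputs @ weights + biases)  # ReLU activation

x = rng.normal(size=4)                          # 4 input signals
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # 4 inputs -> 8 hidden nodes
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)   # 8 hidden nodes -> 2 outputs

hidden = layer(x, w1, b1)       # first layer of interconnected nodes
output = layer(hidden, w2, b2)  # stacking layers yields a "deep" network
print(output)                   # untrained output; learning would adjust the weights
```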
One of the most widely reported recent stories demonstrating the potential of AI has been Google DeepMind’s AlphaGo – an AI developed to take on the ancient Chinese game of Go, arguably the most demanding strategy game in existence – which bested the world’s top-ranked human player, Ke Jie, 3-0 in a series hosted in China this May.
It took AlphaGo a mere year and a half to topple the grandest of grandmasters – something even its creators didn’t believe would happen for another 5-10 years – and it did so by tirelessly playing game after game against itself, all the while analysing and optimising its strategy.
AlphaGo usually plays these games under strict time limits, with seconds or milliseconds allotted for each move, although it has also played games that unfolded over several hours, much like the professional matches played by its human counterparts.
“These are beautiful games, with moves no one has seen,” said Fan Hui, the European Go champion who helped train AlphaGo, at a press conference after the event in China.
After AlphaGo’s dominant win over Ke Jie, DeepMind CEO Demis Hassabis announced the AI’s retirement from competitive Go so that the team could tackle new challenges.
“The research team behind AlphaGo will now throw their energy into the next set of grand challenges, developing advanced general algorithms that could one day help scientists as they tackle some of our most complex problems, such as finding new cures for diseases, dramatically reducing energy consumption, or inventing revolutionary new materials,” Mr Hassabis wrote in a statement on the company’s website.
Deep learning has already been successfully deployed in image captioning, voice recognition, and language translation, and there is hope that the same techniques could eventually be applied to diagnosing deadly diseases, making high-level trading decisions, and other complex tasks. However, there are significant challenges ahead before that becomes a reality.
In 2015, a research group at Mount Sinai Hospital in New York decided to use deep learning to process patient data that could be used to predict the development of diseases. The project, dubbed Deep Patient, involved extracting electronic health records from a data warehouse and aggregating them by patient. The data included structured records – lab tests, medications, and procedures – as well as unstructured clinical notes and demographic data on age, gender, and race.
Deep Patient was trained on data from around 700,000 individuals – training being the process of feeding an algorithm example data so that it can build progressively better, less erroneous models, much as an image classifier learns to distinguish cats from dogs by studying millions of labelled pictures.
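As a rough sketch of what such a training loop looks like in practice – using invented synthetic data with a single numeric risk feature, not the Deep Patient records or its actual architecture – the following Python snippet repeatedly nudges a simple model’s parameters so that its predictions become less erroneous.

```python
# Illustrative only: this is NOT the Deep Patient system; the synthetic
# "patients" and their single risk feature are invented for this sketch.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: one numeric feature per patient and a 0/1 outcome label.
X = rng.normal(size=200)
y = (X + rng.normal(scale=0.5, size=200) > 0).astype(float)

w, b = 0.0, 0.0  # model parameters, learned from data rather than hand-coded
for step in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * X + b)))  # predicted probability of the outcome
    grad_w = np.mean((p - y) * X)           # how the error changes with w
    grad_b = np.mean(p - y)                 # how the error changes with b
    w -= 0.1 * grad_w                       # nudge parameters to reduce the error
    b -= 0.1 * grad_b

print(w, b)  # after training, the parameters encode a pattern found in the data
```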
Without any expert instruction, Deep Patient uncovered hidden patterns in the hospital data and learned to predict, with striking accuracy, whether a patient was likely to develop cancer or a psychiatric disorder such as schizophrenia – the latter being notoriously difficult to anticipate, even for trained physicians.
The real kicker is this: it isn’t clear how Deep Patient arrives at its predictions. The inner workings of a deep learning system are inherently opaque, even to its creators, because, unlike handwritten code, there is nothing a programmer can simply step through and debug.
“It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who works on applications of machine learning. “Whether it’s an investment decision, a medical decision, or maybe a military decision, you don’t want to just rely on a ‘black box’ method.”
According to Mr Knight, there’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach.
“This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve advertisements or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behaviour,” writes Mr. Knight.
AI systems are currently developing much faster than anyone could have predicted even five years ago, and we simply don’t know what their true potential is yet. But one thing is certain – it would be irresponsible to scale AI technologies to the point where we hand over decision-making power on truly complex issues to an AI before we develop ways to make these systems more accountable and understandable.
Ironically, because we have built these algorithms to mimic how our own brains function, it is questionable whether an AI will ever be able to explain its reasoning in detail – much like its human creators.
“Even if somebody can give you a reasonable-sounding explanation [for his or her actions], it probably is incomplete, and the same could very well be true for AI,” says Jeff Clune, one of the foremost AI scientists and an assistant professor at the University of Wyoming. “It might just be part of the nature of intelligence that only part of it is exposed to rational explanation. Some of it is just instinctual, or subconscious, or inscrutable.”