Rise of the Machines
Artificial intelligence has already permeated many sectors of society, and there is no stopping this technology from completely revolutionising the world.
No longer relegated to the realm of science fiction, artificial intelligence (AI) has become an accepted part of the real world. The technology has already achieved a plethora of impressive feats, from reading lips better than human experts to playing – and winning – poker tournaments against skilled human opponents. A rather more far-fetched scenario has even been discussed recently: microscopic nanomachines injected into people's bloodstreams, using AI to seek out and eradicate disease and to repair cells.
Jeff Dean, a senior fellow at Google and the technical mind behind no fewer than five generations of Google's crawling and indexing system, among numerous other technological feats, believes the idea of nanomachines is perfectly plausible. Dean is currently working on several AI projects together with a team of Google engineers. As far as the AI revolution goes, Google is among those leading the charge. Indeed, the tech giant has big plans for this technology.
AI has very real and indeed valuable applications in the medical field. The hope is that machine learning – where AI systems learn largely by themselves, with minimal human coaching – might make preventative medicine a realistic prospect in the developing world, where qualified and experienced doctors are in short supply. One physician and scientist at Google Research, former nanoscientist and bioengineer Lily Peng, has developed an AI system able to diagnose diabetic retinopathy – a leading cause of vision loss among diabetics. In developing countries, where ophthalmologists are particularly few and far between, such technology would be life-changing. Peng has also researched the use of such technology in diagnosing breast cancer: machines studying mammograms would highlight areas where they suspect cancer, allowing doctors to make faster diagnoses and begin treatment sooner.
Machine learning is being used more and more, and in ever wider applications. Consider, for example, that Google has used machine learning to automatically generate captions for more than one billion YouTube videos – in 10 languages, no less. Or that a Japanese baby-food manufacturer is testing machine learning to visually inspect diced vegetables for discolouration and other warning signs. Meanwhile, in New Zealand, Victor Anton, a doctoral researcher at Victoria University, has used machine learning to identify native bird calls. Another intriguing application comes from Storyfit, which uses AI to examine movie scripts to identify gender bias, predict content marketability, improve discovery, and drive sales for publishers and studios.
Despite the progress made in the realm of machine learning and AI in recent years, and the plentiful benefits the tech presents, many experts argue that these systems are still inferior to humans when it comes to tasks such as interacting with the physical world and perceiving natural signals.
To be even somewhat intelligent, machines need to mimic the ways that humans learn and understand – a process that begins organically from birth. Much of the time, humans learn without supervision or outside intervention. For instance, babies learn to navigate the world by absorbing the abundant information to which they are exposed, processing it and learning from it continuously along the way. It is a natural process for which humans require no training – it simply happens. For machines, the situation is different. They learn from the top down, rather than from the bottom up, as humans do.
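The bottom-up learning described above – finding structure in raw experience with no teacher – has a loose analogue in unsupervised algorithms. As a toy illustration (the data points below are hypothetical), a simple k-means clustering routine groups unlabelled numbers into clusters without ever being told what the "right" answer is:

```python
# Toy sketch of unsupervised, "bottom-up" learning: k-means clustering
# discovers groups in unlabelled data, with no teacher and no labels.

def kmeans_1d(points, k=2, iters=10):
    """Group one-dimensional points into k clusters without any labels."""
    centers = [min(points), max(points)]  # simple initialisation for k = 2
    for _ in range(iters):
        # Assign each point to its nearest current centre.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        # Move each centre to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Two obvious groups emerge from the raw numbers alone.
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centers = sorted(kmeans_1d(data))
print(centers)  # two cluster centres, one near 1.0 and one near 9.1
```

The algorithm is never told which points belong together; the grouping falls out of the data itself, which is the essence of learning without supervision.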
Igal Raichelgauz, founder of Israeli company Cortica, which relies heavily on AI technology for autonomous platforms as part of its business offering, says that AI systems are simply powerful computing machines with misleading titles. Their top-down approach to learning prevents them from doing anything on their own, he says. This is because, in top-down approaches, the system first undergoes training: its algorithm develops through observing vast numbers of labelled data sets, until it can successfully extrapolate knowledge for itself. Deep-learning machines use layered algorithms to process data at many levels of abstraction. Raichelgauz argues that such a reliance on training makes these machines complex, but not intelligent. For AI to achieve a level of intelligence equal to that of humans, it must excel at the same fundamental tasks that humans mastered thousands of years ago, such as visual understanding and the ability to intelligently navigate the physical world. Only once AI has been developed to mimic human processes, Raichelgauz says, will we perhaps see it surpass human intelligence.
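The top-down, label-driven training Raichelgauz describes can be sketched in a few lines. In this minimal, hypothetical example – a classic perceptron, not any system Cortica or Google has described – the machine is shown labelled examples over and over, nudging its internal weights after every mistake until it reproduces the labels on its own:

```python
# Minimal sketch of top-down supervised learning: the system observes
# labelled examples and adjusts its weights until it can extrapolate
# the labels itself. The task (logical AND) is a hypothetical toy.

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (features, label) pairs via the perceptron rule."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = label - pred  # supervision: the label corrects the machine
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Labelled data set: logical AND, a trivially learnable pattern.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

Note that every step depends on a human-supplied label – exactly the reliance on training that Raichelgauz contrasts with the self-directed way humans learn.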
Like it or not, AI is here to stay, and its myriad applications have far more potential for good than for harm. As for those who fear that intelligent machines may rise up to destroy their creators, scientists from across the world are working on a range of philosophical rules and norms. Current proposals call for a safety switch – a "big red button" – that enables programmers to stop "bad" behaviour. The question at the forefront of the debate is: who determines which behaviours are bad, and who gets to stop them?