THE IMITATION GAME
IT’S CHANGING THE WORLD AS WE KNOW IT, BUT ARE THE MACHINES REALLY TAKING OVER? TECHLIFE INVESTIGATES THE HYPE AND HOPE OF ARTIFICIAL INTELLIGENCE.
BACK IN 1950, you could count the number of computers worldwide on one hand — and that included Australia’s own CSIRAC, completed the previous year by Trevor Pearcey. Forget having a screen, keyboard or mouse — those early computers were hand-made, programmed by flicking a maze of switches, and mostly displayed results on a series of low-power lightbulbs or a modified teleprinter. It might seem archaic by today’s standards, but it was still a time of incredible advances, as some of the world’s brightest minds thought about the future these new-fangled machines could introduce. Mathematician Alan Turing had already brought his considerable intellect to bear on WWII, helping crack the riddle of the Enigma cipher machine. In 1950, the Pilot ACE, built to a cut-down version of Turing’s ‘Automatic Computing Engine’ design, ran its first program at the UK’s National Physical Laboratory. But it’s a research paper Turing wrote that same year that would go on to have far greater influence. The paper, entitled ‘Computing Machinery and Intelligence’, is widely regarded as the dawn of a concept now rapidly changing our world, one that US computer scientist John McCarthy in 1956 would name ‘Artificial Intelligence’ or AI.
WHAT IS AI?
Today, AI is a broad collection of ideas that revolve around creating intelligent systems or, as is popularly thought, making machines ‘think for themselves’ (although the two aren’t necessarily the same thing). But it was this question of whether or not a machine can think that Turing originally asked in his influential paper. His test of a computer’s ability to think was simple: could it fool a human into believing they were conversing with another human rather than a machine? Turing called it ‘The Imitation Game’.
Essentially, for a computer to pass ‘the Turing Test’, it would have to understand human communication, human intelligence and, to some degree, have intelligence of its own. But what constitutes ‘intelligence’ continues to raise arguments within academia today.
In some ways, artificial intelligence and ‘machine learning’ are two sides of the same coin — AI considers the approach of creating intelligence in machines, while machine learning puts AI into practical applications, although you’ll probably hear these terms thrown about interchangeably. Also included in this AI soup is data mining or ‘knowledge discovery’, which we looked at a couple of issues ago — the science of taking mountains of seemingly unrelated data and extracting useful and actionable information from it.
Whichever way you slice it, much of this learning happens through software programs or algorithms — there’s quite a bit of crossover here between data science and AI, as both fields use this concept of ‘learning’ algorithms.
One of the most fundamental algorithms in AI is the artificial neural network (ANN). In short, it tries to mathematically mimic what our brains do, by creating a multi-layered system of interconnected processing units or ‘neurons’, each carrying a scale factor or ‘weight’ that’s adjusted based on previous neuron results. These neuron layers run in parallel and can incorporate feedback to further modify the weights. How that weighting is implemented gives rise to different forms of ANN, such as ‘multi-layer perceptrons’ that have no feedback, and ‘recurrent neural networks’ that do.
As with much of artificial intelligence, neural networks aim to imitate the human brain.
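The weighted-sum idea at the heart of a neural network can be sketched in a few lines of code. The weights, biases and inputs below are made-up numbers purely for illustration; real networks learn their weights from data rather than having them written in by hand.

```python
# A minimal sketch of one layer of an artificial neural network.
# Each 'neuron' takes the same inputs, scales them by its own weights,
# adds a bias, and squashes the result with a sigmoid activation.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed to a value between 0 and 1."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

def layer(inputs, weight_rows, biases):
    """Run every neuron in the layer on the same inputs (conceptually in parallel)."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two example inputs fed to a layer of two neurons with invented weights.
outputs = layer([0.5, 0.8], [[0.4, -0.6], [0.9, 0.1]], [0.0, -0.2])
print(outputs)
```

A multi-layer perceptron is just several of these layers stacked, with each layer’s outputs becoming the next layer’s inputs; ‘training’ is the process of nudging the weights until the final outputs match known answers.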
OUR CHALLENGING FUTURE
But it’s a computing machine’s ability to imitate humans in other ways that’s creating plenty of tension today, particularly if you’re trying to hold down a job. Last year, the CSIRO outlined in its ‘Tomorrow’s Digitally Enabled Workforce’ report that "44% of jobs in Australia are potentially at high risk of computerisation and automation" ( PDF, p8). That’s commonly quoted as around five million Australian jobs at risk — not exactly the sort of news you want to hear if you’re paying off a mortgage. What’s more, there seem to be few job categories likely to escape the carnage — from
lawyers to truck drivers, supermarket checkout operators to GPs, the coming together of ‘robotic process automation’ (RPA) and artificial intelligence could be about to create a perfect storm for employment over the next 20 years. On the one hand, jobs that consist of highly repetitive tasks are threatened by the rise of industrial-scale robotics; on the other, jobs that rely on fixed rules face redundancy in the face of ever-improving artificial intelligence.
A RACE TO THE BOTTOM?
Automation and AI are often deployed primarily as cost-cutting tools, and there are fears they will add further pressure on wages, continuing the wage stagnation we’ve seen in Australia and much of the Western world over the last few years. The Economist reported last year on concerns that automation was already dissolving middle-level jobs and replacing them with more high- and low-level roles ( tinyurl.com/zq5qk5f).
However, the other issue is that it could disproportionately affect entry-level jobs, adding further pressure on new graduates and those seeking their first roles. That’s the view put forward at last year’s AI Summit in London: although AI will eventually develop far enough to threaten more complex jobs, the lower-hanging fruit will always be the first to be picked.
AI SURROUNDS US
The future risks to employment from AI may have been highlighted recently through high-profile reports, but the reality is that we’re knee-deep in AI every day. Every internet search goes through layers of AI and machine learning. Microsoft has gone ‘post-CPU’, embedding hardware-programmable chips called ‘field-programmable gate arrays’ (FPGAs) into its Bing search engine, while Google has built its own custom AI chips, the ‘Tensor Processing Units’ (TPUs), to accelerate machine learning in its data centres.
Voice-enabled assistants such as Cortana, Siri, Amazon’s Alexa and Google Assistant all rely on AI to turn the human voice into distinct, actionable commands.
In fact, if you want to see where the high-technology brands are heading in the future, look at where they’re deploying artificial intelligence right now. Facebook posted something of an ‘Artificial Intelligence 101’ guide late last year on its Code blog page ( tinyurl.com/yc6ooplf). It quickly highlights some of the everyday life experiences that are now punctuated with AI, from online banking and shopping to more specific applications, such as medical imaging and self-driving cars. It also includes a series of online videos to help explain how AI works.
BEWARE THE AI FUTURE
However, what has many of the world’s leading scientists worried is the potential for AI to be misused. In opening Cambridge University’s Leverhulme Centre for the Future of Intelligence last year, Professor Stephen Hawking declared AI will be "either the best, or the worst thing, ever to happen to humanity" ( tinyurl.com/jdtblhd).
While AI offers incredible potential to solve what ails us, there are many, from Hawking to Elon Musk to Bill Gates, who fear its dangers if we don’t get a decent handle on it. One widely feared area is autonomous weapons (think Iron Man 2). As Hawking put it, they give "new ways for the few to oppress the many".
But those dangers already exist — YouTube has numerous videos of handguns being attached to hobby-built quadcopters/drones and fired. As recently as last year, the NSW Government was reviewing laws to prevent the possibility of this happening here ( tinyurl.com/yan3rga6).
AI SAVING THE REEF
But while AI clearly has potential for misuse, it’s by no means all ‘bad news’. AI is already making its presence positively felt in many areas of endeavour, not least the environment. COTSbot is an autonomous underwater vehicle developed by the Queensland University of Technology (QUT) to target the crown-of-thorns starfish (COTS) that’s devouring the coral of the Great Barrier Reef. It’s also a pretty awesome example of AI done right. This bot can prowl the reefs for up to eight hours and is clever enough to recognise a COTS on its own, manoeuvre into position and deliver a lethal dose of vinegar (yep, household vinegar). Within 48 hours, the starfish is dead, and the bot carries a payload of up to 200 doses.
It’s also a great example for understanding how AI works. COTSbot runs on-board low-power computers similar to the Raspberry Pi that monitor the reef beneath through attached cameras, looking for crown-of-thorns starfish and using machine learning techniques to identify their distinctive form.
In development, the bot was initially fed over 3,000 underwater images, with and without COTS, in various conditions and lighting, enabling it to build a mathematical model, or set of rules, describing what a COTS looks like. New images from the on-board cameras are then tested against this model and a decision made on whether the bot is seeing a COTS or not. According to the university, COTSbot’s latest machine-learning techniques can now achieve a detection accuracy of over 99%. You can read the original research paper at the QUT website ( tinyurl.com/yamjpwjw, PDF).
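That train-then-classify loop can be illustrated with a deliberately tiny sketch. Everything here is invented for illustration: the ‘spikiness’ and ‘colour’ scores stand in for real image features, and the nearest-average classifier below is far simpler than the techniques QUT actually uses — but the shape of the process is the same: learn a model from labelled examples, then test new samples against it.

```python
# Toy version of 'train on labelled examples, then classify new samples'.
def train(examples):
    """Build a simple model: the average feature values for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def classify(model, features):
    """Pick the label whose averaged profile is closest to the new sample."""
    def distance(centre):
        return sum((f - c) ** 2 for f, c in zip(features, centre))
    return min(model, key=lambda label: distance(model[label]))

# Hypothetical 'spikiness' and 'colour' scores standing in for image features.
training = [([0.9, 0.8], "COTS"), ([0.85, 0.9], "COTS"),
            ([0.2, 0.3], "coral"), ([0.1, 0.2], "coral")]
model = train(training)
print(classify(model, [0.88, 0.85]))  # a new, starfish-like sample
```

Feed it more (and better) examples and the model’s ‘rules’ sharpen — which is essentially why COTSbot’s accuracy climbed as its training library of reef images grew.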
JOBS OF TOMORROW
If we’re honest, the scariest thing coming out of the boom in AI is the potential loss of jobs — and with some of the numbers being thrown around, it’s hardly surprising. While 44% of jobs at risk is scary enough, the news is even worse for regional areas, where figures of up to 60% of jobs at risk have been mentioned ( tinyurl.com/ybc6tj3f).
It’s leading many to question where the jobs of tomorrow will come from. If you look over the history of the various industrial revolutions of the last 250 years, the arrival of new technologies — such as the railways and the telephone — has inevitably seen the demise of previous-era jobs. It’s cold comfort to those who lose their jobs, but new technologies have also heralded the arrival of new opportunities.
Right now, those opportunities rely more than ever on greater knowledge in the areas of science, technology, engineering and mathematics — the so-called ‘STEM’ areas. That’s why recent government data showing STEM subject enrolments at 20-year lows in Australian schools ( tinyurl.com/hy585p6, PDF, p15) is so alarming — the very skills future careers will require for success are the ones being spurned now.
In 2003, Australian students ranked fifth in mathematics in the OECD’s Programme for International Student Assessment (PISA). By 2012, we’d crashed to 17th. In 2006, we came fourth in science; by 2012, we’d dropped to eighth ( tinyurl.com/y8rx6wd9, PDF).
The savvy ones, though, are recognising this — the Law Society of NSW’s Future of Law and Innovation in the Profession (FLIP) report for 2017 identifies the need for future graduates to have greater training in technological areas ( tinyurl.com/y9bd58mr, PDF).
If we’re to stay ahead of the curve on automation, AI and machine learning, we have to embrace STEM learning — but we also need greater collaboration between industry and research. According to the September 2014 STEM: Australia’s Future report released by the Australian Government Office of the Chief Scientist, an analysis of business-research collaboration across 33 countries showed that for small to medium businesses, Australia ranked 32nd. For large companies, we came in 33rd ( tinyurl.com/hy585p6, PDF, p10). It doesn’t get much worse than that.
THE SOLUTIONS
The simplest step we can all take is to throw off any complacency about technology. When a website or service is curating your news, for example, ask yourself, what is doing the curating — is it a human or a computer algorithm? When you next sign up for a ‘free’ online service, know that you’ll be paying for it through the data trail you leave behind — everything from what you buy to what you search — being turned into ads.
Next, learn to code — coding not only teaches you problem-solving skills, it certainly won’t do your employment prospects any harm, either. If you’ve never coded anything in your life, start with Python ( www.python.org) and go from there (see our Learn to Code Python masterclass in our sister publication, APC).
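If you’re wondering what those first lines of Python might look like, here’s a small taste: a loop and a decision, the building blocks of almost every program.

```python
# A first Python program: loop over the numbers 1 to 5
# and decide, for each one, whether it's odd or even.
labels = []
for number in range(1, 6):
    if number % 2 == 0:
        labels.append(f"{number} is even")
    else:
        labels.append(f"{number} is odd")
print("\n".join(labels))
```

Run it and you’ll see five lines of output, one per number — and you’ve already used variables, lists, loops and conditionals.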
Beyond that, take a serious interest in the developments at technology companies like Google and Facebook, Microsoft and Apple — look at the new technology services they reveal, whether it’s chatbots or autonomous cars.
More broadly, there’s also the recent idea floated by Microsoft co-founder Bill Gates of taxing robots that take human jobs, although it has so far received mixed reviews.
THE FUTURE
When the world’s brightest minds begin saying that AI will either be the making or the demise of humankind, we need to wake up and start watching the algorithms. AI has the incredible potential to offer breakthroughs in many areas of life, but equally, if we don’t take care, AI could well become our undoing.
And if we can’t encourage future generations, through STEM subjects at school and beyond, to better understand the technology behind artificial intelligence and how it works, it’s not hard to imagine where we might end up...