AI: tool, partner or master?
MENTION the words “Artificial Intelligence” around your family dinner table and talk inevitably turns to a Terminator-like future, where machines become our masters and quantum computers dictate our lives. Many people still think of AI as a semi-futuristic concept, the premise of science fiction shows and YouTube videos of clever robots.
However, if you live in a modern, connected society, chances are your life is already touched by AI. Whether it’s your GPS determining the best route home, a music app suggesting playlists based on your listening habits, or targeted adverts appearing on social media, AI sits behind it – an entrenched part of daily life. So where is AI headed, and how do we prevent a world where mankind is ruled by machines?
AI has developed in fits and starts over the past half-century, with periods of excitement and development interspersed with frequent “AI winters”, when governments pulled funding and exploration ceased. These innovation troughs typically followed complex and over-reaching goals – projects centred on neural networks and fuelled by dreams of perfecting AI consciousness to create human-equivalent machines.
There is still no complete substitute for human checks to verify data, minimise errors and ensure safer AI.
In the event of an autonomous car accident, who holds the blame? Is it the manufacturer, the organisation that bought or used the machine, the people who built the machine or its AI, or the machine itself? Placing blame on an AI machine such as an autonomous car means assigning rights and responsibilities to the machine.
It’s clear that laws and governance standards need to be created to clarify the roles and responsibilities of the people who build and use AI, and of the AI devices themselves.
Autonomy also requires a human level of judgement: a safety check must be in place so that reactions aren’t triggered by accidental actions – an autonomous weapon firing, say, because an alarm was set off by mistake.
There was recently an incident where an AI facial recognition program failed to recognise a woman of colour but had no trouble recognising white males. What happens when a program that parole boards rely on to predict a criminal’s chances of reoffending gives a wrong prediction because it was trained on biased data? This type of bias needs to be worked out of AI, but with flawed data that’s not an easy task.
It’s easy to see that we have years of work ahead before we are capable of creating an AI human equivalent that is robust, free of bias and capable of rational autonomy.
In the meantime, we are building on AI, working towards systems that make life easier and perform tasks that don’t require much human intervention. There is still huge concern about the potential for unemployment – for jobs being replaced by AI. After all, global research and advisory firm Gartner predicted an estimated 1.8 million job losses by 2020. Fear of job losses could push AI back into another winter.
However, the same prediction also states that AI will create more than 2.3 million jobs by 2020, far outweighing the loss of employment. Many people will be retrained and reskilled to move into AI careers, giving them opportunities to become part of the future.
Technology – AI – can solve so many problems. It has the potential to tackle issues such as climate change, population control and food production, and even grave diseases such as cancer and HIV. We need to embrace it or risk going backwards, but we need to do so with caution, preparation and an ethical approach to AI.