Cape Argus

AI: tool, partner or master?

- Sorin Cheran is a technology strategist at Hewlett Packard Enterprise.

MENTION the words “Artificial Intelligence” around your family dinner table and talk inevitably turns to a Terminator-like future, where machines become our masters and quantum computers dictate our lives. Many people still think of AI as a semi-futuristic concept, the premise of science fiction shows and YouTube videos of clever robots.

However, if you live in a modern, connected society, chances are your life is already touched by AI. Whether it’s your GPS determining the best route home, your music app suggesting playlists based on your listening habits, or targeted adverts reaching you on social media, AI sits behind it – an entrenched part of daily life. So where is AI headed, and how do we prevent a world where mankind is ruled by machines?

AI has developed in fits and starts over the past half-century, with periods of excitement and development interspersed with frequent “AI winters”, when governments pulled funding and exploration ceased. These innovation troughs are typically caused by complex and somewhat over-reaching goals – projects centred on neural networks and fuelled by dreams of perfecting AI consciousness to create human-equivalent machines.

There is still no complete substitute for human checks to verify data, minimise errors and ensure safer AI.

In the event of an autonomous car accident, who holds the blame? Is it the manufacturer, the organisation that bought or used the machine, the people who built the machine or the AI, or the machine itself? Placing blame on an AI machine such as an autonomous car means assigning rights and responsibilities to the machine.

It’s clear that laws and governance standards need to be created to clarify the roles and responsibilities of the people who build and use AI, and of the AI devices themselves.

Autonomy also requires a human level of judgement: a safety check must be in place so that reactions aren’t triggered by accidental actions – an autonomous weapon firing because an alarm was accidentally set off, for instance.

There was recently an incident in which an AI facial recognition program failed to recognise a woman of colour, yet had no trouble distinguishing white males. What happens when an AI program designed for parole boards to predict a criminal’s chances of reoffending gives a wrong prediction based on biased data? This type of bias needs to be worked out of AI, but with flawed data that’s no easy task.

It’s easy to see that we have years of work ahead before we are capable of creating a human-equivalent AI that is robust, free of bias and capable of rational autonomy.

In the meantime, we are building on AI, working towards systems that make life easier and perform the tasks that don’t require much human intervention. There is still a huge concern about the potential for unemployment – for jobs being replaced by AI. After all, global research and advisory firm Gartner predicted job losses of an estimated 1.8 million by 2020. Fear of job losses could push AI back into another winter.

However, the same prediction also states that AI will create more than 2.3 million jobs by 2020, far outweighing the loss of employment. Many people will be retrained and reskilled to move into AI careers, giving them opportunities to become part of the future.

Technology – AI – can solve so many problems. It has the potential to address issues such as climate change, population growth and food production, and even grave diseases such as cancer and HIV. We need to embrace it or risk going backwards, but we need to do so with caution, preparation and an ethical approach to AI.
