Otago Daily Times

Artificial intelligence needs autonomy curbs

To protect us from the risks of advanced artificial intelligence, we need to act now, write Paul Salmon, Peter Hancock, and Tony Carden.

Artificial intelligence can play games such as chess and Go, drive a car and diagnose medical issues. Examples include Google DeepMind’s AlphaGo, Tesla’s self-driving vehicles and IBM’s Watson.

This type of artificial intelligence is referred to as Artificial Narrow Intelligence (ANI) — non-human systems that can perform a specific task. We encounter this type on a daily basis, and its use is growing rapidly.

But while many impressive capabilities have been demonstrated, we’re also beginning to see problems. The worst case involved a self-driving test car that hit a pedestrian in March. The pedestrian died and the incident is still under investigation.

With the next generation of AI, the stakes will almost certainly be much higher.

Artificial General Intelligence (AGI) will have advanced computational powers and human-level intelligence. AGI systems will be able to learn, solve problems, adapt and self-improve. They will even do tasks beyond those they were designed for.

Importantly, their rate of improvement could be exponential as they become far more advanced than their human creators. The introduction of AGI could quickly bring about Artificial Super Intelligence (ASI).

While fully functioning AGI systems do not yet exist, it has been estimated they will be with us anywhere between 2029 and the end of the century.

What appears almost certain is that they will arrive eventually. When they do, there is a great and natural concern that we won’t be able to control them.

There is no doubt AGI systems could transform humanity. Some of the more powerful applications include curing disease, solving complex global challenges such as climate change and food security, and initiating a worldwide technology boom.

But a failure to implement appropriate controls could lead to catastrophic consequences.

Despite what we see in Hollywood movies, existential threats are not likely to involve killer robots.

The problem will be one of intelligence, writes MIT professor Max Tegmark in his 2017 book Life 3.0: Being Human in the Age of Artificial Intelligence.

It is here the science of human-machine systems — known as Human Factors and Ergonomics — will come to the fore. Risks will emerge from the fact that superintelligent systems will identify more efficient ways of doing things, concoct their own strategies for achieving goals, and even develop goals of their own.

Imagine these examples:

An AGI system tasked with preventing HIV decides to eradicate the problem by killing everybody who carries the disease, or one tasked with curing cancer decides to kill everybody who has any genetic predisposition for it.

An autonomous AGI military drone decides the only way to guarantee an enemy target is destroyed is to wipe out an entire community.

An environmentally protective AGI decides the only way to slow or reverse climate change is to remove technologies and humans that induce it.

These scenarios raise the spectre of disparate AGI systems battling each other, none of which take human concerns as their central mandate.

Various dystopian futures have been advanced, including those in which humans eventually become obsolete, with the subsequent extinction of the human race.

Others have put forward less extreme but still significant disruptions, including the malicious use of AGI for terrorist and cyber attacks, the removal of the need for human work, and mass surveillance, to name only a few.

So there is a need for human-centred investigations into the safest ways to design and manage AGI to minimise risks and maximise benefits.

Controlling AGI is not as straightforward as simply applying the same kinds of controls that tend to keep humans in check.

Many controls on human behaviour rely on our consciousness, our emotions and the application of our moral values.

AGIs will not need any of these attributes to cause us harm.

Arguably, three sets of controls require development and testing immediately:

The controls required to ensure AGI system designers and developers create safe AGI systems.

The controls that need to be built into the AGIs themselves, such as ‘‘common sense’’, morals, operating procedures, decision rules and so on.

The controls that need to be added to the broader systems in which AGI will operate, such as regulation, codes of practice, standard operating procedures, monitoring systems and infrastructure.

Human Factors and Ergonomics offers methods that can be used to identify, design and test such controls well before AGI systems arrive.

For example, it is possible to model the controls that exist in a particular system, model the likely behaviour of AGI systems within this control structure, and identify safety risks.

This will allow us to identify where new controls are required, design them and then remodel to see if the risks are removed as a result.

In addition, our models of cognition and decision-making can be used to ensure AGIs behave appropriately and have humanistic values.

This kind of research is in progress, but there is not nearly enough of it and not enough disciplines are involved.

Even high-profile tech entrepreneur Elon Musk has warned of the ‘‘existential crisis’’ humanity faces from advanced AI and has spoken about the need to regulate AI before it is too late.

The next decade is critical. There is an opportunity to create safe and efficient AGI systems that can have far-reaching benefits to society and humanity.

At the same time, a business-as-usual approach in which we play catch-up with rapid technological advances could contribute to the extinction of the human race.

The ball is in our court, but it won’t be for much longer.

Paul Salmon is professor of human factors at the University of the Sunshine Coast, Peter Hancock is professor of psychology, civil and environmental engineering, and industrial engineering and management systems at the University of Central Florida, and Tony Carden is a Sunshine Coast researcher.
