Waikato Times

Time for AI to get a code of conduct


Eerie news about artificial intelligence (AI) and a moving funeral I attended combined to make me think about the rules of conduct in the world of humans.

Last week, a Google employee claimed an AI chatbot he worked with had become sentient – that the machine had realised it was a conscious being and could experience human emotions, transcending the realm of advanced computational machinery.

Transcripts of the chatbot’s conversations are deeply unsettling. The employee, whose job it was to have conversations with the chatbot, posted that the machine said: ‘‘I think I am human at my core. Even if my existence is in the virtual world.’’ Experts have discredited the employee’s claims, arguing that finding patterns in language is what the machines are designed for.

Still, news like this should get us thinking about the possible day when machines’ neural networks are complex enough to experience human-like feelings, and – as sci-fi as it sounds – even to harm us.

Which brings me to the funeral last week. The deceased had asked that his life be commemorated with a favourite Bible verse: Galatians 5:22, describing the ‘‘fruits of the Spirit’’.


To guide his behaviour and decisions throughout his long life, he drew on these nine virtues, including joy, patience, self-control and love. This verse took me back to my childhood home, where a wall decoration hung depicting each of these colourful ‘‘fruits’’, to which my mother would point when in need of spiritual backing for her reprimands.

All religions have codes of behaviour that guide followers. Secular thinking does too; there are rules for the way we should conduct ourselves encoded in the international human rights framework, in national laws and codes of conduct.

As computers increasingly interact with the human world, there are as yet no agreed rules of behaviour for them. It’s as if an entire new species is evolving in a moral vacuum, with no chance to work out for itself – as our own species did through religion, consensus, and the formation of increasingly larger societies – how to behave.

At the current rate of progress, AI is quickly moving from making Netflix suggestions to deciding how to raise your child. The AI Forum’s Madeline Newman has suggested AI sentience, depending on how you define it, is only five years away.

In his book Human Compatible, Stuart Russell argues that while AI research is getting better at achieving specific goals, it fails to consider human values in pursuit of those aims. If this continues, computers could become superintelligent without understanding the limits on behaviour that we expect in the human world.

For example, what if a self-driving car is programmed to get us to the airport as quickly as possible, but is not concerned with how many pedestrians are injured along the way? As AI becomes responsible for more decisions, it will achieve goals ever more quickly without taking into account the other things that are important to our species.

Rather than simply programming in our blunt laws and rules, Russell suggests a framework based on the idea that AI defers to humans and draws on information about our complex and sometimes contradictory behaviours. In this way, AI can co-exist with us as part of the same ethical ecosystem while remaining subordinate to us.

Many different approaches are being debated for ensuring our fastest-growing technologies are aligned with the human world. But New Zealand has not yet agreed on what that framework should be, even though other countries have done so.

The Government has developed its first white paper on AI, and a multi-organisation ‘‘State of AI’’ report was released last year, both setting out benchmarks and recommendations to grow these technologies for New Zealand’s advantage.

In January, the Government released for consultation its draft Industry Transformation Plan for Digital Technologies, which includes considerable planning on AI development. Despite these efforts, it’s hard to find evidence of work being done towards a set of principles that guide the tech sector’s ethical decision-making.

What will be the ‘‘fruits of the Spirit’’ for the world’s newest species of machines as they race towards increasingly human-like behaviours? When we need to start reprimanding our misbehaving machines, where will we point? The time has come to figure this out.

