Time for AI to get a code of conduct
Eerie news about artificial intelligence (AI) and a moving funeral I attended united to make me think about the rules of conduct in the world of humans.
Last week, a Google employee claimed an AI chatbot he worked with had become sentient – that the machine had realised it was a conscious being and could experience human emotions, transcending the realm of advanced computational machinery.
Transcripts of the chatbot’s conversations are deeply unsettling. The employee, whose job it was to have conversations with the chatbot, posted that the machine said: ‘‘I think I am human at my core. Even if my existence is in the virtual world.’’ Experts have discredited the employee’s claims, arguing that finding patterns in language is what the machines are designed for.
Still, news like this should get us thinking about the possible day when machines’ neural networks are complex enough to experience human-like feelings, and – as sci-fi as it sounds – even to harm us.
Which brings me to the funeral last week. The deceased had asked that his life be commemorated with a favourite Bible verse: Galatians 5:22, describing the ‘‘fruits of the Spirit’’.
To guide his behaviour and decisions throughout his long life, he drew on these nine virtues, including joy, patience, self-control and love. This verse took me back to my childhood home, where a wall decoration hung depicting each of these colourful ‘‘fruits’’, to which my mother would point when in need of spiritual backing for her reprimands.
All religions have codes of behaviour that guide followers. Secular thinking does too; there are rules for the way we should conduct ourselves encoded in the international human rights framework, in national laws and codes of conduct.
As computers increasingly interact with the human world, there are as yet no agreed rules of behaviour for them. It’s as if an entire new species is evolving in a moral vacuum, with no chance to work out for themselves – as our own species did through religion, consensus, and the formation of increasingly larger societies – how to behave.
At the current rate of progress, AI is quickly moving from making Netflix suggestions to deciding how to raise your child. AI Forum’s Madeline Newman has suggested AI’s sentience, depending on how you define it, is only five years away.
In his book Human Compatible, Stuart Russell argues that while AI research is improving in achieving specific goals, it fails to consider human values in its pursuit of those aims. If this continues, computers could become superintelligent without understanding the limitations on behaviours which we expect in the human world.
For example, what if a self-driving car is programmed to get us to the airport as quickly as possible, but is not concerned with how many pedestrians are injured along the way? As AI becomes responsible for more decisions, it will pursue its goals ever more efficiently, without regard for the other things that matter to our species.
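The worry about a machine optimising for the wrong objective can be sketched in a toy example. Everything below – the route names, the numbers, the penalty weight – is invented purely for illustration, not drawn from any real system:

```python
# Toy illustration of goal misspecification: an objective that counts
# only travel time prefers the risky route, while one that also
# penalises expected harm prefers the safe one. All values invented.

routes = [
    {"name": "motorway", "minutes": 20, "expected_injuries": 0.0},
    {"name": "shortcut through town", "minutes": 12, "expected_injuries": 2.0},
]

def cost(route, injury_penalty=0.0):
    """Lower is better. With injury_penalty=0 the objective is pure speed."""
    return route["minutes"] + injury_penalty * route["expected_injuries"]

# Objective 1: speed only -> the optimiser picks the risky shortcut.
fastest = min(routes, key=lambda r: cost(r))

# Objective 2: speed plus a heavy penalty for harming pedestrians.
safest = min(routes, key=lambda r: cost(r, injury_penalty=100.0))

print(fastest["name"])  # shortcut through town
print(safest["name"])   # motorway
```

The point of the sketch is that the machine is not malicious in either case; it simply optimises exactly what it was given, which is why the choice of objective carries so much moral weight.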
Rather than simply programming in our blunt laws and rules, Russell suggests a framework in which AI defers to humans and draws on information about our complex and sometimes contradictory behaviours. In this way, AI can co-exist with us as part of the same ethical ecosystem while remaining subordinate to us.
Many different approaches are being debated for ensuring our fastest-growing technologies are aligned with the human world. But New Zealand, unlike some other countries, has yet to agree on what that framework should be.
The Government has developed its first white paper on AI and a multi-organisation ‘State of AI’ report was released last year, both setting out benchmarks and recommendations to grow these technologies for New Zealand’s advantage.
In January, the Government released for consultation its draft Industry Transformation Plan for Digital Technologies, which includes considerable planning on AI development. Despite these efforts, it’s hard to find evidence of work being done towards a set of principles that guide the tech sector’s ethical decision-making.
What will be the ‘‘fruits of the Spirit’’ for the world’s newest species of machines as they race towards increasingly human-like behaviours? When we need to start reprimanding our misbehaving machines, where will we point? The time has come to figure this out.