Robots ‘need rights’ before consciousness
ROBOTS need a “bill of rights”, and the Government should start drawing one up now before conscious artificial intelligence exists, a scientist has said.
Jacy Reese Anthis, the founder of the Sentience Institute, which describes itself as the first and only AI rights organisation, is among the experts predicting robots could become sentient within the next 50 years, in the manner of the film I, Robot.
Mr Anthis, 29, sees consciousness as comprising a range of features, including learning via trial and error, focused attention on harmful stimuli, aversive memory associations with harmful stimuli, mood-like states, goal-directed behaviour and verbal reports of experiential states.
“If we wait to consider this until AI is more capable, it could be much harder to reason carefully and thoughtfully about what to do.” Citing research showing that 75 per cent of the American public say sentient AIs deserve to be treated with respect, Mr Anthis said: “Think tanks and governments in the UK and elsewhere should start work now on a bill of AI rights that protects the interests of all future sentient beings.”
The German philosopher Thomas Metzinger, concerned by the prospect of AI that can suffer, has called for a moratorium on artificial sentience until 2050. Mr Anthis does not think such a ban would be effective, but agreed “we need time for human morality to catch up with human tech”. He said: “It’s extremely important that we don’t exploit sentient AI the way humans have exploited each other and non-human animals throughout our history. We need to ensure they have bodily integrity, informed consent, legal representation, and other unalienable rights.”
Shimon Whiteson, a professor of computer science at the University of Oxford and the head of research at the autonomous driving company Waymo, said: “Whether such a system, if conscious, merits rights akin to human rights is certainly a question for philosophy, not science.”
Prof David Gunkel, an AI ethics specialist at Northern Illinois University, said that the question of sentience has been key in deciding the moral and legal status of animals, from dogs and cats, which we think of as sentient, down to bivalves, to which we generally ascribe less moral importance.
The problem, he said, is that sentience is difficult to define, let alone detect. If a future AI tells us it is suffering, asked Prof Gunkel, “how can we be sure that these are signs of internal psychological states and not just externally manipulated appearances meant to trick us?”