Man v machine: battle of the mind
“Can machines think?” asked the great computer scientist Alan Turing in 1950.
He answered that, should a computer’s responses become sufficiently complex and flexible as to convince an interrogator that they were being spontaneously produced by a living human, rather than being the effects of clever programming, there could be no reason for not concluding that that computer thinks.
After all, he added, we are obliged to use exactly the same inference from behaviour to thought in the case of humans.
The Turing Test has led to an extraordinary reversal: our brains, it is often said, are just a type of computer. “Artificial intelligence” is virtually a misnomer, since all that “intelligence” and “thinking” come down to is algorithm-driven operations, for which sentience is unnecessary. Eventually, perhaps, some combination of metals and polymers will generate life and consciousness, and then computers and robots will not only mechanically “think”; they will also feel.
Yet if they did, their users would then be guilty of enslaving, murdering and raping them.
And since it will be impossible to know whether or when a robot has tipped over into sentience, maybe, suggests David Gunkel in his provocative new book, ‘Robot Rights’, we need to pre-empt this moral catastrophe. Should robots have rights?
Gunkel admits that the question sounds preposterous.
Standard ethical custom assumes that there are two sorts of entities in the world – persons, who are owed moral and legal obligations, and things, which are not. Robots, being artefacts and instruments, are paradigmatically things without “independent moral status”.
But, insists Gunkel, the history of moral philosophy has consisted in a perpetual redrawing of the line between “who” and “what”.
Why shouldn’t robots be the next candidate for acceptance into the “ever-expanding circle of moral inclusion”, like the “previously excluded or marginalised others – women, people of colour, animals, the environment, etc” whose admittance had to be battled for?
Only an entity that already possesses agency, choice and power (and therefore potential responsibility) can qualify to have rights, according to “will” rights theorists. Gunkel reminds us, however, that at the end of the 18th century, Jeremy Bentham, founder of Utilitarianism, deplored the way that “animals . . . stand degraded into the class of things”, owing to the neglect of their interests. Bentham urged that the right question is “not, Can they reason? nor, Can they talk? but, Can they suffer?” Given that machines can incontrovertibly be said to be, then perhaps, like non-human animals, they have interests, too.
But if to “be” is just a matter of occupying space, do things like lakes, stones or bottles have interests, too?
Psychological research, says Gunkel, has found that humans react to human-resembling robots as if appearance were reality.
It is not “the inner nature” of the robot that matters; in any case, cracking the robot open to inspect its innards would not tell you whether or not it has feeling, any more than observing neuronal activity in a brain could, more than inferentially, “show” you its owner’s consciousness.