Daily Dispatch

Man, machine’s battle of the mind


“Can machines think?” asked the great computer scientist Alan Turing in 1950.

He answered that, should a computer’s responses become sufficiently complex and flexible as to convince an interrogator that they were being spontaneously produced by a living human, rather than being the effects of clever programming, there could be no reason for not concluding that that computer thinks.

After all, he added, we are obliged to use exactly the same inference from behaviour to thought in the case of humans.

The Turing Test has led to an extraordinary reversal: our brains, it is often said, are just a type of computer. “Artificial intelligence” is virtually a misnomer, since all that “intelligence” and “thinking” come down to is algorithm-driven operations, for which sentience is unnecessary. Eventually, perhaps, some combination of metals and polymers will generate life and consciousness, and then computers and robots will not only mechanically “think”; they will also feel.

Yet if they did, their users would then be guilty of enslaving, murdering and raping them.

And since it will be impossible to know whether or when a robot has tipped over into sentience, maybe, suggests David Gunkel in his provocative new book, ‘Robot Rights’, we need to pre-empt this moral catastrophe. Should robots have rights?

Gunkel admits that the question sounds preposterous.

Standard ethical custom assumes that there are two sorts of entities in the world – persons, who are owed moral and legal obligations, and things, which are not. Robots, being artefacts and instruments, are paradigmatically things without “independent moral status”.

But, insists Gunkel, the history of moral philosophy has consisted in a perpetual redrawing of the line between “who” and “what”.

Why shouldn’t robots be the next candidate for acceptance into the “ever-expanding circle of moral inclusion”, like the “previously excluded or marginalised others – women, people of colour, animals, the environment, etc” whose admittance had to be battled for?

Only an entity that already possesses agency, choice and power (and therefore potential responsibility) can qualify to have rights, according to “will” rights theorists. Gunkel reminds us, however, that at the end of the 18th century, Jeremy Bentham, founder of Utilitarianism, deplored the way that “animals . . . stand degraded into the class of things”, due to the neglect of their interests. Bentham urged that the right question is “not, Can they reason? nor, Can they talk? but, Can they suffer?” Given that machines can incontrovertibly be said to be, then perhaps, like non-human animals, they have interests, too.

But if to “be” is just a matter of occupying space, do things like lakes, stones or bottles have interests, too?

Psychological research, says Gunkel, has found that humans react to human-resembling robots as if appearance were reality.

It is not “the inner nature” of the robot that matters; in any case, cracking the robot open to see its innards would not tell you whether or not it has feeling, any more than observing neuronal movement in a brain could, more than inferentially, “show” you its owner’s consciousness.

