The Independent

Artificial intelligence won’t wipe out humanity, because robots aren’t humanly flawed

- ROBERT A JOHNSON

Science and engineering lift humanity to greater and more enduring heights, while philosophy seems to be in a regular state of existential crisis. Yet sometimes we’re reminded that even the greatest science and engineering minds of the last century – whether the genius of Stephen Hawking, who we sadly lost a few days ago, or the innovation of Elon Musk – could still benefit from philosophical thinking.

Musk, in particular, constantly reminds us of his big fear, repeated this week: that robots with artificial intelligence are likely to annihilate us. He sincerely believes that we should colonise Mars mainly because it provides the greatest opportunity for humanity to survive this unavoidable annihilation.

There are many good reasons to colonise Mars: avoiding an eventual asteroid collision with Earth; escaping a depleted environment; perhaps even providing a home if nuclear weapons ever fulfil their terrible (though unlikely) potential. But AI-infused robots are unlikely to wipe us out, no matter how intelligent they become, and critical thinking shows why.

Think about what a robot is: a body of some type, controlled by a computer that is essentially doing the job our brain does for us. Our brains have evolved to allow us to react to stimuli in increasingly impressive ways; 3.5 billion years ago, our ancestors were single-celled organisms, and since then we have developed the ability to hear, see, touch and now think deeply about the stimuli we are presented with.

Right now, human and robot “brains” are worlds apart, because computers do not have the complexity that evolution imbued in us on the way to the pinnacle of the evolutionary tree.

Once we scale the mountain of complex artificial intelligence, and become able to create intensely smart, reactive and learning robots, the opposite will be true: our brains will be the inferior ones, because they are limited to what it was evolutionarily necessary for us to do. The memory and abilities of a computer could be limitless, precisely because they are not constrained by the biases unavoidably programmed into us by such a complex genetic history.

That last part is the important bit: robots will become more intelligent, in the sense that they may be able to process data faster, learn faster, one day even become self-aware. But they will not have the evolutionarily developed “junk”.

Robots are no more likely to dominate the Earth than they are to want to drink sugary drinks, inject heroin or watch reality TV

They won’t have the insecurities of social situations, feel the need to fit in with peers or to dominate conversations. They won’t become power-hungry or feel compelled to amass unmatched wealth. They won’t have the sensation of falling as they drift off to sleep, a relic of our once tree-dwelling ancestors.

Humans might be advanced compared to horses and dogs, but we’re still quite simple; we’ve developed decent cognitive abilities, but we’re driven by basic desires to procreate, be comfortable and fit in.

Yet this is what worries Musk. When humans became more advanced, we decided to farm other species, war with other humans and gradually try to dominate one another. He assumes robots will do the same.

Hawking, similarly, believed AI would place us in danger because its goals would likely differ from humanity’s. But while robots will become smarter and more capable, perhaps even self-aware, they won’t have that same genetic desire to survive and reproduce. We don’t possess those drives because we are self-aware – we have them because they were evolutionarily necessary.

Computers may one day become advanced reasoning machines. In some ways they already have. But they will never be smarter versions of human beings, because we have flaws which no one would ever want to recreate in robots.

On the off-chance someone does, and can, these robots will be necessarily less capable than their unflawed colleagues. Their processing capabilities would be heavily consumed by jealousy and anger, sorrow, resentment, attachment, contentment, doubt, guilt, pride and every other bias which makes human life unpredictable and wonderful. While the unbothered alternative models might be programmed to feel sympathy for such flaws, they would not have them.

The concern about AI is not that someone could develop things that might kill millions of humans at the press of a button. Those already exist. Instead, the concern centres on the idea that someone will programme an AI capable of either coexisting with humanity or destroying it, and that the AI will choose the latter. It’s really a paradox: if the technology to create such a robot existed, building one would require flaws deliberately programmed into its code to give it such irrelevant yet complex goals.

These would be robots programmed to learn the breadth and depth of human culture and to make the best possible decisions, and we would be worrying about them becoming obsessed with domination and all the other human characteristics which steal our attention and stop us from making better decisions.

Robots are no more likely to want to rise up and dominate the Earth than they are to want to drink sugary drinks, inject heroin or watch reality TV. That we see dominating Earth as the end goal of a perfectly rational individual says much about our own evolution, and very little about robots.

‘Thinkonomics: Illustrated Critical Thinking Articles’ by Robert A Johnson, illustrated by Chuck Harrison, is published by Ockham Press on 20 March at £7.99

(AFP/Getty) That we see dominating Earth as the end goal of a perfectly rational individual says much about our own evolution, and very little about robots
