Waikato Times

Military robots not greatly more threatening

- GWYNNE DYER

Heavy artillery fire is much more terrifying than the Terminator.

One of my daughters once proposed that my T-shirt should read, ‘‘I don’t support war, but war supports me.’’ And it’s true, I suppose.

I write about lots of other things too, but I have been studying wars, writing about wars, going to wars (but never fighting in one) for the whole of my adult life, partly because international relations are so heavily militarised, but also because for anybody who is interested in human behaviour, war is as fascinating as it is horrible.

So you might assume that I would leap into action, laptop in hand, when I learned that almost 3000 ‘‘researchers, experts and entrepreneurs’’ had signed an open letter calling for a ban on developing artificial intelligence (AI) for ‘‘lethal autonomous weapons systems’’ (LAWS), or military robots for short. Instead, I yawned. Heavy artillery fire is much more terrifying than the Terminator.

The people who signed the letter included celebrities of the science and high-tech worlds, such as Tesla’s Elon Musk, Apple co-founder Steve Wozniak, cosmologist Stephen Hawking, Skype co-founder Jaan Tallinn, Google DeepMind chief executive Demis Hassabis and, of course, Noam Chomsky. They presented their letter in late July to the International Joint Conference on Artificial Intelligence, meeting this year in Buenos Aires.

They were quite clear about what worried them: ‘‘The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

‘‘Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populations, warlords wishing to perpetrate ethnic cleansing, etc.

‘‘Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.’’

Well, no, it wouldn’t be beneficial for humanity. Few arms races are. But are autonomous weapons really ‘‘the key question for humanity today’’? Probably not.

We have a few other things on our plate that feel a lot more key, such as climate change, nine civil wars in the Muslim parts of the world (Afghanistan, Iraq, Syria, southeast Turkey, Yemen, Libya, Somalia, Sudan and northeast Nigeria) – and, of course, nuclear weapons.

The scientists and experts who signed the open letter were quite right to demand an international agreement banning further work on autonomous weapons, because we don’t really need yet another high-tech way to kill people. It’s not impossible that they might succeed, either, although it will be a lot harder than banning blinding laser weapons or cluster bombs.

But autonomous weapons of the sort currently under development are not going to change the world drastically. They are not ‘‘the third revolution in warfare, after gunpowder and nuclear arms,’’ as one military pundit breathlessly described them. They are just another nasty weapons system.

What drives the campaign is a conflation of two ideas: weapons that kill people without a human being in the decision-making loop and true AI. The latter certainly would change the world, as we would then have to share our world for good or ill with nonhuman intelligences – but almost all the people active in the field say that human-level AI is still a long way off, if it is possible at all.

As for weapons that kill people without a human being choosing the victims, those we have in abundance already. From landmines to nuclear-tipped missiles, there are all sorts of weapons that kill people without discrimination in the arsenals of the world’s armed forces. We also have a wide variety of weapons that will kill specific individuals (guns, for example), and we already know how to ‘‘selectively kill a particular ethnic group’’, too.

Combine autonomous weapons with true AI and you get the Terminator – or, indeed, Skynet. Without that level of AI, all you get is another way of killing people that may, in certain circumstances, be more efficient than having another human being do the job. It’s not pretty, but it’s not very new, either.

The thing about autonomous weapons that really appeals to the major military powers is that, like the current generation of remote-piloted drones, they can be used with impunity in poor countries. Moreover, like drones, they don’t put the lives of rich-country soldiers at risk. That’s a really good reason to oppose them – and if poor countries realise what they are in for, a good opportunity to organise a strong diplomatic coalition that works to ban them.
