The Guardian Australia

The Guardian view on the ethics of AI: it's about Dr Frankenstein, not his monster

- Editorial

Frankenstein's monster haunts discussions of the ethics of artificial intelligence: the fear is that scientists will create something that has purposes and even desires of its own and which will carry them out at the expense of human beings. This is a misleading picture because it suggests that there will be a moment at which the monster comes alive: the switch is thrown, the program run, and after that its human creators can do nothing more. They are left with guilt, perhaps, but no direct responsibility for what it goes on to do. In real life there will be no such singularity. Construction of AI and its deployment will be continuous processes, with humans involved and to some extent responsible at every step.

This is what makes Google's declarations of ethical principles for its use of AI so significant, because it seems to be the result of a revolt among the company's programmers. The senior management at Google saw the supply of AI to the Pentagon as a goldmine, if only it could be kept from public knowledge. "Avoid at ALL COSTS any mention or implication of AI," wrote Google Cloud's chief scientist for AI in a memo. "I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry."

That, of course, is exactly what the company had been doing. Google had been subcontracting for the Pentagon on Project Maven, which was meant to bring the benefits of AI to war-fighting. Then the media found out and more than 3,000 of its own employees protested. Only two things frighten the tech giants: one is the stock market; the other is an organised workforce. The employees' agitation led to Google announcing six principles of ethical AI, among them that it will not make weapons systems, or technologies whose purpose, or use in surveillance, violates international principles of human rights. This still leaves a huge intentional exception: profiting from "non-lethal" defence technology.

Obviously we cannot expect all companies, still less all programmers, to show this kind of ethical fine-tuning. Other companies will bid for Pentagon business in the US: Google had to beat IBM, Amazon and Microsoft to gain the Maven contract. In China the state will find no shortage of people to work on its surveillance apparatus, which uses AI techniques in what may well be the world's most sophisticated system for spying on a civilian population.

But in all these cases, the companies involved – which means the people who work for them – will be actively involved in maintaining, tweaking and improving the work. This opens an opportunity for consistent ethical pressure and for the attribution of responsibility to human beings and not to inanimate objects. Questions about the ethics of artificial intelligence are questions about the ethics of the people who make it and the purposes they put it to. It is not the monster, but the good Dr Frankenstein we need to worry about most.
