ROBOCOPS: THE NEW FACE OF THE POLICE, OR AN ELABORATE PR STUNT?
Prof Alan Winfield, a robot ethicist at the University of the West of England, discusses the pros and cons of police robots
What do you think today’s robots can usefully contribute to the police service?
The one positive thing that I can see is a kind of reassuring presence. That’s if they’re trusted. It depends on how people react to the robots, but robots moving around a shopping mall, for instance, could prove reassuring – even if not as much as real, live human police. Having said that, I do appreciate there are cultural differences, and in some countries, particularly in the Far East, robots are likely regarded with a greater level of trust than in the UK.
There’s talk of robots being given greater powers. Could they make arrests?
The power to arrest someone is a privileged duty because you are essentially making a judgment about whether that person has committed a crime. If a human makes that judgment and it turns out to have been incorrect, then they can be held accountable. But you can’t sanction a robot: they can’t be held responsible for their own behaviour, at least until the far-distant future.
So robots could never be fully-fledged police officers?
I’m not saying it’s impossible that we could build robots that have some responsibility, but for something to be responsible in law, it’s got to have some kind of personhood. Giving a robot personhood right now is absurd – it’s like giving your washing machine personhood, or your mobile phone. Think of a robot like Data from Star Trek, a robot that effectively earns trust and genuine friendship from its human colleagues, that demonstrates its reliability over many years of companionship, and actually cares about the people it works with. That’s what we’d need in order to be able to assign it consequential responsibilities like the power to arrest someone. I think we’re looking hundreds of years into the future before we can build such a machine.
What kinds of problems could a robot police officer encounter?
There have been examples of robots being hassled by kids, although you can’t really abuse a machine, as such. Another problem is the robot being ‘gamed’. In other words, people will work out what its weaknesses are, where its senses are, and then try and back it into a corner or persuade it to go in a particular direction.
Another big worry that I have is hacking, and we know from experience that no systems are unhackable. We’ve seen incidents of driverless cars being hacked, and even devices apparently as benign as webcams. So a malicious person could hack into a police robot and cause all kinds of havoc, particularly if they’re remotely controlling the robot. All told, you’ve got a whole spectrum of potential problems with robot police, and these will all happen – there’s no doubt about it.
Who’s responsible if someone is injured by a police robot, or if it makes a mistake?
The owner of the robot probably ultimately has responsibility, but if there was a manufacturing fault, it’s no different to your car. If you crash into someone and cause injury, it’s your responsibility, but if it turned out the crash was partly caused by a significant fault in the car, then the responsibility might be shared with the people who maintained your car – who fixed the brakes the last time, for example – or even with the car’s manufacturers, who, for whatever reason, might have built in some design flaws.
Do we need any new laws to deal with potential police robot incidents?
Robots are no different from any other manufactured object. They’re human-made artefacts, and we have tonnes of legal history of accidents with machines, in which culpability is discoverable and people are held to account and end up paying for it, often through their insurance, of course.
So I think it’s quite wrong to give robots any special status in this regard. I suspect the new law that’s needed is more around issues of transparency. So you’ve heard of a black box in an aeroplane – it’s a flight data recorder, and when air accidents happen, the first thing that the investigators do is look for the recorders. They’re absolutely crucial to finding out what went wrong in the accident. I think that robots, especially those in roles such as the police, absolutely must be fitted with a robot equivalent of the flight data recorder that basically records everything that happens to them. In fact, I recently wrote a paper on this: The Case for an Ethical Black Box. I think it should be illegal to operate driverless cars, care robots or police robots without one.
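To make the idea concrete, here is a minimal sketch of what such an “ethical black box” might look like in software – an append-only, timestamped event log. All of the names and event types below are illustrative assumptions, not drawn from any real robot system or from the paper itself:

```python
import json
import time

class EthicalBlackBox:
    """Minimal sketch of an 'ethical black box': an append-only,
    timestamped record of what the robot senses and decides.
    Class and method names are illustrative only."""

    def __init__(self, path):
        self.path = path

    def record(self, event_type, payload):
        # Append-only: open in append mode so earlier entries are
        # never overwritten, mirroring a flight data recorder.
        entry = {"t": time.time(), "type": event_type, "data": payload}
        with open(self.path, "a") as f:
            f.write(json.dumps(entry) + "\n")

# Usage: log sensor readings and decisions as they happen.
box = EthicalBlackBox("robot_log.jsonl")
box.record("sensor", {"lidar_min_range_m": 0.42})
box.record("decision", {"action": "stop", "reason": "obstacle ahead"})
```

The append-only, one-event-per-line format matters for the investigator’s use case: after an incident, the log can be replayed in order to reconstruct what the robot perceived and why it acted as it did.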
There must be some advantages to robot police officers. Couldn’t they be completely fair and impartial in a way that a human cannot?
The experience of AI [artificial intelligence] has shown that this is not the case. It’s very difficult to build unbiased AI systems. Face recognition algorithms are typically quite good at recognising white faces, but not other ethnicities, and this is simply a bias that reflects the fact that the datasets used to train the facial recognition algorithms have not been properly designed. So the idea that a robot would be more impartial is... I mean, it depends on the kinds of decisions it’s making. Unfortunately, there are examples of bias in AI systems being reported all the time.
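The dataset problem Winfield describes can be illustrated with a toy simulation. The group names, counts, and the accuracy formula below are all invented for illustration – this is a crude model of how under-representation in training data can translate into worse per-group accuracy, not a real face-recognition pipeline:

```python
from collections import Counter

# Toy training set heavily skewed toward one group, standing in for an
# unrepresentative face dataset (groups and numbers are illustrative).
train = ["group_a"] * 900 + ["group_b"] * 100
counts = Counter(train)

def simulated_accuracy(group):
    # Crude model of the effect described in the interview: accuracy
    # rises with a group's share of the training data.
    share = counts[group] / len(train)
    return 0.5 + 0.45 * share

for g in ("group_a", "group_b"):
    print(f"{g}: {counts[g]} training examples, "
          f"simulated accuracy {simulated_accuracy(g):.2f}")
```

Under this toy model, the well-represented group ends up with much higher simulated accuracy than the under-represented one – the system is “biased” not through any intent, but simply because of what it was trained on.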
So are police robots more of a publicity stunt than a realistic application for humanoid robots right now?
Yes, I think the worry is that it can be a PR stunt, particularly if you’re a country that is very serious about investing heavily in robotics and AI. I think it helps to raise the visibility and the profile of that level of investment so, yes, there’s probably a big publicity aspect to it.
“A MALICIOUS PERSON COULD HACK INTO A POLICE ROBOT AND CAUSE ALL KINDS OF HAVOC, PARTICULARLY IF THEY’RE REMOTELY CONTROLLING IT”