BBC Science Focus

ROBOCOPS: THE NEW FACE OF THE POLICE, OR AN ELABORATE PR STUNT?

Prof Alan Winfield, a robot ethicist at the University of the West of England, discusses the pros and cons of police robots


What do you think today’s robots can usefully contribute to the police service?

The one positive thing that I can see is a kind of reassuring presence. That's if they're trusted. It depends on how people react to the robots, but robots moving around a shopping mall, for instance, could prove reassuring – even if not as much as real, live human police. Having said that, I do appreciate there are cultural differences, and in some countries, particularly in the Far East, robots are likely regarded with a greater level of trust than in the UK.

There’s talk of robots being given greater powers. Could they make arrests?

The power to arrest someone is a privileged duty because you are essentially making a judgment about whether that person has committed a crime. If a human makes that judgment and it turns out to have been incorrect, then they can be held accountable. But you can't sanction a robot: they can't be held responsible for their own behaviour, at least until the far-distant future.

So robots could never be fully-fledged police officers?

I'm not saying it's impossible that we could build robots that have some responsibility, but for something to be responsible in law, it's got to have some kind of personhood. Giving a robot personhood right now is absurd – it's like giving your washing machine personhood, or your mobile phone. Think of a robot like Data from Star Trek, a robot that effectively earns trust and genuine friendship from its human colleagues, that demonstrates its reliability over many years of companionship, and actually cares about the people it works with. That's what we'd need in order to be able to assign it consequential responsibilities like the power to arrest someone. I think we're looking hundreds of years into the future before we can build such a machine.

What kinds of problems could a robot police officer encounter?

There have been examples of robots being hassled by kids, although you can’t really abuse a machine, as such. Another problem is the robot being ‘gamed’. In other words, people will work out what its weaknesses are, where its senses are, and then try and back it into a corner or persuade it to go in a particular direction.

Another big worry that I have is hacking, and we know from experience that no systems are unhackable. We've seen incidents of driverless cars being hacked, and even devices apparently as benign as webcams. So a malicious person could hack into a police robot and cause all kinds of havoc, particularly if they're remotely controlling the robot. All told, you've got a whole spectrum of potential problems with robot police, and these will all happen – there's no doubt about it.

Who's responsible if someone is injured by a police robot, or if it makes a mistake?

The owner of the robot probably ultimately has responsibility, but if there was a manufacturing fault, it's no different to your car. If you crash into someone and cause injury, it's your responsibility, but if it turned out the crash was partly caused by a significant fault in the car, then the responsibility might be shared with the people who maintained your car – who fixed the brakes the last time, for example – or even with the car's manufacturers, who, for whatever reason, might have built in some design flaws.

Do we need any new laws to deal with potential police robot incidents?

Robots are no different from any other manufactured object. They're human-made artefacts, and we have tonnes of legal history of accidents with machines, in which culpability is discoverable and people are held to account and end up paying for it, often through their insurance, of course. So I think it's quite wrong to give robots any special status in this regard. I suspect the new law that's needed is more around issues of transparency. So you've heard of a black box in an aeroplane – it's a flight data recorder, and when air accidents happen, the first thing that the investigators do is look for the recorders. They're absolutely crucial to finding out what went wrong in the accident. I think that robots, especially those in roles such as the police, absolutely must be fitted with a robot equivalent of the flight data recorder that basically records everything that happens to it. In fact, I recently wrote a paper on this: The Case for an Ethical Black Box. I think it should be illegal to operate driverless cars, care robots or police robots without one.

There must be some advantages to robot police officers. Couldn’t they be completely fair and impartial in a way that a human cannot?

The experience of AI [artificial intelligence] has shown that this is not the case. It's very difficult to build unbiased AI systems. Face recognition algorithms are typically quite good at recognising white faces, but not other ethnicities, and this is simply a bias that reflects the fact that the datasets used to train the facial recognition algorithms have not been properly designed. So the idea that a robot would be more impartial is... I mean, it depends on the kinds of decisions it's making. Unfortunately, there are examples of bias in AI systems being reported all the time.

So are police robots more of a publicity stunt than a realistic application for humanoid robots right now?

Yes, I think the worry is that it can be a PR stunt, particularly if you're a country that is very serious about investing heavily in robotics and AI. I think it helps to raise the visibility and the profile of that level of investment so, yes, there's probably a big publicity aspect to it.

“A malicious person could hack into a police robot and cause all kinds of havoc, particularly if they're remotely controlling it”

