ROBOCOPS: THE NEW FACE OF THE POLICE, OR AN ELABORATE PR STUNT?

Prof Alan Winfield, a robot ethicist at the University of the West of England, discusses the pros and cons of police robots


What do you think today’s robots can usefully contribute to the police service?

The one positive thing that I can see is a kind of reassuring presence. That’s if they’re trusted. It depends on how people react to the robots, but robots moving around a shopping mall, for instance, could prove reassuring – even if not as much as real, live human police. Having said that, I do appreciate there are cultural differences, and in some countries, particularly in the Far East, robots are likely regarded with a greater level of trust than in the UK.

There’s talk of robots being given greater powers. Could they make arrests?

The power to arrest someone is a privileged duty because you are essentially making a judgment about whether that person has committed a crime. If a human makes that judgment and it turns out to have been incorrect, then they can be held accountable. But you can’t sanction a robot: they can’t be held responsible for their own behaviour, at least until the far-distant future.

So robots could never be fully-fledged police officers?

I’m not saying it’s impossible that we could build robots that have some responsibility, but for something to be responsible in law, it’s got to have some kind of personhood. Giving a robot personhood right now is absurd – it’s like giving your washing machine personhood, or your mobile phone. Think of a robot like Data from Star Trek, a robot that effectively earns trust and genuine friendship from its human colleagues, that demonstrates its reliability over many years of companionship, and actually cares about the people it works with. That’s what we’d need in order to be able to assign it consequential responsibilities like the power to arrest someone. I think we’re looking hundreds of years into the future before we can build such a machine.

What kinds of problems could a robot police officer encounter?

There have been examples of robots being hassled by kids, although you can’t really abuse a machine, as such. Another problem is the robot being ‘gamed’. In other words, people will work out what its weaknesses are, where its senses are, and then try to back it into a corner or persuade it to go in a particular direction.

Another big worry that I have is hacking, and we know from experience that no systems are unhackable. We’ve seen incidents of driverless cars being hacked, and even devices apparently as benign as webcams. So a malicious person could hack into a police robot and cause all kinds of havoc, particularly if they’re remotely controlling the robot. All told, you’ve got a whole spectrum of potential problems with robot police, and these will all happen – there’s no doubt about it.

Who’s responsible if someone is injured by a police robot, or if it makes a mistake?

The owner of the robot probably ultimately has responsibility, but if there was a manufacturing fault, it’s no different to your car. If you crash into someone and cause injury, it’s your responsibility, but if it turned out the crash was partly caused by a significant fault in the car, then the responsibility might be shared with the people who maintained your car – who fixed the brakes the last time, for example – or even with the car’s manufacturers, who, for whatever reason, might have built in some design flaws.

Do we need any new laws to deal with potential police robot incidents?

Robots are no different from any other manufactured object. They’re human-made artefacts, and we have tonnes of legal history of accidents with machines, in which culpability is discoverable and people are held to account and end up paying for it, often through their insurance, of course.

So I think it’s quite wrong to give robots any special status in this regard. I suspect the new law that’s needed is more around issues of transparency. So you’ve heard of a black box in an aeroplane – it’s a flight data recorder, and when air accidents happen, the first thing that the investigators do is look for the recorders. They’re absolutely crucial to finding out what went wrong in the accident. I think that robots, especially those in roles such as the police, absolutely must be fitted with a robot equivalent of the flight data recorder that basically records everything that happens to it. In fact, I recently wrote a paper on this: The Case For An Ethical Black Box. I think it should be illegal to operate driverless cars, care robots or police robots without one.
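
To make that idea concrete, here is a minimal sketch in Python of the kind of recorder an ethical black box implies: an append-only log of timestamped sensor readings, decisions and actions, hash-chained so that later tampering is detectable. The class and field names are illustrative assumptions for this article, not taken from Winfield’s paper.

# Minimal sketch of an 'ethical black box' style recorder (illustrative only).
import hashlib
import json
import time

class EthicalBlackBox:
    """Append-only event log; each record is chained to the previous one
    by a SHA-256 hash, so after-the-fact edits are detectable."""

    def __init__(self):
        self._records = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def record(self, event_type, payload):
        # Log one timestamped event (e.g. a sensor reading, decision or action).
        entry = {
            "timestamp": time.time(),
            "type": event_type,
            "payload": payload,
            "prev_hash": self._last_hash,
        }
        serialized = json.dumps(entry, sort_keys=True)
        self._last_hash = hashlib.sha256(serialized.encode()).hexdigest()
        entry["hash"] = self._last_hash
        self._records.append(entry)

    def verify(self):
        """Recompute the hash chain; returns False if any record was altered."""
        prev = "0" * 64
        for entry in self._records:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev = digest
        return True

if __name__ == "__main__":
    box = EthicalBlackBox()
    box.record("sensor", {"lidar_min_range_m": 0.4})
    box.record("decision", {"action": "stop", "reason": "pedestrian detected"})
    box.record("action", {"motors": "halted"})
    print("log intact:", box.verify())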

There must be some advantages to robot police officers. Couldn’t they be completely fair and impartial in a way that a human cannot?

The experience of AI [artificial intelligence] has shown that this is not the case. It’s very difficult to build unbiased AI systems. Face recognition algorithms are typically quite good at recognising white faces, but not other ethnicities, and this is simply a bias that reflects the fact that the datasets used to train the facial recognition algorithms have not been properly designed. So the idea that a robot would be more impartial is... I mean, it depends on the kinds of decisions it’s making. Unfortunately, there are examples of bias in AI systems being reported all the time.
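
As an illustration of what such bias looks like in practice, here is a minimal Python sketch of auditing a recogniser’s accuracy per demographic group rather than overall; the groups and predictions below are synthetic placeholders, purely to show the bookkeeping, not real results.

# Illustrative per-group accuracy audit (synthetic data, assumed labels).
from collections import defaultdict

def per_group_accuracy(predictions, labels, groups):
    """Return accuracy broken down by group, exposing disparities that a
    single overall accuracy figure would hide."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        total[group] += 1
        if pred == label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

if __name__ == "__main__":
    # Synthetic example: the model does well on the over-represented group
    # and poorly on the under-represented one.
    preds = ["a", "a", "b", "b", "a", "c", "c", "d"]
    truth = ["a", "a", "b", "b", "a", "d", "b", "a"]
    groups = ["group_1", "group_1", "group_1", "group_1", "group_1",
              "group_2", "group_2", "group_2"]
    print(per_group_accuracy(preds, truth, groups))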

So are police robots more of a publicity stunt than a realistic application for humanoid robots right now?

Yes, I think the worry is that it can be a PR stunt, particularly if you’re a country that is very serious about investing heavily in robotics and AI. I think it helps to raise the visibility and the profile of that level of investment so, yes, there’s probably a big publicity aspect to it.

“A MALICIOUS PERSON COULD HACK INTO A POLICE ROBOT AND CAUSE ALL KINDS OF HAVOC, PARTICULARLY IF THEY’RE REMOTELY CONTROLLING IT”
