New Zealand Listener

Psychology

With robots ever more common in society, getting things right is no easy matter.

- by Marc Wilson

Would you buy a robot? At a time when our cellphones can do more than a warehouse full of mainframes could just a few decades ago, when entrepreneurs are talking about colonising Mars and when actors can be computer-generated into movies long after they've died, it's reasonable to think that time is close. Or even upon us. We have robotised our industry, and our cellphones can talk back to us and recognise our voices.

We are at a point where US science-fiction writer Isaac Asimov's Three Laws of Robotics are becoming a reality. Specifically: a robot may not harm a human or, through inaction, allow a human to come to harm; a robot must do as it's told by a human, as long as this doesn't conflict with the first law; and a robot must protect its own existence, as long as doing so doesn't conflict with the first two laws.
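To see why the laws are trickier than they read, it helps to spell them out as a strict priority ordering, with "do nothing" counted as just another choice. Here is a minimal sketch in Python; everything in it (the fields, the scoring, the idea that harm can be reduced to a number at all) is an illustrative assumption, not any real robot's control code:

```python
from dataclasses import dataclass

# Illustrative sketch only: Asimov's Three Laws as a lexicographic ranking.
# The fields and numbers are invented for the example.

@dataclass
class Option:
    name: str
    human_harm: float   # expected harm to humans (Law 1)
    disobeys: bool      # ignores a human order (Law 2)
    self_damage: float  # risk to the robot itself (Law 3)

def choose(options: list) -> Option:
    # Tuples compare left to right, so Law 1 dominates Law 2,
    # which in turn dominates Law 3; lower scores win.
    return min(options, key=lambda o: (o.human_harm, o.disobeys, o.self_damage))

# "Carry on" is inaction, so harm allowed through inaction is weighed
# under Law 1 like any other harm.
swerve = Option("swerve", human_harm=0.0, disobeys=True, self_damage=1.0)
carry_on = Option("carry on", human_harm=1.0, disobeys=False, self_damage=0.0)
print(choose([swerve, carry_on]).name)  # -> "swerve": Law 1 outranks the rest
```

The hard part, of course, is the bit the sketch assumes away: deciding what counts as harm, and how much.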

It's all so easy when it's written in a book or portrayed on the big screen: it's a spectator sport. Except that we're on the verge of having to decide how we put this stuff into practice.

Take self-driving cars. As with cellphones, the technology has moved at a crazy pace, and as of July this year, smart cars had reportedly driven more than 210 million kilometres. However, when it comes to the accident rate, the story is a little mixed. The Google fleet of smart cars has reported a single prang, but there's been at least one fatality, in a Tesla. That said, the accident rate so far is pretty low. Tesla has suggested there are fewer accidents involving self-driving cars than cars with human drivers, although the comparison may be a little unfair, as self-driving is reserved for freeway-style motoring (where accident rates also tend to be lower).

So here’s a moral dilemma. Imagine that your autopilot is taking you to work. All is well until someone tries to cut it fine by running across the pedestrian crossing when the crossing lights are red. What do you want the car to do? Run the person down (killing him or her but preserving your life) or swerve and miss the person (but kill you)? What if it’s 10 people dashing across the road? What if your family are also in the car?

This is a version of that classic trolley-car problem where people are invited to decide if they'd let the car roll into a group of workmen up ahead or divert to a siding where a single workman cops it. Except that in this scenario it's you or the pedestrians.

These are the kinds of scenarios that psychologist Jean-François Bonnefon of the Toulouse School of Economics and his colleagues presented to people in a series of online studies. It's an elegant piece of work with real implications for what's to come.

The classic trolley-car problem often results in a utilitarian response – maximising the best outcome for the most people. And the same is true for smart cars: participants were comfortably (four to one) in favour of smart cars doing the right thing, where doing the right thing is minimising deaths. If it's you or the 10 pedestrians, then save the pedestrians.

But it's not all rosy for the pedestrians. Participants favour measures that save the most lives, and they think this is what other people should look for when buying a smart car, but they are much more iffy about whether they would buy one themselves (particularly if told the Government will legislate for utility). We want others to do the right thing, but when it comes down to it, we'd prefer a car that weights our own life more highly.
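The two preferences Bonnefon's participants revealed amount to the same comparison with different weights. A hedged sketch in Python (the function names and numbers are invented for illustration; no actual autopilot is programmed this way):

```python
# Sketch of the rule participants endorsed for everyone else: minimise
# total deaths, with no special weight on the passenger. The
# "self-protective" variant they'd rather own simply counts the
# passenger's life more heavily. All values are invented.

def utilitarian(passengers: int, pedestrians: int) -> str:
    # Swerving kills the passengers; carrying on kills the pedestrians.
    return "swerve" if passengers < pedestrians else "carry on"

def self_protective(passengers: int, pedestrians: int, weight: float = 10.0) -> str:
    # Same comparison, but each passenger counts `weight` times over.
    return "swerve" if passengers * weight < pedestrians else "carry on"

print(utilitarian(1, 10))      # -> "swerve": save the ten pedestrians
print(self_protective(1, 10))  # -> "carry on": the owner's car protects the owner
```

The gap between the two functions is the whole finding: the rule we endorse for other people's cars is not the one we'd pay for in our own.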

So, we may not be there just yet, but the future isn’t far away.


Isaac Asimov: his robotics laws can be tricky in practice.
