The Post

Here’s what happens when police pull over a driverless car

Carolyn Y. Johnson reports on the decisions people would make in life-and-death situations.

- Peter Holley

Imagine this scenario: the brakes fail on a self-driving car as it hurtles towards a busy pedestrian crossing. A homeless person and a criminal are crossing in front of the car. Two cats are in the opposing lane.

Should the car swerve to mow down the cats or plough into two people?

It’s a relatively straightforward ethical dilemma, as moral quandaries go. And people overwhelmingly prefer to save human lives over animals, according to a massive new ethics study that asked people how a self-driving car should respond when faced with a variety of extreme trade-offs – dilemmas to which more than 2 million people responded.

But what if the choice is between two elderly people and a pregnant woman? An athletic person or someone who is obese? Passengers versus pedestrians?

The study, published in Nature, identified a few preferences that were strongest: People opt to save people over pets, to spare the many over the few and to save children and pregnant women over older people.

But it also found other preferences for sparing women over men, athletes over obese people and higher-status people, such as executives, instead of homeless people or criminals. There were also cultural differences in the degree of these preferences – in how strongly people would prefer to save younger people over the elderly in a cluster of mostly Asian countries, for example.

‘‘We don’t suggest that [policymakers] should cater to the public’s preferences. They just need to be aware of it, to expect a possible reaction when something happens. If, in an accident, a kid does not get special treatment, there might be some public reaction,’’ said Edmond Awad, a computer scientist at the Massachusetts Institute of Technology Media Lab who led the work.

The thought experiments posed by the researchers’ Moral Machine website went viral, with their pictorial quiz taken by several million people in 233 countries. The study, which included 40 million responses to different dilemmas, provides a fascinating snapshot of global public opinion as the era of self-driving cars looms large in the imagination, a vision of future convenience propagated by technology companies that has recently been set back by the death of a woman in Arizona who was hit by a self-driving Uber vehicle.

Awad said one of the major surprises to the research team was how popular the project became. It was picked up on Reddit, featured in news stories, and influential YouTube users created videos of themselves going through the questions.

The thought-provoking scenarios are fun to debate. They build off a decades-old thought experiment by philosophers called ‘‘the trolley problem,’’ in which an out-of-control trolley hurtles towards a group of five people standing in its path and a bystander has the option to let the trolley crash into them, or divert it onto a track where a single person stands.
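At bottom, the study’s aggregate findings are vote tallies over forced choices like these. As a toy illustration only – not the researchers’ actual methodology, and using invented responses – counting how often respondents spare one group over another might look like this:

```python
# Toy illustration of tallying Moral Machine-style forced choices.
# The response data below is invented for demonstration purposes.
from collections import Counter

# Each entry records which group a hypothetical respondent chose to spare
# in a "humans vs. pets" dilemma.
responses = [
    "humans", "humans", "humans", "pets",
    "humans", "humans", "pets", "humans",
]

def preference_share(responses, option):
    """Fraction of respondents who chose to spare `option`."""
    counts = Counter(responses)
    return counts[option] / len(responses)

print(preference_share(responses, "humans"))  # 6 of 8 -> 0.75
```

A real analysis would control for the many attributes varied across scenarios (age, number of people, legality of crossing), but the underlying signal is the same kind of proportion.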

Outside researchers said the results were interesting, but cautioned that they could be overinterpreted. In a randomised survey, researchers try to ensure that a sample is unbiased and representative of the overall population, but in this case the voluntary study was taken by a population that was predominantly younger men. The scenarios are also distilled to the extreme and far more black-and-white than the ones that are abundant in the real world, where probabilities and uncertainty are the norm.

‘‘The big worry that I have is that people reading this are going to think that this study is telling us how to implement a decision process for a self-driving car,’’ said Benjamin Kuipers, a computer scientist at the University of Michigan, who was not involved in the work.

Kuipers added that these thought experiments may frame some of the decisions that carmakers and programmers make about autonomous vehicle design in a misleading way. There’s a moral choice, he argued, that precedes the conundrum of whether to crash into a barrier and kill three passengers or to run over a pregnant woman pushing a stroller.

‘‘Building these cars, the process is not really about saying, ‘If I’m faced with this dilemma, who am I going to kill?’ It’s saying, ‘If we can imagine a situation where this dilemma could occur, what prior decision should I have made to avoid this?’ ’’ Kuipers said.

Nicholas Evans, a philosopher at the University of Massachusetts at Lowell, pointed out that while the researchers described their three strongest principles as the ones that were universal, the cutoff between those and the weaker ones that weren’t deemed universal was arbitrary. They categorised the preference to spare young people over elderly people, for example, as a global moral preference, but not the preference to spare those who are following walk signals versus those who are jaywalking, or to save people of higher social status.

And the study didn’t test scenarios that could have raised even more complicated questions by showing how biased and problematic public opinion is as an arbiter of ethics, for example by including the race of the people walking across the road. Laws and regulations should not necessarily reflect public opinion, ethicists say, but protect vulnerable people against it.

Evans is working on a project that he said has been shaped in some ways as a response to the approach taken by the MIT team. He says he plans to use more nuanced crash scenarios, where real-world transportation data can provide a probability of surviving a T-bone highway crash on the passenger side, for example, to assess the safety implications of self-driving cars on American roadways.

‘‘We want to create a mathematical model for some of these moral dilemmas and then utilise the best moral theories that philosophy has to offer, to show what the result of choosing an autonomous vehicle to behave in a certain way is,’’ Evans said.

Iyad Rahwan, a computer scientist at MIT who oversaw the work, said that a public poll shouldn’t be the foundation of artificial intelligence ethics.

But he said that regulating AI will be different from regulating traditional products, because the machines will have autonomy and the ability to adapt – which makes it more important to understand how people perceive AI and what they expect of technology.

‘‘We should take public opinion with a grain of salt,’’ Rahwan said. ‘‘I think it’s informative.’’

– Washington Post

As self-driving cars become increasingly common on American streets, an obvious question arises: What happens when police want to pull over a robot-driven vehicle without a human backup driver?

In its recently updated Emergency Response Guide, Alphabet’s Waymo – which has hundreds of autonomous Chrysler Pacifica minivans on the road in Phoenix – provides a protocol that may offer some glimpse of what’s to come.

‘‘The Waymo vehicle uses its sensors to identify police or emergency vehicles by detecting their appearance, their sirens, and their emergency lights,’’ the guide states. ‘‘If a Waymo fully self-driving vehicle detects that a police or emergency vehicle is behind it and flashing its lights, the Waymo vehicle is designed to pull over and stop when it finds a safe place to do so.’’

Once it has come to a stop, a Waymo vehicle can unlock its doors and roll down its windows, allowing someone from the company’s support team to communicate with law enforcement, according to the guide.

If there are passengers in the vehicle, the guide states, Waymo’s ‘‘rider support specialists’’ can communicate with them via speakers, displays and ‘‘in-vehicle telecommunications.’’

If necessary, a Waymo employee may even be dispatched to the scene in person. The company says employees may be sent to the scenes of wrecks as well to interact with police and passengers.

‘‘The Waymo vehicle is capable of detecting that it was involved in a collision,’’ the guide states, noting that if a vehicle’s air bag is deployed, its self-driving capability is disabled. ‘‘The vehicle will then brake until it reaches a full stop and immediately notify Waymo’s Fleet Response specialists.’’

Waymo has been testing its fleet of autonomous Chrysler Pacifica minivans in Phoenix for years, but the vehicles have been ferrying the public around portions of town without a backup driver for nearly a year.

The company, which has a 600-vehicle fleet in Phoenix, says its testing is ‘‘picking up speed’’ and recently announced plans to order thousands more Pacificas as it expands into other cities.

Hoping to avoid a landscape of varying state laws, companies like Waymo have been pushing for a set of federal regulatory rules that would help them to expand nationally, unleashing tens of thousands of self-driving vehicles onto public roads.

‘‘Some areas, like Connecticut and the District of Columbia, ban autonomous vehicles without a human in the driver’s seat. Others, like Michigan and Washington, allow it only if certain conditions are met,’’ Bloomberg reported in January.

‘‘California, home to many self-driving and other car technology companies, such as Lyft Inc. and Uber Technologies Inc., also requires a human behind a steering wheel, a California Department of Motor Vehicles spokesperson told Bloomberg Law.’’

With the number of autonomous vehicles and rules governing them expanding, US transportation regulators are already debating whether police should have the ability to disable driverless cars during emergencies, according to Reuters.

Even if the answer is yes, regulators acknowledge, a host of other consequential questions must be answered.

In meetings over the summer, Reuters reported, many of the experts present argued that the same tools that police might use to control self-driving cars could be exploited by hackers and terrorists.

Many regulators, the article notes, ‘‘agreed that it is a question of when, not if, there is a massive cyber security attack targeting’’ driverless vehicles.

They added that ‘‘planning exercises are needed to prepare for and mitigate a large-scale, potentially multimodal cyber security attack.’’

– Washington Post
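The pull-over sequence Waymo’s guide describes – detect an emergency vehicle behind the car with flashing lights or a siren, stop at a safe spot, then unlock the doors and connect law enforcement to remote support – can be sketched as a simple decision routine. Everything below (function names, action labels) is hypothetical illustration, not Waymo’s actual software:

```python
# A minimal sketch of the pull-over protocol described in Waymo's
# Emergency Response Guide. All identifiers here are hypothetical;
# the production system is vastly more complex.

def should_pull_over(vehicle_behind: bool, lights_flashing: bool,
                     siren_detected: bool) -> bool:
    """Decide to pull over when an emergency vehicle is detected behind."""
    return vehicle_behind and (lights_flashing or siren_detected)

def respond(safe_spot_found: bool) -> list:
    """Actions the guide describes once the car decides to stop."""
    if not safe_spot_found:
        # Keep driving until a safe stopping place is found.
        return ["continue_to_safe_spot"]
    return ["stop", "unlock_doors", "roll_down_windows",
            "connect_rider_support"]

if should_pull_over(vehicle_behind=True, lights_flashing=True,
                    siren_detected=False):
    print(respond(safe_spot_found=True))
```

The design point the guide implies is that the vehicle itself only handles detection and stopping; communication with police is handed off to a remote human support team rather than automated.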

BLOOMBERG: A Waymo Chrysler Pacifica autonomous vehicle parked in Chandler, Arizona. The machine will pull over and open a door if police follow it with flashing lights. The officer can then talk to someone from the company’s support team.
