Albuquerque Journal

Who decides who lives or dies in a driverless car?

BY ERIC SCHWITZGEBEL, LOS ANGELES TIMES. Eric Schwitzgebel is a professor of philosophy at UC Riverside and the author of “Perplexities of Consciousness.” He blogs at The Splintered Mind. He wrote this for the Los Angeles Times. Distributed by Tribune Content Agency.

It’s 2025. You and your daughter are riding in a driverless car along Pacific Coast Highway.

The autonomous vehicle rounds a corner and detects a crosswalk full of children. It brakes, but your lane is unexpectedly full of sand. It can’t get traction.

Your car does some calculations: If it continues braking, there’s a 90 percent chance that it will kill at least three children. Should it save them by steering you and your daughter off the cliff?

This isn’t an idle thought experiment. Driverless cars will be programmed to avoid collisions with pedestrians and other vehicles. They will also be programmed to protect the safety of their passengers.

What happens in an emergency when these two aims come into conflict?

The California Department of Motor Vehicles is now trying to draw up safety regulations for autonomous vehicles. These regulations might or might not specify when it is acceptable for collision-avoidance programs to expose passengers to risk to avoid harming others — for example, by crossing the double-yellow line or attempting an uncertain maneuver on ice.

Google, which operates most of the driverless cars being street-tested in California, prefers that the DMV not insist on specific functional safety standards. Instead, Google proposes that manufacturers “self-certify” the safety of their vehicles, with substantial freedom to develop collision-avoidance algorithms as they see fit.

That’s far too much responsibility for private companies. Because determining how a car will steer in a risky situation is a moral decision, programming the collision-avoiding software of an autonomous vehicle is an act of applied ethics. We should bring the programming choices into the open, for passengers and the public to see and assess.

Regulatory agencies will need to set some boundaries. For example, some rules should presumably be excluded as too selfish.

Consider the over-simple rule of protecting the car’s occupants at all costs. This would imply that if the car calculates that the only way to avoid killing a pedestrian would involve sideswiping a parked truck, with a 5 percent chance of injury to the car’s passengers, then the car should instead kill the pedestrian.

Other possible rules might be too sacrificial of the passengers. The equally over-simple rule of maximizing lives saved without any special regard for the car’s occupants would unfairly disregard personal accountability.
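To make the contrast concrete, here is a minimal sketch, in Python, of how these two over-simple rules could be written down. Every name and number in it is a hypothetical illustration, not any manufacturer’s actual code.

```python
# Hypothetical sketch: two over-simple collision-avoidance rules.
# The maneuvers and probabilities are invented for illustration.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    p_passenger_harm: float   # chance a passenger is injured or killed
    p_pedestrian_harm: float  # chance a pedestrian is injured or killed

OPTIONS = [
    Maneuver("brake straight", p_passenger_harm=0.00, p_pedestrian_harm=1.00),
    Maneuver("sideswipe parked truck", p_passenger_harm=0.05, p_pedestrian_harm=0.00),
]

def protect_occupants_at_all_costs(options):
    # Rule 1: minimize passenger risk and ignore everyone else.
    return min(options, key=lambda m: m.p_passenger_harm)

def maximize_lives_saved(options):
    # Rule 2: minimize total expected harm, with no special regard
    # for the car's own occupants.
    return min(options, key=lambda m: m.p_passenger_harm + m.p_pedestrian_harm)

print(protect_occupants_at_all_costs(OPTIONS).name)  # brake straight: the pedestrian dies
print(maximize_lives_saved(OPTIONS).name)            # sideswipe: 5% passenger risk
```

Rule 1 reproduces the sideswipe example above: a 5 percent risk to passengers outweighs a pedestrian’s certain death. Rule 2 trades them the other way, whatever the passengers signed up for.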

What if other drivers — human drivers — have knowingly put themselves in danger? Should your autonomous vehicle risk your safety, perhaps even your life, because a reckless motorcyclist chose to speed around a sharp curve?

A private lab must not be allowed to resolve these difficult questions on our behalf.

That said, a good regulatory framework ought to allow some manufactur­er variation and consumer choice, within ethical limits.

Manufacturers or fleet operators could offer passengers a range of options. “When your child is in the car, our onboard systems will detect it and prioritize the protection of rear-seat passengers!” Cars might have aggressive modes (maximum allowable speed), safety modes, ethical utilitarian modes (perhaps visibly advertised so that others can admire your benevolence) and so forth.
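One way to picture such options is as a menu of weighting profiles that the car’s planner consults. A minimal sketch follows, with every mode name, weight and probability invented for illustration; no manufacturer’s actual scheme is described here.

```python
# Hypothetical sketch: consumer-selectable "ethics modes" as weighting
# profiles. All names and numbers are invented for illustration.

ETHICS_MODES = {
    # Prioritize rear-seat passengers (e.g., children) over everyone else.
    "protect_rear_seat": {"rear_seat": 3.0, "front_seat": 1.0, "bystander": 1.0},
    # Weigh the car's own passengers above bystanders.
    "passengers_first": {"rear_seat": 2.0, "front_seat": 2.0, "bystander": 1.0},
    # Ethical-utilitarian mode: every person counts the same.
    "utilitarian": {"rear_seat": 1.0, "front_seat": 1.0, "bystander": 1.0},
}

def weighted_harm(risks, mode):
    """Score a maneuver: sum over groups of (chance of harm x group weight)."""
    weights = ETHICS_MODES[mode]
    return sum(p * weights[group] for group, p in risks.items())

# Example: braking risks the passengers; swerving risks a bystander.
candidates = {
    "brake": {"rear_seat": 0.15, "front_seat": 0.15, "bystander": 0.00},
    "swerve": {"rear_seat": 0.00, "front_seat": 0.00, "bystander": 0.40},
}

for mode in ETHICS_MODES:
    choice = min(candidates, key=lambda name: weighted_harm(candidates[name], mode))
    print(mode, "->", choice)
    # protect_rear_seat and passengers_first pick "swerve";
    # utilitarian picks "brake".
```

The point of the sketch is only that the same physical situation yields different maneuvers depending on the mode the owner selected, which is exactly the kind of variation a regulator would have to bound.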

Some consumer freedom seems ethically desirable.

To require that all vehicles at all times employ the same set of collision-avoidance procedures would needlessly deprive people of the opportunity to choose algorithms that reflect their values.

Some people might wish to prioritize the safety of their children over themselves. Others might want to prioritize all passengers equally. Some people might wish to choose algorithms more self-sacrificial on behalf of strangers than the government could legitimately require of its citizens.

There will also always be trade-offs between speed and safety, and different passengers might legitimately weigh them differently, as we now do in our manual driving choices.

Further, although we might expect computers to have faster reaction times than people, our best computer programs still lag far behind normal human vision at detecting objects in novel, cluttered environments.

Suppose your car happens upon a woman pushing a rack of coats in a windy swirl of leaves. Vehicle owners may insist on some sort of preemptive override, some way of telling their car not to employ its usual algorithm, lest it sacrifice them for a mirage.

There is something romantic about the hand upon the wheel — about the responsibility it implies. But future generations might be amazed that we allowed music-blasting 16-year-olds to pilot vehicles unsupervised at 65 mph, with a flick of the steering wheel the difference between life and death. A well-designed machine will probably do better in the long run.

That machine will never drive drunk, never look away from the road to change the radio station or yell at the kids in the back seat. It will, however, have power over life and death. We need to decide — publicly — how it will exert that power.
