Bangkok Post

Who will be to blame when the robots go wrong?

- TIMOTHY LAVIN and MARY DUENWALD

In the demilitarised zone dividing North and South Korea, SGR-1 robots are on patrol, equipped with cameras and radar to detect intruders as well as speakers to warn them off. If that fails, they also carry machine guns and grenade launchers.

In the US, the Home Exploring Robotic Butler can retrieve a book from a shelf, a meal from a microwave or a drink from the kitchen. It can even separate an Oreo cookie.

In Japan, a seal-like robot called Paro provides companionship for seniors — and seems to ease the effects of dementia.

Over the next few decades, robots will become part of everyday life. But as they grow more sophisticated and autonomous, they will confront situations of cultural and moral ambiguity that won't be easily resolved — situations that people, over the millennia, have learned to navigate but that resist codification that machines can easily understand. This means robots, from the battlefield to the nursing home, will require more advanced ethical-decision-making abilities. And humans will need to think through what should happen when robots cause harm.

Three challenges in particular need to be explored. The first and most immediate is in warfare. Some 40 countries are at work on weapons and military equipment that have some degree of autonomy — from drones to the Legged Squad Support System — as are plenty of private companies. The appeal seems obvious. Unmanned weapons don’t need health insurance or food or hazard pay. They never lash out in anger, disobey an order or suffer from post-traumatic stress. And, at least in theory, they could save the lives of many human soldiers.

At the same time, fully autonomous weapons — those that are capable of making their own decisions about whether to attack or kill, without a human "in the loop" — make us deeply uneasy. Only 26% of respondents to a survey by the University of Massachusetts Amherst favoured their use. Human Rights Watch has asserted they violate international humanitarian law and should be banned.

Yet bans on specific weapons systems — such as military airplanes or submarines — have almost never been effective in the past. Instead, legal prohibitions and ethical norms have arisen that effectively limit their use. So a more promising approach might be to adapt existing international law to govern autonomous technology — for instance, by requiring that such weapons, like all others, can't be used indiscriminately or cause unnecessary suffering. It may turn out that robotic weapons are actually better at meeting those requirements than humans are.

Outside of warfare, robots will confront situations with no obvious moral resolution. Suppose one is assigned to make sure your grandmother takes her pills. Then one day she refuses. A host of quandaries — from medical ethics to privacy rights to cultural mores — arise that would be hard enough for a person to resolve.

Situations like this will demand that engineers cooperate closely with ethicists in designing software, and they will require much more sophisticated rules than the "Three Laws of Robotics" made famous by Isaac Asimov. Ronald Arkin, of the Georgia Institute of Technology, has done pioneering work on creating "ethical governors" for robots. But we're a long way from a satisfactory simulation of morality. Technology companies would be wise to boost their investment in such research for the sake of both profits and liability.

Which leads to our third concern. When a robot with some degree of autonomy unexpectedly harms someone or something, who's liable? The manufacturer? The software designer? The owner? To some extent, the existing tort system can be adapted to help sort things out. One study suggests a hybrid liability system in which robots would be treated as domesticated animals in cases where their owners or victims acted negligently, and as commercial products in cases where the machines were defective.

As robots grow more sophisticated, and people more reliant on them, another model to consider is the National Vaccine Injury Compensation Programme. The government could establish a fund, paid for by a tax on autonomous machines, to compensate accident victims, thus ensuring that manufacturers won't fear rare but very costly lawsuits — and won't be discouraged from inventing new robots — provided they follow best practices in designing and marketing them.

In warfare as in ordinary life, when robots cause harm, it will be critical that the lines of accountability are clear. Scenarios in which intelligent machines grow self-aware enough to enslave humanity — evoked so vividly in movies such as The Terminator — aren't plausible. Yet they express an important human intuition: there is a danger in ceding too much control to technology.

This intuition can guide us into the new robotic era. But we shouldn't let it impede a promising field. With rules in place, the rise of the machines should be nothing to fear.

©2013 BLOOMBERG VIEW
