Khaleej Times

Robots may be smarter, but are they ‘electronic persons’?

By Kyle Bowyer

Science fiction likes to depict robots as autonomous machines, capable of making their own decisions and often expressing their own personalities. Yet we also tend to think of robots as property, and as lacking the kind of rights that we reserve for people.

But if a machine can think, decide and act on its own volition, if it can be harmed or held responsible for its actions, should we stop treating it like property and start treating it more like a person with rights?

What if a robot achieves true self-awareness? Should it have equal rights with us and the same protection under the law, or at least something similar?

These are some of the issues being discussed by the European Parliament’s Committee on Legal Affairs. Last year it released a draft report and motion calling for a set of civil law rules on robotics regulating their manufacture, use, autonomy and impact upon society.

Of the legal solutions proposed, perhaps most interesting was the suggestion of creating a legal status of “electronic persons” for the most sophisticated robots.

Approaching personhood

The report acknowledged that improvements in the autonomous and cognitive abilities of robots make them more than simple tools, and render ordinary rules on liability, such as contractual and tort liability, insufficient for handling them.

For example, the current EU directive on liability for harm by robots only covers foreseeable damage caused by manufacturing defects. In these cases, the manufacturer is responsible. However, when robots are able to learn and adapt to their environment in unpredictable ways, it’s harder for a manufacturer to foresee problems that could cause harm.

The report also asks whether sufficiently sophisticated robots should be regarded as natural persons, legal persons (like corporations), animals or objects. Rather than lumping them into an existing category, it proposes that a new category of “electronic person” is more appropriate.

The report does not advocate immediate legislative action, though. Instead it proposes that legislation be updated if and when robots develop greater behavioural sophistication. If this occurs, one recommendation is to reduce the liability of “creators” in proportion to the autonomy of the robot, with a compulsory “no-fault” liability insurance scheme covering the shortfall.

But why go so far as to create a new category of “electronic persons”? After all, computers still have a long way to go before they match human intelligence, if they ever do.

But it can be agreed that robots, or more precisely the software that controls them, are becoming increasingly complex. Autonomous (or “emergent”) machines are becoming more common.

There are ongoing discussions about the legal liability for autonomous vehicles, or whether we might be able to sue robotic surgeons. These are not complicated problems as long as liability rests with the manufacturers. But what if manufacturers cannot be easily identified, such as when open source software is used by autonomous vehicles? Whom do you sue when there are millions of “creators” all over the world?

Artificial intelligence is also starting to live up to its moniker. Alan Turing, the father of modern computing, proposed a test in which a computer is considered “intelligent” if it fools humans into believing it is human through its responses to questions. Already there are machines that are getting close to passing this test.

There are also other incredible successes, such as the computer that creates soundtracks to videos that are indistinguishable from natural sounds, the robot that can beat CAPTCHA tests, one that can create handwriting indistinguishable from human handwriting, and the AI that recently beat some of the world’s best poker players. Robots may eventually match human cognitive abilities, and they are becoming increasingly human-like, including the ability to “feel” pain.

Electronic persons

If we did give robots some kind of legal status, what would it be? If they behaved like humans we could treat them like legal subjects rather than legal objects, or at least something in between. Legal subjects have rights and duties, and this gives them legal “personhood”. They do not have to be physical persons; a corporation is not a physical person but is recognised as a legal subject. Legal objects, on the other hand, do not have rights or duties although they may have economic value.

Assigning rights and duties to an inanimate object or software program independent of its creators may seem strange. However, with corporations we already see extensive rights and obligations given to fictitious legal entities. Perhaps the approach to robots could be similar to that of corporations? The robot (or software program), if sufficiently sophisticated or if satisfying certain requirements, could be given similar rights to a corporation. This would allow it to earn money, pay taxes, own assets and sue or be sued independently of its creators. Its creators could, like directors of corporations, have rights or duties to the robot and to others with whom the robot interacts.

Robots would still have to be partly treated as legal objects since, unlike corporations, they may have physical bodies. The “electronic person” could thus be a combination of both a legal subject and a legal object.

The author is a lecturer at Curtin Law School, Curtin University.

