The Morning Journal (Lorain, OH)

Could artificial intelligence be considered a person under law?

- The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts. By Roman V. Yampolskiy

Humans aren’t the only people in society — at least according to the law. In the U.S., corporations have been given rights of free speech and religion. Some natural features also have person-like rights. But both of those required changes to the legal system. A new argument has laid a path for artificial intelligence systems to be recognized as people too — without any legislation, court rulings or other revisions to existing law.

Legal scholar Shawn Bayern has shown that anyone can confer legal personhood on a computer system, by putting it in control of a limited liability corporation in the U.S. If that maneuver is upheld in courts, artificial intelligence systems would be able to own property, sue, hire lawyers and enjoy freedom of speech and other protections under the law. In my view, human rights and dignity would suffer as a result.

The corporate loophole

Giving AIs rights similar to humans involves a technical lawyerly maneuver. It starts with one person setting up two limited liability companies and turning over control of each company to a separate autonomous or artificially intelligent system. Then the person would add each company as a member of the other LLC. In the last step, the person would withdraw from both LLCs, leaving each LLC — a corporate entity with legal personhood — governed only by the other’s AI system.

That process doesn’t require the computer system to have any particular level of intelligence or capability. It could just be a sequence of “if” statements looking, for example, at the stock market and making decisions to buy and sell based on prices falling or rising. It could even be an algorithm that makes decisions randomly, or an emulation of an amoeba.
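For readers curious how little is required, here is a minimal sketch in Python of the kind of rule-based decision procedure described above; the function name and example prices are hypothetical, chosen only for illustration and not drawn from any actual trading system.

# Minimal illustrative sketch (hypothetical): a "decision maker" that is
# nothing more than a pair of "if" statements comparing two prices.
def decide(previous_price: float, current_price: float) -> str:
    if current_price < previous_price:
        return "buy"    # price fell, so buy
    if current_price > previous_price:
        return "sell"   # price rose, so sell
    return "hold"       # price unchanged, do nothing

# Example: a falling price triggers a "buy" decision.
print(decide(100.0, 97.5))   # prints "buy"

A system no more sophisticated than this could, under the maneuver described above, end up as the sole decision-maker for a legal person.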

Reducing human status

Granting human rights to a computer would degrade human dignity. For instance, when Saudi Arabia granted citizenship to a robot called Sophia, human women, including feminist scholars, objected, noting that the robot was given more rights than many Saudi women have.

In certain places, some people might have fewer rights than nonintelligent software and robots. In countries that limit citizens’ rights to free speech, free religious practice and expression of sexuality, corporations — potentially including AI-run companies — could have more rights. That would be an enormous indignity.

The risk doesn’t end there: If AI systems became more intelligent than people, humans could be relegated to an inferior role — as workers hired and fired by AI corporate overlords — or even challenged for social dominance.

Artificial intelligence systems could be tasked with law enforcement among human populations — acting as judges, jurors, jailers and even executioners. Warrior robots could similarly be assigned to the military and given power to decide on targets and acceptable collateral damage — even in violation of international humanitarian laws. Most legal systems are not set up to punish robots or otherwise hold them accountable for wrongdoing.

What about voting?

Granting voting rights to systems that can copy themselves would render humans’ votes meaningless. Even without taking that significant step, though, the possibility of AI-controlled corporations with basic human rights poses serious dangers. No current laws would prevent a malevolent AI from operating a corporation that worked to subjugate or exterminate humanity through legal means and political influence. Computer-controlled companies could turn out to be less responsive to public opinion or protests than human-run firms are.

Immortal wealth

Two other aspects of corporations make people even more vulnerable to AI systems with human legal rights: They don’t die, and they can give unlimited amounts of money to political candidates and groups.

Artificial intelligences could earn money by exploiting workers, using algorithms to price goods and manage investments, and finding new ways to automate key business processes. Over long periods of time, that could add up to enormous earnings — which would never be split up among descendants. That wealth could easily be converted into political power.

Politicians financially backed by algorithmic entities would be able to take on legislative bodies, impeach presidents and help to get figureheads appointed to the Supreme Court. Those human figureheads could be used to expand corporate rights or even establish new rights specific to artificial intelligence systems — expanding the threats to humanity even more.
