Should human-like robots be given human-like rights?
With advances in AI, the notions of ethics, social responsibility and morality are expanding, Caroline Seymour writes.
I recently took notice of the annual Consumer Electronics Show, where thousands of firms and entrepreneurs showcase their innovative technologies with the intent of eventually bringing them to the consumer market. Looking at the reviews of some of the major attractions, I came across a few that I found to be of particular interest.
Among them: Samsung introduced its “app-based smart refrigerator,” powered by its Bixby smart assistant. If you are hungry, Bixby can helpfully locate a recipe for Grandma’s cookies. Samsung markets this appliance as being “more than a fridge. It’s a Family Hub.”
Meanwhile, Mercedes-Benz and Garmin have teamed up on a new smartwatch that is designed to advise the driver of the car on his or her current health data. If the driver is stressed, say, the device will adjust itself to calm the driver — for example, by finding a less stressful route.
The one robot that really caught my attention was the Lovot Robot from Groove X. It is marketed as having the ability “to nurture people’s capacity to love.” It is the cutest-looking thing, like a robotic teddy of some sort.
These innovations caught my attention because of the human functions they provide, and because of the corporate marketing strategies that push them as having human capacities; one might even go further and say human-like personalities.
Think about it. Wasn’t it your parents’ role to provide you with the capacity to love? Wasn’t it the role of the passenger to make sure the driver was OK behind the steering wheel? “Hey, you look tired; maybe we should stop for a while.”
This leads me to ask: If artificial intelligence and robotics are being constructed and marketed with human personality traits, should they also be provided with legal personality rights? Per the Civil Code of Quebec, such rights include the right to life, inviolability, integrity and more.
When I was a doctoral student, I discussed this proposition with my academic supervisor after I took notice of a CNN report on Google’s purchase of Boston Dynamics, a robotics company with links to the U.S. military and the Defense Advanced Research Projects Agency.
Boston Dynamics had created a robotic dog named Spot, along with a larger sibling, BigDog, whose mission was to serve as a robotic army dog, designed to transport equipment and weapons during military operations.
As part of Boston Dynamics’ testing procedures, employees would kick the mechanical dogs to test their resistance and stability. Videos of these robotic dogs being kicked drew reaction on social media, with concerns raised and sympathies offered regarding the abuse to which the robots were subjected.
It has been suggested that as robots are increasingly designed to look and behave like living entities, the notions of ethics, morality and social responsibility are expanding in the relationship between humans and robots.
In the CNN report, Mark Coeckelbergh, a professor of technology and social responsibility at De Montfort University, in England, is quoted as saying: “If robots are going to look and behave like this and become more humanlike and more animal-like, we will for sure attribute all kinds of things to them: mental status, emotions and also moral properties and rights.”
Before one argues that these artificial intelligence and robotics creations are not to be treated like living entities, think about your pets. It was not too long ago that the Civil Code of Quebec still referred to your loving companions as mere chattel. Legally, they were considered property, like your vacuum cleaner or your car. Fortunately, the Quebec legislature now recognizes them as “living entities.”
So, are robots next? Should they be?