The Guardian Australia

AI ethicist Kate Darling: ‘Robots can be our partners’

- Zoë Corbyn

Dr Kate Darling is a research specialist in human-robot interaction, robot ethics and intellectual property theory and policy at the Massachusetts Institute of Technology (MIT) Media Lab. In her new book, The New Breed, she argues that we would be better prepared for the future if we started thinking about robots and artificial intelligence (AI) like animals.

What is wrong with the way we think about robots? So often we subconsciously compare robots to humans and AI to human intelligence. The comparison limits our imagination. Focused on trying to recreate ourselves, we’re not thinking creatively about how to use robots to help humans flourish.

Why is an animal analogy better? We have domesticated animals because they are useful to us – oxen to plough our fields, pigeon delivery systems. Animals and robots aren’t the same, but the analogy moves us away from the persistent robot-human one. It opens our mind to other possibilities – that robots can be our partners – and lets us see some of the choices we have in shaping how we use the technology.

But companies are trying to develop robots to take humans out of the equation – driverless robot cars, package delivery by drone. Doesn’t an animal analogy conceal what, in fact, is a significant threat? There is a threat to people’s jobs. But that threat is not the robots – it is company decisions that are driven by a broader economic and political system of corporate capitalism. The animal analogy helps illustrate that we have some options. The different ways that we’ve harnessed animals’ skills in the past show we could choose to design and use this technology as a supplement to human labour, instead of just trying to automate people away.

Who should be responsible when a robot causes harm? In the Middle Ages, animals were put on trial and punished… We did it for hundreds of years of western history: pigs, horses, dogs and plagues of locusts – and rats too. And bizarrely the trials followed the same rules as human trials. It seems so strange today because we don’t hold animals morally accountable for their actions. But my worry when it comes to robots is that, because of the robot-human comparison, we’re going to fall into this same type of Middle Ages animal-trial fallacy, where we try to hold them accountable to human standards. And we are starting to see glimmers of that, where companies and governments say: “Oh, it wasn’t our fault, it was this algorithm.”

Shouldn’t we hold robot manufacturers responsible for any harm? My concern is that companies are being let off the hook. In the case of the cyclist killed by a self-driving Uber car in 2018, the back-up driver was held responsible instead of the manufacturer. The argument from the companies is that they shouldn’t be responsible for learning technology, because they aren’t able to foresee or plan for every possibility. I take inspiration from historical models of how we have assigned legal responsibility when animals cause unanticipated harm: for example, in some cases, we distinguish between dangerous and safer animals, and solutions range from holding owners strictly responsible to allowing some flexibility, depending on the context. If your tiny poodle bites someone on the street, totally unexpectedly for the first time, you’re not going to be punished like you would if it were a cheetah. But the main point is that unforeseeable behaviour isn’t a new problem and we shouldn’t let companies argue that it is.

You don’t have any pets but you have many robots. Tell us about them… I have seven Pleo baby robot dinosaurs, an Aibo robotic dog, a Paro baby seal robot and a Jibo robot assistant. My first Pleo I named Yochai. I ended up learning from it first-hand about our capacity to empathise with robots. It turned out to mimic pain and distress very well. When I showed it to my friends and they held it up by the tail, I realised it really bothered me if they held it up too long. I knew exactly how the robot worked – that everything was a simulation – but I still felt compelled to make the pain stop. There’s a substantial body of research now showing that we do empathise with robots.

Some people, such as social psychologist Sherry Turkle, worry about companionship robots replacing human relationships. Do you share this fear? It doesn’t seem to have any foundation in reality. We are social creatures able to develop relationships with all different types of people, animals and things. A relationship with a robot wouldn’t necessarily take away from any of what we already have.

What, if any, are the real issues with robot companions? I worry that companies may try to take advantage of people who are using this very emotionally persuasive technology – for example, a sex robot exploiting you in the heat of the moment with a compelling in-app purchase. Similar to how we’ve banned subliminal advertising in some places, we may want to consider the emotional manipulation that will be possible with social robots.

What about privacy? Animals can keep your secrets, but a robot may not… These devices are moving into intimate spaces of our lives and much of their functionality comes from their ability to collect and store data to learn. There’s not enough protection for the giant datasets these companies are amassing. I also worry that because a lot of social robotics deals with characters modelled on humans, it raises issues around gender and racial biases that we put into the design. Harmful stereotypes get reinforced and embedded into the technology. And I worry that we are looking to these robot companions as a solution to our societal problems such as loneliness or lack of care workers. Just as robots haven’t caused these problems, they also can’t fix them. They should be treated as supplemental tools to human care that provide something new.

Should we give rights to robots? This often comes up in science fiction, revolving around the question of whether robots are sufficiently like us. I don’t disagree that robots, theoretically, would deserve rights if they were to become conscious or sentient. But that is a far-future scenario. Animal rights are a much better predictor for how this conversation around robot rights is going to play out in practice, at least in western society. And on animal rights we are hypocrites. We like to believe that we care about animal suffering but if you look at our actual behaviour, we gravitate towards protecting the animals that we relate to emotionally or culturally. In the US you can get a burger at the drive-through, but we don’t eat dog meat. I think it’s likely we will do the same with robots: giving rights to some and not others.

Should we have human-looking robots at all? I don’t think we’re ever going to stop doing it but, for most practical intents and purposes, the human form is overrated and overused. We can put emotions into everything from blobs to chairs. People may even respond better to non-human robots, because what’s often disappointing is when things that look like you don’t quite behave the way you expect.

• The New Breed: How to Think About Robots by Kate Darling is published by Penguin (£20). To order a copy go to guardianbookshop.com. Delivery charges may apply

Photograph: Gian Paul Lozza. Dr Kate Darling says her baby robot dinosaurs mimic pain and distress very well.
