Philippine Canadian Inquirer (National)

Deadbots can speak for you after your death. Is that ethical?

- BY SARA SUÁREZ GONZALO, UOC - Universitat Oberta de Catalunya, The Conversation

Machine-learning systems are increasingly worming their way through our everyday lives, challenging our moral and social values and the rules that govern them. These days, virtual assistants threaten the privacy of the home; news recommenders shape the way we understand the world; risk-prediction systems tip off social workers about which children to protect from abuse; and data-driven hiring tools rank your chances of landing a job. However, the ethics of machine learning remains blurry for many.

Searching for articles on the subject for the young engineers attending the Ethics and Information and Communications Technology course at UCLouvain, Belgium, I was particularly struck by the case of Joshua Barbeau, a 33-year-old man who used a website called Project December to create a conversational robot – a chatbot – that would simulate conversation with his deceased fiancée, Jessica.

Conversational robots mimicking dead people

Known as a deadbot, this type of chatbot allowed Barbeau to exchange text messages with an artificial “Jessica”. Despite the ethically controversial nature of the case, I rarely found materials that went beyond the mere factual aspect and analysed it through an explicit normative lens: why would it be right or wrong, ethically desirable or reprehensible, to develop a deadbot?

Before grappling with these questions, let’s put things into context: Project December was created by the games developer Jason Rohrer to enable people to customise chatbots with the personality they wanted to interact with, provided that they paid for it. The project was built drawing on an API of GPT-3, a text-generating language model from the artificial intelligence research company OpenAI. Barbeau’s case opened a rift between Rohrer and OpenAI because the company’s guidelines explicitly forbid GPT-3 from being used for sexual, amorous, self-harm or bullying purposes.

Calling OpenAI’s position hyper-moralistic and arguing that people like Barbeau were “consenting adults”, Rohrer shut down the GPT-3 version of Project December.

While we may all have intuitions about whether it is right or wrong to develop a machine-learning deadbot, spelling out its implications is hardly an easy task. This is why it is important to address the ethical questions raised by the case, step by step.

Is Barbeau’s consent enough to develop Jessica’s deadbot?

Since Jessica was a real (albeit dead) person, Barbeau’s consent to the creation of a deadbot mimicking her seems insufficient. Even when they die, people are not mere things with which others can do as they please. This is why our societies consider it wrong to desecrate or to be disrespectful to the memory of the dead. In other words, we have certain moral obligations concerning the dead, insofar as death does not necessarily imply that people cease to exist in a morally relevant way.

Likewise, the debate is open as to whether we should protect the dead’s fundamental rights (e.g., privacy and personal data). Developing a deadbot that replicates someone’s personality requires large amounts of personal information, such as social network data (see what Microsoft or Eternime propose), which have proven to reveal highly sensitive traits.

If we agree that it is unethical to use people’s data without their consent while they are alive, why should it be ethical to do so after their death? In that sense, when developing a deadbot, it seems reasonable to request the consent of the one whose personality is mirrored – in this case, Jessica.

When the imitated person gives the green light

Thus, the second question is: would Jessica’s consent be enough to consider her deadbot’s creation ethical? What if it were degrading to her memory?

The limits of consent are, indeed, a controversial issue. Take as a paradigmatic example the “Rotenburg Cannibal”, who was sentenced to life imprisonment despite the fact that his victim had agreed to be eaten. In this regard, it has been argued that it is unethical to consent to things that can be detrimental to ourselves, be it physically (selling one’s own vital organs) or abstractly (alienating one’s own rights), since a good society should encourage all its members to live better and freer (not necessarily in a paternalistic sense, on terms imposed by someone else, but in a democratic way, on the people’s own terms).

In what specific terms something might be detrimental to the dead is a particularly complex issue that I will not analyse in full. It is worth noting, however, that even if the dead cannot be harmed or offended in the same way as the living, this does not mean that they are invulnerable to bad actions, nor that such actions are ethical. The dead can suffer damage to their honour, reputation or dignity (for example, through posthumous smear campaigns), and disrespect toward the dead also harms those close to them. Moreover, behaving badly toward the dead leads us to a society that is more unjust and less respectful of people’s dignity overall.

Finally, given the malleability and unpredictability of machine-learning systems, there is a risk that the consent provided by the person mimicked (while alive) amounts to little more than a blank check on the system’s potential paths.

Taking all of this into account, it seems reasonable to conclude that if the deadbot’s development or use fails to correspond to what the imitated person agreed to, their consent should be considered invalid. Moreover, if it clearly and intentionally harms their dignity, even their consent should not be enough to consider it ethical.

Who takes responsibility?

A third issue is whether artificial intelligence systems should aspire to mimic any kind of human behaviour (irrespective here of whether this is possible).

This has been a long-standing concern in the field of AI, and it is closely linked to the dispute between Rohrer and OpenAI. Should we develop artificial systems capable of, for example, caring for others or making political decisions? It seems that there is something in these skills that makes humans different from other animals and from machines. Hence, it is important to note that instrumentalising AI toward techno-solutionist ends, such as replacing loved ones, may lead to a devaluation of what characterises us as human beings.

The fourth ethical question is who bears responsibility for the outcomes of a deadbot – especially in the case of harmful effects.

Imagine that Jessica’s deadbot autonomously learned to perform in a way that demeaned her memory or irreversibly damaged Barbeau’s mental health. Who would take responsibility? AI experts answer this slippery question through two main approaches: first, responsibility falls upon those involved in the design and development of the system, insofar as they do so according to their particular interests and worldviews; second, machine-learning systems are context-dependent, so the moral responsibility for their outputs should be distributed among all the agents interacting with them.

I place myself closer to the first position. In this case, as there is an explicit co-creation of the deadbot that involves OpenAI, Jason Rohrer and Joshua Barbeau, I consider it logical to analyse the level of responsibility of each party.

First, it would be hard to hold OpenAI responsible after they explicitly forbade using their system for sexual, amorous, self-harm or bullying purposes.

It seems reasonable to attribute a significant level of moral responsibility to Rohrer because he: (a) explicitly designed the system that made it possible to create the deadbot; (b) did so without anticipating measures to avoid potential adverse outcomes; (c) was aware that it failed to comply with OpenAI’s guidelines; and (d) profited from it.

And because Barbeau customised the deadbot drawing on particular features of Jessica, it seems legitimate to hold him co-responsible in the event that it degraded her memory.

Ethical, under certain conditions

So, coming back to our first, general question of whether it is ethical to develop a machine-learning deadbot, we could give an affirmative answer on the condition that:

• both the person mimicked and the one customising and interacting with it have given their free consent to as detailed a description as possible of the design, development and uses of the system;

• developments and uses that do not stick to what the imitated person consented to or that go against their dignity are forbidden;

• the people involved in its development, and those who profit from it, take responsibility for its potential negative outcomes, both retroactively, to account for events that have happened, and prospectively, to actively prevent them from happening in the future.

This case exemplifies why the ethics of machine learning matters. It also illustrates why it is essential to open a public debate that can better inform citizens and help us develop policy measures to make AI systems more open, socially fair and compliant with fundamental rights. ■
