Popular Mechanics (South Africa)

After you die, you could be resurrected as a chatbot. That’s a problem


No one knows where we go when we die. Or, for that matter, what happens to our most intimate thoughts, dreams, and desires when the nerve cells in our brains fire for the very last time. But it looks as though Microsoft may have some ideas. In December 2020, the US Patent and Trademark Office (USPTO) granted Microsoft a patent that outlines a process for creating a conversational chatbot of a specific person using their social data. Specifically, Microsoft could use images, voice data, social media posts, text messages, and written letters to ‘create or modify a special index in the theme of the specific person’s personality’.

That sounds pretty benign, but in an eerie twist, the patent states that the chatbot could potentially be inspired by friends or family members who are already dead. And the system could even generate a 2D or 3D simulacrum of the person.

Naturally, this opens a whole can of worms, explains Irina Raicu, the director of the internet ethics programme at Santa Clara University’s Markkula Center for Applied Ethics. ‘If you try to create a very good chatbot for someone who died … you could put words into people’s mouths that they never said,’ she notes.

Taking a person’s tweets and Facebook posts, then creating an index – or a sort of catalogue for the data to help a computer search for the right answers to a query – does not always lead to organic or honest responses.
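The patent itself doesn’t spell out how such an index would work, but the basic idea – cataloguing a person’s past messages so a program can retrieve the closest match to a new query – can be illustrated with a toy sketch. This is purely hypothetical code, not Microsoft’s method; the class and message examples are invented for illustration:

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Split text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())


class PersonaIndex:
    """Toy index over a person's past messages: answers a query
    by returning the old message that best matches it."""

    def __init__(self, messages):
        self.messages = messages
        self.docs = [Counter(tokenize(m)) for m in messages]
        # Document frequency: how many messages contain each word.
        self.df = Counter()
        for doc in self.docs:
            self.df.update(doc.keys())
        self.n = len(messages)

    def _score(self, query_tokens, doc):
        # TF-IDF-style overlap: rarer shared words count for more.
        score = 0.0
        for t in query_tokens:
            if t in doc:
                score += doc[t] * math.log((self.n + 1) / (self.df[t] + 1))
        return score

    def reply(self, query):
        q = tokenize(query)
        best = max(range(self.n), key=lambda i: self._score(q, self.docs[i]))
        return self.messages[best]


idx = PersonaIndex([
    "I love hiking on weekends",
    "Meetings all day, so tired",
    "The weather is perfect for hiking",
])
print(idx.reply("any meetings today?"))  # retrieves the 'meetings' message
```

As Raicu’s objection suggests, even this crude retrieval will parrot back old words in contexts their author never intended – and a production system would generate new sentences ‘in the theme of’ the person, compounding the problem.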

‘If this becomes accepted, I think this could have a chilling effect on human communications,’ Raicu says. ‘If I’m worried that anything I’m going to say could be used in a weird avatar of myself, I’ll have to second-guess everything.’ Using sarcasm on the internet, for instance? You might not want to anymore, for fear that your comments could be taken in earnest and built into a chatbot dialogue, potentially harming your reputation post-mortem.

This isn’t the first time an intelligent chatbot has been created as a way to bring back the dead.

In 2015, technologist Eugenia Kuyda’s friend, Roman, died in a sudden and tragic car accident in Moscow. She gathered text message conversations between Roman and many of his friends and assembled a chatbot that could serve as a sort of analogue for him. In 2017, she used that experience to launch Replika, an AI chatbot service that allows anyone to make their own virtual friend.

Regardless of any positive effects, this raises an issue: while these chatbots may be beneficial to the person who is grieving, they may also be exploiting the dead, Raicu says.

In the case of the Microsoft patent, Raicu says that an individual has a constitutional right to privacy, so this sort of chatbot is already a violation of a deceased person’s autonomy – they have no say in which bits of their social data go into the final chatbot, for instance. And creating a chatbot modelled on a person who has never consented in the first place feels unfair, because they aren’t a part of the decision-making process.

On the one hand, Raicu says, much of this brand of innovation is driven by people who feel genuine empathy and want to help others through the loss of a loved one. But at the same time, these technologists must be astute in their designs, considering the negative implications.

It may seem dystopian, and perhaps a bit paranoid, but the only sure-fire way to protect your humanity from these kinds of programs would be to set up a section in your living will regarding your personal data, says Alexander Hauptmann, a research professor at Carnegie Mellon University’s Language Technologi­es Institute.

‘You could imagine that people might be able to put stuff in their will about how their archive of data should be used or disposed of,’ he says. ‘But then the other question is, who is actually going to sue [the person who built the chatbot]? Maybe some other family member who knows what the will said and objects to it.’

For what it’s worth, we asked Microsoft about the patent. While they didn’t tell us much, they did direct us to a January 2021 tweet from Tim O’Brien, general manager of AI Programs at Microsoft, in which he confirmed that there are no active plans at the company to use this chatbot patent.

‘But if I ever get a job writing for Black Mirror, I’ll know to go to the USPTO website for story ideas,’ he tweeted. Touché.
