Ottawa Citizen

Let's do more to prevent harm of deepfakes

Legal recourse is needed for victims of AI images, writes Katheryne Soucy.

- Katheryne Soucy is a J.D. candidate (2025) at the University of Ottawa.

Gendered violence: another year, another way to perpetuate the cycle. What is the latest trend? The creation of non-consensual deepfakes.

Deepfakes are pictures or videos that have been created or altered with the use of artificial intelligence. While such technology is not inherently evil, it sometimes finds itself in the hands of users who cannot — or will not — see the moral issue at stake. Unsurprisingly, the majority of deepfake content found online is non-consensual pornography, and the majority of it targets women.

Canada needs better legal recourse for victims of non-consensual deepfakes to hold perpetrators accountable.

It's becoming increasingly clear that deepfakes are having similar impacts to what's called “non-consensual distribution of intimate images” (NCDII). Non-consensual deepfakes are being used as a tool by abusers to control their victims. One study found that, much like revenge porn, “deepfakes are used to control, intimidate, isolate, shame and micromanage victims,” mostly women.

Victims of deepfakes also experience constant anxiety about who has viewed this content or when they might see it next. Even if the content is taken down, it may already have been shared or saved to personal devices.

The new Online Harms Act sheds light on the issue of deepfakes. This act holds promise with the creation of the Digital Safety Commission, which will work in tandem with social media platforms to restrict the proliferation of deepfake content. Platforms will need to implement tools to flag harmful content. More importantly, content that is deemed to be harmful, such as NCDII, will be taken down within 24 hours.

While this is a step in the right direction, we are still not holding perpetrators accountable.

The Online Harms Act did not introduce any changes to the Criminal Code. Yet the quality of deepfakes has improved, making them difficult to distinguish from unaltered images, and victims face the same consequences as with NCDII. Non-consensual deepfakes of a sexual nature should therefore carry the same judicial consequences as NCDII. We could amend our Criminal Code to criminalize them.

It would be ideal to implement a new tort that recognizes deepfakes as a social and ethical wrong.

This tort could be applied when a defendant distributes non-consensual deepfakes of the plaintiff. A perpetrator should not be able to rely on the defence that the plaintiff voluntarily uploaded the underlying media online. The issue at stake is how these once-consensual images are being used.

Currently, the only way to avoid becoming a victim of deepfakes is to limit your online presence. But not having an online presence can be disadvantageous in today's reality, and this approach places the responsibility on victims, mainly women, instead of condemning perpetrators.

Because the content that is used to create deepfakes has likely been uploaded online consensually, creators of deepfakes might believe that they're not doing anything wrong, or they simply don't care. In reality, this type of content is harming women and is just a new way of perpetuating gendered violence.

While it would be best to stop the creation and sharing of deepfakes before it happens, doing so can be very difficult, if not impossible. Victims need access to better judicial recourse to hold perpetrators accountable.
