Let's do more to prevent the harms of deepfakes
Legal recourse is needed for victims of AI images, writes Katheryne Soucy.
The creation of non-consensual “deepfakes” is the latest trend to perpetuate the cycle of gendered violence.
Deepfakes are pictures or videos that have been created or altered with the use of artificial intelligence. While AI is not inherently evil, it sometimes finds itself in the hands of users who cannot or will not see the moral issue at stake. The majority of deepfake content found online is non-consensual pornography, and the majority of it targets women.
Canada needs better legal recourse for victims of non-consensual deepfakes to hold perpetrators accountable.
It's becoming increasingly clear that deepfakes have impacts similar to those of what's called “non-consensual distribution of intimate images” (NCDII). One study found that, like revenge porn, “deepfakes are used to control, intimidate, isolate, shame and micromanage victims,” mostly women.
Victims of deepfakes also experience anxiety about who has viewed this content or when they might see it next. Even if the content is taken down, it may already have been shared or saved to personal devices.
The new Online Harms Act holds promise with the creation of the Digital Safety Commission, which is to work in tandem with social media platforms to restrict the proliferation of deepfake content. Platforms will need to implement tools to flag harmful content. More important, content that is deemed to be harmful, such as NCDII, is to be taken down within 24 hours.
While this is a step in the right direction, we are still not holding perpetrators accountable.
The Online Harms Act does not introduce any changes to the Criminal Code. Yet the quality of deepfakes has improved to the point that they are difficult to distinguish from unaltered images, and victims face the same consequences as with NCDII. The Criminal Code should therefore be amended to criminalize non-consensual deepfakes of a sexual nature, carrying the same judicial consequences as NCDII.
It would be ideal to implement a new tort that recognizes deepfakes as a social and ethical wrong. This tort could be applied when a defendant distributes non-consensual deepfakes of the plaintiff. A perpetrator should not be able to use the defence that they used media voluntarily uploaded online by the plaintiff. The issue at stake is how these once-consensual images are being used.
The only way to avoid becoming a victim of deepfakes is to limit your online presence. But not having an online presence can be disadvantageous in today's reality. That approach would also place the responsibility on victims, mainly women, instead of condemning perpetrators.
Ideally, deepfakes would be stopped before they are ever created or shared, but that is very difficult, if not impossible. Victims need access to better judicial recourse to hold perpetrators accountable.