Please use this identifier to cite or link to this item:
Record ID: b453cf63-285a-49ac-be6a-3dfdb04d7777
Type: Book Chapter
Title: Disrupting and preventing deepfake abuse: Exploring criminal law responses to AI-facilitated abuse
Other Titles: The Palgrave Handbook of Gendered Violence and Technology
Authors: Clough, Jonathan
Cooke, Talani
Flynn, Asher
ANRA Topic: Technology-facilitated abuse
ANRA Population: General population
Year: 2022
Publisher: Palgrave Macmillan

Artificial Intelligence (AI) is transforming the landscape of technology-facilitated abuse. In late 2017, a Reddit user uploaded a series of ‘fake’ pornographic videos transposing female celebrities’ faces onto the bodies of pornography actors. This was the first documented example of amateur deepfakes appearing in the mainstream. Since then, the commercialisation of AI technologies has meant anyone with a social media or online profile—or indeed, anyone who has had an image or video taken of them—is at potential risk of being ‘deepfaked’. AI technologies have essentially eliminated the need for victims and abusers to have any kind of personal relationship or interaction, which substantially expands the pool of potential deepfake abusers and targets. As a result, new demands are being placed on the types of interventions needed to prevent, disrupt and respond to this form of abuse. In this chapter, drawing on an analysis of Australian criminal law, we consider whether legal responses are keeping pace with these ever-changing tools of abuse. We conclude by providing recommendations for future, multifaceted responses to deepfake abuse and by identifying the need for further research in this space.

Appears in Collections: Book Chapters

Files in This Item:
There are no files associated with this item.

Items in ANROWS library are protected by copyright, with all rights reserved, unless otherwise indicated.
