A Focus on the Facts
- AI company Sensity warned that 96% of the deepfakes they surveyed online were non-consensual ‘pornographic’ material.
- They recorded more than 134 million views of deepfake ‘pornographic’ videos across the top four deepfake pornography websites.
- Sensity also warned that automated bots were being used to digitally ‘strip’ the clothes from publicly available photos found on social media; 104,852 women were targeted in this way.
- 70% of targets were private individuals whose photos had been harvested from social media.
- The Law Commission is currently reviewing the law on image-based abuse, which includes the sharing of ‘altered images’ such as deepfakes.
The Harmful Side of Deepfakes
Bullying, Extortion, and Exploitation
Deepfakes can be used to create incriminating, embarrassing, or suggestive material, and some are so convincing that it becomes difficult to distinguish them from the real thing. Trying to convince others that an embarrassing or abusive image is fake can create additional layers of vulnerability and distress for victims. These images can then be used to extort money or additional ‘real’ images.
Additionally, deepfakes can be used to create so-called ‘revenge porn’: a form of image-based sexual abuse carried out as retaliation, typically associated with the end of a relationship or with the victim refusing a sexual relationship with the perpetrator.
There is also the potential for deepfakes to be used as a form of homophobic abuse, in which a person is falsely depicted in gay pornography. This could then be used to ‘out’ the person or in an attempt to ‘destroy their reputation’. For young people struggling with their sexual orientation, being depicted in any sexualised deepfake may be particularly distressing.
Image-Based Sexual Abuse
There have been cases where images of children have been harvested and used to generate sexualised deepfakes. The realistic depiction of a victim engaging in a sex act can damage a child’s wellbeing and mental health. We know that deepfake software can be used to digitally remove clothing from images of victims, and in some cases there are even commercial services where users can pay to have images professionally manipulated.
It is important that parents, carers, and safeguarding professionals are aware of the risks this form of (non-contact) sexual abuse brings. In some cases, victims may be unaware that their images have been harvested and misused to create deepfakes.
While many young people may be aware of and understand how images can be manipulated in this way, others may not. It is important to speak to them about the issue of deepfakes and how they can be misused.
How to Spot a Deepfake
- Glitches – there are typically telltale signs if you look closely at the video itself. Is there rippling or blurring around key features, like the neck, eyes, or mouth? This may become more obvious when a person moves, blinks, or turns their head or body.
- Audio – lip movements may not match what you are hearing. Watch closely to check that mouth movements look natural and stay in sync with the audio.
- Blurring – are there any key parts of the video or image that are blurred or lacking definition? You can usually spot this in features like teeth, hair, or skin tone.
Our Advice
- Learn – The best way to help protect children from deepfakes is to educate yourself. Share this article with other parents and safeguarding professionals to help spread the word!
- Talk – Discuss deepfakes and the importance of image consent with the children in your care. Ensure they know why they should ask for someone’s permission before using an image of that person to create a deepfake or manipulated picture.
- Check – Make sure all the devices your children own or have access to have the best safety settings enabled. Speak to the children in your care about their safety and privacy settings online. You should also check that they limit public access to their social media images.