Our research on dark web forums reveals the growing threat of AI-generated child abuse images

Samantha Lundrigan, Professor of Investigative Psychology and Public Protection, Anglia Ruskin University and Simon Bailey, Chair, International Policing and Public Protection Research Institute, Anglia Ruskin University

The UK aims to be the first country in the world to create new offences related to AI-generated sexual abuse. New laws will make it illegal to possess, create or distribute AI tools designed to generate child sexual abuse material (CSAM), punishable by up to five years in prison. The laws will also make it illegal for anyone to possess so-called "paedophile manuals" which teach people how to use AI to sexually abuse children.

In the last few decades, the threat to children from online abuse has multiplied at a concerning rate. According to the Internet Watch Foundation, which tracks down and removes abusive imagery from the internet, there has been an 830% rise in online child sexual abuse imagery since 2014. The proliferation of AI image generation tools is fuelling this further.

Last year, we at the International Policing and Public Protection Research Institute at Anglia Ruskin University published a report on the growing demand for AI-generated child sexual abuse material online.

We analysed chats that took place in dark web forums over the previous 12 months, and found evidence of growing interest in this technology and of online offenders' desire for others to learn more and create abuse images.

Horrifyingly, forum members referred to those creating the AI-imagery as "artists". This technology is creating a new world of opportunity for offenders to create and share the most depraved forms of child abuse content.

Our analysis showed that members of these forums are using non-AI-generated images and videos already at their disposal to facilitate their learning and train the software they use to create the images. Many expressed their hopes and expectations that the technology would evolve, making it even easier for them to create this material.

Dark web spaces are hidden and only accessible through specialised software. They provide offenders with anonymity and privacy, making it difficult for law enforcement to identify and prosecute them.

The Internet Watch Foundation has documented a rapid increase in the number of AI-generated images it encounters in its work. The volume remains relatively low compared with the scale of non-AI images being found, but the numbers are growing at an alarming rate.

In October 2023, the charity reported that 20,254 AI-generated images had been uploaded to one dark web forum in a single month. Before that report was published, little was known about the threat.

The perception among offenders is that AI-generated child sexual abuse imagery is a victimless crime because the images are not "real". But it is far from harmless: for one thing, it can be created from real photos of children, including images that are completely innocent.

While there is a lot we don't yet know about the impact of AI-generated abuse specifically, there is a wealth of research on the harms of online child sexual abuse, as well as how technology is used to perpetuate or worsen the impact of offline abuse. For example, victims may have continuing trauma due to the permanence of photos or videos, just knowing the images are out there. Offenders may also use images (real or fake) to intimidate or blackmail victims.

These considerations are also part of ongoing discussions about deepfake pornography, the creation of which the government also plans to criminalise.

Read more: Deepfake porn: why we need to make it a crime to create it, not just share it

All of these issues can be exacerbated by AI technology. There is also likely to be a traumatic impact on the moderators and investigators who must view abuse images in the finest detail to determine whether they are "real" or "generated".

UK law currently outlaws the taking, making, distribution and possession of an indecent image or pseudo-photograph (a digitally created photorealistic image) of a child.

But there are currently no laws that make it an offence to possess the technology to create AI child sexual abuse images. The new laws should ensure that police officers can target abusers who are using, or considering using, AI to generate this content, even if they are not in possession of abuse images at the time of investigation.

We will always be behind offenders when it comes to technology, and law enforcement agencies around the world will soon be overwhelmed. They need laws designed to help them identify and prosecute those seeking to exploit children and young people online.

It is welcome news that the government is committed to taking action, but it has to be fast. The longer the legislation takes to enact, the more children are at risk of being abused.

Tackling the global threat will also take more than laws in one country. We need a whole-system response that starts when new technology is being designed. Many AI products and tools have been developed for entirely genuine, honest and non-harmful reasons, but they can easily be adapted and used by offenders looking to create harmful or illegal material.

The law needs to understand and respond to this, so that technology cannot be used to facilitate abuse, and so that we can differentiate between those using tech to harm, and those using it for good.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
