London: The UK government has announced “world-leading” legislation to criminalize AI tools designed to generate child sexual abuse material (CSAM), aiming to curb the alarming rise of AI-driven exploitation.
It is already illegal to possess AI-generated CSAM, but the new laws will target the means of production.
What the law says
The new laws will make it illegal to possess, create, or distribute AI tools that generate CSAM, punishable by up to five years in prison. Additionally, possessing AI-generated “paedophile manuals,” which provide guidance on using AI to abuse children, will carry a penalty of up to three years.
Safeguarding Minister Jess Phillips emphasized that Britain is the first country to introduce such measures, calling the issue a “global problem” requiring international cooperation. Home Secretary Yvette Cooper warned that AI is putting online child exploitation “on steroids,” making abuse more extreme and sadistic.
"Home Secretary @YvetteCooperMP has announced first-in-the-world measures to stop paedophiles using AI to create sexual images of children. As technology rapidly advances, we're strengthening our response to track down and stop online predators." — Home Office (@ukhomeoffice), February 2, 2025
The Home Office highlighted that AI is being used to “nudeify” real-life images of children and superimpose faces onto existing CSAM, with some AI-generated images being so realistic that they are nearly indistinguishable from real abuse.
The NSPCC has received distressing reports from children, including a 15-year-old girl who discovered deepfake nudes of herself created from her Instagram photos.
Predators are also using AI-generated images to blackmail victims, forcing them into further abuse, including live streaming. To combat this, the government will introduce a specific offense for running websites that enable the sharing of CSAM or grooming advice, punishable by up to 10 years in prison.
UK Border Force will also gain new powers to compel suspected child sex offenders to unlock their digital devices for inspection.
The Internet Watch Foundation (IWF) recently identified 3,512 AI-generated CSAM images on a single dark web site within a month, with the prevalence of the most severe category of abuse imagery rising by 10% since 2023.
The UK’s latest measures, to be introduced as part of the Crime and Policing Bill, aim to close legal loopholes and keep pace with evolving threats posed by AI-driven abuse.