Evil predators who make sexual deepfakes of children, including infants under two years old, will face a fresh crackdown under a change to the law.
Under new legislation, AI developers and child protection organisations will be able to test artificial intelligence (AI) models to prevent the creation of indecent images.
Under current UK law – which criminalises the possession and generation of child sexual abuse material – developers cannot carry out safety testing on AI models, meaning images can only be removed after they have been created and shared online.
Reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025, according to data published by the Internet Watch Foundation (IWF) today. There has also been a disturbing rise in depictions of infants, with images of 0–2-year-olds surging from five in 2024 to 92 in 2025.
The IWF said the severity of the material has also intensified, with Category A content – images involving penetrative sexual activity, sexual activity with an animal, or sadism – rising from 2,621 to 3,086 items. Girls have been overwhelmingly targeted, making up 94% of illegal AI images in 2025.
In what is being described as one of the first measures of its kind in the world, the change to the law will ensure AI systems’ safeguards can be “robustly tested from the start”, the Department for Science, Innovation and Technology (DSIT) said. It will also enable organisations to check that models have protections against extreme pornography and non-consensual intimate images.
The changes will be tabled today as an amendment to the Crime and Policing Bill. The Government said it will bring together a group of experts in AI and child safety to ensure testing is “carried out safely and securely”.
The NSPCC said the new law must make it compulsory for AI models to be tested in this way.
Rani Govender, policy manager for child safety online at the charity, said: “To make a real difference for children, this cannot be optional. Government must ensure that there is a mandatory duty for AI developers to use this provision so that safeguarding against child sexual abuse is an essential part of product design.”
Kerry Smith, chief executive of the IWF, said: “AI tools have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, photorealistic child sexual abuse material. Safety needs to be baked into new technology by design. Today’s announcement could be a vital step to make sure AI products are safe before they are released.”
Technology Secretary Liz Kendall said: “We will not allow technological advancement to outpace our ability to keep children safe.
“These new laws will ensure AI systems can be made safe at the source, preventing vulnerabilities that could put children at risk. By empowering trusted organisations to scrutinise their AI models, we are ensuring child safety is designed into AI systems, not bolted on as an afterthought.”

