British Tech Firms and Child Protection Officials to Examine AI's Ability to Create Exploitation Images
Under recently introduced British legislation, technology companies and child protection organizations will be permitted to assess whether artificial intelligence systems can produce child abuse material.
Significant Rise in AI-Generated Harmful Content
The announcement came as a safety monitoring body revealed that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the changes, the government will allow designated AI developers and child protection groups to examine AI models – the foundational systems behind chatbots and image generators – to ensure they have sufficient safeguards to stop them from producing images of child sexual abuse.
"Ultimately about preventing exploitation before it happens," declared Kanishka Narayan, noting: "Specialists, under rigorous conditions, can now identify the danger in AI models early."
Tackling Regulatory Challenges
The amendments have been introduced because creating and possessing CSAM is illegal, meaning that AI developers and other parties have been unable to generate such images as part of an evaluation regime. Until now, officials have had to wait until AI-generated CSAM appeared online before they could act on it.
The law is designed to avert that problem by making it possible to stop the production of such images at their source.
Legislative Framework
The government is introducing the changes as amendments to criminal justice legislation, which also establishes a ban on possessing, producing or distributing AI systems designed to create child sexual abuse material.
Real-World Impact
This week, the minister toured Childline's London base and listened to a simulated call to counsellors featuring an account of AI-based exploitation. The mock call portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about children experiencing extortion online, it is a cause of extreme frustration in me and rightful anger amongst families," he stated.
Concerning Data
A prominent internet monitoring organization reported that instances of AI-generated exploitation content – each of which can refer to a webpage containing numerous files – had more than doubled so far this year.
Instances of category A material – the most serious form of exploitation imagery – rose from 2,621 images or videos to 3,086 over the same period.
- Female children were predominantly victimized, accounting for 94% of illegal AI images in 2025
- Depictions of children aged from birth to two years increased from five in 2024 to 92 in 2025
Sector Response
The legislative amendment could "constitute a crucial step to guarantee AI products are safe before they are launched," said the chief executive of the internet monitoring organization.
"Artificial intelligence systems have enabled so survivors can be targeted repeatedly with just a simple actions, providing criminals the ability to make possibly limitless amounts of sophisticated, photorealistic exploitative content," she continued. "Material which further exploits victims' suffering, and makes young people, particularly girls, less safe both online and offline."
Counseling Session Data
The children's helpline also published details of counselling sessions in which AI came up. AI-related harms raised in those conversations include:
- Using AI to rate body size and appearance
- AI chatbots discouraging children from speaking to trusted adults about abuse
- Being harassed online with AI-generated material
- Being blackmailed online with AI-generated fake images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related topics were mentioned – four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including children using AI chatbots for support and turning to AI therapy apps.