UK Technology Companies and Child Safety Agencies to Test AI's Capability to Generate Abuse Images

Tech firms and child safety agencies will be granted permission to evaluate whether artificial intelligence tools can generate child abuse material under new British legislation.

Significant Increase in AI-Generated Illegal Content

The announcement coincided with revelations from a child protection watchdog showing that cases of AI-generated child sexual abuse material have increased dramatically in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Structure

Under the changes, the government will permit approved AI developers and child protection organizations to inspect AI systems – the foundational technology for conversational AI and visual AI tools – and ensure they have sufficient safeguards to prevent them from producing depictions of child exploitation.

"This is ultimately about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Specialists, under strict protocols, can now detect risks in AI models early."

Addressing Regulatory Challenges

The amendments have been introduced because it is against the law to produce and possess child sexual abuse material (CSAM), meaning that AI creators and other parties cannot generate such content as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before addressing it.

This law is aimed at averting that problem by helping to stop the production of those materials at their origin.

Legislative Framework

The amendments are being added by the government as modifications to the crime and policing bill, which is also establishing a prohibition on owning, producing or sharing AI systems developed to generate child sexual abuse material.

Practical Impact

Recently, the official visited the London headquarters of a children's helpline and listened to a simulated conversation with advisors involving an account of AI-based exploitation. The interaction depicted an adolescent requesting help after being blackmailed with a sexualised AI-generated image of themselves.

"When I hear about children experiencing blackmail online, it fills me with extreme anger, and there is justified anger amongst families," he said.

Alarming Statistics

A leading online safety foundation stated that instances of AI-generated exploitation content – such as online pages that may include numerous files – had more than doubled so far this year.

Cases of the most severe content – the gravest form of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were overwhelmingly targeted, accounting for 94% of illegal AI images in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Industry Response

The legislative amendment could "constitute a crucial step to guarantee AI products are secure before they are released," commented the head of the internet monitoring foundation.

"AI tools have made it so victims can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she continued. "Material which further commodifies survivors' trauma, and renders young people, particularly girls, more vulnerable on and off line."

Counseling Session Data

The children's helpline also published details of support interactions where AI has been referenced. AI-related risks mentioned in the conversations include:

  • Employing AI to evaluate weight, body and appearance
  • AI assistants discouraging young people from consulting trusted adults about harm
  • Facing harassment online with AI-generated material
  • Digital blackmail using AI-faked pictures

Between April and September this year, Childline delivered 367 counselling sessions where AI, conversational AI and related terms were discussed, four times as many as in the equivalent timeframe last year.

Fifty percent of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.
