UK Tech Firms and Child Safety Officials to Test AI's Ability to Create Abuse Content

Technology companies and child protection organizations will be granted authority to assess whether artificial intelligence systems can generate child abuse material under recently introduced UK legislation.

Substantial Rise in AI-Generated Harmful Content

The announcement coincided with revelations from a safety monitoring body showing that cases of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the authorities will allow designated AI companies and child protection groups to examine AI systems – the underlying technology for conversational AI and image generators – and ensure they have sufficient safeguards to prevent them from producing depictions of child exploitation.

Kanishka Narayan said the measures were "ultimately about stopping abuse before it occurs," adding: "Experts, under strict conditions, can now identify the danger in AI models early."

Addressing Legal Obstacles

The changes have been introduced because creating and possessing CSAM is illegal, meaning that AI developers and other parties could not generate such content as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was uploaded online before acting against it.

This law is designed to avert that issue by making it possible to halt the creation of such material at its source.

Legislative Structure

The government is introducing the changes as amendments to the criminal justice legislation, which also establishes a prohibition on possessing, creating or distributing AI models designed to generate exploitative content.

Practical Consequences

Recently, the official toured the London headquarters of a children's helpline and listened to a mock-up call to counsellors featuring an account of AI-based exploitation. The call portrayed an adolescent seeking help after being extorted with an explicit deepfake of himself, created using AI.

"When I learn about young people facing extortion online, it is a source of extreme anger in me and justified anger amongst families," he stated.

Alarming Statistics

A prominent online safety organization reported that cases of AI-generated exploitation material – such as online pages that may include numerous files – had significantly increased so far this year.

Instances of the most severe content – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were predominantly targeted, accounting for 94% of illegal AI images in 2025
  • Depictions of infants to toddlers rose from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "constitute a vital step to guarantee AI products are secure before they are launched," commented the head of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for survivors to be victimised all over again with just a few clicks, giving offenders the ability to make potentially endless quantities of sophisticated, photorealistic child sexual abuse material," she continued. "Content which additionally commodifies survivors' suffering, and makes young people, especially female children, more vulnerable both online and offline."

Support Session Information

The children's helpline also released details of support interactions where AI has been referenced. AI-related risks discussed in the sessions include:

  • Employing AI to evaluate weight, physique and appearance
  • Chatbots discouraging children from talking to trusted guardians about harm
  • Facing harassment online with AI-generated material
  • Online extortion using AI-faked images

Between April and September this year, Childline delivered 367 support interactions where AI, conversational AI and associated terms were mentioned, four times as many as in the same period last year.

Half of the references to AI in the 2025 sessions were connected with mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Douglas Parker