AI-Generated Child Sexual Abuse Material Is Rising Fast, UNICEF Warns
- Olga Nesterova

UNICEF is sounding the alarm over a rapid global increase in AI-generated sexualized images involving children, warning that the pace of technological misuse is outstripping current legal and safety protections.
In a statement released Tuesday, UNICEF said it is “increasingly alarmed” by the growing circulation of AI-manipulated images, including cases where real photographs of children are altered and sexualized using artificial intelligence. These images, commonly referred to as deepfakes, are produced with tools that generate, modify, or fabricate realistic images, videos, or audio, often without the child’s knowledge or consent.
One of the most disturbing trends highlighted is so-called “nudification,” in which AI tools digitally remove or alter clothing in photos to create fake nude or sexualized images of children.
The scale of the threat
New data confirms that the problem is not marginal.
According to a joint study conducted by UNICEF, ECPAT International, and INTERPOL across 11 countries, at least 1.2 million children reported that images of them had been manipulated into sexually explicit deepfakes within the past year.
In some countries surveyed, this amounts to one in every 25 children — roughly the equivalent of one child in a typical classroom.
Children themselves are acutely aware of the risk. In several of the countries studied, up to two-thirds of children said they worry that AI could be used to create fake sexual images or videos of them. Levels of concern varied by country, underscoring disparities in awareness, safeguards, and digital protections.
“Deepfake abuse is abuse”
UNICEF was unequivocal in its assessment: AI-generated sexualized images of children constitute child sexual abuse material (CSAM).
“We must be clear,” UNICEF stated. “Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
When a child’s image or identity is used, the agency said, that child is directly victimized. Even when no identifiable child appears to be involved, AI-generated CSAM still normalizes the sexual exploitation of children, fuels demand for abusive content, and creates major obstacles for law enforcement trying to identify and protect victims.
Platforms and developers under scrutiny
UNICEF acknowledged that some AI developers are taking steps to implement safety-by-design approaches and guardrails to prevent misuse of their systems. But it warned that protections remain uneven — and in many cases inadequate.
Risks are compounded, the agency said, when generative AI tools are embedded directly into social media platforms, where manipulated images can spread rapidly and widely before detection.
Too often, harmful content is removed only after abuse has already occurred, sometimes days after a victim reports it.
What UNICEF is calling for
To confront what it describes as an escalating and urgent threat, UNICEF called for coordinated action across governments, technology companies, and AI developers:
- Governments should expand legal definitions of child sexual abuse material to explicitly include AI-generated content, and criminalize its creation, possession, distribution, and procurement.
- AI developers must implement robust safety-by-design measures to prevent their tools from being misused.
- Digital platforms should prevent the circulation of AI-generated CSAM, not merely remove it after the fact, and invest in detection technologies that allow for immediate takedown.
“The harm from deepfake abuse is real and urgent,” UNICEF said. “Children cannot wait for the law to catch up.”
Research still unfolding
The findings form part of Disrupting Harm Phase 2, a multi-year research project led by UNICEF’s Office of Strategy and Evidence (Innocenti), ECPAT International, and INTERPOL, with funding from Safe Online. The research draws on nationally representative household surveys of children aged 12–17 and their caregivers across diverse regions, conducted by UNICEF in partnership with Ipsos.
Country-level reports from the study are expected to be released throughout 2026.
As generative AI becomes more accessible and more powerful, UNICEF’s warning is clear: without urgent legal, technical, and systemic safeguards, the digital exploitation of children will continue to accelerate — largely unchecked.