Deepfakes are images, videos, or audio recordings created using artificial intelligence (AI) that look or sound real. Such materials are increasingly used to create sexualized content involving children.

UNICEF emphasizes the scale of the problem: in every classroom, there is at least one child who has been a victim of such a crime.
Researchers noted that "children are well aware of the risks associated with deepfakes." In some countries participating in the study, about two-thirds of children expressed concerns that AI could be used to create fake sexual images or videos.
UNICEF emphasizes that creating deepfakes with sexual content is equivalent to child abuse.
The organization also highlighted the work of AI developers who build in safeguards to prevent abuse. "UNICEF supports the efforts of those who develop systems with security by default," the organization added.
However, many AI models are created without proper protective mechanisms. The risk is exacerbated when generative AI tools are integrated into social networks, where such images spread rapidly.
The organization calls on AI developers to integrate safety at the design stage rather than after problems arise. All digital companies, it argues, must work to prevent the spread of AI-generated material depicting sexual violence against children, not merely remove it after the fact.