JAKARTA - US federal prosecutors are increasingly pursuing suspects who use artificial intelligence (AI) tools to manipulate or create images of child sexual abuse, as law enforcement officials fear this technology could trigger a spike in illegal material.

The US Department of Justice has filed two criminal cases this year against defendants accused of using generative AI systems to produce sexually explicit images of children. James Silver, head of the Justice Department's Computer Crime and Intellectual Property Section, warned that more such cases are coming.

"Our concern is this normalization," Silver said in an interview. "AI makes it easier to produce images like this, and the more images like that circulating, it's getting more and more normal. It's something we really want to prevent."

The rise of generative AI has also sparked concerns that the technology will be used for cyberattacks, more sophisticated cryptocurrency scams, and attempts to undermine election security.

Child sexual abuse cases are among the first in which prosecutors are trying to apply existing US laws to AI-related crimes. Even successful prosecutions may face appeals as courts weigh how the new technology alters the legal landscape around child exploitation.

Prosecutors and child safety advocates warn that generative AI systems allow offenders to alter and sexualize ordinary photos of children, and that AI can make it harder for law enforcement to identify and locate real victims of abuse.

The National Center for Missing and Exploited Children (NCMEC), a nonprofit group that collects reports of online child exploitation, receives an average of 450 reports per month related to generative AI, against a total of around 3 million reports per month last year.

Cases involving AI-generated sexual abuse imagery will face new legal challenges, especially when no identifiable child is depicted. In those situations, prosecutors can bring obscenity charges where child pornography laws do not apply.

For example, in May, Steven Anderegg, a software engineer from Wisconsin, was charged with transferring obscene material for allegedly using the Stable Diffusion AI model to produce sexually explicit images of children and sharing some of them with a teenage boy.

Meanwhile, a US Army soldier, Seth Herrera, was charged with child pornography offenses for allegedly using AI chatbots to turn photos of children he knew into sexually abusive images.

Legal experts say that while sexually explicit images of real children are clearly covered by child pornography laws, the legal status of images generated entirely by AI remains unclear.

In 2002, the US Supreme Court struck down as unconstitutional a law that criminalized images appearing to show children engaged in sexual activity, including computer-generated ones.

In addition to law enforcement efforts, child safety advocates are also focused on prevention. Some nonprofit groups have secured commitments from major AI companies, including Google, Amazon, Meta, OpenAI, and Stability AI, to avoid training their AI models on child sexual abuse imagery and to monitor their platforms to prevent the material from spreading.

"I don't want to describe this as a matter of the future, because this is happening now," said Rebecca Portnoff, Vice President of Data Science attenuation.

