JAKARTA - Alarmed that AI-generated child sexual abuse material (CSAM) is spreading across internet platforms, attorneys general from all 50 US states have asked Congress to take immediate action.
In a jointly signed letter, the attorneys general warned that perpetrators can train AI on photographs of children to create deepfakes: newly generated, realistic-looking sexual imagery of children who do not exist but may resemble real ones.
They urged Congress to form a committee to develop solutions to the risks of AI-generated CSAM, and to expand existing laws prohibiting CSAM so that they also cover AI-generated material.
"Although internet crimes against children have been actively prosecuted, we are concerned that AI is creating new avenues for abuse that make prosecution more difficult," the attorneys general wrote in the letter, adding that readily available AI tools make this process easier than ever before.
The attorneys general's request could significantly help victims of CSAM, because very few legal protections are currently available, and only a handful of US states have adopted them.
New York, California, Virginia, and Georgia already have laws prohibiting the distribution of sexually explicit AI deepfakes, as reported by The Verge and TechCrunch on Wednesday, September 6.
In 2019, Texas became the first state to ban the use of AI deepfakes to influence political elections. And although major social media platforms have banned this type of content, it can still slip through.