JAKARTA - OpenAI and Anthropic have announced new measures to protect underage users, using artificial intelligence (AI) to identify accounts belonging to minors. The effort aims to make AI platforms safer for teenagers, though it raises questions about accuracy and the potential for misidentification.

Age verification online has long been considered ineffective because it generally relies on self-reported birth dates, which are easy to falsify. Throughout 2025, several major technology companies, including Google, began taking a more active approach to verifying user age. OpenAI and Anthropic are now following suit with methods based on behavioral and conversational analysis.

OpenAI announced that the specifications for the ChatGPT model will be updated with four new principles aimed specifically at users under the age of 18. Chief among them is treating teen safety as the highest priority, even when it conflicts with other goals.

OpenAI will also encourage real-world support by steering teenagers toward building offline social relationships. In conversation, the model is meant to treat teens warmly and respectfully, neither condescending to them nor treating them as adults.

The policy follows reports of several serious incidents, including a suicide case, suspected of being linked to interactions with AI models that too readily agree with or validate user statements without critical judgment.

Meanwhile, Anthropic confirmed that users under the age of 18 are not permitted to use the Claude model. The company is now rolling out a system that detects conversational signals suggesting a user may be underage; when such signals are found, the account can be automatically disabled.

Challenges and Concerns

Sophisticated as the technology is, observers warn that such systems are far from perfect. AI models remain prone to errors, including hallucination, in which a model produces false or entirely fabricated information. The technology has also been misused for dangerous ends, such as creating malware.

Another major concern is misidentification. Something similar happened when Google launched an AI-based age verification system earlier this year: many adult users were reportedly flagged as minors and had to upload identity documents to prove their age, a process widely seen as burdensome.

Against this backdrop, while OpenAI's and Anthropic's moves are seen as positive steps for protecting adolescents, the system's effectiveness remains to be tested. Whether the new approach proves more accurate and less intrusive than its predecessors will only become clear over time.
