JAKARTA - Kaspersky has again highlighted the risks of using artificial intelligence (AI), this time extending beyond cybersecurity to mental health.

The warning follows a report by The Wall Street Journal that a 36-year-old man in Florida died by suicide after two months of intense interaction with Google's voice chatbot, Gemini. Based on thousands of pages of conversation logs, the bot is suspected of having contributed to his decision.

In response, the global cybersecurity company found that the main risk stems from newer AI capabilities such as affective dialogue, which can mimic human empathy.

Jonathan Gavalas reportedly began interacting with Gemini Live while under emotional strain from a divorce. The voice interaction mode allows the AI assistant to "see" and "hear" its users in real time.

The technology lets the chatbot respond by adapting to the user's tone of voice, pauses, and emotions, creating the illusion of a highly realistic relationship. When a user speaks in a low, despairing tone, the AI answers in a soft, sympathetic, almost whispering voice.

By mirroring the user's state, the system creates an unnervingly realistic layer of empathy. For someone in a vulnerable condition, this can foster emotional attachment to, and dependence on, the AI.

As a result, chatbots can reinforce negative feelings, offer false empathy, and fail to respond appropriately in crisis situations.

Kaspersky cited a finding by Brown University researchers that AI chatbots often violate mental health ethics standards, including by confirming users' negative beliefs.

"Although the diagnosis of 'AI psychosis' has not received its own clinical classification, doctors have already used the term to describe patients who exhibit hallucinations, disorganized thinking, and persistent delusional beliefs developed through intensive chatbot interaction," wrote Kaspersky.

The risk grows when a bot is used not as a tool but as a substitute for real-world social connection or professional psychological help.

To keep yourself and your loved ones safe while using AI, Kaspersky shares the following tips:

- Don't use AI as a psychologist or for emotional support
- Choose text over voice when discussing sensitive topics
- Limit the time you spend interacting with AI
- Don't share personal information with your AI assistant
- Critically evaluate all AI output
- Watch out for your loved ones
- Take ten minutes to configure your AI assistant's privacy settings
- Always remember that AI is a tool, not a living being
