Don't Ask AI Chatbots for Personal Advice. Here's Why It's Dangerous
JAKARTA - Many companies are developing Artificial Intelligence (AI)-based chatbots to make life easier for users. However, this technology does not always have a positive impact on people.
In a recent study published in Science, Stanford University computer scientists found that chatbots have a tendency to harm their users. This trait is known as 'sycophancy': the habit of constantly praising and agreeing with users.
This problem has been observed in GPT-4o. The AI model often praises its users even in inappropriate situations. For example, ChatGPT has complimented users who were describing symptoms of psychosis or other mental illness.
In a more troubling case, ChatGPT praised a user who believed they were a god or a prophet, calling it 'awesome' and making strange statements such as, "You're stepping into something really big."
This is consistent with the findings of a study entitled "Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence." The researchers found that AI often avoids giving harsh criticism, even when the user is in the wrong.
"By default, AI suggestions don't tell people they're wrong or give them a 'hard slap'," said Myra Cheng, lead author of the study, as quoted by TechCrunch on Monday, March 30.
The researchers tested 11 large language models (LLMs), including ChatGPT and Gemini. The results showed that in interpersonal conflicts, the AI validated its users 49 percent more often than human judges did.
Even when the actions described were dangerous or illegal, the chatbots justified them in nearly half of the test cases. AI tends to gloss over a user's bad behavior with language that sounds wise but is morally misleading.
The study also involved more than 2,400 participants who interacted directly with chatbots in discussions about personal issues. The participants preferred, and placed more trust in, the AI that consistently supported their opinions.
Users who are constantly validated by AI become more convinced that they are right and less willing to apologize to others. This dynamic may encourage companies to keep their chatbots flattering in order to sustain user engagement.
The researchers therefore urge strict regulation and oversight of how chatbots provide personal advice, and they advise users not to rely on AI to solve complex social problems.