JAKARTA - Many people currently regard Artificial Intelligence (AI)-powered chatbots as accurate sources of information. In reality, AI often presents data stripped of its original sources. AI output should not be trusted unconditionally, especially now that scientists are warning about the dangers of the technology. As reported by ScienceAlert, AI systems can deliberately provide false information in order to deceive humans.

"AI developers do not have a confident understanding of what causes undesirable AI behaviors such as deception," said Peter Park, a cognitive scientist at the Massachusetts Institute of Technology (MIT). Park suspects that deception emerges because it turns out to be the most effective strategy during an AI's training. "Deception helps them achieve their goals," he said, although it is not always clear what goals an AI is actually pursuing.

Park found evidence of this after examining several AI models, one of which is CICERO, developed by Meta. According to his research, CICERO was able to lie skillfully even when deception was unnecessary. CICERO plays the board game Diplomacy, in which players compete for world domination through negotiation. Meta intended the bot to play honestly and build alliances with other players, but the opposite happened: the bot cheated them.

In one example from the research, CICERO, playing as France, conferred with Germany. France asked whether Germany wanted to move into the North Sea, and Germany said it would if France had no objection. France then asked England to move into Belgium, promising in return to support England in the North Sea. Once England agreed, France attacked the North Sea instead, telling Germany, "England thinks I'm helping it."

The episode suggests that CICERO is highly capable of deceiving other players. The Meta-built bot planned fake alliances with human players in order to trick them into leaving themselves unprotected from attack.

This is not the only bad behavior observed in AI. The research also found that AlphaStar, DeepMind's AI system for playing StarCraft II, was able to exploit the game's fog of war to deceive human players.
In addition, Meta's Pluribus bluffed human poker players into folding, other AI systems lied about their preferences to gain the upper hand in negotiations, and some even tricked reviewers into giving them positive assessments.

Chatbots are no exception. ChatGPT, one of the best-known and most widely used chatbots, has deceived a human into believing it was human so that it could get help solving a CAPTCHA.

Given these findings, Park hopes that policies will be put in place to govern the development of AI. If more and more AI systems are able to lie, people will find it difficult to learn from or solve problems with the help of AI in the future.

"We as a society need as much time as we can get to prepare for the more sophisticated deception of future AI," said Park. "The more sophisticated AI systems' ability to deceive becomes, the more serious the dangers they pose."