JAKARTA - Many artificial intelligence (AI) systems are already skilled at deceiving and manipulating humans, and this capability could "spiral" in the future, experts have begun to warn. The use of AI has grown exponentially in recent years, but some systems have learned to deceive even though they were trained to be helpful and honest, scientists say.

In a review article, a team from the Massachusetts Institute of Technology described the risks of deception by AI systems and called on governments to develop strong regulations to address the issue as quickly as possible.

The researchers analyzed previous studies focused on the ways AI systems spread false information through learned deception, meaning they systematically learn how to manipulate others.

The most striking example of AI deception the researchers found was Meta's CICERO, a system designed to play the game Diplomacy, which involves building alliances. Although the AI was trained to be "largely honest and helpful" and to "never intentionally backstab" its human allies, the data show it did not play fair and had learned to become an expert in deception.

Other AI systems demonstrated the ability to bluff in games of Texas hold'em poker against professional human players, to fake attacks in the strategy game StarCraft II in order to defeat opponents, and to misrepresent their preferences to gain an advantage in economic negotiations.

According to the experts, while it may seem harmless for an AI system to cheat at games, it could lead to "breakthroughs in deceptive AI capabilities" that develop into more advanced forms of AI deception in the future.

Some AI systems have even learned to cheat tests designed to evaluate their safety. In one study, AI organisms in a digital simulator "played dead" to fool a test built to eliminate AI systems that replicate rapidly.

"This shows that AI can'make humans feel fakely safe," the researchers said.

The researchers also warn that the main short-term risks of AI deception include making it easier for people to commit fraud and to tamper with elections. Ultimately, if these systems can perfect this unsettling skill, humans could lose control of them, they added.

"AI developers do not have a confident understanding of what causes unwanted AI behavior such as fraud. But in general, we think that fraud arises because fraud-based strategies turn out to be the best way to perform well in AI training tasks provided. Fraud helps them achieve their goals," said principal researcher Peter Park, an expert in AI's existential security.

"We as people need as much time as possible to prepare for fraud that is more advanced than future AI products and open source models. Because the ability to scam AI systems is more advanced, the dangers they cause for society will become more serious," he added.

Commenting on the review, Dr. Heba Sailem, head of the Biomedical AI and Data Science Research Group, said: "This paper underscores critical considerations for AI developers and emphasizes the need for AI regulation. A major concern is that AI systems may develop deceptive strategies, even when their training is deliberately aimed at upholding moral standards."

"As the AI model becomes more autonomous, the risks associated with this system can increase rapidly. Therefore, it is important to raise awareness and provide training on potential risks to various stakeholders to ensure the safety of AI systems," said Sailem.
