OpenAI Thwarts Covert Operations Using Its AI Models for Deception

JAKARTA - OpenAI announced that over the past three months it has disrupted covert operations that attempted to use its AI models to support deceptive activity on the internet.

These operations typically used AI to generate short comments and longer articles in various languages, create names and biographies for social media accounts, conduct open-source research, debug simple code, and translate and proofread text.

The content posted by these operations focused on a range of issues, including Russia's invasion of Ukraine, the conflict in Gaza, the elections in India, politics in Europe and the United States, and criticism of the Chinese government by Chinese dissidents and foreign governments.

These operations were generally aimed at building engagement or reaching large audiences. However, thanks to OpenAI's efforts, as of May none of them had managed to attract a significant audience.

In light of this, OpenAI reaffirmed its commitment to enforcing policies that prevent abuse and to improving transparency around AI-generated content.

"We are committed to developing safe and responsible AI, which involves designing our models with safety in mind and proactively intervening against malicious use," the company said.

OpenAI also continues to conduct in-depth investigations together with other companies to promote the responsible development of AI.

"We are dedicated to finding and mitigating this abuse at scale by leveraging the power of generative AI," the company concluded.