JAKARTA - OpenAI reports an increase in attempts to use its AI models to produce fake content aimed at influencing elections, including long-form articles and comments on social media.
In a report released Wednesday, October 9, the ChatGPT maker said that cybercriminals are increasingly using AI tools for malicious activities such as creating and debugging malware, as well as producing fake content for websites and social media platforms.
So far in 2024, OpenAI has neutralized more than 20 such operations, including a set of ChatGPT accounts used in August to produce articles related to the US elections. In July, the company also banned a number of accounts from Rwanda that were generating election-related comments for posting on the social media platform X.
Even so, OpenAI emphasized that none of the election-influence operations it identified managed to attract significant attention or build a sustained audience. Concerns about the use of AI tools to produce and spread false information about elections continue to grow, especially ahead of the US presidential election on November 5.
According to the US Department of Homeland Security, Russia, Iran, and China pose a growing threat of election interference through the use of AI to spread false or divisive information.
OpenAI, which recently raised USD 6.6 billion (IDR 103.3 trillion) in funding, is now one of the world's most valuable private companies. ChatGPT has reached 250 million weekly active users since its launch in November 2022.