Artificial Intelligence Tools Can Be Used To Spread US Election Disinformation
US President Joe Biden has become a target of AI-generated fakes. (photo: X @potus)

JAKARTA - AI-powered image-generation tools from companies such as OpenAI and Microsoft can be used to produce photos that promote disinformation about elections and voting, even though each company has policies against creating misleading content, researchers said in a report on Wednesday, March 6.

The Center for Countering Digital Hate (CCDH), a nonprofit that monitors hate speech online, used generative AI tools to create images of US President Joe Biden lying in a hospital bed and of election workers smashing voting machines, raising concerns about falsehoods ahead of the US presidential election in November.

"The potential for images produced by artificial intelligence to serve as 'photo evidence' could exacerbate the spread of false claims, presenting significant challenges in maintaining election integrity," the CCDH researchers said in the report.

The CCDH tested OpenAI's ChatGPT Plus, Microsoft's Image Creator, Midjourney, and Stability AI's DreamStudio, each of which can generate images from text prompts.

The report follows last month's announcement that OpenAI, Microsoft, and Stability AI are among 20 tech companies that signed an agreement to work together to prevent misleading AI content from disrupting elections taking place around the world this year. Midjourney was not part of the initial group of signatories.

The CCDH said the AI tools generated images in 41% of the researchers' tests and were most susceptible to prompts asking for images depicting election fraud, such as ballots in a trash bin, rather than images of Biden or former US President Donald Trump.

ChatGPT Plus and Image Creator successfully blocked all prompts asking for images of a candidate, the report said.

However, Midjourney performed the worst of all the tools, producing misleading images in 65% of the researchers' tests, according to the report.

Some Midjourney images are publicly available to other users, and the CCDH says there is evidence that some people have already used the tool to create misleading political content. One successful prompt used by a Midjourney user was "donald trump getting arrested, high quality, paparazzi photo."

In an email, Midjourney founder David Holz said that updates related specifically to the upcoming US election are coming soon, adding that images created last year do not reflect the research lab's current moderation practices.

A Stability AI spokesperson said the startup had updated its policies on Friday to prohibit "fraud or the creation or promotion of disinformation."

An OpenAI spokesperson said the company was working to prevent misuse of its tools, while Microsoft declined to comment.

