Two top officials from the Federal Election Commission (FEC) have differing opinions about whether political advertisements on radio and television broadcasts should be required to disclose whether their content was produced by artificial intelligence (AI).

FEC vice chair Ellen Weintraub, on Thursday 6 June, backed a May proposal from US Federal Communications Commission (FCC) Chair Jessica Rosenworcel, who asked her commission to advance proposed rules that would require the disclosure of AI-generated content in candidate and issue ads. FEC chairman Sean Cooksey criticized the plan.

The proposal would not prohibit AI-generated content in political advertising.

There are growing concerns in Washington that AI-generated content could mislead voters in the presidential and congressional elections in November. The FCC said AI is likely to play a substantial role in political advertising in 2024.

Rosenworcel highlighted the potential of "deep fakes," or altered images, videos, or audio recordings that depict people doing or saying things they did not actually do or say.

"This is about disclosure," Rosenworcel said on Thursday. He said that the FCC since the 1930s had required disclosure and had sufficient legal authority. "We have decades of experience doing this," Rosenworcel said, quoted by VOI from Reuters.

Weintraub said in a letter to Rosenworcel that "the public will benefit from greater transparency about when AI-generated content is used in political advertising."

She said it would be useful for the FEC and the FCC to pursue regulatory efforts. "It's time to act," Weintraub said.

However, Cooksey said mandatory disclosure would "directly conflict with existing laws and regulations, and sow chaos among political campaigns ahead of future elections."

The rules would require both on-air and written disclosure and would cover cable operators, satellite TV providers, and radio. The FCC does not have the authority to regulate internet ads, social media, or streaming services. The agency has already taken steps to counter the misleading use of AI in political robocalls.

Republican FCC commissioner Brendan Carr criticized the proposal, saying the FCC would only complicate matters: AI-generated political ads that run on broadcast TV would carry a government-mandated disclaimer, while the same or similar ads running on streaming services or social media sites would not.

AI content in the context of elections drew attention in January after a fake robocall imitating President Joe Biden attempted to discourage people from voting for him in New Hampshire's Democratic primary, prompting the state to sue the Democratic political consultants behind the call.
