Deepfake Robocalls Shake New Hampshire, Allegedly Interfering in the 2024 Presidential Election
A deepfake using Joe Biden's voice targeted the US presidential election (photo: X @potus)

JAKARTA - Residents of New Hampshire received an unusual political request over the weekend of January 20-21: robocalls featuring what sounded like the voice of United States President Joe Biden, telling them not to vote in the primary election on January 23.

The automated message appears to have been produced by a deepfake artificial intelligence (AI) tool, with the apparent aim of interfering in the 2024 presidential election. In audio obtained by NBC, residents are told to stay home during the primary.

"Voting on Tuesday only empowers the Republicans in their effort to elect Donald Trump again. Your vote matters in November, not on Tuesday."

The New Hampshire Attorney General's Office issued a statement condemning the calls as misinformation, adding that "New Hampshire voters should completely ignore the contents of this message." Meanwhile, a spokesman for former US President Donald Trump denied any involvement by the Republican candidate or his campaign.

Investigators have not yet identified the source of the calls, but an investigation is ongoing.

In related news, another political scandal involving audio deepfakes surfaced over the same weekend, when an AI-generated audio clip imitating Manhattan Democratic Party leader Keith Wright appeared on January 21. The deepfake featured a copy of Wright's voice disparaging fellow Democratic Assembly member Inez Dickens.

According to a report from Politico, some people immediately suspected the audio was fake, but at least one political figure was briefly fooled into believing it was real.

Manhattan Democrat and former City Council Speaker Melissa Mark-Viverito told Politico she initially found the deepfake credible.

"I thought, 'oh no.' I think it's real," he said.

Experts believe malicious actors favor audio deepfakes over video because consumers tend to be more skeptical of visual forgeries. As AI adviser Henry Ajder told the Financial Times, "Everyone is familiar with Photoshop, or at least knows it exists."

As of the time this article was published, no universal method exists to detect or prevent deepfakes. Experts recommend caution when dealing with media from unknown or dubious sources, especially when extraordinary claims are involved.
