JAKARTA - As the 2024 presidential election cycle approaches, the United States faces new challenges in the form of increasingly sophisticated election fraud using deepfakes, which requires voters to acquire new skills to tell genuine content from fakes.

On February 27, the chairman of the US Senate Intelligence Committee, Mark Warner, stated that America is less prepared for election fraud in the 2024 presidential election than it was ahead of the previous election in 2020.

This decline in readiness is largely due to the increased use of deepfakes in the US over the past year. According to data from Sumsub, an identity verification service, deepfakes in North America increased by 1,740%, with a 10-fold increase in the number of deepfakes detected worldwide in 2023.

A concrete example of this deepfake threat came when New Hampshire residents reported receiving robocalls with US President Joe Biden's voice, telling them not to vote in the state's primary election on January 23.

The incident prompted US regulators to ban the use of AI-generated voices in robocall scams, making them illegal under US telemarketing law.

Although legal measures have been taken, fraudsters continue to look for loopholes. As the US prepares for Super Tuesday on March 5, concerns are growing over false information generated by AI and deepfakes.

Pavel Goldman-Kalaydin, head of AI and machine learning at Sumsub, shared his insights on how voters can prepare themselves to identify deepfakes and handle deepfake-driven identity fraud.

Kalaydin emphasized the importance of voters being vigilant about the content they consume on social media and verifying information sources. He also pointed out some telltale signs of deepfakes, such as unnatural movements, artificial-looking backgrounds, and inconsistent lighting.
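As an illustration only, one of these cues, inconsistent lighting, can be roughly approximated in software. The short Python sketch below, which assumes OpenCV and NumPy are installed and uses an arbitrary brightness threshold, flags frames whose average brightness jumps sharply from the previous frame. It is a toy heuristic, not the dedicated detection technology Kalaydin refers to.

```python
import sys
import cv2
import numpy as np

def flag_lighting_jumps(video_path, threshold=15.0):
    """Flag frames whose mean brightness differs sharply from the previous frame.

    Toy heuristic for the 'inconsistent lighting' cue; the threshold is an
    arbitrary assumption, not a calibrated detection setting.
    """
    cap = cv2.VideoCapture(video_path)
    prev_brightness = None
    suspicious_frames = []
    frame_index = 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Mean brightness of the grayscale frame (0-255 scale).
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness = float(np.mean(gray))
        if prev_brightness is not None and abs(brightness - prev_brightness) > threshold:
            suspicious_frames.append(frame_index)
        prev_brightness = brightness
        frame_index += 1

    cap.release()
    return suspicious_frames

if __name__ == "__main__":
    # Usage: python lighting_check.py some_clip.mp4
    print(flag_lighting_jumps(sys.argv[1]))
```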

While public understanding of deepfakes is increasing, Kalaydin warned that the technology will continue to develop rapidly, making deepfakes increasingly difficult to spot with the human eye alone, without dedicated detection technology.

One of the biggest challenges is the generation and distribution of deepfakes. Easy access to AI technology has opened the door to the spread of fake content, while the lack of legal regulation makes it difficult to stop its spread online.

To address this, Kalaydin suggested that social media platforms implement automatic checks for AI-generated content and deepfakes, using deepfake detection technology to verify content authenticity.
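A minimal sketch of what such an automated check could look like inside a platform's upload pipeline is shown below. It assumes a hypothetical score_synthetic_probability() classifier (no real detection API or model is referenced), and the thresholds are illustrative assumptions; flagged uploads would be labeled or routed to human review rather than published directly.

```python
from dataclasses import dataclass

# Hypothetical placeholder: in practice this would call a trained
# deepfake-detection model or a third-party verification service.
def score_synthetic_probability(media_bytes: bytes) -> float:
    raise NotImplementedError("Plug in a real deepfake-detection model here")

@dataclass
class ModerationDecision:
    action: str   # "publish", "label", or "review"
    score: float

def check_upload(media_bytes: bytes,
                 label_threshold: float = 0.5,
                 review_threshold: float = 0.9) -> ModerationDecision:
    """Route an upload based on how likely it is to be AI-generated.

    Thresholds are illustrative assumptions, not recommended values.
    """
    score = score_synthetic_probability(media_bytes)
    if score >= review_threshold:
        return ModerationDecision("review", score)   # hold for human review
    if score >= label_threshold:
        return ModerationDecision("label", score)    # publish with an AI-content label
    return ModerationDecision("publish", score)
```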

Meanwhile, governments around the world are starting to take action. India, for example, released guidelines requiring local technology companies to obtain approval before releasing unreliable AI tools. In Europe, the European Commission has issued AI misinformation guidelines for platforms operating in the region.

As the US prepares for the 2024 election, protecting voters from increasingly sophisticated deepfake-based election fraud should be a top priority for the government, social media platforms, and society as a whole.

