JAKARTA - A new scam using advanced "deepfake" technology has appeared in northern China. A fraudster convinced a man to transfer money to someone he believed was a friend, sparking concern about the potential for artificial intelligence (AI) techniques to aid financial crime.
China has stepped up scrutiny of such technologies and applications in response to a rise in AI-driven fraud, particularly the manipulation of voice and facial data. The country adopted new rules in January to provide legal protection to victims.
Police in the city of Baotou, in the Inner Mongolia region, said the perpetrator used AI-powered face-swapping technology to impersonate a friend of the victim during a video call and received a transfer of 4.3 million yuan (Rp 9.1 billion).
"He transferred the money with the belief that his friend needed to deposit during the bidding process," the police said in a statement on Saturday, May 20.
The man only realized he had been cheated after his friend said he knew nothing about the situation, police added, saying they had recovered most of the stolen funds and were working to trace the rest.
The case sparked discussion on the Weibo microblogging site about threats to online privacy and security, with the hashtag "#AI fraud is spreading across the country" gaining more than 120 million views on Monday.
"This shows that all photos, sounds and videos can be used by scammers," wrote one user. "Can information security rules follow these people's techniques?"