Voice Cloning Threatens Online Security, Deepfakes Used in Fraud and Kidnapping Scams
Dane Sherrets, a Solutions Architect at HackerOne (photo: X @DaneSherrets)

JAKARTA - A new phenomenon known as voice cloning is threatening online security, as hackers use artificial intelligence (AI) to simulate a person's voice.

Famous figures including Stephen Fry, Sadiq Khan, and Joe Biden have fallen victim to voice cloning, and in one case an unnamed CEO was deceived into transferring $223,000 to a fraudster after receiving a fake phone call.

Voice cloning is an AI technique that allows hackers to take audio recordings of someone, train an AI model on that voice, and reproduce it. Dane Sherrets, a Solutions Architect at HackerOne, explains that this technology was originally used to make audiobooks and to help people who had lost their voices for medical reasons, but it is now increasingly being used by the Hollywood film industry and by scammers.

Initially, the use of this technology was limited to experts with in-depth knowledge of AI, but over time it has become more accessible and affordable. In fact, people with very little experience can now clone a voice in less than five minutes using free and open-source tools.

To clone a voice, Sherrets needed only a five-minute recording of someone talking. After receiving the clip, he uploaded it to a tool that could then be "trained" on that particular voice. The resulting clone was very convincing, even without additional pauses or inflections.

However, this technology also carries serious potential dangers. Some people have fallen victim to fake kidnapping calls, in which a cloned voice of their child, sounding highly distressed, claims to have been kidnapped and demands a ransom payment. Hackers can also use the technology to carry out more targeted social engineering attacks against companies and organizations, such as faking a CEO's voice to obtain confidential information or access to systems.

To protect yourself from the threat of voice cloning, it is important to pay attention to telltale signs in the audio, such as unnatural pauses, unnatural phrasing, or background noise. It is also advisable not to hesitate to ask questions that only the real person could answer, and to establish safe words with family and friends.

In addition, it is important to be aware of our digital footprint and to limit the personal information we upload online, because any information we share can be used to train AI and then used against us in the future.

With the right awareness and precautions, it is hoped that we can protect ourselves from the threat of voice cloning and maintain our online security.
