JAKARTA - The rapid development of AI has led to wider adoption by individuals and businesses. At the same time, AI also opens up opportunities to create more sophisticated cyber attacks.

Cyber threat actors can take advantage of AI to automate attacks, speed up routine tasks, and carry out more complex operations to achieve their goals.

Cybersecurity company Kaspersky has observed several ways cybercriminals use AI. First, ChatGPT is often used to write malicious software and automate attacks on multiple users.

In addition, AI programs can be used to log keystrokes on user devices by analyzing acceleration sensor data, potentially capturing messages, passwords, and banking codes.

Furthermore, Kaspersky said that AI can also be used for social engineering, producing plausible-looking content including text, images, audio, and video.

Cyber threat actors typically use large language models such as GPT-4o to generate fraudulent text, including sophisticated phishing messages.

One example is deepfake fraud, an AI-enabled scam that remains widespread today, in which cybercriminals impersonate celebrities to deceive victims, causing significant financial losses.

"Deepfakes are also used to steal user accounts and send audio requests for money using the voice of the account owner to friends and relatives," Kaspersky concluded in a statement quoted on Sunday, August 11.

