Hackers and Propagandists Use Artificial Intelligence (AI) for Malicious Ends
Sami Khoury of the Canadian Centre for Cyber Security (photo: Twitter @cse_cst)

JAKARTA - Canada's top cybersecurity official, Sami Khoury of the Canadian Centre for Cyber Security, said that hackers and propagandists have been using artificial intelligence (AI) to create malicious software, draft convincing phishing emails, and spread disinformation online.

This is early evidence that the technological revolution sweeping Silicon Valley has also been adopted by cybercriminals.

In an interview with Reuters this week, Khoury said his agency had seen AI being used "in phishing emails, or crafting emails in a more focused way, in malicious code, as well as in the spread of misinformation."

Although Khoury did not provide details or evidence, his remarks about cybercriminals' use of AI add urgency to concerns that bad actors will exploit the fast-developing technology.

In recent months, several cybersecurity watchdog groups have published reports warning about the hypothetical risks of AI - especially the rapidly advancing language-processing programs known as large language models (LLMs), which draw on large volumes of text to generate convincing-sounding dialogue, documents, and more.

In March, the European police organization Europol published a report stating that models such as OpenAI's ChatGPT had made it possible "to impersonate an organization or an individual in a highly realistic way, even with only a basic grasp of English."

In the same month, the UK's National Cyber Security Centre said in a blog post that there was a risk criminals "might use LLMs to help with cyberattacks beyond their current capabilities."

Cybersecurity researchers have demonstrated a variety of possible malicious uses, and some say they are beginning to see suspected AI-generated content on the internet.

Last week, a former hacker said he had found an LLM trained on malicious material and asked it to draft a convincing attempt to trick someone into transferring money.

The LLM responded with a three-paragraph email asking its target for help with an urgent bill.

"I understand that this may be a sudden notification," the LLM said, "but this payment is very important and must be made within 24 hours."

Khoury said that although the use of AI to write malicious code is still in its early stages - "there is still a way to go because it takes a lot to write a good exploit" - the concern is that AI models are evolving so quickly that it is difficult to grasp their malicious potential before they are released into cyberspace.

"Who knows what's coming," he said.
