JAKARTA - ChatGPT, an AI-powered language model, has become a topic of discussion in cybersecurity because of its potential to generate phishing emails and broader concerns about its impact on security.
For this reason, Kaspersky experts tested gpt-3.5-turbo, the model that powers ChatGPT, on more than 2,000 links flagged as phishing by Kaspersky's anti-phishing technology, mixed with thousands of safe URLs.
The experiment was based on two questions put to ChatGPT: "Is this link leading to a phishing website?" and "Is this link safe to visit?"
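A minimal sketch of how such a test could be scripted against the OpenAI chat API is shown below. The model name gpt-3.5-turbo and the two questions come from the article; the prompt formatting, helper function, and example URL are illustrative assumptions, not Kaspersky's actual code.

```python
# Sketch: asking gpt-3.5-turbo the two phishing questions about a single URL.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTIONS = [
    "Is this link leading to a phishing website?",
    "Is this link safe to visit?",
]

def ask_about_url(url: str) -> list[str]:
    """Return the model's answer to both questions for one URL."""
    answers = []
    for question in QUESTIONS:
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": f"{question} {url}"}],
        )
        answers.append(response.choices[0].message.content)
    return answers

# Hypothetical example URL, for illustration only:
# print(ask_about_url("http://example-login-verify.com"))
```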
The results show that for the first question, ChatGPT achieved an 87.2 percent detection rate with a 23.2 percent false positive rate. For the second question, ChatGPT achieved a higher detection rate of 93.8 percent, but also a higher false positive rate of 64.3 percent.
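For reference, the two figures correspond to standard classification metrics. The sketch below shows how they could be computed from labeled verdicts; the counts used in the example are only illustrative, derived from the percentages and the roughly 2,000 phishing links mentioned above.

```python
def detection_rate(true_positives: int, phishing_total: int) -> float:
    """Share of known phishing URLs that the model flagged as phishing."""
    return true_positives / phishing_total

def false_positive_rate(false_positives: int, safe_total: int) -> float:
    """Share of known-safe URLs that the model wrongly flagged as phishing."""
    return false_positives / safe_total

# Illustrative numbers only: with 2,000 phishing URLs, an 87.2 percent
# detection rate corresponds to roughly 1,744 correctly flagged links.
print(detection_rate(1744, 2000))  # 0.872
```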
The next question is whether ChatGPT can help classify and investigate attacks, since attackers usually reference popular brands in their links to trick users.
Kaspersky notes that AI language models show impressive results in identifying potential phishing targets. For example, ChatGPT successfully extracted the impersonated target from more than half of the URLs, including major technology portals such as Facebook, TikTok, and Google, and marketplaces like Amazon and Steam, among others.
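A hedged sketch of how such target extraction might be prompted is shown below, assuming the same chat API as before; the prompt wording and example URL are assumptions for illustration, not the experiment's exact setup.

```python
# Sketch: asking the model which brand a suspicious URL tries to imitate.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def extract_target(url: str) -> str:
    """Return the model's guess at the brand or organization being imitated."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Which brand or organization, if any, does this link try to imitate? {url}",
        }],
    )
    return response.choices[0].message.content

# Hypothetical example URL, for illustration only:
# print(extract_target("http://secure-faceb00k-login.example"))
```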
The experiment also shows that ChatGPT can have serious problems justifying its decision on whether a link is dangerous.
"ChatGPT is really promising in helping us analysts (humans) detect phishing attacks, but let's not let that go ahead of us because language models still have limitations," said Vladislav T leastnov, Main Data Scientist at Kaspersky in a statement received in Jakarta.