Europol Warns Of Potential Abuse Of ChatGPT Chatbot For Phishing And Disinformation
JAKARTA - Europol, the European Union's law enforcement agency, on Monday, 27 March, warned of the potential abuse of the artificial intelligence-powered chatbot ChatGPT for phishing, disinformation, and other cybercrime. The warning adds to a range of concerns spanning legal and ethical issues.
Since its launch late last year, OpenAI's Microsoft-backed ChatGPT has set off a technology trend, prompting competitors to launch similar products and companies to integrate it or similar technologies into their applications and products.
🚨📄 New #TechWatch flash report: “ChatGPT - the impact of Large Language Models on Law Enforcement.” ➡️ The report provides an overview on the potential criminal misuse of LLMs and discusses the impact they might have on law enforcement. Read it here ⤵️ https://t.co/M8X8EQis26 pic.twitter.com/8ZZoiLsdzX
— Europol (@Europol) March 27, 2023
"As LLM (big language model) capabilities such as the active ChatGPT are being upgraded, the potential exploitation of this type of AI system by criminals provides a grim view," Europol said when presenting its first technology report that started with the chatbot.
Europol highlighted the detrimental use of ChatGPT in three crime areas.
"ChatGPT's ability to make highly realistic text makes it a useful tool for phishing purposes," Europol said.
With its ability to reproduce language patterns and mimic the speaking style of specific individuals or groups, the chatbot could be used by criminals to target victims, the EU law enforcement agency said.
Europol says ChatGPT's ability to produce authentic-sounding text at speed and scale also makes it an ideal tool for propaganda and disinformation.
"This allows users to generate and spread messages that reflect certain narratives with relatively few businesses," Europol added.
According to Europol, criminals with little technical knowledge could also turn to ChatGPT to produce malicious code.