British Intelligence Agency Warns ChatGPT And Other AI Chatbots Could Be Dangerous, May Spread Malware!
JAKARTA - While everyone is racing to build generative Artificial Intelligence (AI) based on Large Language Models (LLMs), a British intelligence agency is warning of potential security threats in ChatGPT and competing chatbots.
The National Cyber Security Centre (NCSC), part of the UK's Government Communications Headquarters (GCHQ), published a post on its official blog stating that it has investigated how generative AI works.
Although LLMs are undoubtedly impressive, it says, they are not artificial general intelligence and contain some serious weaknesses.
The NCSC recommends that users not enter personal or sensitive information into the software, whether from OpenAI or others.
According to the NCSC, there is a potential for privacy leaks and the illegal use of data by cybercriminals. This is because LLMs are trained on large datasets (scraped from across the internet), but once that information is digested, they do not keep learning from the prompts users type in, even though ChatGPT receives millions of queries per day.
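To make the NCSC's advice concrete, the sketch below shows one way an application could scrub obvious personal identifiers from a prompt before it is ever sent to a chatbot. This is a minimal illustration, not anything described in the NCSC's post: the regex patterns, labels, and example text are assumptions, and detecting personal data reliably is considerably harder in practice.

```python
import re

# Illustrative patterns for a few common personal identifiers. A real
# deployment would need far more robust detection (names, addresses, IDs).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely personal data with placeholders before the
    prompt leaves the user's machine."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Email my invoice to jane.doe@example.com or call +44 20 7946 0958."
    print(redact(raw))
    # -> Email my invoice to [EMAIL REDACTED] or call [PHONE REDACTED].
```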
Currently, there is no risk of a chatbot repeating one user's queries as part of its answers to other people, but all queries are stored by the developers.
One day, though, the NCSC argues, developers could use these stored queries to train a further LLM.
"A question may be sensitive because the data is included in the query, or because who asked the question (and when). Also remember the aggregation of information in several queries using the same entry info," NCSC said, as quoted by DailyMail, Thursday, March 16.
Beyond use by developers, the greater danger is that stored queries could be hacked, leaked, or accidentally published. And while web browsers also store search histories and are prone to similar exposure, users can at least delete their previous searches.
In a more malicious scenario, the NCSC also stated that LLM-based chatbots can help hackers or fraudsters craft more convincing phishing emails in multiple languages.
In addition, the tools can help attackers write more sophisticated malware than they have managed before, while less skilled attackers can use LLMs to create highly capable malware.
On the shortcomings of LLMs compared with artificial general intelligence, the "holy grail" of AI development, the NCSC highlighted a number of issues, including bots making mistakes and "hallucinating" false facts, showing bias, being easily deceived, and being persuaded to create toxic content.
"LLM is undoubtedly impressive because of their ability to produce large amounts of convincing content in various human and computer languages," said Grail.
Since its launch in November last year, ChatGPT has been used by millions of people around the world, adopted by schoolchildren and businesses alike to help with homework and write poetry.
Not long after, big players followed, such as Microsoft, which built ChatGPT into its Bing search engine; its Edge web browser will soon include it as well.
In February, Google also launched its own LLM chatbot, Bard. Likewise, Meta has its own LLM, LLaMA, though it is only intended for use by the AI research community. The company is now working on a public chatbot.