UK Warns of the Risks of Using Artificial Intelligence-Based Chatbots
Illustration of the use of LLM (photo: ncsc/gov/uk)

JAKARTA - British officials are warning organizations about the integration of artificial intelligence (AI)-based chatbots into their businesses. They say research is increasingly showing that chatbots can be tricked into performing malicious tasks.

In a pair of blog posts set to be published on Wednesday, Aug. 30, the UK's National Cyber ​​Security Center (NCSC) says experts do not yet fully understand the potential security issues associated with algorithms that can produce human-sounding interactions - the so-called language model major, or LLM.

Those tools powered by artificial intelligence are now beginning to be used as chatbots that some imagine will replace not only internet searches but also customer service jobs and sales calls.

NCSC says that this can carry risks, especially if the models are linked to other elements of the organization's business processes. Academics and researchers have repeatedly found ways to tamper with chatbots by giving them rogue commands or tricking them into circumventing their own internal safeguards.

For example, a chatbot powered by artificial intelligence and used by a bank may be tricked into making unauthorized transactions if a hacker designs their questions correctly.

"Organizations building services using an LLM need to be careful, as they would if they were using a product or code library that is still in beta," the NCSC said in one of its blog posts, referring to experimental software releases.

"They probably wouldn't allow the product to engage in making transactions on behalf of customers, and shouldn't fully trust them. Similar caution should be exercised with LLMs," the NCSC said, quoted by Reuters.

Authorities around the world are struggling with the increasing use of LLMs, such as OpenAI's ChatGPT, which businesses are integrating into a variety of services, including sales and customer service. The security implications of artificial intelligence are also still being developed, while authorities in the US and Canada say they have seen hackers adopt the technology.


The English, Chinese, Japanese, Arabic, and French versions are automatically generated by the AI. So there may still be inaccuracies in translating, please always see Indonesian as our main language. (system supported by DigitalSiber.id)