
JAKARTA - AI21 Labs recently launched "Contextual Answers," a question-answering engine for large language models (LLMs).

By connecting to an LLM, the new engine allows users to upload their own data libraries in order to restrict the model's outputs to that specific information.

The launch of ChatGPT and similar artificial intelligence (AI) products has shifted the paradigm for the AI industry, but a lack of trust makes adoption a difficult prospect for many businesses.

According to research, employees spend nearly half of their working hours searching for information. This represents a significant opportunity for chatbots capable of performing search functions; however, most chatbots are not aimed at the enterprise level.

AI21 developed Contextual Answers to bridge the gap between chatbots designed for general use and enterprise-grade question-answering services by giving users the ability to plug in their own data and document libraries.

According to a blog post from AI21, Contextual Answers lets users steer the AI's answers with their own data without having to retrain the model, thereby removing some of the biggest obstacles to adoption.
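As a rough illustration of what that workflow looks like from a developer's side, the sketch below sends a question plus a user-supplied document to a Contextual Answers-style REST endpoint. The endpoint path, field names, and response keys here are assumptions for illustration, not details confirmed by AI21's announcement.

```python
import requests

API_KEY = "your-ai21-api-key"  # placeholder; issued via AI21 Studio

# Hypothetical request: ask a question grounded in the user's own document,
# rather than in whatever the base model memorized during pretraining.
resp = requests.post(
    "https://api.ai21.com/studio/v1/answer",  # assumed endpoint path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        # "context" is the organization's own data; the engine is expected
        # to answer only from this text, with no retraining of the model.
        "context": open("quarterly_report.txt", encoding="utf-8").read(),
        "question": "What was revenue growth in the last quarter?",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # assumed shape: {"answer": ..., "answerInContext": ...}
```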

"Most businesses have difficulty adopting [AI], for reasons of cost, complexity, and lack of model specialization in their organizational data, which lead to incorrect,'smooth', or inconsistent responses," said AI21.

One of the main challenges in developing useful LLMs, such as OpenAI's ChatGPT or Google's Bard, is teaching them to express a lack of confidence.

Typically, when a user asks a chatbot a question, the chatbot will produce a response even when there is not enough information in its dataset to give a factual answer.

In these cases, instead of giving a low-confidence answer such as "I don't know," an LLM will often fabricate information with no factual basis.

Researchers refer to this output as a "hallucination" because the machine produces information that does not appear to exist in its dataset, much like a human seeing things that are not actually there.

According to AI21, Contextual Answers addresses the hallucination problem entirely by issuing information only when it is relevant to the documentation provided by the user, or else issuing nothing at all.
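In client code, that "answer only from the documents, or say nothing" behavior translates into an explicit branch on whether the engine found the answer in the supplied text. The sketch below assumes the response shape from the earlier example (an "answer" field plus an "answerInContext" flag); both names are illustrative.

```python
def answer_or_abstain(result: dict) -> str:
    """Return the grounded answer, or an explicit abstention when the
    engine reports the question is not covered by the supplied documents."""
    # Assumed response shape: {"answer": str | None, "answerInContext": bool}
    if result.get("answerInContext") and result.get("answer"):
        return result["answer"]
    return "Answer not found in the provided documents."

# A question outside the uploaded material yields an abstention instead of
# a fabricated ("hallucinated") answer.
print(answer_or_abstain({"answer": None, "answerInContext": False}))
```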

In sectors where accuracy matters more than automation, such as finance and law, generative pre-trained transformer (GPT) systems have yielded mixed results.

Experts continue to urge caution when using GPT systems in finance because of their tendency to hallucinate or confuse information, even when connected to the internet and able to link to sources. And in the legal sector, a lawyer now faces fines and sanctions after relying on output generated by ChatGPT in a case.

By loading the AI system with relevant data and intervening before it can generate non-factual information, AI21 appears to have demonstrated that the hallucination problem can be mitigated.

This could pave the way for mass adoption, especially in the fintech arena, where traditional financial institutions have been reluctant to adopt GPT technology, and where the cryptocurrency and blockchain communities have had mixed success using chatbots.

