JAKARTA - OpenAI, the developer of ChatGPT, has begun taking steps to combat hallucinations created by Artificial Intelligence (AI), where the technology simply makes things up when presenting answers.

"Even sophisticated models still produce logic errors, which are often called hallucinations. Reducing hallucinations is an important step to build a harmonious Artificial General Intelligence (AGI)," OpenAI said in an official statement, quoted Thursday, June 1.

AI hallucination refers to instances in which the technology generates unexpected, untrue outputs that are not supported by real-world data. These can take the form of content, news, or false information about a person, event, or fact.

In a paper published this week, OpenAI trained reward models to detect hallucinations using two methods: outcome supervision, which provides feedback based on the final result, and process supervision, which provides feedback for each individual step in a human-like chain of thought.
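For illustration, the difference between the two kinds of feedback can be sketched in a few lines of Python. This is a hypothetical simplification, not OpenAI's actual training code, and all names in it are invented:

```python
# Hypothetical sketch (not OpenAI's code) contrasting the two kinds of
# feedback a reward model could be trained on.
from dataclasses import dataclass

@dataclass
class OutcomeLabel:
    """Outcome supervision: one label for the whole solution."""
    solution: str               # the model's full chain of thought
    final_answer_correct: bool  # feedback only on the end result

@dataclass
class ProcessLabel:
    """Process supervision: one label per reasoning step."""
    steps: list[str]          # the chain of thought, split into steps
    step_correct: list[bool]  # human feedback on each individual step

# Outcome supervision only says whether the end result was right ...
outcome = OutcomeLabel(
    solution="48 / 2 = 24; 24 + 3 = 27",
    final_answer_correct=True,
)

# ... while process supervision grades every intermediate step, so a
# flawed step is penalized even if the final answer happens to be right.
process = ProcessLabel(
    steps=["48 / 2 = 24", "24 + 3 = 27"],
    step_correct=[True, True],
)
```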

The company made a detailed comparison of the two methods using a mathematics dataset. It found that process supervision produced significantly better performance, even when judged by final results.

"The supervision of the process has several advantages of aligning over the supervision of the results. This directly gives appreciation to the model for following a chain of aligned thinking, every step in the process receiving proper supervision," said OpenAI.

"Procedural supervision is also more likely to produce interpretable reasoning, as it encourages models to follow human-approved processes. On the other hand, monitoring results can value misaligned processes, and are generally more difficult to research," he added.

In some cases, OpenAI explains, safer methods for AI systems can lead to reduced performance, a cost known as an alignment tax.

In general, any alignment tax can hinder the adoption of alignment methods, because of the pressure to deploy the most capable models.

OpenAI's results show that process supervision actually incurs a negative alignment tax, at least in the mathematics domain.

"This can increase the implementation of process supervision, which we believe will have positive side effects of alignment," said OpenAI.

OpenAI also evaluated the process-supervised and outcome-supervised reward models using problems from a mathematics test set.

"We produce many solutions to each problem and then select the highest rank solution of each gift model. The graph shows the percentage of selected solutions that achieve the correct final answer, as a function of the number of solutions considered," OpenAI explained.

"Not only is the prize model being supervised, the process is working better overall, but the performance gap is widening as we consider more solutions per problem. This shows us that the process-supervised reward model is much more reliable," he added.

However, it remains to be seen whether this approach will help mitigate hallucinations more broadly, as hallucinations are arguably the number one problem with AI today.

In fact, OpenAI explicitly warns users not to blindly trust its chatbot, stating: "ChatGPT may produce inaccurate information about people, places, or facts."

This week, a US-based lawyer fell victim to ChatGPT after using the chatbot for his work and submitting false information about similar cases he was researching. He is now likely to face sanctions.

