JAKARTA - OpenAI's efforts to make ChatGPT's output more accurate are still not enough to ensure full compliance with EU data rules, according to a task force at the EU's privacy watchdog.

"Although the steps taken to comply with the principle of transparency are useful to avoid misunderstandings against ChatGPT output, these measures are not enough to comply with the principles of data accuracy," said the task force in a report released on its website on Friday, May 24.

The body, which brings together Europe's national privacy watchdogs, set up the task force on ChatGPT last year after national regulators, led by Italy's authority, raised concerns about the widely used artificial intelligence service.

Investigations launched by national privacy watchdogs in several member states are still ongoing, the report said, adding that it is therefore not yet possible to provide a full description of the outcome. The findings should be understood as a "common denominator" among the national authorities.

Data accuracy is one of the guiding principles of the EU's data protection rules.

"As a fact, due to the probabilistic nature of this system, the current training approach yields a model that can also produce biased or fabricated outputs," the report said.

"In addition, the output provided by ChatGPT tends to be considered an accurate fact by end users, including information related to individuals, regardless of true accuracy."
