JAKARTA – DeepSeek, a popular Chinese artificial intelligence (AI) platform, is under investigation by Italian regulators over safety concerns about the AI model.
According to a Reuters report, Italy's competition authority (AGCM) opened its investigation into DeepSeek on Monday, June 16. The platform drew scrutiny because it allegedly failed to warn users about the risk of hallucinations.
Hallucination is a problem that can occur in any AI model: the platform may generate false information, and DeepSeek should warn users about this.
DeepSeek should therefore provide a clear warning about hallucinations, so that users do not automatically trust all of the information the platform provides.
In a statement, AGCM said DeepSeek did not give users a "clear enough, direct, and understandable" warning about AI-generated content. The Italian regulator has not said how long the investigation will take.
This is not the first time DeepSeek has run into trouble with a national regulator. In February, as the platform's popularity was surging, South Korea decided to block the AI model.
The block was imposed because DeepSeek allegedly transferred South Korean users' data to a Chinese company, which was considered a violation of data privacy rules. After the issue was resolved, DeepSeek resumed operations in the country.