JAKARTA - The United States Federal Trade Commission (FTC) has opened an investigation into OpenAI, the startup behind ChatGPT, over whether the chatbot has harmed consumers through its collection of data and its publication of false information about individuals.
In a 20-page letter sent to the San Francisco-based company this week, the FTC said the investigation would focus on whether OpenAI has engaged in unfair or deceptive data privacy or security practices.
The FTC will also examine whether OpenAI has engaged in unfair or deceptive practices that risk harming consumers, including reputational harm, in violation of Section 5 of the FTC Act.
The request for information, which functions as a form of administrative subpoena, also seeks testimony from the company about any complaints it has received from the public, a list of lawsuits it is involved in, and details of the data leak OpenAI disclosed in March 2023.
That leak, the FTC noted, temporarily exposed some users' chat histories and payment data.
The agency is also asking for a description of how OpenAI tests, adjusts, and manipulates its algorithms, particularly how it produces different responses, responds to risks, and handles different languages.
The request also asks the company to explain any steps it has taken to address hallucinations, an industry term for cases in which an AI produces false information.
The investigation comes after the FTC repeatedly warned businesses not to make exaggerated claims about AI or to use the technology in discriminatory ways.
In a recent blog post, the FTC said companies that use AI will be held accountable for unfair and deceptive practices associated with the technology.
As the national consumer protection watchdog, the FTC is empowered to pursue privacy violations, improper marketing, and other consumer harms.
In response to the FTC, OpenAI CEO Sam Altman said he was disappointed by the agency's request, which he suggested implied distrust of the company, as quoted by CNN International and CNBC International on Friday, July 14.
"It's disappointing to see the FTC demand start with a leak and not helping build trust. We built the GPT-4 over years of safety research and spent 6+ months after we completed the initial training made it safer and more harmonious before releasing it," Altman said on his official Twitter account.
Altman also emphasized that the technology is safe, protects consumers, and complies with the law: "Our technology is safe and pro-consumer, and we believe we comply with the law. Of course we will cooperate with the FTC," he said.
"We protect user privacy and design our system to learn about the world, not individuals. We are transparent about the limitations of our technology, especially when we fail," he added.