FTC Opens Investigation Into OpenAI Over Alleged Consumer Protection Law Violations
JAKARTA - The United States Federal Trade Commission (FTC) has opened an investigation into OpenAI over allegations that it violated consumer protection laws by putting personal reputations and data at risk, the strongest regulatory threat yet faced by the Microsoft-backed startup.
The FTC this week sent a 20-page request for records on how OpenAI - the developer of the generative artificial intelligence chatbot ChatGPT - addresses risks associated with its artificial intelligence models.
The agency is investigating whether OpenAI has engaged in unfair practices that cause "reputational harm" to consumers.
The investigation is another high-profile effort by the FTC, chaired by progressive Lina Khan, to rein in tech companies, coming days after the agency suffered a major court defeat in its bid to block Microsoft from acquiring Activision Blizzard. The FTC has said it will appeal that decision.
One of the questions the FTC posed to OpenAI concerns the steps the company is taking to address its products' potential to produce "false, misleading, or disparaging statements about real individuals."
The Washington Post first reported on this investigation. The FTC declined to comment, while OpenAI did not respond to requests for comment.
OpenAI CEO Sam Altman said in a series of tweets on Twitter on Thursday, July 13, that the latest version of the company's technology, GPT-4, was built on years of safety research and that its systems are designed to learn about the world, not about private individuals.
"Of course we will cooperate with the FTC," he said, quoted by Reuters.
OpenAI launched ChatGPT in November, captivating consumers and sparking a race among big tech companies to show how their artificial intelligence-enhanced products could change the way society and businesses operate.
The artificial intelligence race has raised widespread concerns about the technology's potential risks and the need for regulatory oversight.
Global regulators are trying to apply existing rules - covering everything from copyright to data privacy - to two key issues: the data fed into artificial intelligence models and the content those models produce, as Reuters reported in May.
In the United States, Senate Majority Leader Chuck Schumer has proposed "comprehensive laws" to advance and ensure protections in the field of artificial intelligence. He has also promised to host a series of forums by the end of this year aimed at "producing a new basis for artificial intelligence policy."
In March, OpenAI ran into trouble in Italy, where the regulator ordered ChatGPT taken offline, accusing the company of violating the European Union's General Data Protection Regulation (GDPR), a sweeping privacy law that took effect in 2018.
ChatGPT was reinstated after the US company agreed to install an age verification feature and to let European users block their information from being used to train its artificial intelligence models.