Technology Ethics Group Asks FTC To Stop Commercial Release Of GPT-4 From OpenAI, Here's Why!
JAKARTA - The Center for Artificial Intelligence and Digital Policy (CAIDP), a technology ethics group, has asked the United States Federal Trade Commission (FTC) to stop OpenAI from issuing new commercial releases of GPT-4.
GPT-4, OpenAI's artificial intelligence program, has impressed some users and alarmed others with its rapid, human-like answers to questions.
In a complaint to the agency on Thursday, March 30, summarized on the group's website, CAIDP called GPT-4 "biased, misleading, and a risk to privacy and public safety."
OpenAI, based in California and backed by Microsoft Corp., introduced the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program in early March, impressing users with human-like conversation, songwriting, and summaries of lengthy documents.
The formal complaint to the FTC follows an open letter signed by Elon Musk, artificial intelligence experts, and industry executives calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, citing potential risks to society.
In its complaint, the group said that OpenAI's GPT-4 failed to meet the FTC's standard that AI be "transparent, explainable, fair, and evidence-based while fostering accountability."
"The FTC has a clear responsibility to investigate and prohibit unfair and deceptive trade practices. We believe that the FTC should look closely at OpenAI and GPT-4," said Marc Rotenberg, president of CAIDP and a veteran privacy advocate, in a statement on the group's website.
Rotenberg is one of more than 1,000 people who signed an open letter calling for a pause in artificial intelligence experiments.
The group urged the FTC "to open an investigation into OpenAI, halt further commercial releases of GPT-4, and ensure the safeguards necessary to protect consumers, businesses, and the commercial marketplace."