JAKARTA - Four artificial intelligence (AI) experts have expressed concern after their work was cited in an open letter, co-signed by Elon Musk, demanding an urgent pause in research on AI more powerful than GPT-4, the latest product of Microsoft-backed OpenAI. The letter calls for a six-month pause in the development of AI systems more powerful than GPT-4, which can hold human-like conversations, compose songs, and summarize long documents.

The open letter said AI systems with "human-equivalent intelligence" pose profound risks to humanity, citing 12 studies from experts including university academics as well as former and current employees of OpenAI, Google, and its subsidiary DeepMind.

Since its predecessor ChatGPT launched last year, rival companies have raced to release similar products. Civil society groups in the US and the EU have since urged policymakers to rein in OpenAI's research. OpenAI did not immediately respond to a request for comment.

Critics have accused the Future of Life Institute (FLI), the organization behind the letter, which is funded primarily by the Musk Foundation, of prioritizing imagined apocalyptic scenarios over more immediate concerns about AI, such as racist or sexist biases being programmed into machines.

Among the research cited was "On the Dangers of Stochastic Parrots", a well-known paper co-authored by Margaret Mitchell, who previously oversaw ethical AI research at Google.

Mitchell, now chief ethics scientist at AI firm Hugging Face, criticized the letter, telling Reuters it was unclear what would count as "stronger than GPT-4".

"By treating many contradictory ideas as a reality, the letter affirms a series of priorities and narratives about AI that benefit FLI supporters," he said. "ignoring the current dangers is a privilege some of us don't have."

Her co-authors, Timnit Gebru and Emily M. Bender, criticized the letter on Twitter, with the latter calling some of its claims "crazy".

FLI president Max Tegmark told Reuters the campaign was not an attempt to hinder OpenAI's corporate advantage. "This is quite funny. I have seen people say, 'Elon Musk is trying to slow down the competition,'" he said, adding that Musk had no role in drafting the letter. "This is not about one company."

Shiri Dori-Hacohen, an assistant professor at the University of Connecticut, argued that current uses of AI already pose serious risks to decisions related to climate change, nuclear war, and other existential threats.

Dori-Hacohen said AI does not need to reach human-level intelligence to exacerbate those risks, adding that there are non-existential risks from AI that are critically important but do not receive the same attention as existential ones.

Tegmark, however, said both the short-term and long-term risks of AI should be taken seriously. He added that if someone's work is cited in the open letter, it means only that the letter claims they endorse that particular sentence, not the letter's entire contents.

The open letter also warns that generative AI tools could be used to flood the internet with "propaganda and lies". Dori-Hacohen countered that Musk's purchase of Twitter had itself worsened the spread of misinformation on the platform, making work harder for researchers who study misinformation and disinformation.

Twitter is also set to launch a new fee structure for access to its research data, which could further hamper research on the topic. At the time of writing, neither Musk nor Twitter had responded to the experts' criticism.

