AI Safety Team At OpenAI Disbanded After A Series Of Resignations

JAKARTA - OpenAI's entire team focused on the existential dangers of AI has reportedly either resigned or been absorbed into other research groups within the company.

Days after Ilya Sutskever, OpenAI's chief scientist and co-founder, announced his resignation, Jan Leike, a former DeepMind researcher and co-lead of OpenAI's superalignment team, announced his own departure on the platform X.

According to Leike, his departure was driven by concerns about OpenAI's priorities, which he said favored product development over AI safety.

In a series of posts on X, Leike said that OpenAI's leadership had chosen the wrong core priorities and should place greater emphasis on safety and preparedness as artificial general intelligence (AGI) develops.

AGI is a term for a hypothetical artificial intelligence that can perform tasks as well as or better than humans across a wide variety of fields.

After three years at OpenAI, Leike criticized the company for prioritizing eye-catching products over building a strong AI safety culture and adequate processes. He stressed the urgent need to allocate resources, especially computing power, to support vital safety research, which he said had been neglected.

"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point. Over the past few months my team has been sailing against the wind," Leike said.

OpenAI formed the research team in July 2023 to prepare for the emergence of advanced AI that could become smarter and more powerful than its creators. Sutskever was appointed co-lead of the team, which was allocated 20% of OpenAI's computing resources.

Following the recent resignations, OpenAI chose to dissolve the "Superalignment" team and fold its functions into other research projects across the organization. The decision is reportedly a consequence of ongoing internal restructuring that began in response to the governance crisis of November 2023.

Sutskever was part of the board effort that ousted Sam Altman as CEO in November 2023; Altman was reinstated shortly afterward following a backlash from employees.

According to The Information, Sutskever told employees that the board's decision to remove Altman fulfilled its responsibility to ensure that OpenAI develops AGI that benefits all of humanity. As one of six board members, Sutskever emphasized the board's commitment to aligning OpenAI's goals with those larger interests.