JAKARTA - Leopold Aschenbrenner, a former safety researcher at OpenAI, the creator of ChatGPT, believes that by 2027 a leading artificial intelligence laboratory will be able to train a GPT-4-level model in one minute, compared with the roughly three months GPT-4 training took in 2023.

In his latest series of essays on artificial intelligence (AI), Leopold Aschenbrenner, a former safety researcher at OpenAI, the creator of ChatGPT, has doubled down on his predictions about artificial general intelligence (AGI).

Titled "Situational Awareness," the series surveys the current state of AI systems and their potential over the coming decade. The complete series has also been collected into a 165-page PDF, last updated on June 4.

Full series as PDF: https://t.co/sVfNoriSoW
Read online: https://t.co/YrKqmElMG0

In his essays, Aschenbrenner pays particular attention to AGI, a form of AI that matches or exceeds human ability across a wide range of cognitive tasks. AGI is one of several categories of artificial intelligence, alongside artificial narrow intelligence (ANI) and artificial superintelligence (ASI).

"AGI by 2027 is very plausible," Aschenbrenner said, predicting that AGI machines would surpass college graduates by 2025 or 2026.

"By the end of the decade, they [AGI machines] will be smarter than you or me; we will have superintelligence, in the true sense of the word. Along the way, national security forces not seen in half a century will be unleashed [...]," Aschenbrenner said.

According to Aschenbrenner, AI systems could plausibly attain intellectual capabilities comparable to those of a professional computer scientist. He also makes the bolder prediction that AI laboratories will be able to train general-purpose language models in a matter of minutes.

"To put this in perspective, suppose GPT-4 training took 3 months. In 2027, a leading AI lab will be able to train a GPT-4-level model in one minute," Aschenbrenner explained.
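As a rough sanity check on the scale of that claim, the implied speedup can be computed directly. This is a minimal sketch, assuming "three months" is approximated as 90 days; the figure is illustrative arithmetic, not something stated in the essays themselves.

```python
# Implied effective speedup in Aschenbrenner's claim:
# GPT-4-level training going from ~3 months (2023) to ~1 minute (2027).
MINUTES_PER_DAY = 24 * 60  # 1,440 minutes in a day

training_2023_minutes = 90 * MINUTES_PER_DAY  # ~3 months, approximated as 90 days
training_2027_minutes = 1                      # the claimed one-minute training run

speedup = training_2023_minutes / training_2027_minutes
print(f"Implied speedup: ~{speedup:,.0f}x")  # ~129,600x
```

In other words, the prediction amounts to roughly five orders of magnitude more effective training throughput within four years.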

Anticipating AGI's arrival, Aschenbrenner urges the community to confront the reality of AGI. According to him, "the smartest people" in the AI industry have converged on a perspective he calls "AGI realism," which rests on three basic principles concerning national security and US AI development.

Aschenbrenner's AGI series came shortly after he was reportedly fired from OpenAI for allegedly "leaking" information. Aschenbrenner has also been reported to be an ally of OpenAI chief scientist Ilya Sutskever, who reportedly took part in the 2023 effort to oust OpenAI CEO Sam Altman. Aschenbrenner's latest series is dedicated to Sutskever.

Aschenbrenner has also recently founded an investment firm focused on AGI, with anchor investments from figures such as Stripe CEO Patrick Collison, according to his blog.
