JAKARTA - Ilya Sutskever, co-founder and former chief scientist at OpenAI, has launched a new company called Safe Superintelligence Inc. (SSI). The company aims to develop an artificial intelligence (AI) system that goes far beyond human capabilities, with a major focus on safety and ethics.

Sutskever, 37, is known as one of the most influential figures in AI. He studied under Geoffrey Hinton, often called the "Godfather of AI," and was an early champion of the "scaling" hypothesis, the idea that AI performance improves with greater computing power. That concept underpins the progress of generative AI systems such as ChatGPT. However, Sutskever stated that SSI will approach scaling differently than OpenAI.

In an exclusive interview with Reuters, Sutskever explained his motivation for founding SSI. He emphasized that the company wants to explore a new area of AI development, a "mountain" different from his previous work. According to Sutskever, reaching the top of this mountain will revolutionize AI once again, and by then the safety of superintelligence will be the most crucial challenge.

"Our first product will be a safe super intelligence," said Sutskever, emphasizing the importance of ensuring AI provides benefits to mankind.

When asked whether SSI would release an AI system on par with human intelligence before achieving superintelligence, Sutskever answered carefully. He stressed that the main question is whether the AI is safe and does good for the world. He acknowledged that it is difficult to predict how the world will change as AI systems develop, and that the final decision may not rest entirely in SSI's hands.

"The world will be a very different place," said Sutskever. He hinted that conversations about AI would be more intense as technology advances.

When asked how to define safe AI, Sutskever admitted that there is no definitive answer yet. He stressed the need for significant research and experimentation to determine the right safety measures as AI capabilities grow.

"There is still a lot of research that needs to be done," he said, confirming that the SSI mission was to find answers to these critical questions.

Sutskever also discussed the concept of scaling, which has driven much of the progress in AI over the past decade. He pointed out that the current understanding of scaling rests on a particular formula, but this could change as AI technology develops. When that change occurs, questions about safety will become increasingly important.

Regarding the possibility of open-sourcing SSI's research, Sutskever explained that, like most AI companies, SSI will not share its main research results openly. However, he hopes that certain aspects of superintelligence safety research can be made public, depending on various factors.

Despite founding a company focused on safety, Sutskever spoke positively about the efforts of other AI companies. He believes that as the industry progresses, all parties will recognize the challenges of developing safe AI and contribute to the larger mission of ensuring that AI benefits humanity.

This new move by Sutskever marks an important milestone in the growing AI safety landscape, with SSI positioning itself as a key player in the development of safe superintelligence systems. As AI transforms the world, the company's research and innovation could have a major impact on the future.
