OpenAI Wants To Use Artificial Intelligence To Supervise AI
JAKARTA - OpenAI, the company behind ChatGPT, plans to invest significant resources and form a new research team to ensure its artificial intelligence (AI) remains safe for humans, ultimately using AI to supervise itself, the company announced on Wednesday, July 5.
"The enormous superinteligence system can... lead to human weakness or even human extinction," wrote OpenAI co-founder Ilya Sutskever, and its head of alignment, Jan Leike, in a blog post. "Currently, we don't have a solution to direct or control potential AI superinteligence, and prevent it from acting destructively."
AI superintelligence - a system smarter than humans - could emerge this decade, the blog post's authors predict. Humans will need better techniques than are currently available to control such a superintelligence, which is why breakthroughs are needed in "alignment" research, focused on ensuring AI remains beneficial to humans, according to the authors.
OpenAI, backed by Microsoft, will dedicate 20% of the computing power it has secured over the next four years to solving this problem, they wrote. In addition, the company is forming a new team focused on this effort, called the Superalignment team.
The team's goal is to create a human-level AI alignment researcher, and then scale it up using vast amounts of computing power. OpenAI says this means it will train AI systems using human feedback, train AI systems to assist human evaluation, and finally train AI systems to actually conduct alignment research.
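As a rough illustration of those stages (and not OpenAI's actual method), the following minimal Python sketch compresses the idea into a toy preference model: it learns a scoring function from pairs of human-preferred and human-rejected responses (training with human feedback), then lets the learned scorer, rather than a human, rank candidate outputs (AI-assisted evaluation). Every name and feature here (features, train_reward_model, ai_assisted_pick) is hypothetical.

```python
import math
import random

random.seed(0)

def features(response: str) -> list[float]:
    # Stand-in features; a real reward model would learn these from text.
    return [len(response) / 100.0, float(response.count("helpful"))]

def score(weights: list[float], response: str) -> float:
    # Linear reward model: higher score = "more likely preferred by a human".
    return sum(w * f for w, f in zip(weights, features(response)))

def train_reward_model(pairs, steps: int = 200, lr: float = 0.1) -> list[float]:
    """Stage 1, training with human feedback: fit weights so that in each
    (preferred, rejected) pair the preferred response scores higher,
    using a logistic (Bradley-Terry style) preference loss."""
    weights = [0.0, 0.0]
    for _ in range(steps):
        preferred, rejected = random.choice(pairs)
        margin = score(weights, preferred) - score(weights, rejected)
        grad_scale = 1.0 / (1.0 + math.exp(margin))  # -dLoss/dMargin
        for i, (fp, fr) in enumerate(zip(features(preferred), features(rejected))):
            weights[i] += lr * grad_scale * (fp - fr)
    return weights

def ai_assisted_pick(weights: list[float], candidates: list[str]) -> str:
    # Stage 2, AI-assisted evaluation: the trained scorer, not a human,
    # ranks the candidates.
    return max(candidates, key=lambda c: score(weights, c))

# Toy "human feedback": pairs of (preferred, rejected) responses.
human_pairs = [
    ("a helpful, grounded answer", "a rambling off-topic answer"),
    ("a helpful and careful reply", "an overconfident guess"),
]
w = train_reward_model(human_pairs)
print(ai_assisted_pick(w, ["an evasive non-answer", "a helpful, well-sourced answer"]))
```

The final stage, AI systems conducting alignment research themselves, has no simple analogue in a toy like this; the sketch only shows how feedback-trained scoring can stand in for a human evaluator.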
AI safety advocate Connor Leahy said the plan had a fundamental flaw, because an initial human-level AI could run out of control and wreak havoc before it could be compelled to solve AI safety problems.
"You have to solve the alignment before you build human-level intelligence, otherwise you won't control it," he said in an interview. "Personally, I don't think this is a good or safe plan."
The potential dangers of AI are a top concern for AI researchers and the general public alike. In April, a group of AI industry leaders and experts signed an open letter calling for a six-month pause on developing systems more powerful than OpenAI's GPT-4, citing potential risks to society. In May, a Reuters/Ipsos poll found that more than two-thirds of Americans were concerned about the possible negative effects of AI, and 61% believed it could threaten civilization.
Despite these concerns, OpenAI says it is committed to devoting the resources and effort needed to ensure AI remains safe and beneficial for humans.
Nonetheless, some observers are skeptical of OpenAI's plan. They worry that developing human-level AI before the alignment problem is adequately solved poses serious risks: even as OpenAI works toward an effective mechanism for supervising AI, an uncontrolled human-level system could have unexpected and harmful consequences for people and society.
While debates and challenges remain in ensuring AI safety, OpenAI's moves to invest in alignment research and form a Superalignment team show how seriously it takes keeping AI safe and ensuring that AI operates within limits that benefit humans.
Disputes and debates over AI development and safety will continue, with the ultimate goal of ensuring that artificial intelligence is developed in a way that is safe, responsible, and beneficial for all humankind.