JAKARTA - The dismissal of Sam Altman, the OpenAI CEO known as the brain behind the ChatGPT chatbot, has exposed fundamental disagreements about safety in the development of artificial intelligence (AI). It highlights the divide between those who favor rapid development and release of AI, the position Altman held, and those who advocate thorough laboratory testing first.
Altman, fired last Friday, was considered the human face of generative AI. Some worry that super-smart software like ChatGPT could get out of hand, leading to potential disaster. This reflects the concerns of advocates of "effective altruism," who want advances in artificial intelligence to benefit humanity.
Similar tensions arise in the development of autonomous cars, which are also controlled by AI. Some developers argue that autonomous cars must be released into urban environments to understand their capabilities and shortcomings, while others urge caution against unforeseen risks.
Altman's ouster heightened safety concerns, as OpenAI had recently announced new products, including an updated version of ChatGPT-4 and a virtual agent. Ilya Sutskever, OpenAI's chief scientist, who supported Altman's dismissal, reportedly felt OpenAI's software was being introduced to users too quickly, compromising safety.
Sutskever raised these concerns in a July blog post, citing humans' inability to supervise AI smarter than themselves. Although OpenAI was originally founded as a non-profit organization to keep profit motives at bay, Altman helped create a for-profit entity within the company.
OpenAI's fate is now in focus because the company is seen as crucial to the development of AI. Altman, a key figure in the release of ChatGPT in November last year, attracted major investment, including 10 billion US dollars (IDR 154 trillion) from Microsoft. Discussions to return Altman to his post ended without an agreement.
The appointment of Emmett Shear, former head of the Twitch platform, as interim CEO represents a push to slow AI development. In a September post, Shear advocated a slowdown, saying that if development were currently at a speed of 10, he would favor reducing it to 1-2.
As regulators try to keep pace with developments in AI, the debate over how to develop artificial intelligence safely for humanity is deepening. While most people use generative AI software such as ChatGPT as a complement to their work, concerns continue to grow about the possible emergence of artificial general intelligence (AGI) capable of performing complex tasks without human prompting.
In this context, OpenAI faces tough challenges ahead, along with important questions about whether the development of artificial intelligence should continue to accelerate or slow down to ensure safety and long-term benefits for society.
The English, Chinese, Japanese, Arabic, and French versions are automatically generated by AI, so inaccuracies in translation may remain; please refer to the Indonesian version as our primary language. (System supported by DigitalSiber.id)