JAKARTA - Elon Musk and a group of artificial intelligence experts and industry executives have called for a six-month pause in the development of systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity.

This month, Microsoft-backed OpenAI announced the fourth version of its GPT (Generative Pre-trained Transformer) artificial intelligence program, which has impressed users with a wide range of capabilities, from engaging in human-like conversation to composing songs and summarizing long documents.

The letter, issued by the nonprofit Future of Life Institute and signed by more than 1,000 people including Musk, calls for a pause in the development of more advanced artificial intelligence until a common safety protocol for such designs is developed, implemented, and audited by independent experts.

"Robust artificial intelligence systems should be developed only when we are confident that the effects will be positive and the risks manageable," the letter said.

OpenAI did not immediately respond to Reuters' request for comment on the report.

The letter details the potential risks that human-competitive artificial intelligence systems pose to society and civilization in the form of economic and political disruption, and calls on developers to work with policymakers on governance and regulatory authorities.

Co-signatories include Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and artificial intelligence expert Yoshua Bengio, often referred to as one of the "fathers of artificial intelligence", as well as Stuart Russell, a pioneer of research in the field.

According to the European Union's transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as the London-based effective altruism group Founders Pledge and the Silicon Valley Community Foundation.

These concerns come as the European Union's police agency Europol on Monday, March 27 joined in raising ethical and legal concerns over advanced artificial intelligence such as ChatGPT, warning of the potential for the system to be misused in phishing, disinformation, and cybercrime.

Meanwhile, the UK government announced proposals for an "adaptable" regulatory framework around artificial intelligence.

The government's approach, outlined in a policy document published on Wednesday, March 29, would split artificial intelligence (AI) regulatory responsibilities among existing regulators for human rights, health and safety, and competition, rather than creating a new body dedicated to the technology.
