JAKARTA - Artificial intelligence company OpenAI, backed by Microsoft, has detailed a framework for addressing safety in its most advanced models, including giving its board the power to overturn safety decisions, according to a plan published on its website on Monday, December 18.

OpenAI will deploy its latest technology only if it is deemed safe in specific areas such as cybersecurity and nuclear threats. The company has also created an advisory group to review safety reports and send them to company executives and the board. Although executives will make the decisions, the board has the authority to overturn them.

Since the launch of ChatGPT a year ago, the potential dangers of artificial intelligence have become a major concern for both AI researchers and the general public. Generative AI technology has amazed users with its ability to write poetry and essays, but it has also raised safety concerns over its potential to spread disinformation and manipulate people.

In April, a group of artificial intelligence industry leaders and experts signed an open letter calling for a six-month pause in the development of systems more powerful than OpenAI's GPT-4, citing potential risks to society.

A Reuters/Ipsos poll in May found that more than two-thirds of Americans were worried about the possible negative effects of artificial intelligence, and 61% believed it could threaten civilization.
