JAKARTA - OpenAI CEO Sam Altman is worried about the increasingly sophisticated ChatGPT. He said his company's Artificial Intelligence (AI) technology would change the way people live.

Altman also stressed how bad actors could misuse the technology, warning that there would be other people who would not apply the safety limits his company has put in place.

OpenAI released the AI chatbot ChatGPT to the public at the end of November, and it became one of the fastest-growing consumer applications in history.

Not long ago, the company also launched a more capable successor called GPT-4. The roughly 37-year-old executive said regulators and the public need to be involved with the technology to guard against potentially negative consequences for humanity.

"We have to be careful here. I think people should be happy that we are a little bit afraid of this," Altman said in an interview with ABC News.

"I am very worried that these models could be used for large-scale disinformation. Now that they are getting better at writing computer code, [they] could be used for offensive cyberattacks," he added.

But despite the dangers, Altman said, ChatGPT could also be the greatest technology humanity has ever developed.

Fears about consumer-facing AI, and about AI in general, focus on humans being replaced by machines. Altman, however, pointed out that AI only works under human direction and input.

"It's waiting for someone to provide input. It's a tool that is highly controlled by humans," Altman explained. But he has concerns about which humans have input control, as quoted by The Guardian, Monday, March 20.

"There will be other people who don't apply the safety limits we apply. Society, in my opinion, has limited time to figure out how to react to it, how to regulate it, how to handle it," he added.

Tesla, SpaceX, and Twitter CEO Elon Musk, one of the first investors in OpenAI when it was still a nonprofit, has repeatedly warned that AI, or Artificial General Intelligence (AGI), is more dangerous than nuclear weapons.

Musk voiced concern that Microsoft, which hosts ChatGPT on its Bing search engine, had disbanded its ethics oversight division.

"There is no regulatory oversight of AI, which is a major problem. I have called for AI security regulations for more than a decade!" Musk tweeted last December.

Musk also criticized this change: "OpenAI was created as open source (that's why I named it 'Open' AI), a nonprofit company to serve as a counterweight to Google, but it has now become a closed-source, maximum-profit company effectively controlled by Microsoft," Musk said.

For context, OpenAI last week shared a system card document outlining how its testers deliberately tried to get GPT-4 to offer dangerous information, such as how to make hazardous chemicals using basic ingredients and kitchen supplies, and how the company fixed those problems before the product launched.
