OpenAI, DeepMind And Dozens Of Industry Leaders Warn AI Could Destroy Humanity In The Near Future

JAKARTA - Leaders of the artificial intelligence (AI) industry, academics, and even celebrities are calling for the risk of human extinction from AI technology, should it surpass human intelligence, to be treated as a top global priority.

"Reducing the risk of AI extinction should be a global priority along with other social-scale risks such as pandemics and nuclear warfare," read a statement published by the AI Security Center.

DeepMind Chief Executive Demis Hassabis, OpenAI Founder and CEO Sam Altman, and renowned computer scientist Geoffrey Hinton were among the hundreds of leading figures who signed the statement, which was posted on the Center for AI Safety website.

The statement highlights widespread concern about the dangers of uncontrolled AI. AI has recently haunted industries across the board with predictions that it will replace human work.

Other concerns, that AI could outsmart humans, be used for crime, and spread misinformation, have also grown with the emergence of a new generation of highly capable AI chatbots such as ChatGPT, developed by OpenAI.

Worse yet, AI could reportedly be used to develop new chemical weapons and enhance aerial combat.

AI experts say we are still far from developing the kind of artificial general intelligence found in science fiction.

Today's chatbots, for example, largely reproduce patterns from the training data they were fed rather than thinking for themselves, as reported by CNN International on Wednesday, May 31.

However, the surge of investment in the AI industry has led supporters of the warning to say that regulators and lawmakers should take such severe risks more seriously before major accidents occur.

Dan Hendrycks, Director of the Center for AI Safety, said in a tweet yesterday that the statement, first proposed by David Krueger, an AI professor at the University of Cambridge, does not prevent society from addressing other types of AI risks, such as algorithmic bias or misinformation, arising from the technology being created.

"Societies can manage multiple risks at once; it's not 'either/or' but 'yes/and.' From a risk management perspective, just as it would be reckless to exclusively prioritize present harms, it would also be reckless to ignore them," said Hendrycks.

Earlier this year, technology leaders also asked leading AI companies to pause development of their systems for six months in order to find ways to reduce the risks.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity," read an open letter from the Future of Life Institute, as reported by The Independent.

"Research and development of AI must be refocused to make a strong and sophisticated system today more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and faithful," he added.