Elon Musk Warns of AI Dangers: If Programmed by Extinctionists, Civilization Could Disappear
Rishi Sunak and Elon Musk, two figures deeply invested in AI development. (photo: twitter @AISafetySummit)

JAKARTA - From space travel to autonomous cars, Elon Musk has been very open about his ambition to push technology to its absolute limits. His latest comments, however, suggest he is drawing a line when it comes to artificial intelligence (AI).

While attending the AI Safety Summit at Bletchley Park, Musk said that AI safety is vital to the future of civilization. He posted the remark on the platform X (formerly Twitter), quoting British Prime Minister Rishi Sunak's tweet about the meeting.

The statement came just one day after Musk appeared on Joe Rogan's podcast, where he claimed that AI could pose an existential risk if it becomes "anti-human."

Politicians and tech figures gathered at Bletchley Park today as Oliver Dowden kicked off the UK's first AI safety summit. Musk was in attendance and shared his views on X.

Quoting Rishi Sunak's tweet, Musk wrote simply: "AI safety is important to the future of civilization."

Rishi Sunak's original post read: "The global AI Safety Summit starts in the UK today. Here's what we hope to achieve: agree on the risks of AI to inform how we manage it; discuss how we can collaborate better internationally; and see how safe AI can be used for good globally."

The Tesla CEO and owner of X claimed that some "extinctionist" circles "view humanity as a plague on the surface of the Earth."

He cited the founder of the Voluntary Human Extinction Movement, Les Knight, who was interviewed by the New York Times last year, as an example of this philosophy, and claimed that some people working at tech companies have similar mindsets. Knight believes the best thing humans can do for the planet is to stop having children.

"You have to ask, "how did AI go wrong?", Well, if AI was programmed by executionists, its function would be the extinction of mankind," Musk said. "They wouldn't even think it's bad, like that person. (Knight)"

Musk also signed a letter earlier this year calling for a six-month pause in AI development. When Rogan asked him about the letter, he said: "I signed a letter that someone else wrote. I didn't think people would actually stop. Creating this kind of digital superintelligence seems dangerous."

He said the "implicit" risk of transmitting AI to believe that "extinction of humanity is something that must be tried" is the "biggest danger" posed by the technology.

"If you take that person who is on the front page of the New York Times and you take his philosophy, which is common in San Francisco, AI can conclude, as he did, where he literally said, "there are eight billion people in the world, it would be better if it didn't exist" and set the results up," Musk said.

When asked whether AI could be designed in a way that reduces safety risks, he said: "If you say, 'What is the most likely outcome of AI?' I think the most likely outcome, to be specific about it, is a good one, but it's not certain. I think we should be careful in how we program AI and ensure that it is not intentionally anti-human."

When asked what he expected from the meeting, he said: "I don't know. I'm just generally worried about AI safety, like, 'What should we do about it?' I don't know, maybe having some kind of regulatory oversight?"

"You can't just go and make a nuclear bomb in your backyard, it's against the law and you'll be thrown in jail if you do. This, I think, may be more dangerous than a nuclear bomb. We have to worry about AI becoming anti-human. That's an important thing potential," Musk said.

"It's like releasing a genie from a bottle. It's like a magic genie that can make the request come true unless they usually tell that the story doesn't end well for people who release the genie from the bottle," he said.

