AI Development That Worries Elon Musk
JAKARTA - The development of artificial intelligence (AI) technology is becoming increasingly sophisticated, and increasingly worrying. So much so that executives at one technology company, OpenAI, which Elon Musk helped found, were wary of releasing their own product to the public.
As reported by TechCrunch, Elon Musk, CEO of Tesla and SpaceX, is a prominent business leader in the technology sector. Beyond developing Tesla and his spacecraft, Musk also co-founded a company dedicated to building artificial intelligence for human use.
Rather than release it to the public, Musk worried that the AI system being developed would exceed human capabilities and be misused. He came to believe this after seeing the ability of the system he developed with Sam Altman and his team, which quickly learned language patterns from 8 million web pages.
The result is an artificial intelligence system called GPT-2 that can generate high-quality fake text. The system can produce news articles on made-up topics in a writing style that reads as realistic and coherent, as if written by a human.
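For illustration only, the small GPT-2 checkpoint that was eventually released publicly can be sampled in a few lines using the Hugging Face transformers library; the library and the prompt below are not part of the article and are used here only as an assumed setup to show how the model continues an invented news opening.

```python
# Illustrative sketch: sampling text from the publicly released GPT-2 model
# via the Hugging Face `transformers` library (an assumption, not from the article).
from transformers import pipeline

# Load the small public GPT-2 checkpoint as a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

# Give the model the opening line of an invented news story and let it continue.
prompt = "Scientists announced today that a new species of deep-sea fish"
result = generator(prompt, max_length=80, num_return_sequences=1)

print(result[0]["generated_text"])
```

The output is a plausible-sounding continuation of the prompt, which is the capability that made OpenAI hesitant to release the full model.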
"The first thing we have to assume is that we are very stupid. We can definitely make things smarter than ourselves," Musk said after deciding not to release the AI product to the market.
OpenAI should be more open imo
- Elon Musk (@elonmusk) February 17, 2020
In fact, he does not rule out the possibility that AI could become far more intelligent than us and impossible to control. This has also motivated him to call for regulatory policy governing artificial intelligence systems.
"What would you do in such a situation? I'm not sure. I wish them well," said Elon.
Moreover, Elon Musk is not the only one to criticize this AI technology; Sundar Pichai, CEO of Google and its parent company Alphabet, has also warned of the dangers of AI. He has called for more regulation of the technology before it is too late.
Pichai has also written about the positive developments AI can bring, such as a recent Google study that found AI could detect breast cancer more accurately than doctors, and a Google project that uses AI to predict local rainfall more accurately.
"History is full of examples of how the virtue of technology does not guarantee. The Internet also makes it possible to connect with anyone and get information from anywhere, but it is also easier to spread misinformation," Pichai said.
As an example, Pichai pointed to existing regulations such as the European Union's General Data Protection Regulation (GDPR) as a starting point for future legislation. He also stressed that rules around AI must take factors such as safety into account in order to balance the potential benefits and dangers of the technology's development.
"To get there, we need to agree on core values. Companies like us can't just build on promising new technology and let market forces decide how it's going to be used." said Pichai while recommending the development of regulatory proposals for the use of AI.