To Overcome The Dangers Of Autonomous AI, IMD And TONOMUS Develop AI Safety Clock

JAKARTA - The International Institute for Management Development (IMD) and TONOMUS warn that the development of Artificial Intelligence (AI) is entering an alarming stage, as some AI systems can now operate autonomously.

This technology is referred to as Artificial General Intelligence (AGI). According to IMD and TONOMUS, AI that operates independently, without human supervision, has the potential to endanger the future, so proper monitoring tools are needed.

Therefore, IMD and TONOMUS launched the AI Safety Clock to address the adverse effects of AI. This indicator clock tracks how high the risk from AGI development has become and can issue an alert if AGI gets out of control.

The solution is designed to raise awareness and encourage constructive discussion among the public, policymakers, and business leaders about AI safety. According to IMD and TONOMUS, the AI Safety Clock can support AI safety practices.

Michael Wade, Director of IMD's Global Center for Digital Business Transformation, stated that uncontrolled AGI carries four risk phases: low, medium, high, and critical. Based on the current state of AI, the world is entering the high-risk phase.

"We are currently moving from the moderate to high risk phase of AGI's development. When AGI's growth becomes critical and out of control, it will be a disaster for mankind. The risk is serious, but it's not too late to act," explained Wade in a statement received by VOI.

Given this situation, Wade explained that regulation of AI, especially AGI, is urgently needed. With appropriate regulations in place, the worst risks of AGI can be prevented and technological threats to humans can be addressed.

"Effective and integrated regulations can limit the worst risks of this technological development without reducing its benefits. To that end, we call on international players and tech giant companies to take precautions," Wade said.