JAKARTA - Beyond today's Artificial Intelligence (AI), many countries are beginning to develop Artificial General Intelligence (AGI): AI systems that can operate independently, without human assistance or supervision, and therefore have the potential to be dangerous.

The International Institute for Management Development (IMD), however, warns that the development of AGI poses high risks and could become a disaster for mankind.

Should AGI development ever slip beyond human control, the IMD predicts several risks. First, AI could take over and control weapons systems, including nuclear, biological, and chemical weapons.

"China is currently accelerating the commercialization of humanoid robots, including their deployment in sensitive infrastructure such as power grids and nuclear power plants," said Michael Wade, Director of the Global Center for Digital Business Transformation at IMD.

Second, he said, AI could be used to manipulate or interfere with financial markets, as well as critical infrastructure such as energy, transportation, communications, and water.

Such AI could also be used to manipulate or disrupt political systems, social networks, and biological and environmental ecosystems, even becoming a direct threat to human life.

Appropriate regulation is therefore needed to control the development of AGI worldwide. A number of initiatives already exist, such as the EU AI Act, California's SB 1047, and the Council of Europe's Framework Convention on AI, which can serve as references for AI rules.

Beyond government rules and policies, all stakeholders, especially companies developing AI models such as OpenAI, Meta, and Alphabet, play an equally large role in reducing AI risk.

Indeed, a number of AI development companies have already tried to put safety practices in place.

For example, OpenAI has published its Preparedness Framework, Alphabet's Google DeepMind has its Frontier Safety Framework, and Anthropic has prepared a Responsible Scaling Policy (RSP).

These frameworks are important steps toward AI safety, but greater transparency and better enforcement of practical measures are still needed, Wade concluded.


The English, Chinese, Japanese, Arabic, and French versions of this article are automatically generated by AI, so inaccuracies in translation may remain; please refer to the Indonesian version as our primary language. (system supported by DigitalSiber.id)