On Thursday, February 16, the United States government issued a declaration on the responsible military use of artificial intelligence (AI), which includes a commitment to "human accountability."

"We invite all countries to join us in implementing international norms" for the responsible development and use of military AI, as well as autonomous weapons.

Jenkins spoke at the first international meeting on the responsible military use of AI, held in The Hague, the Netherlands.

The use of AI in weapons, particularly lethal weapons, should be avoided or strictly regulated. Although AI can improve the efficiency and precision of military operations, its use in lethal weapons carries enormous ethical risks.

The use of AI in weapons raises issues of accountability and control, because such weapons may make decisions independently and carry out attacks without human intervention. If an error or act of negligence occurs, it is difficult to determine who is responsible, and the consequences for human security can be very serious.

In contrast, AI can be applied to many aspects of military operations that do not involve lethal weapons, such as collecting intelligence data, planning strategies, monitoring conflict areas, and managing logistics.

In these areas, AI can help accelerate and improve the performance of military operations without endangering human security. Even so, strict regulation and robust oversight are needed in any application of AI to military operations.

