OpenAI Changes Rules For Using ChatGPT For Military Needs
JAKARTA - OpenAI, the company behind ChatGPT, has quietly changed its usage policy and removed the ban on using its chatbot and other artificial intelligence tools for military purposes.
The rule change, noticed on Wednesday, January 17, includes the removal of language stating that the company would not allow its models to be used for 'activities that have a high risk of physical harm, including: weapons development, military, and warfare.'
An OpenAI spokesperson said that the company, which is in talks to raise funds at a valuation of US$100 billion (Rp1,563.1 trillion), is working closely with the Department of Defense on the development of cybersecurity tools built to protect open source software.
"Our policy does not allow our tools to be used to harm people, develop weapons, conduct communications surveillance, or injure others or destroy property. However, there are national security use cases that are in line with our mission," said an OpenAI spokesperson, quoted by VOI from the Daily Mail.
"For example, we have already worked with the Defense Advanced Research Projects Agency (DARPA) to spur the creation of new cybersecurity tools to protect open source software that critical infrastructure and industry depend on," he added.
It was not clear whether these beneficial use cases would have been allowed under the earlier policy's blanket prohibition on 'military' use. According to the spokesperson, the purpose of the policy change is to provide clarity and make such discussions possible.
Last year, 60 countries including the United States and China signed a 'call to action' to limit the use of artificial intelligence (AI) for military purposes. However, human rights experts in The Hague noted that the 'call to action' had no legal force and did not address concerns about 'slaughterbots' that could kill without human intervention.
The signatories committed to developing and using military artificial intelligence in accordance with "international legal obligations and in a way that does not undermine international security, stability, and accountability."
Several countries, such as Ukraine, have used AI-assisted facial recognition and targeting systems in the fight against Russia. In 2020, Libyan government forces launched Turkish-made Kargu-2 autonomous drones that attacked retreating rebel fighters, the first attack in history carried out by autonomous drones, according to a UN report.
Anna Makanju, OpenAI's Vice President of Global Affairs, said in an interview this week that the blanket provisions were removed to allow for military use cases that align with the company's values.
"Because we previously had what was essentially a blanket prohibition on the military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world," Makanju said, as quoted by Bloomberg.
The use of artificial intelligence by 'Big Tech' companies has caused controversy before. In 2018, thousands of Google employees protested the Pentagon contract - Project Maven - which involved using the company's artificial intelligence tools to analyze drone surveillance footage.
After the protest, Google did not renew the contract. Microsoft employees also protested a USD 480 million contract to supply soldiers with augmented reality headsets.
In 2017, tech leaders including Elon Musk wrote a letter to the United Nations calling for autonomous weapons to be banned under laws similar to those prohibiting chemical weapons and lasers designed to blind.
They warned that autonomous weapons could usher in a 'third revolution in warfare,' after the first with gunpowder and the second with nuclear weapons. Experts also warn that once the Pandora's box of fully autonomous weapons is opened, it may be impossible to close it again.