JAKARTA - OpenAI confirmed that it has secured a contract with the Pentagon to provide artificial intelligence (AI) models on a classified military network. The deal comes after the United States government decided this week to gradually phase out Anthropic's AI technology across federal agencies.
OpenAI CEO Sam Altman defended the decision. He acknowledged that from a public-perception, or "optics", standpoint the move looks controversial, but emphasized that OpenAI has built a layered technical safeguard system, a "safety stack", to prevent misuse of the technology.
Anthropic's Removal and US Government Policy
The announcement came just hours after US President Donald Trump announced the termination of Anthropic AI technology across the federal government. The administration is reportedly at odds with Anthropic because the company refused to remove a number of restrictions limiting military use of its models.
The Pentagon then designated Anthropic a "supply chain risk," effectively blacklisting the company from further cooperation. The phase-out of Anthropic technology across federal agencies is targeted for completion within six months.
In addition to OpenAI, Elon Musk's AI company xAI also secured a significant role. xAI's Grok model has reportedly been cleared for use in classified operations. Unlike Anthropic, xAI is seen as willing to accept the government's standard permitting the use of AI technology for "all lawful purposes".
The Controversy of the "All Lawful Purposes" Clause
The debate centers on the "all lawful purposes" clause. Anthropic CEO Dario Amodei previously argued that current regulations are insufficient to anticipate AI's potentially harmful impacts.
OpenAI, by contrast, accepted the clause. Sam Altman stated that by anchoring the contract to a direct reference to United States law, OpenAI has stronger protection than it would from relying on general internal usage policies alone.
Relying on Multi-Layered Technical Safeguards
To mitigate ethical concerns, OpenAI insists it is not relying on legal agreements alone. The company will station engineers to monitor how its AI models are deployed within the Pentagon.
The "safety stack" system developed includes an AI classifier designed to detect and reject commands that violate certain boundaries, such as illegal domestic surveillance or use of weapons without human supervision. Altman emphasized that if the model rejects an order based on these rules, there is no manual override mechanism that can force it to carry out the task.
The deal triggered mixed reactions. Anthropic's Claude app reportedly jumped to the top of the App Store after calls to boycott ChatGPT emerged.
Internally at OpenAI, dozens of employees signed an open letter urging management to keep prioritizing safety principles. Some staff called the new safeguards "window dressing" and questioned their long-term effectiveness, especially when applied in a large-scale military context.
Implications for the AI Industry
Sam Altman described the move as an effort to ease tensions between the government and the AI industry. He argued that decisions about how technology is used for national defense should rest with elected leaders, as long as constitutional protections are maintained.
With OpenAI and xAI now taking on roles in the defense sector, the development is expected to set an important precedent for how private AI companies engage with governments in the future.