JAKARTA - US President Joe Biden's administration is poised to open a new front in its effort to safeguard US artificial intelligence (AI) from China and Russia, with preliminary plans to place guardrails around the most advanced AI models, the core software of artificial intelligence systems such as ChatGPT.

The Commerce Department is weighing a push for new regulations to restrict the export of proprietary, or closed-source, AI models, whose software and training data are kept confidential, three people familiar with the matter said.

Any move would complement a series of measures put in place over the past two years to block the export of advanced AI chips to China, in a bid to slow Beijing's development of cutting-edge technology for military purposes. Even so, regulators will find it hard to keep pace with the industry's rapid development.

The Commerce Department declined to comment, while the Russian Embassy in Washington did not immediately respond to a request for comment. The Chinese Embassy called the move "a unilateral act of economic coercion and intimidation, which China firmly opposes," adding that it would take "the necessary steps" to protect its interests.

Currently, nothing stops US AI giants such as Microsoft, Microsoft-backed OpenAI, Alphabet's Google DeepMind, and rival Anthropic, which have developed some of the most powerful closed-source AI models, from selling them to almost anyone in the world without government oversight.

Government and private-sector researchers fear US adversaries could use these models, which mine vast amounts of text and images to summarize information and generate content, to mount aggressive cyberattacks or even create potent biological weapons.

One source said any new export controls would most likely target Russia, China, North Korea, and Iran. Microsoft said in a February report that it had tracked hacking groups affiliated with the Chinese and North Korean governments, Russian military intelligence, and Iran's Revolutionary Guard as they tried to refine their hacking campaigns using large language models.

COMPUTING POWER

To develop export controls on AI models, sources said, the US may turn to a threshold set in the AI executive order issued last October, which is based on the amount of computing power needed to train a model. Once that level is reached, a developer must report its AI model development plans and provide test results to the Commerce Department.

That computing-power threshold could become the basis for determining which AI models would be subject to export restrictions, according to two US officials and other sources briefed on the discussions. They declined to be named because the details have not been made public.

If used, the threshold would likely restrict exports only of models that have not yet been released, as none is believed to have reached it, although Google's Gemini Ultra is seen as close, according to Epoch AI, a research institute that tracks AI trends.
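For context, the October executive order frames its reporting threshold in total training operations (the order cites 10^26 operations). A rough sketch of how a training run compares against such a threshold, using the common back-of-the-envelope estimate of about 6 FLOPs per parameter per training token (the function names and example figures below are illustrative, not drawn from the article or the order):

```python
# Illustrative only: comparing an estimated training-compute budget against
# a compute-based reporting threshold like the one in the October 2023
# US executive order on AI (10**26 operations).

THRESHOLD_OPS = 10**26  # reporting threshold cited in the executive order

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Common rough estimate: ~6 FLOPs per parameter per training token."""
    return 6 * parameters * training_tokens

def exceeds_threshold(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) >= THRESHOLD_OPS

# A hypothetical 70B-parameter model trained on 2 trillion tokens lands
# around 8.4e23 operations -- still well under the 1e26 threshold.
print(exceeds_threshold(70e9, 2e12))  # False
```

This illustrates why, as the article notes, only frontier-scale unreleased models would currently be captured by a pure compute test.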

The agency is far from finalizing a proposed rule. But the fact that such a move is under consideration shows the US government is trying to close a gap in its effort to thwart Beijing's AI ambitions, despite the serious challenge of imposing a muscular regulatory regime on a fast-evolving technology.

As the Biden administration focuses on competition with China and the dangers of advanced AI, AI models "obviously are one of the tools, one of the potential chokepoints you need to think about here," said Peter Harrell, a former National Security Council official. "Whether you can, in practice, actually turn it into an export-controllable chokepoint remains to be seen," he added.

BIOLOGICAL WEAPONS AND CYBER ATTACKS?

The American intelligence community, think tanks, and academics are increasingly concerned about the risks posed by foreign actors gaining access to advanced AI capabilities. Researchers at Gryphon Scientific and the RAND Corporation have noted that advanced AI models can provide information that could help create biological weapons.

In its 2024 homeland threat assessment, the Department of Homeland Security said cyber actors would likely use AI to "develop new tools" that "allow larger, faster, more efficient, and more difficult-to-detect cyberattacks."

"The potential explosion of [AI's] use and exploitation is radical, and we are honestly having a hard time keeping up with it," said Brian Holmes, an official at the Office of the Director of National Intelligence, at an export control meeting in March, singling out China's progress as a particular concern.

AI RESTRICTIONS

To address these concerns, the US has taken measures to stem the flow of American AI chips, and the tools to make them, to China.

Proposed rules would also require US cloud companies to notify the government when foreign customers use their services to train powerful AI models that could be used for cyberattacks.

So far, however, no measure has addressed the AI models themselves. Alan Estevez, who oversees US export policy at the Commerce Department, said in December that the agency was examining options for regulating exports of open-source large language models before seeking industry feedback.

An AI policy expert at CNAS, the Washington, DC-based think tank, said the threshold "is a good interim step until we develop a better method to measure the capabilities and risks of new models."

Jamil Jaffer, a former White House and Justice Department official, said the Biden administration should not rely on a computing-power threshold but should instead opt for controls based on a model's capabilities and intended uses. "Focusing on national security risks rather than technology thresholds is a more sustainable and threat-focused approach," he said.

The threshold is not settled, however. One source said the Commerce Department might land on a lower floor, combined with other factors such as the type of training data or a model's potential uses, for example the ability to design proteins that could be used to make biological weapons.
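One way to picture the multi-factor approach described above is as a compound test: a lower compute floor that only triggers controls in combination with a sensitive capability or data factor. Everything in this sketch, including the floor value, the field names, and the combination logic, is a hypothetical illustration, not an actual regulatory criterion:

```python
# Hypothetical sketch of a multi-factor export-control test: a lower compute
# floor (here an assumed 10**25, below the order's 10**26 reporting level)
# that is combined with capability/training-data risk factors.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    training_flops: float
    can_design_proteins: bool = False       # e.g. biological-design capability
    trained_on_biological_data: bool = False

LOWER_FLOOR = 10**25  # assumed example floor, not a real regulatory number

def export_controlled(m: ModelProfile) -> bool:
    """Flag a model only if it clears the compute floor AND exhibits at
    least one sensitive capability or training-data factor."""
    risk_factor = m.can_design_proteins or m.trained_on_biological_data
    return m.training_flops >= LOWER_FLOOR and risk_factor

print(export_controlled(ModelProfile(5e25, can_design_proteins=True)))  # True
print(export_controlled(ModelProfile(5e25)))                            # False
```

The design choice here mirrors the article's point: compute alone is a blunt instrument, so pairing it with capability signals narrows the rule to the models regulators actually worry about.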

Even with thresholds, exports of AI models will be hard to control. Many models are open source, meaning they would remain beyond the export controls under consideration.

Even imposing controls on the more sophisticated proprietary models would be difficult, as regulators would likely struggle to define the right criteria for determining which models should be controlled at all, given that China is believed to be only about two years behind the United States in developing its own AI software.

The export controls under consideration would affect access to the back-end software that powers consumer applications such as ChatGPT, but would not restrict access to the downstream applications themselves.


The English, Chinese, Japanese, Arabic, and French versions are automatically generated by AI, so translation inaccuracies may remain; please refer to the Indonesian version as our primary language. (System supported by DigitalSiber.id)