Researchers Suggest Governments Control AI Use By Controlling Access To Hardware

JAKARTA - A research team from OpenAI, Cambridge, Oxford, and several other universities and institutions has concluded that the only way to counter the malicious use of artificial intelligence (AI) may be to keep developing ever stronger AI and place it in the hands of government.

In a paper entitled "Computing Power and the Governance of Artificial Intelligence," the researchers examine the current and potential challenges involved in regulating the use and development of AI.

The paper's central argument is that the only way to control who gains access to the most powerful AI systems in the future is to control access to the hardware needed to train and run those models.
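For a concrete sense of what hardware-based gating could look like, here is a minimal Python sketch that flags a training run against a compute-reporting threshold. The 6 × parameters × tokens estimate is a common rule of thumb for dense transformer training compute; the threshold value and function names are illustrative assumptions, not taken from the paper.

```python
# Illustrative only: flag a training run that crosses a compute-reporting
# threshold. The 6 * params * tokens FLOP estimate is a common rule of
# thumb for dense transformers; the threshold value is a made-up example.

REPORTING_THRESHOLD_FLOPS = 1e26  # hypothetical policy threshold


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * n_params * n_tokens


def requires_reporting(n_params: float, n_tokens: float) -> bool:
    """True if an estimated run would cross the reporting threshold."""
    return estimate_training_flops(n_params, n_tokens) >= REPORTING_THRESHOLD_FLOPS


# Example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, reportable: {requires_reporting(70e9, 15e12)}")
# ~6.30e+24 FLOPs, reportable: False
```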

In this context, "compute" refers to the basic hardware needed to develop AI, such as GPUs and CPUs.

The researchers propose that the best way to prevent people from using AI to cause harm is to cut off their access at the source. This implies that governments would need to develop systems to monitor the development, sale, and operation of hardware considered critical to advanced AI development.
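A minimal sketch of what such monitoring might involve on the sales side, assuming a hypothetical serial-number registry for regulated accelerators (all names and fields here are invented for illustration; the paper does not prescribe an implementation):

```python
# Hypothetical sketch of a hardware registry: every regulated accelerator
# is tracked by serial number from first sale through later transfers.
# All names and fields are invented; the paper prescribes no implementation.
from dataclasses import dataclass, field


@dataclass
class AcceleratorRecord:
    serial: str
    model: str
    owner: str
    transfers: list = field(default_factory=list)  # (from, to) audit trail


class ChipRegistry:
    def __init__(self) -> None:
        self._records: dict[str, AcceleratorRecord] = {}

    def register_sale(self, serial: str, model: str, buyer: str) -> None:
        """Record the first sale of a regulated accelerator."""
        self._records[serial] = AcceleratorRecord(serial, model, buyer)

    def transfer(self, serial: str, new_owner: str) -> None:
        """Record a change of ownership without losing the audit trail."""
        record = self._records[serial]
        record.transfers.append((record.owner, new_owner))
        record.owner = new_owner

    def owner_of(self, serial: str) -> str:
        return self._records[serial].owner


registry = ChipRegistry()
registry.register_sale("SN-0001", "ExampleGPU", "DataCenterCo")
registry.transfer("SN-0001", "ResearchLab")
print(registry.owner_of("SN-0001"))  # ResearchLab
```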

However, the study also shows that a naive or poorly targeted approach to regulating hardware access carries significant risks in terms of privacy, economic impact, and the centralization of power.

The researchers also note that recent progress in communication-efficient training could enable decentralized compute to be used to train, build, and run AI models. This could make it even harder for governments to locate, monitor, and shut down hardware tied to illicit training efforts.
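To see why decentralized training complicates monitoring, consider a toy sketch of communication-efficient training in the style of federated averaging: each node trains on its own data shard and only periodically exchanges averaged parameters, so no single facility ever hosts the whole run. This illustrates the general technique and is not code from the paper.

```python
# Toy federated-averaging run: four nodes each take a few local gradient
# steps on their own data shard; only averaged parameters cross the
# network, so no single site hosts the full training job. Illustrative
# of the general technique, not code from the paper.
import numpy as np

rng = np.random.default_rng(0)


def local_steps(params: np.ndarray, shard: np.ndarray, lr: float = 0.05) -> np.ndarray:
    """A few local gradient steps on a toy least-squares objective."""
    for x in shard:
        grad = 2 * (params @ x - 1.0) * x  # gradient of (p.x - 1)^2
        params = params - lr * grad
    return params


params = np.zeros(3)                                    # shared model
shards = [rng.normal(size=(10, 3)) for _ in range(4)]   # one shard per node

for _ in range(5):  # five communication rounds
    updates = [local_steps(params.copy(), shard) for shard in shards]
    params = np.mean(updates, axis=0)  # the only cross-node communication

print(params)
```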

According to the researchers, this could leave governments with no choice but to enter an arms race against the illicit use of AI: "More powerful, governable compute must be used in a timely and wise manner to develop defenses against the new risks posed by ungovernable compute."