UK Artificial Intelligence Team Advisor: Humans Have Two Years To Control And Regulate AI Before It's Too Strong
JAKARTA - An artificial intelligence (AI) adviser to the British Prime Minister has said that humans have about two years to control and regulate AI before it becomes too powerful.
In an interview with local media in the UK, Matt Clifford, who also serves as chairman of the UK's Advanced Research and Invention Agency (ARIA), stressed that current systems are "getting more and more capable at an ever-increasing rate."
He went on to say that if officials do not consider safety and regulation now, these systems will become "very powerful" within two years.
"We have two years to build a framework that makes the control and regulation of these big models much more likely than it is today," he was quoted as saying by Cointelegraph.
Clifford warned that there are "many types of risks" associated with AI, both in the short and long term, which he described as "quite scary."
The interview comes after a recent open letter published by the Center for AI Safety, signed by 350 AI experts, including OpenAI CEO Sam Altman, stating that AI should be treated as an existential threat on par with nuclear weapons and pandemics.
"They are talking about what happens once we effectively create a new species, an intelligence that is greater than humans," Clifford added.
The AI adviser also said that the threats posed by AI could be "very dangerous" and could "kill many humans, not all humans, simply from where we'd expect models to be in two years' time."
According to Clifford, the main focus for regulators and developers must be understanding how to control these models and then implementing regulation on a global scale.
For now, he said, his biggest fear is the lack of understanding of why AI models behave the way they do.
"The people who are building the most capable systems freely admit that they don't understand exactly how [AI systems] exhibit the behaviors that they do," he said.
Clifford highlighted that many leaders of organizations building AI also agree that powerful AI models must undergo audit and evaluation processes before being deployed.
Currently, regulators around the world are trying to understand this technology and its consequences, while crafting regulations that protect users yet still allow innovation.
On June 5, officials in the European Union went so far as to propose requiring all AI-generated content to be labeled as such to prevent disinformation.
In the UK, a front-bench member of the opposition Labour Party also echoed the views expressed in the Center for AI Safety letter, saying that the technology should be regulated like drugs and nuclear energy.