US, UK, And 18 Other Countries Agree First International Guidelines For AI Security
JAKARTA - The United States, Britain, and more than a dozen other countries on Sunday 26 November announced what senior US officials described as the first detailed international agreement on how to keep artificial intelligence safe from rogue actors. The agreement also encourages companies to create artificial intelligence systems that are "secure by design."
In the 20-page document announced on Sunday, the signatory countries agree that companies designing and using artificial intelligence need to develop and deploy it in a way that keeps customers and the general public safe from abuse.
The agreement is non-binding and mostly carries general recommendations, such as monitoring artificial intelligence systems for abuse, protecting data from tampering, and vetting software suppliers.
Still, Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency, said it was important that so many countries had agreed on the idea that artificial intelligence systems should put safety first.
"This is the first time we've seen affirmations that this capability isn't just about cool features and how quickly we can bring it to the market or how we can compete to lower costs," Easterly said. He said the guidelines represent "the most important thing to do at the design stage is security."
The agreement is the latest in a series of initiatives - few of them binding - by governments around the world to shape the development of artificial intelligence, whose influence is increasingly felt across industry and society at large.
Apart from the United States and Britain, the 18 other countries that signed the new guidelines include Germany, Italy, the Czech Republic, Estonia, Poland, Australia, Chile, Israel, Nigeria, and Singapore.
The framework addresses the question of how to keep artificial intelligence technology from being hijacked by hackers and includes recommendations such as releasing models only after appropriate security testing.
It does not tackle the thornier questions surrounding the appropriate use of artificial intelligence, or how the data that feeds these models is collected.
The rise of artificial intelligence has raised a host of concerns, including fears that it could be used to disrupt democratic processes, supercharge fraud, or cause dramatic job losses, among other harms.
Europe is ahead of the United States in regulating artificial intelligence, with lawmakers there drafting artificial intelligence rules. France, Germany, and Italy have also recently reached an agreement on how artificial intelligence should be regulated that supports "mandatory self-regulation through codes of conduct" for foundation models of artificial intelligence, which are designed to produce a broad range of outputs.
President Joe Biden's administration has pressed lawmakers to regulate artificial intelligence, but a polarized US Congress has made little progress in passing effective regulation.
The White House sought to reduce the risks of artificial intelligence to consumers, workers, and minority groups while strengthening national security with a new executive order in October.