Global AI Meeting at Bletchley Highlights Open-Source AI Safety Controversy
British Prime Minister Rishi Sunak secured a series of important agreements after hosting the first artificial intelligence (AI) safety summit last week. But global plans to oversee the technology are far from agreed.
During two days of talks between world leaders, business executives, and researchers, tech CEOs such as Elon Musk and OpenAI's Sam Altman joined United States Vice President Kamala Harris and European Commission President Ursula von der Leyen to discuss future regulation of AI.
Leaders from 28 countries, including China, signed the Bletchley Declaration, a joint statement acknowledging the risks of the technology. The United States and Britain each announced plans to launch their own AI safety institutes. In addition, two further meetings were announced, to be held in South Korea and France next year.
Despite agreement on the need to regulate AI, disputes remain over how it should be done and who will lead the effort.
Risks around rapidly growing AI have become a higher priority for policymakers since Microsoft-backed OpenAI released ChatGPT to the public last year.
The chatbot's striking ability to respond fluently, like a human, has led some experts to call for a pause in the development of such systems, warning that AI could acquire autonomy and threaten humanity.
Sunak expressed enthusiasm about hosting Tesla founder Musk, but European legislators warned against too much technology and data being controlled by a small number of companies in a single country, the United States.
"If there is only one country that has all the technology, all the private companies, all the devices, all the expertise, it will be a failure for all of us," French Economy and Finance Minister Bruno Le Maire told reporters.
The UK also differs with the EU, proposing a light-touch approach to AI regulation, in contrast to Europe's AI Act, which is nearing completion and would subject developers of applications deemed high-risk to tighter controls.
"I came here to introduce our AI Act," said European Commission Vice President Vera Jourova.
Jourova said that although she did not expect other countries to replicate the bloc's law in full, some agreement on global rules was needed.
"If the democratic world does not become the rulemaker, and we become the recipients of the rules, the battle will be lost," she said, as quoted by VOI from Reuters.
Despite projecting an image of unity, participants said the three major power blocs present - the US, the EU, and China - were each trying to assert their dominance.
Some suggested that Harris had upstaged Sunak when the US government announced its own AI safety institute - just as Britain had done a week earlier - and she gave a speech in London highlighting the short-term risks of the technology, in contrast to the summit's focus on existential threats.
"Interesting that right when we announced our AI safety institute, America announced they had their own," said event participant Nigel Toon, CEO of British AI company Graphcore.
China's presence at the meeting, and its decision to sign the Bletchley Declaration, was hailed as a success by British officials.
China's deputy minister of science and technology said the country was willing to work with all parties on AI arrangements.
However, hinting at tensions between China and the West, Wu Zhaohui told delegates: "Countries, regardless of their size and scale, have an equal right to develop and use AI."
A recurring theme of the closed-door discussions, emphasized by a number of participants, was the potential risk of open-source AI, which gives members of the public free access to experiment with the code behind the technology.
Some experts have warned that open-source models could be used by terrorists to manufacture chemical weapons, or even to create a superintelligence beyond human control.
Speaking with Sunak at a live event in London on Thursday, Musk said: "There will be a time when you have open-source AI that will begin to approach human intelligence levels, or maybe exceed them. I don't know what to do about it."
Yoshua Bengio, the AI pioneer appointed to lead the "state of the science" report commissioned as part of the Bletchley Declaration, said open-source AI risk was a high priority.
"These can be placed in the hands of bad actors, and can be modified for malicious purposes. You cannot release the open source of these powerful systems and still protect the public with the right safety guardrails," he said.