US Official: Abuse Of AI Is Difficult To Prevent When Science Is Still Developing
JAKARTA – Policymakers' efforts to recommend safety measures for artificial intelligence (AI) face a major challenge: the science is still developing.
Elizabeth Kelly, director of the US Artificial Intelligence Safety Institute, said that AI developers themselves are still searching for ways to prevent misuse of the new systems, making it difficult for governments to establish effective safeguards.
Speaking at the Reuters NEXT conference in New York on Tuesday, December 10, Kelly highlighted concerns in cybersecurity. She explained that bypassing the security restrictions AI laboratories have put in place, known as "jailbreaks," remains easy to do.
"It's hard for policymakers to recommend best practices in the area of security when we don't know which ones are effective and which ones aren't," Kelly said.
Technologists are working to test and protect AI on several fronts. One problem is synthetic content: the digital watermarks that identify images as AI-generated are still too easy to tamper with, making it difficult for authorities to give the industry the right guidance.
Founded under the Biden administration, the US AI Safety Institute seeks to address these challenges through partnerships with academia, industry, and civil society that support its technology evaluations. Asked about the institute's future under Donald Trump, who takes office in January, Kelly emphasized that AI safety is a "fundamentally bipartisan issue."
As the institute's first director, Kelly recently chaired the inaugural meeting of AI safety institutes from around the world, held last month in San Francisco.
Kelly said the meeting's 10 member countries are developing safety testing that can be applied globally, with the help of technical experts.
"This meeting really brought the technical experts into the room," Kelly said, describing an atmosphere that was more technically collaborative than a typical diplomatic meeting.
Through this collaborative approach, the AI safety institutes aim to set stronger global standards for addressing security challenges as the technology continues to develop.