The Biden Administration Seeks Accountability Measures for AI
JAKARTA - US President Joe Biden's administration announced on Tuesday, April 11 that it is seeking public comment on potential accountability measures for artificial intelligence (AI) systems, as questions arise about their impact on national security and education.
ChatGPT, an AI program that recently caught the public's attention for its ability to quickly answer questions on a wide variety of topics, has drawn particular notice from US policymakers as the fastest-growing consumer app in history, with over 100 million monthly active users.
The National Telecommunications and Information Administration (NTIA), a Commerce Department agency that advises the White House on telecommunications and information policy, said it wants input because there is "increased regulatory interest" in AI "accountability mechanisms."
The agency wants to know whether there are measures that could be put in place to provide assurance "that AI systems are legal, effective, ethical, safe and trustworthy."
"Responsible AI systems can provide enormous benefits, but only if we address the potential consequences and impacts. For these systems to reach their full potential, companies and consumers need to be able to trust them," said NTIA Administrator Alan Davidson.
President Joe Biden last week said that it remains to be seen whether AI is harmful. "Tech companies have a responsibility, in my view, to ensure their products are safe before bringing them to the public," he said.
ChatGPT, which has wowed some users with quick responses to questions and caused concern in others due to its inaccuracies, is created by California-based OpenAI and backed by Microsoft Corp.
NTIA plans to produce a report examining "efforts to ensure AI systems function as promised - and without causing harm," and says these efforts will inform the Biden Administration's ongoing work to "ensure a comprehensive and uniform federal government response to AI-related risks and opportunities."
A technology ethics group, the Center for Artificial Intelligence and Digital Policy, has asked the US Federal Trade Commission to stop OpenAI from issuing new commercial releases of GPT-4, arguing that the system is "biased, deceptive, and a risk to privacy and public safety."