US Congress Debates Artificial Intelligence Regulation: The Challenges and Approaches Under Discussion
JAKARTA - United States lawmakers are struggling to set boundaries for rapidly advancing artificial intelligence, but months after ChatGPT caught Washington's attention, there is still no clear consensus.
Interviews with a US senator, congressional staffers, artificial intelligence companies, and interest groups show that several options are being discussed.
Some proposals focus on artificial intelligence that could threaten a person's life or livelihood, such as in medicine and finance. Other possibilities include rules to ensure that artificial intelligence is not used to discriminate or to violate a person's civil rights.
Another debate is whether to regulate the developers of artificial intelligence or the companies that use it to interact with consumers. And OpenAI, the startup behind the ChatGPT chatbot sensation, has discussed the idea of a standalone artificial intelligence regulator.
It is not clear which approach will prevail, but some in the business world, including IBM and the US Chamber of Commerce, favor regulating only critical areas such as medical diagnosis, in what they call a risk-based approach.
"If Congress decides that a new law is needed, the US Chamber AI Commission recommends that risk be determined based on the impact on individuals," said Jordan of the Chamber's Technology Engagement Center. "A video recommendation may not pose as high a risk as decisions made about health or finances."
The growing popularity of generative artificial intelligence, which uses data to create new content, such as ChatGPT's human-sounding prose, has raised concerns that the fast-growing technology could encourage cheating on tests, fuel misinformation, and lead to a new generation of scams.
The artificial intelligence hype has led to a flurry of meetings, including a White House visit this month by the CEOs of OpenAI, Microsoft Corp, and Alphabet Inc, who met with US President Joe Biden.
Congress is also engaged with the issue, according to congressional aides and technology experts.
"Staff across the House of Representatives and the Senate have basically been mobilized and asked to get a handle on this," said Jack Clark, co-founder of the well-known AI startup Anthropic, whose CEO also attended the White House meeting. "People want to get ahead of artificial intelligence, partly because they feel they did not get ahead of social media," he added.
As lawmakers continue to study the issue, Big Tech's top priority is pushing back against "premature overreaction," said Adam Kovacevich, head of the industry group Chamber of Progress.
And while lawmakers such as Senate Majority Leader Chuck Schumer are determined to address artificial intelligence in a bipartisan way, the reality is that Congress is polarized, a presidential election is coming next year, and lawmakers are grappling with other major issues, such as raising the debt ceiling.
The plan proposed by Schumer would require independent experts to test artificial intelligence technology before it is launched. It also calls for transparency and for giving the government the data it needs to prevent harm.
Under a risk-based approach, artificial intelligence used to diagnose cancer, for example, would be scrutinized by the Food and Drug Administration (FDA), while artificial intelligence used for entertainment would not be regulated. The European Union is moving toward passing similar rules.
However, the focus on risk does not go far enough for Democratic Senator Michael Bennet, who has introduced a bill calling for the creation of a government artificial intelligence task force. He said he favors a "values-based approach" that prioritizes privacy, civil liberties, and rights.
Bennet's aides added that risk-based rules may be too rigid and fail to detect dangers such as artificial intelligence being used to recommend videos that promote white supremacy.
Lawmakers have also discussed how best to ensure that artificial intelligence is not used to discriminate racially, for instance in deciding who gets a low-interest loan, according to a person who follows the congressional discussions and was not authorized to speak to journalists.
At OpenAI, staff have considered broader oversight.
Cullen O'Keefe, an OpenAI research scientist, proposed in a talk at Stanford University in April creating an agency that would require companies to obtain licenses before training powerful artificial intelligence models or operating the data centers that support them. According to O'Keefe, the agency could be called the Office for AI Safety and Infrastructure Security, or OASIS.
Asked about the proposal, Mira Murati, OpenAI's chief technology officer, said a trustworthy body could "hold developers accountable" to safety standards. But more important than the mechanism, she said, is agreement "on what standards you want to apply, what risks you want to mitigate."
The most recently created US regulator is the Consumer Financial Protection Bureau, which was established after the 2007-2008 financial crisis. Some Republican lawmakers may reject any regulation of AI at all.