UK Labour Party Calls For AI Developers To Be Licensed And Regulated Like The Nuclear And Pharmaceutical Industries

JAKARTA - Representatives of the Labour Party, Britain's largest opposition party, have said that developers working on artificial intelligence (AI) should be licensed and regulated in a similar way to the pharmaceutical, medical, and nuclear industries.

Lucy Powell, a Labour member of parliament, told The Guardian on June 5 that companies like OpenAI and Google that have created AI models should "have a license to build these models."

"My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that's in how they are built, managed, or controlled," Powell said, as quoted by Cointelegraph.

Powell, who serves as Labour's shadow secretary for digital, culture, media, and sport, argues that regulating the development of certain technologies is a better option than banning them, as the European Union has done with facial recognition tools.

She added that AI "could have many unintended consequences," but that if developers were required to be transparent about their AI training models and datasets, some of the risks could be mitigated by the government.

"This technology is developing so fast that it needs an active, interventionist government approach, rather than a passive one," she said.

Powell also believes that advanced technologies such as AI could have a major impact on the UK economy, and the Labour Party is reportedly drafting its own policies on AI and related technologies.

Next week, Labour leader Keir Starmer plans to hold a meeting with the party's shadow cabinet at Google's UK office in order to speak with executives focused on AI.

Meanwhile, Matt Clifford, chairman of the Advanced Research and Invention Agency (ARIA), a government research body founded in February last year, appeared on TalkTV on June 5 to warn that AI could threaten humans within two years.

"If we don't start thinking now about how to regulate and think about safety, then within two years we will find that we have very powerful systems," he said. Clifford explained that two years represents the optimistic end of the range of estimates.

Clifford stressed that current AI tools can already be used to help "launch large-scale cyberattacks." OpenAI has provided $1 million in funding to support AI-assisted cybersecurity technology intended to prevent such misuse.

"I think there are lots of scenarios to worry about," he said. "I certainly think it's right that it should be very high on the policymakers' agendas."

In an effort to address these concerns, Powell and the Labour Party argue that strict regulation and oversight of AI technology development need to be put in place. They propose that companies such as OpenAI and Google obtain licenses before building AI models, with the aim of ensuring that the development, management, and control of these models are subject to clear standards and take potential consequences into account.

Although AI regulation is seen as an important step, Powell also recognizes that AI technology has great potential to drive the UK economy. The Labour Party is therefore drafting its own policies on AI and related technologies to ensure that the country can make the most of the technology's potential.