Chinese Government Tightens AI Regulations, Focuses On Content And Licensing

The Chinese government is considering additional regulations for the development of artificial intelligence (AI) that emphasize content control and licensing.

According to a Financial Times report on July 11, the Cyberspace Administration of China (CAC) wants to implement a system that requires local companies to obtain licenses before releasing a generative AI system.

The move would tighten the initial draft regulation released in April, which gave companies 10 working days after a product's launch to register it with the authorities.

"This new licensing scheme is expected to be included as part of the upcoming regulations and is expected to be released at the end of this month," a source told FT.

The April draft regulation also includes a mandatory security review of content generated by AI.

The government stated in its draft that all content must "reflect core socialist values" and must not "subvert state power, advocate the overthrow of the socialist system, incite secession, or undermine national unity."

Chinese technology and e-commerce giants Baidu and Alibaba have both released AI tools this year that compete with the popular AI chatbot ChatGPT.

According to sources in the FT report, the two companies have been in contact with regulators in recent months to ensure their products comply with the new rules.

Beyond the licensing requirement, the draft also states that Chinese authorities would hold tech companies that build AI models fully responsible for content created using their products.

Regulators around the world have called for rules on AI-generated content. In the United States, Senator Michael Bennet recently wrote a letter asking tech companies developing this technology to label content generated by AI.

The European Commission's Vice President for Values and Transparency, Vera Jourova, also recently told the media that she believes generative AI tools with the "potential to generate disinformation" should label their content to stop the spread of disinformation.