China is reportedly tightening restrictions on the release of generative AI tools
By: Mark Jessy

July 12, 2023 6:45 AM
According to sources close to Chinese regulators, the proposed guidelines for artificial intelligence developers have been revised to require companies to secure a license before releasing generative AI systems.
The Chinese government is reportedly exploring new restrictions on artificial intelligence (AI) that prioritize content control and licensing.
According to a Financial Times report dated July 11, the Cyberspace Administration of China (CAC) plans to establish a system that would require local enterprises to obtain a license before launching generative AI systems.
This move signals a tightening of the initial draft regulations issued in April, which gave companies 10 working days after a product's launch to register it with the authorities.
The new licensing structure is expected to be included in upcoming regulations, which are scheduled to be announced as early as the end of this month, according to sources.
Mandatory security inspections of AI-generated content were also included in the April draft of the legislation.
According to the government's proposal, all content must "embody core socialist values" and must not "subvert state power, promote the overthrow of the current socialist structure, incite division, or undermine national unity."
Chinese technology and e-commerce giants Baidu and Alibaba both debuted AI tools this year that compete with the popular AI chatbot ChatGPT.
According to the FT report's sources, both companies have been in communication with regulators in recent months to ensure their products comply with the new standards.
In addition to the measures described above, the report also indicates that Chinese authorities will hold tech companies developing AI models fully liable for any content created with their technology.
Regulators around the world have called for rules governing AI-generated content. Senator Michael Bennet of Colorado recently wrote a letter to tech companies urging them to develop technologies to flag AI-generated content.
Vera Jourova, the European Commission's vice president for values and transparency, recently told the media that she believes generative AI tools with the "potential to generate disinformation" should label the content they produce to curb the spread of disinformation.