1. INTRODUCTION

On July 13, 2023, the Cyberspace Administration of China (the "CAC"), in concert with six other ministries, issued the Interim Measures for the Management of Generative Artificial Intelligence Services (the "AIGC Measures"). The AIGC Measures judiciously absorbed feedback from a broad spectrum of stakeholders on the previous draft for comment, and explicitly uphold the principle of harmonizing technological progress with security and fostering innovation within a lawful governance framework. In the foreseeable future, generative artificial intelligence, a technology that applies algorithms and datasets to generate content autonomously, will remain in the limelight among tech companies, and discussions surrounding data compliance and other related issues will inevitably increase. As Part I of a series of articles intended to chart the regulatory course for AIGC and explore its potential trajectory under current legal developments, this article seeks to decode the regulatory intent embedded in the AIGC Measures.

2. CALIBRATING THE COMPLIANCE OBLIGATIONS FOR AIGC

The previous draft of the AIGC Measures focused heavily on imposing content-security obligations for AIGC on enterprises. This approach overlooked the practical challenges that businesses may encounter in fulfilling such obligations, and in some respects ran counter to the inherent logic of AIGC technology. The draft therefore provoked widespread discussion upon its release.

In contrast, the current AIGC Measures have prudently refined the compliance obligations for providers of AIGC services ("Providers"), establishing a buffer zone that allows enterprises to prepare compliance measures without losing their growth momentum. This calibrated approach reflects legislative restraint. The AIGC Measures have also alleviated certain compliance burdens on enterprises. For instance, the mandatory duty in the draft requiring enterprises to validate the truthfulness and accuracy of training data has been moderated to an obligation based on due diligence. Furthermore, the Measures take into account the limited control that Providers have over user-generated content, leading to the removal of certain prohibitive provisions.

It is also noteworthy that the AIGC Measures afford Providers a degree of flexibility in meeting compliance obligations. Providers can now use service agreements to delegate part of the compliance obligations to users, thereby controlling and reasonably allocating compliance risks. The draft's requirement that Providers suspend or terminate AIGC services upon detecting online hype, malicious posting and commenting, spam, or malicious software has been revised: Providers now have discretion to take a range of responsive measures, such as issuing warnings, restricting certain functionalities, or suspending or terminating services in accordance with the law and the service agreement. The draft's requirement that Providers optimize training models within three months of discovering illegal content has likewise been removed, respecting Providers' autonomy in improving algorithm model performance and content management.

3. SUPPORTING THE DEVELOPMENT OF THE AIGC INDUSTRY

The AIGC Measures reflect a national commitment to fostering the growth of the AIGC industry. The relevant provisions not only provide strategic guidance and legal safeguards for industrial innovation but also establish a balanced, rational, and scientifically grounded framework for industry regulation:

  • Principle of Balance. The AIGC Measures introduce principles that underscore the necessity of harmonizing development with security and intertwining innovation with governance by law, setting the tone for the regulation of the AIGC industry.
  • Legislation Fostering Technological Advancement. The AIGC Measures cite the Law on the Progress of Science and Technology as one of the superior laws on which they are based, emphasizing the primary purpose of fostering technological advancement in AIGC services.
  • Encouraging Multi-Level Development. The AIGC Measures put forth a diverse, multi-level approach to advocate the development and innovation within the AIGC sector.
    1. On a technical level, the AIGC Measures encourage innovation across the supply chain and at all production stages, encompassing areas such as algorithm design, framework construction, chip manufacturing, and the development of supporting software platforms.
    2. Regarding infrastructure, the AIGC Measures foster collaborative efforts between all tiers of government and private enterprises to build robust AIGC infrastructure and public platforms for training data resources, aiming to enhance the sharing of computational resources.
    3. In terms of market engagement, the AIGC Measures call for cooperation from a broad array of organizations, both for-profit and non-profit, in areas such as technological innovation, data resource development, commercialization and application, and risk prevention in the AIGC field.
    4. Concerning resource investment, the AIGC Measures support the use of public data for algorithm training.
    5. For industrial support, the AIGC Measures promote enterprises' procurement of secure and reliable chips, software, tools, computational power, and data resources.

These provisions will provide consistent support for the industry and bolster China's competitive position in the global AIGC technology race in the coming years.

4. MAPPING THE REGULATORY COURSE FOR THE AIGC INDUSTRY

Compared with the draft version, the AIGC Measures place greater emphasis on the distinctive features of industry-specific regulation. As outlined in Article 16 of the AIGC Measures, regulatory bodies including the CAC, NDRC, MOE, MOST, MIIT, MPS, and NRTA are each tasked with strengthening the supervision of AIGC services within their respective jurisdictions. The relevant supervisory authorities are expected to adapt and refine their regulatory methods in line with the innovative development of AIGC, crafting appropriate rules or guidelines tailored to various categories and tiers of AIGC technology. This suggests a likely future shift towards more nuanced, industry-specific, and targeted supervision of AIGC services.

The industry-specific regulatory approach aligns with the technologically intensive nature of AIGC, allowing a diverse range of sectors to develop more precise and effective regulations, measures, and standards based on their specific needs. For instance, a key regulatory objective for AIGC applied in the news industry may be the prevention of AI-generated and propagated false news. In the case of the financial industry, it becomes imperative to maintain the objectivity and fairness of user profiling, as well as ensure business continuity in the event of system attacks.

Under this industry-specific approach, each regulatory authority is better positioned to understand and manage AIGC services within its jurisdiction, paving the way for the development of specific regulatory measures and guidelines for different industries. This targeted approach avoids the pitfalls of a one-size-fits-all regulatory methodology, preventing potential roadblocks in the overall development of AIGC services.

5. TRANSPARENCY AND THE CATEGORIZATION AND TIERING OF AIGC

Globally, the regulation of Artificial Intelligence is attracting considerable attention, with numerous countries actively engaging in exploratory efforts and practical applications. In the realm of AIGC-related laws and regulations, two primary themes emerge as common focus areas: algorithmic transparency and the categorization and tiering of AIGC.

Algorithmic transparency refers to the ability to reveal or explain the principles, logic, data, results, and other information involved in the design, training, optimization, and operation of an algorithm, allowing for effective supervision and evaluation. Algorithmic transparency is pivotal for regulating AIGC and promoting the sustainability of society at large. By facilitating a better understanding of the decision-making basis of the system, transparent algorithms bolster trust in and acceptance of AIGC technology among users and stakeholders. Furthermore, algorithmic transparency empowers regulatory bodies to verify and assess the compliance and fairness of AIGC technology, safeguarding against discriminatory, biased, or inappropriate practices. Transparent algorithms also stimulate innovation and advancement in AIGC technology, promoting system improvement through learning and error correction. The emphasis on algorithmic transparency is embodied in Article 4 of the AIGC Measures, which instructs Providers to adopt effective measures to improve transparency and increase the accuracy and reliability of AIGC services. The requirement of algorithmic registration in the AIGC Measures, carried over from the Internet Information Service Algorithmic Recommendation Management Provisions, further underscores their commitment to ensuring transparency.

The categorization and tiering of AIGC is an approach that addresses the diverse potential risks and impacts associated with AIGC technology and its applications. It involves stratifying those risks into different levels and applying corresponding regulatory measures. A regulatory framework built on this approach allows requirements and measures to be tailored to the specific application areas, risk levels, and technological maturity of AIGC systems. This ensures precision in regulatory action, avoiding the pitfalls of a 'one-size-fits-all' approach. Moreover, a categorized and tiered framework fosters innovation and development: it enables businesses and research institutions to plan and manage technological research and development more effectively, thereby reducing compliance costs and risks. The AIGC Measures echo this approach by explicitly advocating the categorization and tiering of AIGC, aligning with the categorized and tiered data protection requirements of the Data Security Law.

6. CONCLUSIONS

While AIGC services carry enormous potential and promising prospects, they also face a multitude of challenges and risks. The AIGC Measures, the first legislation in the People's Republic of China explicitly tailored to govern AIGC services, espouse an inclusive and prudent approach, striking a considered balance between promoting AIGC innovation and curtailing potential misuse. The rollout of the AIGC Measures not only lays a robust legal foundation and provides safeguards for enterprises, encouraging the sound growth and regulated use of AIGC, but also offers valuable insights and benchmarks for the global regulation of AIGC.
