COMPARATIVE GUIDE
27 March 2025

Artificial Intelligence Comparative Guide

AnJie Broad Law Firm

Contributor

AnJie Broad Law Firm is a full-service law firm with a wide range of practice areas. We are committed to delivering high-quality bespoke legal solutions to clients. AnJie Broad has extensive experience serving clients in practice areas such as Capital Market & Securities, Antitrust & Competition, Private Equity & Venture Capital, Intellectual Property, Dispute Resolution, Labour & Employment, Cross-border Investment & Acquisition, Insurance & Reinsurance, Maritime & Shipping, Banking & Finance, Energy, International Trade, Technology, Media & Telecommunications, Life Sciences & Healthcare, Private Wealth Management, Real Estate & Construction, Hotels, Resorts & Tourism, and Media, Games, Entertainment & Sports.
Jurisdiction: China

1 Legal and enforcement framework

1.1 In broad terms, which legislative and regulatory provisions govern AI in your jurisdiction?

China has rules relevant to AI but does not have a general AI law or regulation. Key rules include:

  • the Administrative Provisions on Algorithmic Recommendation in Internet-Based Information Services 2021, effective from 1 March 2022;
  • the Administrative Provisions on Deep Synthesis in Internet-Based Information Services 2022, effective from 10 January 2023;
  • the Opinions on Strengthening the Governance of Scientific and Technological Ethics 2022;
  • the Interim Measures for the Management of Generative Artificial Intelligence Services 2023;
  • the Measures for the Review of Science and Technology Ethics (Trial) 2023; and
  • the Measures for Labelling Artificial Intelligence-Generated Synthesised Content (Draft for Public Comments), released on 14 September 2024.

1.2 How is established or 'background' law evolving to cover AI in your jurisdiction?

China's AI governance framework can be traced back to the Regulations on the Security Assessment of Internet Information Services with Public Opinion Attributes or Social Mobilisation Capabilities, issued in 2018. Although primarily focused on internet information content governance, this was the first legal instrument to require a security assessment for:

  • information services with public opinion or social mobilisation attributes; or
  • the use of certain new technologies and applications.

Such security assessments have since become a key compliance obligation for various AI products in China.

Since 2021, China has been actively shaping the governance of AI. On 17 September 2021, the government issued the Guiding Opinions on Strengthening the Comprehensive Governance of Internet Information Service Algorithms, which introduced the concept of algorithm governance at the regulatory level for the first time. The document proposed establishing an algorithm governance mechanism and improving the regulatory system over a period of approximately three years.

Subsequently, the Cyberspace Administration of China (CAC) and other departments issued:

  • the 2021 Algorithmic Recommendation Provisions, which regulate recommendation algorithms;
  • the 2022 Deep Synthesis Provisions, which regulate deep synthesis; and
  • the 2023 GenAI Measures, which regulate generative AI (GenAI) services.

In addition, China's National Internet Security Standardisation Technical Committee, in collaboration with other regulatory authorities, has been systematically developing national and industry standards in the field of AI.

1.3 Is there a general duty in your jurisdiction to take reasonable care (like the tort of negligence in the United Kingdom) when using AI?

The Civil Code 2020 sets out the legal framework for tort liability. Article 120 states that where a person's civil law rights and interests are infringed due to a tort, that person is entitled to request the tortfeasor to bear tort liability. Article 1165 further clarifies that:

  • where a person is at fault for infringing the civil rights and interests of others, that person must assume tort liability; and
  • where a person is presumed by law to be at fault and cannot prove otherwise, that person must assume tort liability.

Based on the foregoing, it can be observed that for there to be a tort and tort liability, the following elements are required:

  • a person with legal rights or interests;
  • an infringement of those rights and interests; and
  • a person at fault or presumed to be at fault.

This general rule also applies to the use of AI.

1.4 For robots and other mobile AI, is the general law (eg, in the United Kingdom, the torts of nuisance and 'escape' and (statutory) strict liability for animals) applicable by analogy in your jurisdiction?

Currently, there are no laws or regulations in China that directly stipulate general principles of liability for robots and other mobile AI.

However, when AI services infringe the legal rights and interests of others, if the service provider has failed to fulfil a relevant duty of care, the courts may require that service provider to bear tort liability. For example, in (2024) Yue 0192 Min Chu 113, an AI platform generated an image containing infringing content. The service provider of this AI platform:

  • lacked a complaint reporting mechanism;
  • failed to provide potential risk warnings; and
  • did not include a prominent AI-generated label.

As a result, the court held that the service provider:

  • had not fulfilled its reasonable duty of care; and
  • was required to assume liability for the infringement.

1.5 Do any special regimes apply in specific areas?

In the financial sector, Chinese regulators emphasise that AI decisions should be explainable and transparent. On 8 November 2023, the People's Bank of China released the Guidance on Information Disclosure for Financial Applications based on Artificial Intelligence Algorithms. This guidance requires financial institutions to disclose relevant AI information when financial products and services using AI algorithms:

  • are first launched;
  • undergo significant risk events;
  • experience major changes; or
  • are discontinued.

The disclosed information must include details on:

  • the AI models used;
  • the mechanism of the algorithms; and
  • the data utilised in the algorithms.

The guidance also provides sample information disclosure reports in the appendix.

In the automotive sector, Chinese regulators take a cautious approach towards autonomous driving technology. On 31 December 2024, the Beijing municipal government issued the Regulations on Autonomous Vehicles in Beijing, China's first local regulations governing the use of autonomous vehicles. According to the regulations:

  • autonomous vehicles must undergo road testing, demonstration applications and safety assessments before they can apply for pilot road use;
  • the applicant must record and store operational information in accordance with relevant regulations and report it to the traffic authorities; and
  • if a malfunction occurs in an autonomous vehicle, the driver or safety operator must take measures to minimise the risk of accidents, such as:
    • manually taking control;
    • activating hazard lights;
    • reducing speed; or
    • driving the vehicle to a location that does not obstruct traffic.

1.6 Do any bilateral or multilateral instruments have relevance in the AI context?

China and several other countries have issued statements on the governance of artificial intelligence:

  • On 1 November 2023, China participated in the first AI Safety Summit and jointly signed the Bletchley Declaration with 28 other countries.
  • On 7 May 2024, China and France issued a Joint Statement on Artificial Intelligence and Global Governance.
  • On 14 May 2024, China and the United States held the first meeting of the AI Government Dialogue.
  • On 16 May 2024, China and Russia issued the Joint Statement on Deepening the Comprehensive Strategic Partnership of Coordination in the New Era, announcing plans to collaborate in areas such as:
    • AI;
    • cybersecurity;
    • information security; and
    • data security.
  • On 11 February 2025, China participated in the AI Action Summit and signed the Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet.

1.7 Which bodies are responsible for enforcing the applicable laws and regulations? What powers do they have?

The key government bodies involved in AI regulation in China include the following:

  • The CAC oversees internet information content management and serves as the lead regulator for:
    • the 2021 Algorithmic Recommendation Provisions;
    • the 2022 Deep Synthesis Provisions; and
    • the 2023 GenAI Measures.
  • According to these regulations, the CAC has the authority to:
    • issue warnings;
    • issue public criticism;
    • order rectifications within a specified timeframe; and
    • if rectification is refused or the situation is severe:
      • order the suspension of related services; and
      • impose a fine ranging from RMB 10,000 to 100,000.
  • The Ministry of Science and Technology is responsible for formulating national plans and policies to:
    • drive development;
    • promote national innovation;
    • coordinate the research and development of certain technologies; and
    • encourage China-bound technology transfers.
  • The Ministry of Industry and Information Technology is responsible for:
    • guiding the construction of information systems;
    • promoting the development of major technological equipment;
    • encouraging indigenous innovation; and
    • safeguarding information security.
  • The China National Intellectual Property Administration is responsible for:
    • organising the implementation of national IP rights strategies;
    • protecting IP rights; and
    • registering trademarks, patents and geographical indications.
  • The State Administration for Market Regulation, in the context of AI, is responsible for:
    • market supervision;
    • anti-monopoly and anti-unfair competition enforcement;
    • trademark and patent enforcement;
    • supervision of advertising activities; and
    • quality inspection, certification and accreditation.

1.8 What is the general regulatory approach to AI in your jurisdiction?

The 2021 Algorithmic Recommendation Provisions, the 2022 Deep Synthesis Provisions and the 2023 GenAI Measures require service providers offering services with "public opinion attributes or social mobilisation capabilities" to file their algorithms and conduct algorithm security self-assessments. The CAC regularly publishes a list of algorithms and GenAI products that have been filed.

Local branches of the CAC regularly inspect AI service providers and impose administrative penalties on entities violating the regulations. For example, on 22 July 2024, the Chongqing CAC reported that it had acted against three illegal GenAI services. The Chongqing CAC found that certain websites were providing GenAI services without conducting algorithm security assessments and filings. It conducted interviews with the executives of the operating entities and ordered them to immediately cease providing certain GenAI services.

In addition, regulators conduct special enforcement actions targeting algorithms and AI. For example, on 24 November 2024, the CAC, together with four other departments, launched a special action titled "Clear and Bright: Governance of Typical Issues in Online Platform Algorithms", requiring companies to conduct self-inspections and corrections on 21 typical issues. Thereafter, the CAC will check the progress of corrective actions taken by these companies.

2 AI market

2.1 Which AI applications have become most embedded in your jurisdiction?

In recent years, many AI applications have emerged in China, including the following:

  • Generative conversational large models: Tongyi Qianwen, Wenxin Yiyan, Doubao, Kimi, DeepSeek.
  • Intelligent driving products: NIO, Li Auto.
  • AI voice recognition products: iFlytek.
  • AI facial recognition products: SenseTime, Hikvision.

2.2 What AI-based products and services are primarily offered?

A wide range of AI-based products and services have been offered across multiple sectors, such as the following:

  • Tencent Medpedia provides virtual assistant services that can answer medical questions and provide relevant health information.
  • E-commerce platforms such as Taobao use AI to offer personalised recommendations and optimise search results, enhancing shopping experiences.
  • Intelligent customer service robots, such as those used by JD, provide efficient pre-sale and after-sale support.
  • AI-driven systems for autonomous driving and intelligent navigation, such as the Huawei Advanced Driving System, are being developed and deployed to improve transportation efficiency and safety.
  • Huawei Cloud and Alibaba Cloud offer AI-as-a-service solutions that democratise access to advanced AI capabilities.

2.3 How are AI companies generally structured?

The shareholding structures of AI companies in China range from tech giants with diversified ownership to privately held startups backed by venture capital and strategic investors. Government-backed entities also play a significant role in driving AI research and development.

In addition to the ordinary governance bodies established in the tech companies, according to the Measures for the Review of Science and Technology Ethics (Trial) 2023, AI companies must establish science and technology ethics review committees. For example, SenseTime established its AI ethics and governance committee in early 2020 and has developed internal regulations such as:

  • the management charter of the AI ethics and governance committee; and
  • the SenseTime Group Ethics Governance System.

On 2 September 2022, Alibaba Group announced the establishment of its technology ethics governance committee, introducing seven external advisory members to strengthen tripartite supervision.

2.4 How are AI companies generally financed?

Chinese AI companies are generally financed through a variety of sources, including:

  • government support;
  • venture capital (VC);
  • private equity; and
  • strategic investments from big tech companies.

The Chinese government plays a significant role in financing AI companies through various initiatives and funds. Most AI firms with both government and private VC funding received government investment first, serving as a signal for private VC funds to follow.

Additionally, big tech companies in China, such as Alibaba, Tencent and Baidu, are heavily investing in AI. These companies have significantly increased their capital expenditures on AI, focusing on investment in AI infrastructure in the face of US sanctions aimed at curbing China's advancements in AI.

2.5 To what extent is the state involved in the uptake and development of AI?

The Chinese government guides the development of the AI industry through various development plans and guiding documents. Key documents include the following:

  • New Generation Artificial Intelligence Development Plan (2017): This document explicitly elevates AI development to a national strategy, proposing a three-step strategic goal:
    • By 2020, AI technologies and applications should be synchronised with the world's advanced levels;
    • By 2025, significant breakthroughs in AI's fundamental theory should be achieved, with some technologies and applications reaching world-leading levels; and
    • By 2030, China aims to lead globally in AI technology and applications, positioning itself as a major hub for AI innovation.
  • Guidelines for Building the National New Generation Artificial Intelligence Standard System (2020): This document sets the target to establish an initial AI standard system by 2023.
  • Guiding Opinions on Accelerating Scenario Innovation and Promoting High-Level AI Applications for High-Quality Economic Development (2022): This initiative encourages local governments and organisations to accelerate AI application scenarios, driving high-quality economic development.
  • Implementation Opinions on Promoting Innovation and Development of Future Industries (2024): This document outlines the use of AI, advanced computing and other technologies to accurately identify and cultivate high-potential future industries, fostering long-term economic competitiveness.

In addition, some local governments have issued regional policy documents to promote the development of the AI industry, such as the Shenzhen Special Economic Zone Artificial Intelligence Industry Promotion Regulations, which provide tailored incentives and frameworks to accelerate AI innovation at the municipal level.

3 Sectoral perspectives

3.1 How is AI currently treated in the following sectors from a regulatory perspective in your jurisdiction and what specific legal issues are associated with each: (a) Healthcare; (b) Security and defence; (c) Autonomous vehicles; (d) Manufacturing; (e) Agriculture; (f) Professional services; (g) Public sector; and (h) Other?

(a) Healthcare

  • Regulatory treatment: The National Health Commission issued the Reference Guide on Artificial Intelligence Application Scenarios in the Health Sector in November 2024, which outlines potential applications of AI in various aspects of healthcare, including:
    • medical services management;
    • medical and pharmaceutical services;
    • medical insurance services;
    • traditional Chinese medicine management services;
    • hospital management;
    • primary health services;
    • public health services;
    • geriatric and childcare services;
    • health industry development; and
    • medical education and research.
  • Specific legal issues:
    • Data privacy and security: AI systems in healthcare frequently process sensitive personal information, raising concerns about compliance with the 2021 Personal Information Protection Law and the 2021 Data Security Law.
    • Liability for medical errors: Determining responsibility for errors or adverse outcomes resulting from AI-assisted diagnoses or treatments, which may involve complex interactions between healthcare providers and AI developers.

(b) Security and defence

  • Regulatory treatment: While there are no dedicated AI laws specifically for security and defence, relevant laws such as the 2016 Cybersecurity Law and the 2021 Data Security Law apply.
  • On 21 December 2023, the Ministry of Commerce and the Ministry of Science and Technology issued the updated Catalogue of Technologies Prohibited or Restricted from Export, which explicitly restricts the export of "data-driven personalised information push service technologies (such as user preference learning technology based on massive data continuous training and optimisation, real-time user preference perception technology, information content feature modelling technology, user preference and content matching analysis technology, large-scale distributed real-time computing technology supporting recommendation algorithms, etc)".
  • Specific legal issues:
    • National security risks: AI applications in security and defence must not pose risks to national security, as prohibited by the 2016 Cybersecurity Law and the 2021 Data Security Law.
    • Ethical considerations: The use of AI in military or surveillance contexts raises ethical concerns, which may be addressed through guidelines such as the Measures for the Review of Science and Technology Ethics (Trial) 2023.
    • Export controls: Restrictions on exporting AI technologies with dual-use (civilian and military).
    • Data protection: Ensuring the security and confidentiality of sensitive data used in AI systems for security and defence purposes.

(c) Autonomous vehicles

  • Regulatory treatment: The Norms on the Administration of Road Testing and Demonstration Application of Intelligent Connected Vehicles (for Trial Implementation) 2021 provide specific regulations for autonomous vehicles, allocating compliance obligations and liability for accidents.
  • Specific legal issues:
    • Product liability: Determining who is liable for accidents or malfunctions involving autonomous vehicles, which may involve:
      • manufacturers;
      • software developers; and
      • service providers.
    • Data privacy: Addressing the collection and use of personal data by autonomous vehicles, in compliance with the 2021 Personal Information Protection Law and the 2021 Data Security Law.
    • Safety and reliability: Ensuring that AI systems in autonomous vehicles meet safety standards and can reliably operate in various conditions.

(d) Manufacturing

  • Regulatory treatment: AI is increasingly integrated into manufacturing processes, but specific regulations are still evolving. Made in China 2025 promotes AI integration in smart factories and industrial automation.
  • Specific legal issues:
    • Data security: Protecting industrial data used in AI systems from unauthorised access or breaches, in line with the 2021 Data Security Law.
    • IP risks: Protecting proprietary AI algorithms from infringement or reverse engineering.
    • Labour and employment: Addressing the impact of AI on jobs and working conditions in the manufacturing sector, as outlined in labour laws and the 2023 GenAI Measures.

(e) Agriculture

  • Regulatory treatment: While there are no dedicated AI laws specifically for agriculture, relevant laws such as the 2016 Cybersecurity Law and the 2021 Data Security Law apply.
  • Specific legal issues:
    • Data security: Protecting farmers' personal data and agricultural data used in AI systems, in compliance with the 2021 Personal Information Protection Law and the 2021 Data Security Law.
    • Intellectual property: Addressing issues related to the ownership and use of AI-generated agricultural data and models.
    • Ethical considerations: Ensuring that AI in agriculture does not negatively impact the environment or rural communities, in line with ethical guidelines.

(f) Professional services

  • Regulatory treatment:
    • The Guidelines on Code of Conduct for Responsible Research 2023:
      • provide a set of scientific and ethical norms for researchers and institutions; and
      • state that AI should not be used directly to generate project application materials, nor be listed as a collaborator.
    • The Guiding Opinions on Regulating the Asset Management Business of Financial Institutions 2018 contain:
      • requirements for using AI to deliver financial advice;
      • reporting requirements for financial institutions;
      • customer disclosure obligations; and
      • obligations to cure or terminate defective AI systems.
    • The Opinions of the Supreme People's Court on Regulating and Strengthening the Judicial Application of AI 2022 deal with the use of AI by the Chinese judiciary.
  • Specific legal issues:
    • Professional liability: Determining responsibility for errors or omissions resulting from AI-assisted professional services, which may involve complex interactions between professionals and AI developers.
    • Ethical considerations: Ensuring that AI in professional services adheres to professional ethical standards and does not compromise professional integrity.

(g) Public sector

  • Regulatory treatment: The Opinions of the Supreme People's Court on Regulating and Strengthening the Judicial Application of AI 2022 deal with the use of AI by the Chinese judiciary.
  • Specific legal issues:
    • Transparency and accountability: Ensuring that AI systems used in public services are transparent and accountable, in line with ethical guidelines and regulations such as the 2023 GenAI Measures.
    • Bias in public services: Ensuring that AI systems (eg, welfare allocation) do not discriminate against marginalised groups.
    • Data misuse: Preventing unauthorised use of citizens' data.

(h) Other

In China, online services involving information or transactions may require a specific filing with or licence from the telecommunications authorities, including but not limited to:

  • an internet content provider (ICP) filing for a non-commercial website;
  • an ICP licence for operating commercialised websites;
  • an electronic data interchange licence for handling electronic business documents; and
  • an internet data centre licence for managing data centres.

GenAI companies should assess whether their services require any of these filings or licences to ensure compliance. Given the close relationship between GenAI and telecommunications services, GenAI service providers often need to obtain one or more of these licences or permits from the telecommunications authorities.

4 Data protection and cybersecurity

4.1 What is the applicable data protection regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The main data protection regulations in China are as follows:

  • The 2021 Personal Information Protection Law governs personal information processing in general, including automated processing by AI systems.
  • The 2021 Data Security Law governs how data in the wider sense is secured and managed, including data that is ingested and generated by AI systems.

4.2 What is the applicable cybersecurity regime in your jurisdiction and what specific implications does this have for AI companies and applications?

The 2016 Cybersecurity Law is the primary source of cybersecurity law in China. In the context of AI, it:

  • governs network and network operators that use or provide AI services; and
  • requires critical network equipment and specialised cybersecurity products to meet relevant Chinese national standards (Article 23).

5 Competition

5.1 What specific challenges or concerns does the development and uptake of AI present from a competition perspective? How are these being addressed?

Big data-enabled price discrimination (sometimes called 'big data swindling'): Some companies use big data and algorithmic technologies to profile consumers and offer them different prices for the same goods or services. Several Chinese laws address this practice. Specifically:

  • Article 24 of the 2021 Personal Information Protection Law provides that no unreasonable differential treatment of individuals in terms of transaction prices or other transaction terms may be implemented;
  • Article 21 of the 2021 Algorithmic Recommendation Provisions provides that where a recommendation algorithm-based service provider sells goods or provides services to consumers, it:
    • should protect consumers' right to fair trading; and
    • should not, according to consumers' preferences, trading habits and other characteristics, use algorithms to carry out unreasonable differential treatment and commit other illegal acts in terms of trading conditions such as trading prices; and
  • Article 12 of the Interim Provisions on Network Anti-Unfair Competition provides that operators must not use technical means to influence user choice or other means to impede or destroy the normal operation of network products or services legally provided by other business operators.

Self-preferencing: Some business operators may use their strong market advantages, such as algorithms and data sets, to:

  • impose unreasonable restrictions on competitors and downstream business operators; or
  • obtain unreasonable concessions.

Business operators are prohibited from exploiting algorithms and other advantages to conduct such self-preferencing under Chinese law. Specifically:

  • under Article 15 of the 2021 Algorithmic Recommendation Provisions, recommendation algorithm-based service providers are prohibited from exploiting algorithms to:
    • impose unreasonable restrictions on other internet-based information service providers; or
    • hinder or destroy the normal operation of internet-based information services provided by them;
  • Article 20 of the Interim Provisions on Network Anti-Unfair Competition further clarifies that operators must not use technical means to unreasonably provide different trading conditions to counterparties; and
  • Article 35 of the E-commerce Law provides that an operator of an e-commerce platform should not take advantage of the service agreement, transaction rules, technologies or other means to:
    • impose unreasonable restrictions over or add unjustified conditions to:
      • the deals and prices concluded on the platform by business operators; or
      • their own deals with other business operators; or
    • charge operators on its platform any unreasonable fees.

6 Employment

6.1 What specific challenges or concerns does the development and uptake of AI present from an employment perspective? How are these being addressed?

AI is commonly used for employee screening, recruitment and appraisal. These uses are governed by the 2021 Personal Information Protection Law, which requires personal information processors – which in the workplace will likely be the employer – to ensure that:

  • automated decision making is transparent, fair and impartial; and
  • there is no unreasonable differential treatment of individuals in terms of transaction terms.

If an automated decision materially impacts an individual's rights or interests (eg, hiring, promotion or termination), the affected person may:

  • demand an explanation; and
  • refuse decisions made solely through automated means.

In addition, China makes efforts to protect the rights and interests of workers on internet platforms (eg, delivery couriers). Under the "Clear and Bright: Governance of Typical Issues in Online Platform Algorithms" enforcement campaign, the regulators have set specific requirements for:

  • algorithmic outcome fairness;
  • rule transparency; and
  • the appeal channels of such internet platforms.

7 Data manipulation and integrity

7.1 What specific challenges or concerns does the development and uptake of AI present with regard to data manipulation and integrity? How are they being addressed?

The Basic Safety Requirements for Generative Artificial Intelligence Services (TC260-003), a recommended national standard, sets strict requirements for the source of AI training data:

  • Businesses must verify the training data that they collect to ensure that the content does not contain more than 5% illegal or harmful information.
  • Methods such as keyword filtering, classification models and manual spot checks should be used to thoroughly filter out any illegal or harmful content from the training data.
  • The training data used must:
    • have a legitimate source; and
    • not infringe the rights of others.
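As a purely illustrative sketch (the keyword list, corpus and helper names below are hypothetical and are not taken from TC260-003, which prescribes the 5% ceiling but not any particular implementation), the spot-check calculation behind the ceiling can be expressed as:

```python
# Hypothetical sketch of a TC260-003-style training-data spot check.
# BLOCKED_KEYWORDS and the corpus below are placeholder examples only.

BLOCKED_KEYWORDS = {"example_banned_term"}  # hypothetical keyword list


def is_flagged(sample: str) -> bool:
    """Keyword filter: flag a sample containing any blocked keyword."""
    text = sample.lower()
    return any(kw in text for kw in BLOCKED_KEYWORDS)


def harmful_ratio(corpus: list[str]) -> float:
    """Proportion of flagged samples in a spot-checked corpus."""
    if not corpus:
        return 0.0
    return sum(is_flagged(s) for s in corpus) / len(corpus)


def passes_tc260_check(corpus: list[str], ceiling: float = 0.05) -> bool:
    """True if flagged content does not exceed the 5% ceiling."""
    return harmful_ratio(corpus) <= ceiling


corpus = ["clean text", "more clean text", "contains example_banned_term"]
print(harmful_ratio(corpus))       # one of three samples flagged (~33%)
print(passes_tc260_check(corpus))  # False - exceeds the 5% ceiling
```

In practice the standard contemplates combining such keyword filtering with classification models and manual sampling, so a real pipeline would be considerably more involved than this arithmetic.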

Additionally, Chinese laws require AI service providers to append clear labels to the information content they generate, such as images and videos, in a way that does not hinder the user's experience, in order to inform the public about the synthetic nature of the content and avoid misunderstandings. Relevant legal regulations include:

  • the Cybersecurity Standard Practice Guide – Content Labelling Methods for Generative Artificial Intelligence Services, effective from 25 August 2023;
  • the Measures for Labelling Artificial Intelligence-Generated Synthesised Content (Draft for Public Comments); and
  • the Cybersecurity Technology – Labelling Methods for Content Generated by Artificial Intelligence (Draft for Public Comments).
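By way of a hedged illustration (the label wording, metadata keys and function names below are hypothetical examples, not formats prescribed by these rules), appending an explicit label to generated output might be sketched as:

```python
# Hypothetical sketch of labelling AI-generated content. The label text
# and metadata fields are illustrative placeholders; the rules above
# require a clear label but do not mandate this particular format.

AI_LABEL = "[AI-generated content]"


def label_generated_text(text: str) -> str:
    """Prepend a prominent, visible AI-generated label to text output."""
    return f"{AI_LABEL}\n{text}"


def label_generated_metadata(meta: dict) -> dict:
    """Record the synthetic nature of the content in its metadata,
    complementing the visible label with an implicit (machine-readable) one."""
    return {**meta, "ai_generated": True, "label": AI_LABEL}


out = label_generated_text("sample model output")
print(out.startswith(AI_LABEL))  # True - label precedes the content
```

The draft labelling measures distinguish explicit labels (visible to users) from implicit ones (embedded in file metadata); a compliant service would typically apply both, as the two functions above separately illustrate.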

8 AI best practice

8.1 There is currently a surfeit of 'best practice' guidance on AI at the national and international level. As a practical matter, are there one or more particular AI best practice approaches that are widely adopted in your jurisdiction? If so, what are they?

Guidance on AI development and use comes in the form of legally binding rules and national standards, which include:

  • mandatory ethics review procedures (Measures for the Review of Science and Technology Ethics (Trial) 2023; Network Security Standard Practice Guide – Artificial Intelligence Ethics Security Risk Prevention Guidelines (TC260-PG-20211A));
  • explicit training data requirements (2023 GenAI Measures);
  • labelling requirements for AI-generated content (2022 Deep Synthesis Provisions);
  • holistic security requirements (ie, internal security, security for users, security for the public, national security) that describe concrete key performance indicators (Basic Safety Requirements for Generative Artificial Intelligence Services (TC260-003); AI Safety Governance Framework (V1.0)); and
  • regulatory filing requirements (2021 Algorithmic Recommendation Provisions, 2022 Deep Synthesis Provisions and 2023 GenAI Measures).

Moreover, many existing laws and regulations are technologically neutral and directly applicable to AI, such as the 2021 Personal Information Protection Law, which imposes legal obligations on personal information processors regardless of the technical measures that they use to process personal information.

Regulators, such as the Cyberspace Administration of China, are often open to discussing compliance issues. It is recommended to consult with these regulators when navigating areas of legal ambiguity.

8.2 What are the top seven things that well-crafted AI best practices should address in your jurisdiction?

  • Explainability;
  • Bias and discrimination;
  • Unreliable output;
  • Adversarial attack;
  • Illegal collection and use of data;
  • Improper content and the poisoning of training data; and
  • Data leakage.

8.3 As AI becomes ubiquitous, what are your top tips to ensure that AI best practice is practical, manageable, proportionate and followed in the organisation?

We would recommend that businesses establish a comprehensive AI governance framework internally to ensure that the operation and use of AI comply with the law. This AI governance framework should include the following:

  • Principles of AI governance: Businesses should clearly define the type of AI that they intend to develop or deploy and use this to set their core AI governance principles. For example, Microsoft has set responsible AI development as its governance objective and has established six principles – including accountability, fairness and transparency – to support this core AI governance principle.
  • AI policies: Businesses may create standardised workflows and specific regulations for various stages of AI development, deployment, use and operation, integrating legal and compliance considerations from the design phase.
  • Impact assessment: Businesses should conduct regular or ad hoc impact assessments of AI, particularly when significant changes occur. Chinese regulators require businesses to:
    • review algorithm mechanisms;
    • conduct technology ethics reviews;
    • verify the legality of AI-generated outputs; and
    • consider the impact of AI on:
      • personal information;
      • IP rights; and
      • other interests.
    Businesses can refer to these legal requirements when developing their own impact assessment tools and content.
  • Robust organisational structure: Businesses should:
    • set up dedicated AI governance or compliance departments at the management level; and
    • allocate sufficient personnel.
    For example, we understand that Microsoft has established:
    • an AI governance committee;
    • an AI governance office; and
    • responsible AI employee champions.

9 Other legal issues

9.1 What risks does the use of AI present from a contractual perspective? How can these be mitigated?

Risks can arise from the ambiguity of specific words or terms. For instance, contracts involving AI may lack clear definitions, particularly regarding technical matters. This can be dealt with by:

  • using multidisciplinary teams; and
  • adopting the definitions provided in national standards on AI to ensure that a common lexicon is used to communicate the right information.

Also, issues such as the following are relatively novel and could lead to new and unforeseen kinds of disputes:

  • the ownership of AI-generated content;
  • AI copyright infringement;
  • unlawful training data; and
  • data usage rights.

To mitigate such risks, AI service providers should clearly address such matters in their service agreements with users.

9.2 What risks does the use of AI present from a liability perspective? How can these be mitigated?

In recent years, AI litigation has mostly involved intellectual property, with the primary issues being:

  • AI using copyrighted works owned by others during training; and
  • AI generating works that infringe others' IP rights.

In response, Chinese regulators have outlined a series of measures for businesses to avoid IP liability in the Basic Safety Requirements for Generative Artificial Intelligence Services (TC260-003), including:

  • designating a responsible person for IP related to training data and generated content, and establishing an IP management strategy;
  • identifying major IP infringement risks in the training data before beginning the training process;
  • setting up channels for filing complaints and reports regarding IP issues;
  • informing users of IP-related risks in the user service agreement, and establishing the responsibilities and obligations regarding IP issue identification with users;
  • promptly updating IP-related strategies based on national policies and third-party complaints;
  • publicly disclosing summary information related to IP in the corpus; and
  • supporting third parties in querying the use of the corpus and relevant IP issues via the complaint and reporting channels.

One litigation in China focused on the fundamental personality rights of humans and, in particular, the right to one's voice. That case, which involved the AI cloning of a voice actor's voice, highlighted that:

  • infringement of voice rights can occur if an AI-generated voice is identifiable as that of a specific individual; and
  • authorisation to create a voice clone must be explicit.

9.3 What risks does the use of AI present with regard to potential bias and discrimination? How can these be mitigated?

During the algorithm design and training process, personal biases may be introduced, either intentionally or unintentionally. Additionally, poor-quality datasets can lead to biased or discriminatory outcomes in the algorithm's design and outputs, including discriminatory content related to:

  • ethnicity;
  • religion;
  • nationality; and
  • region.

In this regard, Article 24 of the 2021 Personal Information Protection Law provides as follows:

  • When personal information is used by personal information processors for automated decision making, the transparency of the decision-making process and the fairness and impartiality of the results must be ensured.
  • Unreasonable differential treatment of individuals regarding transaction prices or other transaction terms is prohibited.

Additionally, discriminatory content is considered illegal and harmful information under Chinese law. AI service providers are prohibited from:

  • using data containing discriminatory content as training data for AI; or
  • using discriminatory content to create user profiles and push information based on such content.

10 Innovation

10.1 How is innovation in the AI space protected in your jurisdiction?

In China, AI innovations can be protected through patents or software copyrights if certain conditions are met. However, Chinese patent law does not protect abstract mathematical formulae. According to the Patent Examination Guidelines (2023), when reviewing invention patents related to AI, the focus is on the solution being proposed. If algorithms and data structures are organically integrated to form an algorithmic model, and the algorithmic model is implemented using certain machinery or devices in business processes:

  • such devices can be protected by patents; and
  • the software code used in these devices can be protected by software copyrights.

Additionally, datasets used for AI training can receive IP protection in judicial practice. For example, a Beijing company specialising in providing high-quality voice data for AI businesses sued a Shanghai company for copyright infringement and unfair competition over the unauthorised use of its Mandarin voice data. The Beijing Internet Court and the Beijing IP Court ruled in favour of the plaintiff, recognising that the plaintiff's dataset had obtained a data IP registration certificate, which served as preliminary evidence of its lawful data ownership. The defendant was ordered to pay RMB 102,300 in damages.

Another hot topic is whether AI-generated content can be protected by copyright. In 2023, the Beijing Internet Court recognised that AI-generated artwork can be copyright protected, even if it was created using AI software (Stable Diffusion). The court found that the plaintiff had:

  • selected the prompts and parameters used to generate the images;
  • refined them through iterative processes; and
  • contributed to the overall aesthetic and composition of the images.

As a result, the court determined that the defendant's use of the plaintiff's images without permission constituted copyright infringement (2023 Jing 0491 Min Chu 11279).

10.2 How is innovation in the AI space incentivised in your jurisdiction?

The Chinese government has introduced various supportive policies for the development of the AI industry, including but not limited to the following:

  • Establishing government funds to invest in the AI industry: For example, in October 2024, Shanghai Guotou Company, together with several tech companies, set up the Shanghai Artificial Intelligence Ecological Fund. With a total value of RMB 10 billion, the fund will focus on AI infrastructure, foundational and vertical large models and next-generation edge-side AI applications along the large model industry chain.
  • Building AI industry parks: In recent years, AI industry parks have been established in various regions of China. For example, Shanghai has developed the country's first AI-themed industrial cluster in Zhangjiang, which has attracted:
    • international giants such as IBM, Microsoft and Infineon; and
    • domestic innovation enterprises such as CloudWalk Technology, Megvii Technology and numerous research institutions.
  • Providing subsidies: Local governments offer various subsidies to AI enterprises within their jurisdictions to support their development. For instance:
    • the Futian District of Shenzhen provides up to RMB 10 million in funding support for large model enterprises registered under generative AI; and
    • the Fujian provincial government offers:
      • subsidies of up to 50% of the cost for AI companies purchasing computing power services; and
      • funding of up to RMB 8 million for AI research projects at universities.

11 Talent acquisition

11.1 What is the applicable employment regime in your jurisdiction and what specific implications does this have for AI companies?

China's labour regime is characterised by a comprehensive legal framework that protects employees' rights while ensuring fair employment practices for enterprises. To avoid legal risks, employers must comply with:

  • contractual obligations;
  • social insurance and housing fund contributions; and
  • proper termination procedures.

The employment regime in China has specific implications for AI companies, as follows:

  • Given the short project cycles and evolving technology requirements, AI companies often prefer fixed-term contracts (one to three years). However, under the Labour Contract Law, once an employee has completed two consecutive fixed-term labour contracts, any renewal must generally be an open-ended (indefinite) contract, which can limit flexibility.
  • Under Chinese law, an employer may require its employees to comply with non-compete obligations after they leave the company, provided that these are reasonable in:
    • duration (no more than two years); and
    • compensation (at least 30% of the employee's average salary over the preceding 12 months).
  • This provides legal grounds for AI companies to protect their innovations.
  • Cities have provided many talent acquisition policies. For example, Suzhou has announced measures to support AI talent development, including substantial project funding and housing subsidies, attracting more AI talent and indirectly easing AI companies' labour expenses.

It is recommended that AI companies establish a dedicated labour compliance team to:

  • identify and mitigate labour-related risks; and
  • strategically leverage regional policy incentives to attract more talent.

11.2 How can AI companies attract specialist talent from overseas where necessary?

  • Offer competitive compensation and career growth: AI companies:
    • should provide compensation aligned with global standards, including high salaries and stock options, to attract AI talent; and
    • can partner with global institutions to host cutting-edge AI projects or R&D initiatives to access academic talent and provide a clear career path for recruits.
  • Leverage government incentives: AI companies should tap into government programmes which relax immigration rules and offer housing subsidies and project funding for foreign professionals in the AI sector.
  • Promote remote work programmes and relocation support programmes: AI companies should:
    • offer remote work options to attract talent who prefer flexible arrangements; and
    • implement programmes to support international hires with relocation assistance, cultural adaptation resources and legal guidance.

12 Trends and predictions

12.1 How would you describe the current AI landscape and prevailing trends in your jurisdiction? Are any new developments anticipated in the next 12 months, including any proposed legislative reforms?

Many scholars and experts have suggested that China should formulate a comprehensive AI law. Although the Chinese government has issued no official draft, preliminary drafts have been circulating in academic circles since early 2024. For example:

  • on 19 March 2024, experts from seven universities jointly published the Artificial Intelligence Law (Draft by Scholars); and
  • on 16 April 2024, institutions such as the Law Institute of the Chinese Academy of Social Sciences released the Artificial Intelligence Demonstration Law 2.0 (Draft by Experts).

Both drafts focus on key issues in AI governance and propose governance solutions.

The State Council's Legislative Work Plans for both 2023 and 2024 mentioned that a draft AI law was scheduled to be submitted to the Standing Committee of the National People's Congress (NPC) for review. However, the Standing Committee of the NPC has not yet announced any formal legislative plans for a comprehensive AI law.

In terms of national standards, on 5 June 2024 the Ministry of Industry and Information Technology, in collaboration with other authorities, released the Guidelines for the Construction of the National Artificial Intelligence Industry Comprehensive Standardisation System (2024 Edition), which clearly introduce a plan to:

  • establish over 50 new national and industry standards by 2026;
  • enable more than 1,000 companies to implement and promote standards;
  • participate in the development of over 20 international standards; and
  • promote the globalisation of the AI industry.

On 26 January 2025, the National Internet Security Standardisation Technical Committee released the Artificial Intelligence Security Standard System (V1.0). This document:

  • systematically outlines the framework of standard documents in the field of AI security governance; and
  • lists the standards that still need to be developed in the future.

13 Tips and traps

13.1 What are your top tips for AI companies seeking to enter your jurisdiction and what potential sticking points would you highlight?

Our top tips are as follows:

  • Comprehensive compliance with existing regulations:
    • Strictly adhere to China's core rules, such as:
      • the 2021 Algorithmic Recommendation Provisions;
      • the 2022 Deep Synthesis Provisions; and
      • the 2023 GenAI Measures.
    • Prioritise algorithm and large model filings with the Cyberspace Administration of China, especially if your service has "public opinion attributes or social mobilisation capabilities".
  • Data security and privacy protection:
    • Ensure that training data sources are lawful and implement measures to enhance data quality and security.
    • Provide opt-out mechanisms for personalised recommendations and avoid discriminatory pricing practices.
  • Content governance:
    • Label AI-generated content clearly and embed hidden metadata to trace synthetic content.
    • Establish robust content moderation systems to filter illegal or harmful information and retain logs for regulatory audits.
  • IP protection: Avoid training models on copyrighted material without authorisation.
  • Ethics review: Establish an internal AI ethics committee to assess risks and align with national guidelines.

Potential sticking points to note include the following:

  • Ambiguity in regulatory scope: The determination of whether a service qualifies as having "public opinion attributes or social mobilisation capabilities" remains subjective, increasing compliance uncertainty.
  • Complex filing requirements: Algorithm and large-model filings demand significant technical and legal resources, including server localisation for foreign providers.
  • IP and liability risks: Balancing innovation with IP infringement risks, especially in generative AI, could lead to costly disputes.
  • Cross-border data transfers: Compliance with China's strict data localisation rules may complicate global operations.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
