2 September 2024

Investing In AI In India (Part 2): Tracking The Regulatory Landscape

This note, the second of a multi-part series on investing in the Indian artificial intelligence ("AI") sector, outlines some of the key regulatory developments in AI in the country.

Prospective investors in Indian AI companies should familiarize themselves with the government's initiatives in AI regulation and the direction of future regulation. For a discussion on the challenges with respect to AI regulation in India, see our note here.

It is important to keep in mind that India's approach to AI governance may change in the future, given the rapidly evolving nature of technology as well as the country's dynamic regulatory trajectory so far. For an overview of India's initiatives until December 2023 with respect to regulating AI, see our note here.

Key Developments in 2024

Financial allocation for the IndiaAI mission

On March 7, 2024, it was reported that the Indian government had approved an allocation of over USD 1.24 billion for a national program aimed at building a comprehensive ecosystem to foster AI innovation (the "IndiaAI Mission"). The funding is intended, among other things, to catalyze various components of the IndiaAI Mission through public-private partnerships, such as initiatives on compute capacity, the India Datasets Platform ("IDP"), future skills, start-up financing, and research and development ("R&D") on AI safety.

For a discussion on India's proposed digital governance framework, including in respect of the IDP, see our note here.

Digital competition law

  • On March 12, 2024, India's Ministry of Corporate Affairs ("MCA") introduced a draft bill on digital competition law (the "Draft Bill") for public consultation. Among other things, the Draft Bill is aimed at regulating 'systemically significant digital enterprises' ("SSDEs," such as Alphabet, Amazon, Apple, Meta and Microsoft). Modeled on the European Union's Digital Markets Act, the Draft Bill prohibits SSDEs from engaging in certain practices that may be regarded as impeding competition, such as restricting third-party applications, tying and bundling, and favoring their own products or services or those of related parties.
  • The Draft Bill forms part of a report ("MCA Report") submitted by a committee on digital competition law, which was set up in February 2023 to examine the need for a separate law on competition in digital markets.
  • The MCA Report emphasized the need to identify significant digital enterprises, including SSDEs, in order to enable swifter regulatory responses. In addition, the MCA Report acknowledged that the proposed "Digital India Act" is expected to regulate digital enterprises, including AI-based platforms.
  • The ambit of the Draft Bill may also include AI-driven digital platforms, potentially empowering the Competition Commission of India ("CCI") to selectively regulate such enterprises ex-ante (i.e., through regulatory intervention before anti-competitive conduct occurs), including with respect to alleged anti-competitive practices by 'Big Tech' companies, as well as entities with significant market presence and/or the ability to influence the Indian digital market.
  • According to a government press release dated April 22, 2024, the CCI will launch a market study on AI and competition to "understand the transformative capabilities of AI that have significant pro-competitive potential, as well as competition concerns emanating from the use of AI."
  • In May 2024, the CCI acknowledged various challenges that digital markets pose for traditional competition law, including potential market concentration on account of 'network effects'; the opacity of algorithms in the context of possible collusion, including coordination on price and market strategy; and high barriers to entry for smaller enterprises resulting from the anti-competitive data dominance of larger platforms.
  • Recent media reports indicate certain industry concerns with regard to the Draft Bill, including in respect of:
    1. compromised user interfaces of digital applications stemming from data use restrictions;
    2. adverse effects on indigenous start-ups on account of low user base thresholds;
    3. deterrence to investment arising from dual regulation, given that the Competition Act, 2002, as amended from time to time, will operate in parallel;
    4. significant compliance burdens, especially for Big Tech companies, on account of strict prescriptive norms under the ex-ante framework – which may, in turn, disincentivize innovation and R&D;
    5. the possibility of arbitrary decision-making on account of the CCI's wide discretionary powers, including to designate an entity as an SSDE based on various subjective criteria;
    6. implications for data privacy and consumer choice; and
    7. potential overlap with other laws and regulatory frameworks, such as the Digital Personal Data Protection Act, 2023 (the "DPDP Act") and the proposed Digital India Act, which could result in regulatory arbitrage issues.
  • Certain SSDEs may be considered 'intermediaries' for the purposes of the AI-related advisories issued by the Ministry of Electronics and Information Technology ("MeitY") in relation to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the "Intermediary Guidelines") (see discussion below).

AI advisories on intermediary liability

  • On March 1, 2024, building on an earlier advisory issued in December 2023 in respect of compliance with the Intermediary Guidelines (the "December 2023 Advisory"), the MeitY issued an additional advisory (the "March 1 Advisory") with specific reference to due diligence requirements under the Intermediary Guidelines.
  • Among other things, the March 1 Advisory advised intermediaries and platforms to (i) label any under-trial/unreliable AI models, and (ii) secure explicit prior approval from the government before deploying such models in India.
  • In social media posts made a few days after the issuance of the March 1 Advisory, the then Minister of State at the MeitY clarified that (i) the advisory was aimed only at 'significant' and 'large' platforms and would not apply to 'start-ups'; and (ii) legal consequences under existing laws for platforms that enable or directly produce 'unlawful content', as specified under the Intermediary Guidelines, would continue to apply, subject to the 'safe harbor' available under the Information Technology Act, 2000 (the "IT Act") and the rules thereunder (such clarification, the "First MeitY Clarification").
  • Thereafter, the MeitY Union Minister clarified that (i) the March 1 Advisory applies to AI models made available on social media platforms, and not to AI models deployed in sectors such as agriculture and healthcare; and (ii) the March 1 Advisory does not have binding effect (such clarification, the "Second MeitY Clarification").
  • However, given the perceived ambiguities and compliance difficulties, the March 1 Advisory was superseded by a follow-up advisory issued on March 15, 2024 (the "March 15 Advisory," and together with the December 2023 Advisory and the March 1 Advisory, the "AI Advisories").
  • While retaining elements from the March 1 Advisory, the March 15 Advisory removed certain compliance requirements, such as those related to prior government approval and submission of an action taken-cum-status report – thus easing the obligations of intermediaries and platforms which make AI models available to users in India.
  • However, the scope of prohibited content was expanded to include all content considered 'unlawful' under any law in force – as opposed to only under the IT Act and/or the Intermediary Guidelines.
  • Further, the March 15 Advisory appears to have extended the scope of due diligence requirements to all intermediaries and platforms – as opposed to only 'significant' and 'large' platforms, as informally clarified through the First MeitY Clarification.
  • In addition, while the requirement of being able to identify the creator or first originator of misinformation/deepfake content has now been removed, the March 15 Advisory stipulates that if changes are made by a user, the metadata should be configured to enable the identification of such user or the computer resource responsible for such modification.

Telecommunications

The Telecommunications Act, 2023 (the "Telecom Act") was published pursuant to a gazette notification dated December 24, 2023. The Telecom Act aims to simplify a complex, colonial-era legislative framework, improve the ease of doing business, and promote new and evolving technologies such as AI.

Certain provisions of the Telecom Act were notified in June 2024, including Section 27 under Chapter VI ("Innovation and Technology Development"). Section 27 authorizes the central government to create regulatory sandboxes, i.e., 'live' testing environments where new products, services, processes and/or business models may be deployed (i) on a limited set of users, (ii) for a specified period of time, and/or (iii) with certain relaxations from the provisions of the Telecom Act.

Section 27 of the Telecom Act also empowers the central government to prescribe the manner and duration of such regulatory sandboxes for the purpose of encouraging and facilitating innovation and technological development in telecommunications.

In light of developments in AI and other new technologies, the Telecom Regulatory Authority of India, pursuant to a press release dated April 12, 2024, issued recommendations on encouraging innovative technologies, services, use-cases, and business models through a regulatory sandbox in the digital communication sector, including through the live testing of such emerging technologies.

Global partnership on AI

India is a member of the Global Partnership on Artificial Intelligence ("GPAI"), an international, multi-stakeholder initiative to guide the responsible development and use of AI based on human rights, inclusion, diversity, innovation, and economic growth. The sixth meeting of the GPAI Ministerial Council was held on July 3, 2024 in New Delhi (the "New Delhi AI Meeting"), where consensus was reached on the future vision of the GPAI.

The Organization for Economic Co-operation and Development ("OECD") and the GPAI have formed a new integrated partnership for the purpose of advancing coordinated international efforts towards implementing human-centric, safe, secure and trustworthy AI – as embodied in the principles of the OECD Recommendation on AI (the "OECD Recommendation"). The OECD Recommendation was adopted in 2019 as the first intergovernmental standard on AI and was updated in 2024, thus continuing to serve as a global reference for AI policy. A renewed vision for GPAI on the basis of the OECD Recommendation was announced at the New Delhi AI Meeting.

Global IndiaAI summit

Along with the New Delhi AI Meeting, India also organized the Global IndiaAI Summit on July 3 and 4, 2024. The summit focused on advancing the development of AI in areas such as compute capacity, foundational models, datasets, application development, future skills, start-up financing, and safe AI – consistent with the seven key pillars of the IndiaAI Mission.

The Current Scenario

While India still lacks a standalone framework for regulating AI, the government has been considering a dedicated AI regulation, potentially through a separate set of provisions of the proposed Digital India Act. For an analysis of key themes likely to be included in the Digital India Act, see our note here.

Going forward, the newly-elected central government is likely to continue with its 'Digital India' drive, including by funding and supporting various initiatives related to AI and emerging technologies. According to pre-election statements issued by the MeitY, a draft regulatory framework on AI may be issued over the next few months. Reports also suggest that until the rollout of the Digital India Act (which is expected to replace the IT Act), the MeitY may further amend the Intermediary Guidelines in connection with AI – including for the purpose of regulating deepfakes and other unlawful content.

While the Digital India Act is expected to eventually replace the IT Act, certain provisions of the IT Act relating to personal data, including Section 43A and the rules framed thereunder (the "IT Rules"), will soon be replaced by the DPDP Act, which was approved by Parliament and published in India's official gazette in August 2023. For a brief introduction to the DPDP Act, along with an update on developments under such law, see our notes here and here.

Section 3 of the DPDP Act specifically states that the law will not apply to personal data that is made publicly available by the individual concerned, citing the example of a blogger sharing personal information on social media. This provision may allow companies, especially 'Big Tech' platforms offering aggregated services, to scrape or otherwise source publicly available online personal data without consent, including for the purpose of developing bespoke AI models trained on such data.

However, several provisions of the DPDP Act are to be operationalized through specific rules which are yet to be notified. According to a recent statement made by the MeitY Union Minister, such rules are in advanced stages of drafting and are expected to be released for industry-wide consultation in the near future.

Under the DPDP Act, there is also scope for the central government to declare, via notification, that certain provisions of the Act will not apply to specified entities (or classes of entities) for a specified period. Accordingly, the MeitY may exempt certain AI start-ups from certain obligations under the DPDP Act for a certain duration.

The EU's AI Act and India

The EU's Artificial Intelligence Act (the "EU AI Act") came into force on August 1, 2024. As the world's first comprehensive AI law, the EU AI Act adopts a risk-based approach: the higher the risk of harm that an AI use-case presents, the more stringent the compliance requirements.

The EU AI Act applies across sectors and industries, imposing new obligations on product manufacturers, providers, deployers, distributors, and importers of AI systems. These obligations relate to, among other things, conformity assessments, data quality, technical documentation and standards, record retention, transparency, and human oversight.

Similar to the effect of the EU's General Data Protection Regulation ("GDPR") in terms of influencing national data protection regimes worldwide (including the DPDP Act), the EU AI Act may provide a global template for regulating AI.

However, unlike the EU, India is likely to adopt a light-touch approach to AI regulation, especially in view of the MeitY's stated purpose of allowing room for innovation, such as through exemptions for start-ups. Nevertheless, similar to the EU AI Act, India is likely to retain a risk- and safety-based template to protect users from harm.

It must be kept in mind, however, that like the GDPR, the EU AI Act applies extraterritorially: for example, it applies to AI providers based outside the EU as long as they place an AI system on the EU market, and to providers and deployers based outside the EU if their AI system's output is intended for use within the EU. Accordingly, Indian entities operating in the AI industry may be affected if their business operations attract such extraterritorial provisions. For a broad overview of the international implications of the EU AI Act, including on Indian entities operating across a global AI project's lifecycle, see our note here.

This insight/article is intended only as a general discussion of issues and is not intended for any solicitation of work. It should not be regarded as legal advice and no legal or business decision should be based on its content.
