On February 6, 2024, the UK Department for Science, Innovation and Technology ("DSIT") published the UK Government's response (the "Response") to its consultation "A pro-innovation approach to AI regulation", launched in March 2023.

The Response reflects the UK's ongoing policy of maintaining one of the lightest AI regulatory regimes among major economies – lighter than the EU's (with its expected AI Act) and arguably lighter even than that of the US (with the various actions under the Executive Order on AI). Such policy differentiation from the EU is part of what advocates of Brexit wanted, and in the case of AI it is generating some controversy. This client alert considers key aspects of the Response.

Context-Based Approach to AI Regulation

The Response adopts a primarily "context-based" approach to AI regulation – i.e. regulation tailored to various sectoral contexts, on a technology-neutral basis – subject to five cross-sectoral principles:

  • Safety, security and robustness;
  • Appropriate transparency and explainability;
  • Fairness;
  • Accountability and governance; and
  • Contestability and redress.

The UK Government will apply this high-level approach on a non-statutory basis for the time being, giving significant discretion to sectoral regulators. However, the Government intends to build a central government coordinating function "to monitor and assess risks across the whole economy and support regulator coordination and clarity".

A key focus of this central coordinating function is risk. The UK Government plans to launch a consultation during 2024 on a "cross-economy AI risk register". As a starting point, the Response identifies three broad categories of AI risk:

  • societal harms – including workforce issues, intellectual property protection, bias and discrimination, privacy, safe and trusted content, competitive markets, and best practice in the public sector;
  • misuse risks – including electoral interference, cyberattacks and criminality, and AI-based weapons; and
  • autonomy risks – primarily advanced AI avoiding human control.

The central coordinating function for AI is also expected to cover the definition of regulator powers, coordination among regulators, research and innovation, facilitating compliance by innovative businesses, building public trust, and monitoring and evaluation.

Potential Regulation of Highly Capable General-Purpose AI Systems

The one area in which the Response explores affirmative regulation is "highly capable general-purpose AI systems" – which is also the area in which the US Executive Order on AI is most prescriptive.

The Response notes that the UK Government's context-based approach may not be appropriate for general-purpose systems that are useful across sectors. The Response then describes voluntary and non-regulatory steps that the UK Government is already taking, including cooperating with the US Government to persuade seven leading AI companies to publish their safety policies, publishing a report on processes for frontier AI safety, and testing advanced AI models through the newly established UK AI Safety Institute.

However, the Response ultimately acknowledges that mandatory regulation is likely to become necessary eventually:

"Whilst voluntary measures are a useful tool to address risks today, we anticipate that all jurisdictions will, in time, want to place targeted mandatory interventions on the design, development, and deployment of such systems to ensure risks are adequately addressed."

Open Source Software

In its discussion of general-purpose AI systems, the Response addresses the controversial issue of whether there should be restrictions on open source release of AI systems. It observes that "open release of AI has, overall, been beneficial for innovation, transparency, and accountability", but notes "an emerging consensus on the need to explore pre-deployment capability testing and risk assessment for the most powerful AI systems, including where systems might be released openly". The UK Government plans to engage further with experts on this issue during 2024.

Sectoral Regulatory Initiatives and AI-Related Funding

Although the Response does not announce any affirmative AI regulatory measures by DSIT or other central government departments, it speaks approvingly of actions by sectoral regulators, including a review of foundation models by the Competition and Markets Authority and guidance on data protection and AI from the Information Commissioner's Office.

The Response also highlights UK Government financial support for AI safety and AI generally, including new funding initiatives to establish nine new AI research hubs (£80 million), enhance regulators' AI capabilities (£10 million) and deepen cooperation with the US on responsible AI (£9 million).

Controversy Over UK Approach and Way Forward

By continuing its light-touch, "pro-innovation" regulatory approach to AI, the UK Government has clearly signaled its intention to strengthen the UK's position as Europe's leading country for AI development by imposing minimal regulation on innovative AI companies. This stands in substantial contrast to the EU approach, which places much greater weight on near- to medium-term regulation.

Not surprisingly, some critics argue that the UK approach does not do enough to address AI safety and risk. However, the approach appears unlikely to change significantly for the time being. Although the UK Labour Party, which could regain power at the next general election (which must take place by January 2025), has historically been less favorable to business than the current Conservative Government, Labour leader Keir Starmer is an AI enthusiast who has made support for innovation a priority.

In any event, much remains in play for AI regulation in the UK, and organizations with an interest in how this regulation develops would do well to begin engaging with it this year.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.