The UK's AI Balancing Act: Regulators, Principles And Pragmatism

Gowling WLG
The King's Speech announced plans to strengthen AI regulation, likely involving safety checks on foundation models. Continuing the previous Government's sector-specific approach, existing regulators will enforce principles such as safety, transparency, fairness, accountability, and contestability across AI applications.

In the King's Speech today, the new Government promised to legislate to strengthen the regulation of artificial intelligence (AI). Although it gave few details, this is likely to involve safety checks on the 'foundation models' which underlie specific AI applications.

As such, the sector-specific approach taken by the previous Government is likely to continue for a while.

As the new Government begins to set out its plans for the coming Parliamentary session, this article considers the themes which emerge from the plans developed by regulators and what the future may have in store for AI regulation under the new Government.

In March 2023, amid predictions that the UK AI market would exceed US$1 trillion by 2035, the previous UK Government published a White Paper on AI regulation setting out its vision for a regulatory framework reliant on collaboration between the Government, regulators and business.

The Government's ambition was for the UK to be a global leader in the development and deployment of 'safe AI'.

The White Paper set out what it termed a pragmatic, agile and iterative approach to the regulation of AI. There was to be no new cross-sectoral AI regulator; instead, existing regulators would use their current powers to ensure that the use of AI in their own areas complies with the following five principles:

  • Safety, security and robustness.
  • Appropriate transparency and explainability.
  • Fairness.
  • Accountability and governance.
  • Contestability and redress.

In February 2024, in its response to the consultation on the White Paper, the Government asked specified regulators to publish details of their strategic approach to AI, including their assessment and analysis of AI-related risks in their sectors, the actions they would take to address them, and their current capability to do so. It also provided initial guidance for regulators on applying the principles.

We consider below the themes that emerge from the 13 strategies published so far and what may lie ahead under the new Government.

Cautious optimism?

Echoing the White Paper, regulators have indicated that they are keen to grasp the opportunities that AI presents, while managing the threats it poses.

It is clear that AI offers exciting possibilities that could revolutionise every sector. However, caution is warranted while understanding of the technology continues to grow.

Regulators' approaches vary, reflecting the different risks that AI may pose in different sectors.

For example, Ofqual indicated it was taking a careful approach, prioritising the precautionary principle while remaining open to innovations that comply with its regulatory regime. In line with its statutory objectives, it aims to ensure that AI is applied in a 'safe and appropriate' way that does not jeopardise the fairness or standards of regulated qualifications, or undermine public confidence in them.

Embracing and building on the principles

In the main, regulators have embraced the White Paper principles, whilst acknowledging that not all principles will have the same relevance across all sectors. For example, the Health and Safety Executive considered that only three of the principles – safety, transparency and accountability – were relevant to workplace health and safety.

Several regulators, including the Information Commissioner's Office (ICO), observed that the principles fit neatly within their regulatory regimes, suggesting that some will find their integration easier than others. However, the Legal Services Board noted that the nature of its statutory framework, which was enacted before current developments in AI, means that some technologies, products and developers will fall outside its current remit.

The Competition and Markets Authority (CMA) identified its own set of complementary principles, which focus on developing 'well-functioning economic markets' from the perspective of competition and consumer protection. This reflects the discretion which the Government afforded to regulators in developing their strategies.

By contrast, while supportive of the Government's approach, the Equality and Human Rights Commission (EHRC) considered that the principles 'lacked sufficient emphasis' on equality and human rights considerations.

This may therefore be an area for future policy development, particularly given the new Prime Minister's background as a well-respected human rights lawyer.

Regulators' capacity to embrace AI

Some regulators consider themselves more prepared than others to grapple with the risks and opportunities posed by AI.

Ofcom stated that it had a sizeable team, including dedicated experts, to support it in embracing AI. It noted that the nature of its functions has always required it to be at the leading edge of technical knowledge and expertise.

By contrast, the wide-ranging nature of the EHRC's remit – effectively looking at equality and human rights considerations across all sectors – led it to signal that it may face capacity challenges, particularly due to its size and a lack of dedicated funding. It therefore plans to extend its existing approach in other areas to the regulation of AI, taking a 'focused and strategic approach', prioritising selected strategic issues rather than aiming for wider coverage. In doing so, it will prioritise understanding the application of its 'unique regulatory lever', the public sector equality duty.

The Medicines and Healthcare products Regulatory Agency (MHRA) also indicated that it had a small team working specifically on AI, having benefitted from Government funding. Going forward, it will be important to ensure that regulators have sufficient budget to deal effectively with AI alongside their other day-to-day work.

Work already underway

Many regulators are already forging ahead. Some, such as Ofcom and the ICO, are involved in developing AI best practice on the global stage. The EHRC has focused specifically on AI since 2022, recognising the importance of managing the risks associated with AI outputs and the potential for irresponsible use.

Others, such as Ofgem in its call for input in May, plan to support regulated entities through explanatory guidance and 'sandboxes' to nurture this growing area.

In addition, regulators are considering what to do if things go wrong. The ICO indicated a robust approach, citing examples of action it has already taken in relation to businesses' use of AI, whilst the EHRC will adopt the principles into its enforcement work.

In keeping with the tone of the White Paper, there is also a degree of 'wait and see' in particular sectors. Ofgem indicated that there may only be limited uses for AI in the energy sector at this stage, and it was studying 'simple or localised use cases' as AI is not yet usable in more complex areas. In its call for input, it considered existing uses (such as customer service and renewables forecasting), those which are under active development (e.g. network planning, smart grids) and potential future developments (such as autonomous energy trading).

This approach recognises the dynamic nature of AI and the importance of being flexible as technologies develop.

Regulators are also keen to understand how they can use AI themselves. For example, Ofsted indicated that it already uses AI in preparing risk assessments for schools and anticipates using AI to improve decision-making and work more efficiently. The MHRA also highlighted its own use of AI in increasing the efficiency and effectiveness of its vigilance systems.

Regulators' expectations

It will be important to avoid creating a regulatory vacuum in which regulated businesses are unclear as to the requirements upon them as they seek to develop AI products and are forced to operate at risk. This could undermine the UK's potential to realise the opportunities that AI offers.

The Office for Nuclear Regulation (ONR) recognises such a risk, commenting that some stakeholders may have 'preconceived ideas' as to what it will and will not accept. This could lead to 'overly conservative thinking' in the industry and the 'risk is that the status quo is maintained, limiting the introduction of new, more effective solutions'.

It is in such situations that a principles-based approach can be useful, providing guard rails to guide innovation until the point at which any specific rules become needed and can be appropriately framed. The ONR also refers to a programme of targeted engagement to convey its open stance on innovation and contribute to the development of good practice.

The Bank of England said that it expected regulated banks to define the concepts of transparency and explainability, and it promoted individual accountability through the Senior Managers and Certification Regime.

Similarly, Ofsted confirmed that it would not check the quality of AI tools used by education providers. Rather, it will be for school leaders to ensure that their use of AI does not have a detrimental effect on safeguarding or the quality of education.

Collaboration is essential

Each regulator expressed willingness to collaborate with a range of Government bodies, regulated entities, international bodies, and other regulators. For example, several already work together through the Digital Regulation Cooperation Forum, a cross-regulatory advisory hub aiming to build consensus on AI application.

These signs of collaborative approaches are promising, particularly for those regulators who may initially struggle with capacity or expertise.

It is also necessary given the overlapping jurisdiction of the regulators in certain areas. For example, the Financial Conduct Authority (FCA) recognised the respective roles of the ICO and the EHRC in regulating compliance with data protection and equalities requirements as they apply to AI in the financial sector. It also noted the CMA's work on AI foundation models.

Likewise, the Bank of England said that it had been working jointly with the FCA for many years to understand the risks of AI in relation to financial services. It stated that collaboration across regulators was 'essential'.

The Government will have a role to play in facilitating collaboration. The AI Safety Institute, which is part of the Department for Science, Innovation and Technology, will be important here. In its manifesto, Labour also trailed a new Regulatory Innovation Office to help regulators co-ordinate issues that span existing boundaries.

What's next?

It seems that the new Government will indeed take a more hands-on approach to AI. Although it did not reference the Committee's report, Labour did comment in its manifesto that regulators are currently ill-equipped to deal with the development of new technologies – particularly where these cut across existing industries and sectors.

It committed to establishing a new Regulatory Innovation Office to 'help regulators update regulation' and 'co-ordinate issues that span existing boundaries'. That would not seem to be a new regulator but rather a central government agency to provide expert input and facilitation, potentially also advising Government on updating primary legislation to empower regulators where needed.

Labour also referred to introducing 'binding regulation' on those companies developing the most powerful AI models. The UK may therefore move closer to the EU approach over the next five years, although whether it goes as far as the EU remains to be seen.

For the moment, it appears that the UK will tread a middle course between the approach taken by the previous Government and that taken by the EU. An early indication of what regulation under Labour might look like was its statement at the beginning of the year that it would turn the existing voluntary agreement under which AI firms co-operate with the UK's AI Safety Institute into a statutory requirement. That would result in independent oversight of the safety testing of powerful AI systems.

However, Labour's manifesto refers to its regulation touching only a handful of companies and 'the most powerful AI tools'. This is reflected in the King's Speech which stated that the Government will seek to 'strengthen safety frameworks' and establish appropriate legislation through which to introduce 'requirements on those working to develop the most powerful [AI] models'. Unfortunately, no further detail has been given at present.

It is likely that this means 'foundation models' – the technologies that underpin AI capabilities rather than downstream applications or users of AI. For the latter, it seems that the current approach of regulators developing their own approaches to the regulation of AI in their sectors will continue for the moment, albeit with greater support from the new Regulatory Innovation Office when it is established.

Read the original article on GowlingWLG.com

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
