The Government has made much of its recent "AI Opportunities Action Plan" ("the Action Plan"), with all the positive rhetoric focusing on how transformative and exciting AI will be in the coming years. A key limb of this is the intention to scale up the use of AI in the public sector, i.e. by public bodies subject to public law requirements and with significant power over the lives and businesses of individuals and commercial organisations.
Regular readers (or listeners) of this blog will know that we have been highlighting for some time the risks of public sector use of AI without sufficient safeguards, in particular the risk of regulators using AI in compliance and enforcement activities in a way that is not transparent and may not align with their public law duties. It is striking that the Action Plan does not accompany the intention to increase public sector use with any formal limits or safeguards around that use.
Legislation on the horizon?
Separately, the Government's response to the Science, Innovation and Technology Committee's report on the governance of AI indicated that a consultation setting out legislative proposals to establish binding regulations on the companies developing the most powerful AI models would be published "shortly". As we have previously highlighted, however, it is unclear whether that legislation will contain any safeguards or regulation of public sector use. The implication is that it will not, but will instead be "highly targeted" at only a "handful of companies", although there is a reference to consulting on "how best to establish a regulatory regime that will address the most immediate risks", which may suggest a broader reach.
Regulators still on the hook
Regulators look to be a key part of the Government's plans. Matt Clifford CBE recommended committing to funding regulators to scale up their AI capabilities, describing this as an issue that needs "urgent addressing". The Government does not appear to have committed to that extra funding, instead indicating that each sponsor department will simply liaise with its regulator ahead of the 2025 Spending Review.
It has, however, firmly thrown itself behind the previous Government's view that existing sectoral regulators are best placed to deal with AI in their own areas of expertise, both in terms of encouraging and regulating it. Many of our previous concerns about the scale of this task for regulators, and the dangers of them policing their own use of AI, therefore remain.
So what protections are in place to govern how the public sector, including regulators, implements this scaling up?
Algorithmic Transparency Recording Standard
One development that has somewhat flown under the radar is the Algorithmic Transparency Recording Standard (ATRS) Mandatory Scope and Exemptions Policy ("the Policy"), published at the end of last year. This explains which organisations are mandated to use the ATRS, a reporting mechanism by which public bodies disclose their use of certain algorithmic tools. Having been voluntary for some time, the ATRS was made a requirement for some public bodies last year.
The organisations in scope under the Policy are ministerial and non-ministerial departments, and arm's length bodies, meaning executive agencies and non-departmental public bodies which provide public or frontline services or routinely interact with the general public. Most large-scale sectoral regulators should fall within this scope, although the rollout of the mandatory policy is to proceed in two phases, with all bodies outside central government in phase 2. No timescale is given, so it is unclear when the phase 2 bodies, which should include regulators, will actually be required to use the ATRS under the Policy.
The Policy requires an ATRS record to be published for use of algorithmic tools that either:
- have a significant influence on a decision-making process with public effect, or
- directly interact with the general public.
'Significant influence' includes cases where an algorithmic tool meaningfully assists, supplements, or fully automates a decision-making process. This could be a tool that performs a triaging or scoring function within a wider process.
The Policy states that to decide whether a decision-making process has a public effect, consideration should be given to whether usage of the tool assists, supplements or fully automates a process which:
- materially affects individuals, organisations or groups
- has a legal, economic, or similar impact on individuals, organisations or groups
- affects procedural or substantive rights
- impacts eligibility for, receipt of, or denial of a programme
In a commercial regulatory context, this could mean the use of AI in an application process for a consent or licence that is material to a business (for example, to score different applicants), or in determining whether organisations have complied with their legal and regulatory obligations. Even relatively simple uses of AI may significantly influence a decision-making process with a material effect on a business: for example, where AI is used in an enforcement context to summarise submissions or documents received from a regulated entity but misses a key mitigating factor that should be taken into account. If the public body does not disclose its use of AI, the regulated entity may never discover that the omission took place, and cannot therefore challenge this failure to comply with public law obligations.
It is immediately apparent that proper use of the ATRS is crucial to protecting the rights of those subject to public sector decision-making. In the period when it was recommended but voluntary, there was extremely poor take-up by public bodies, further emphasising the need for a mandatory transparency requirement, which should be backed by some form of review to ensure public bodies are complying. At present there are only 23 published records on the ATRS. Citizens and organisations have no way of knowing how accurately that reflects the scale of AI use.
A nod to safety
Going back to the Action Plan, a keyword search shows how far we have moved from the cross-sectoral principles of transparency, explainability, fairness and accountability that were emphasised last year as being at the heart of how AI should be used going forwards. These words hardly appear in the Action Plan. There is some reference to safety: for example, the importance of safe innovation will be stressed in guidance to regulators, which is likely to come from the newly created Regulatory Innovation Office.
Outside the Action Plan, DSIT is said to be working with industry and academia to develop robust tests for fairness and bias in AI systems, described as "sociotechnical solutions to improve fairness" that seek to address bias and discrimination in AI systems. This idea is only at the earliest stages, with funding allocated to four projects to develop possible solutions, and is therefore not a realistic safeguard in the near future.
Interestingly, regulators will be asked to publicly report their activities to promote AI innovation in their sector, but not their safety activities in this area. That of itself gives some indication of which of these competing regulatory objectives the Government is now prioritising.
The fall-back
In the meantime, the 2024 Generative AI Framework for Government remains in place, with its 10 broad principles governing the use of Generative AI across public sector organisations, and the Government has indicated that it is working on an update to provide the public sector with guidance on using AI safely and securely. Such guidance, or policy, is likely to be the main tool for encouraging responsible, transparent, accountable and fair use of AI in the public sector. But public lawyers will know all too well the dangers of over-reliance on non-binding guidance and policies without firm legal obligations to back them up, as we have previously discussed.
Ultimately, the only real safeguard at present remains the existing legal duties on public bodies such as regulators: most relevantly, their public law duties, the public sector equality duty, and the rights and obligations arising from data protection legislation. Without transparency of AI use, however, it is extremely difficult for individuals and commercial organisations to hold public bodies to account by these means. The ATRS, once fully mandatory and if properly used in the spirit in which it is intended, has the potential to become vital for organisations to protect their position, and perhaps it will be for industry to push to ensure it is properly used.