AI Risk Poses Complex Challenges For Asset Management In European Energy And Utilities

AI enhances efficiency in the energy and utilities sector but requires high-level governance, clear data strategy, and thorough procurement due diligence. Compliance with EU and UK regulations, especially regarding cybersecurity and risk management, is essential.

Artificial intelligence (AI) continues to power up every aspect of industry, with the energy and utilities sector no exception. For asset managers, AI can create efficiencies in managing performance, turn data flows into useful insight and support health and safety, as well as being increasingly embedded into everyday enterprise software. With risk management in mind, what considerations should asset managers be aware of?

High-level governance

A high-level governance strategy around the deployment of AI is sensible, not only to ensure legal risk is mapped and mitigated, but also because AI can be used in such a broad range of applications, some of which can have ethical, reputational or security implications.

Data strategy

AI strategy is closely related to data strategy, not least as data is a raw material for AI development. Asset management systems will often generate huge quantities of data, which can become a valuable asset.

Businesses need to ensure that there is clarity about ownership of and access to data, that there are structures around the creation and collection of data, and policies around what it can and can't be used for. A policy is also needed on whether data is shared as an open resource, retained as confidential business information or available for third-party licensing.

Businesses operating in (or into) the EU should be aware of the impact of the EU's Data Act, which creates rights of access to data for those whose activities have generated data collected through, for example, Internet of Things systems. These new rights, which will apply from 12 September 2025, may be particularly relevant to the relationship between asset managers and owners.

AI procurement

AI is a hugely versatile and flexible technology. Introducing AI into the business may be a major transformational project. A new AI system could be "off the shelf", bespoke or somewhere in between. But it could also take the form of an upgrade to standard enterprise software, or an add-on to an existing subscription service. AI systems are often multi-layered, and the AI element in software might be hosted on devices, on the supplier's systems or accessed from a third-party cloud service.

Businesses have a range of issues to consider in AI procurement. They need to ensure their technology procurement due diligence is updated to draw out whether there is AI in the new system under consideration, what it does, where it sits in the supply chain and how it functions. From a legal risk perspective, the way in which a system has been procured, how it was trained, how it is deployed, the application for which it is used, the point in the supply chain at which it is used and whether it is used for automation or as an augmenting tool will all feed into its overall risk profile.

Cybersecurity

An AI system may sit on the device where its outputs are being used or at any point in the digital supply chain that underpins the system in question. Digital connections create supply-chain vulnerabilities from a cybersecurity perspective and should be understood as part of procurement due diligence. Managing such vulnerabilities is, in part, an operational issue but can also be managed through contractual liability frameworks and performance expectations.

Asset owners may expect comfort around cybersecurity, and this may in turn necessitate upstream contractual provisions to ensure that downstream commitments and key performance indicators for digital security can be met.

Regulatory compliance

The EU's AI Act creates a cross-sector regulatory framework, with onerous compliance obligations for some categories of AI and outright prohibitions for others. The focus of the AI Act is on risks to health and safety and to fundamental rights. Clearly, health and safety is a significant aspect of asset management. In addition, AI systems used as safety components in the management of critical infrastructure (including the supply of water, gas, heating and electricity) fall within the category of regulated "high risk" AI.

Compliance will bite at various points in the supply chain, so it is important to understand how it will impact the business and the systems used. The AI Act is expected to become law over the summer and to take effect on a staggered schedule, with the first obligations applying from early February 2025 (slightly later than originally anticipated).

In the UK, a much lighter-touch approach is being taken, with existing regulators (including Ofgem) using their current powers to oversee AI within their respective areas of competence. Although some new UK legislation on AI is expected if there is a change of government after the general election, this is likely to concern the most advanced forms of AI (and potentially workforce issues) rather than introduce an overarching regulatory regime along the lines of the EU's AI Act.

Osborne Clarke comment

The complexity of AI means that a very wide range of legal risks can flow from its use (as our overview explains). Taking a structured and consistent approach to understanding these risks for each AI system in use ensures proper visibility of both the risks and the opportunities for mitigation. We recommend allocating time and resources to ensuring that AI legal risk is mapped and understood, and that opportunities for risk mitigation are identified.
