It's Time For The Energy Industry To Embrace—Not Just Fear—Artificial Intelligence

Duane Morris LLP

Last year, President Joe Biden signed Executive Order 14110 on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." Since the order's issuance, much attention has focused on the provision requiring "the head of each agency with relevant regulatory authority over critical infrastructure ... to assess potential risks related to the use of AI in critical infrastructure sectors involved, ... and to consider ways to mitigate these vulnerabilities." See Exec. Order No. 14110 Section 4.3(i), 88 Fed. Reg. 75,191, 75,199 (Nov. 1, 2023). Naturally, government agencies have generated numerous reports cataloging the well-documented risks of AI. At the same time, nearly every company has implemented risk-mitigation guidelines governing the use of artificial intelligence. To be sure, the risks of AI are real, from privacy and cybersecurity concerns, to potential copyright infringement, to the broader societal risks posed by automated decision-making tools. Perhaps because of these risks, less attention has been paid to the offensive applications of AI, and, relatedly, fewer companies have implemented guidelines promoting the use of artificial intelligence. Those companies may be missing out on opportunities to reduce legal risk, as a recent report by the Department of Energy (DOE) highlights.

While many sectors could benefit from the offensive adoption of AI, the energy sector stands out as particularly ripe for it. In addition to cataloging risks, DOE has provided road maps for future regulation of the offensive adoption of AI by showing how artificial intelligence can reduce threats to critical infrastructure and limit the operational disruptions that commonly spark litigation.

At its most fundamental level, AI can enhance operational awareness by "helping system operators identify key information in real time" about system status and issues. CESER Summary Report: Potential Benefits and Risks of Artificial Intelligence for Critical Energy Infrastructure at 2. AI's "inference capabilities can help rapidly characterize changes in the system status, even with limited or incomplete data" about the nature of a problem. This improves efficiency and safety, and it mitigates risk by enabling faster, earlier responses to alerts.

AI models can also provide more accurate forecasting of energy-system resilience during weather events because of their inferential abilities, and they can be used for "predictive maintenance" to identify potential equipment malfunctions or failures. Some of these uses have already been implemented in wind turbines, hydropower, solar panel systems, and oil and natural gas compressors and pumps. See CESER Summary Report, note 2, at 2. Most in the energy sector know of significant litigation and liabilities that have resulted from network breakdowns, including the California Public Utilities Commission's lawsuit against PG&E for damage incurred in the Dixie Fire as a result of system malfunctions and Department of Justice settlements with oil and gas companies over claims that they failed to prevent pipeline leaks that resulted in air pollution. AI applications in operational awareness and forecasting are particularly promising for reducing these types of legal exposure.
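
The DOE report describes predictive maintenance at a high level rather than prescribing a technique, but a common pattern is unsupervised anomaly detection over routine sensor telemetry. The following is a minimal, hypothetical sketch using scikit-learn's IsolationForest on simulated compressor readings; the sensor names, values, and thresholds are assumptions for illustration, not drawn from the report.

```python
# Hypothetical sketch: flagging unusual compressor sensor readings with an
# IsolationForest so maintenance can be scheduled before a failure occurs.
# The data below is simulated; in practice it would come from plant historians
# or SCADA telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated hourly readings: [vibration (mm/s), discharge temperature (deg C)]
normal = rng.normal(loc=[2.0, 80.0], scale=[0.3, 3.0], size=(1000, 2))
drifting = rng.normal(loc=[4.5, 95.0], scale=[0.5, 4.0], size=(10, 2))  # early failure signature
readings = np.vstack([normal, drifting])

# Fit a baseline on known-good data, then score all readings
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(readings)  # -1 = anomalous, 1 = normal

for i in np.where(flags == -1)[0]:
    vib, temp = readings[i]
    print(f"hour {i}: anomalous reading (vibration={vib:.1f} mm/s, "
          f"temp={temp:.1f} C) - consider inspection")
```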

Indeed, as discussed below, recent federal government announcements and initiatives not only provide guidance on good AI practice but also preview how regulators may soon expect energy operators to adopt AI. Such adoption may, in turn, set new standards of care against which operators are scrutinized when catastrophic breakdowns trigger litigation.

Identifying the Operability or Status of Infrastructure

The DOE unabashedly considers AI integral to a future of "self-healing infrastructure," in which the grid could "autonomously identify and fix problems" at scale, "minimizing the need for human intervention." See DOE, AI for Energy: Opportunities for a Modern Grid and Clean Energy Economy 20 (Apr. 2024). DOE envisions AI identifying and localizing faults based on data from Supervisory Control and Data Acquisition (SCADA) systems, which use remote software to monitor and control industrial equipment in the field, and mapping larger, more distributed outages.
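
DOE describes this goal at a high level rather than a specific algorithm. As a rough illustration only, the sketch below ranks feeder segments by how far their SCADA voltage readings deviate from nominal, a simple stand-in for the fault-identification step; the segment names, nominal voltage, and alarm threshold are hypothetical.

```python
# Hypothetical sketch: localizing a likely fault from SCADA voltage telemetry
# by ranking feeder segments by deviation from nominal. Real systems combine
# many signals (currents, breaker states, phasor data) and far richer models;
# this only illustrates the idea.
NOMINAL_KV = 12.47          # assumed nominal distribution voltage
ALARM_DEVIATION = 0.05      # flag segments more than 5% off nominal (assumed)

# Simulated latest SCADA readings per feeder segment (kV)
readings = {
    "segment-A": 12.41,
    "segment-B": 12.45,
    "segment-C": 10.90,     # depressed voltage suggests a fault nearby
    "segment-D": 12.38,
}

def rank_segments(readings, nominal=NOMINAL_KV):
    """Return segments sorted by relative deviation from nominal voltage."""
    deviations = {seg: abs(v - nominal) / nominal for seg, v in readings.items()}
    return sorted(deviations.items(), key=lambda kv: kv[1], reverse=True)

for segment, deviation in rank_segments(readings):
    status = "INVESTIGATE" if deviation > ALARM_DEVIATION else "ok"
    print(f"{segment}: {deviation:.1%} off nominal [{status}]")
```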

Detecting Malicious and Nonmalicious Events

While much has been written about AI as an attack vector for malicious actors, AI also plays a critical role in responding to malicious activities in both the physical (attacks on transmission lines) and cyber (intrusions and malware) domains. AI can help identify those events in real time and minimize their impact, or even prevent them altogether. In one project, the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency (CISA) has begun incorporating AI assistance into its Critical Infrastructure Anomaly Alerting model. CISA partners with nonfederal entities that voluntarily share cyber-related information with the agency, and CISA in turn notifies them of detected cybersecurity concerns. By processing data from information technology and operational technology networks, including industrial control systems and SCADA systems, through machine-learning algorithms and AI-assisted visualization, this CyberSentry program could monitor critical infrastructure networks far more effectively.
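
CISA has not published the internals of these analytics, so the following is only a generic sketch of the underlying anomaly-alerting idea: baseline a telemetry feature (here, an assumed count of control commands seen per interval on an operational technology network) and alert when new observations fall far outside that baseline. The metric, window size, and threshold are assumptions.

```python
# Hypothetical sketch: alerting on anomalous OT network activity by comparing
# new observations against a rolling statistical baseline (z-score).
# The metric (control commands per 5-minute window) and threshold are assumed
# for illustration; real monitoring uses many features and richer models.
from collections import deque
import statistics

class AnomalyAlerter:
    def __init__(self, window=288, z_threshold=4.0):
        self.history = deque(maxlen=window)   # e.g., one day of 5-minute windows
        self.z_threshold = z_threshold

    def observe(self, count):
        """Return an alert string if `count` falls far outside the baseline."""
        alert = None
        if len(self.history) >= 30:           # require some baseline first
            mean = statistics.fmean(self.history)
            stdev = statistics.pstdev(self.history) or 1.0
            z = (count - mean) / stdev
            if abs(z) > self.z_threshold:
                alert = f"anomalous command volume: {count} (z={z:.1f})"
        self.history.append(count)
        return alert

alerter = AnomalyAlerter()
for i, count in enumerate([20, 22, 19, 21, 23] * 10 + [450]):
    if (msg := alerter.observe(count)):
        print(f"window {i}: {msg}")
```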

CISA also recently developed several use cases directly for the energy sector. In conjunction with the DHS Science & Technology Directorate and other sector stakeholders, CISA created energy-specific cybersecurity test environments, including "a chemical processing plant; an electric distribution substation; a natural gas compressor station; a building automation system; and a water treatment facility." Test environments for autonomous vehicles and rail are also being developed. These environments are used to run cybersecurity event scenarios and determine where improvements and patches can be implemented; they incorporate actual AI systems to identify vulnerabilities and cure them, and they can be used for both research and training.

Nonmalicious anomalies (e.g., power surges, undervoltage, stresses in wiring) can be equally, if not more, damaging. Just as with malicious attacks, these events can escalate or ripple across a power line or grid. Macomb County, Michigan, recently conducted underground pipe inspections dramatically faster and cheaper by combining drone footage with AI analysis to identify defects and guide preventive repairs. The approach compressed an inspection process that once took years into days and saved an estimated $4 million in repairs.

Additional Offensive Use Cases

In 2023, federal agencies reported over 700 use cases through the AI use case inventory. Many of these use cases, as described above, can benefit critical energy infrastructure operators. AI can help operators manage a fundamental problem of energy infrastructure: how to maintain and improve the performance of assets that are often located far from their parent utilities. Technology directors within the critical energy infrastructure sector should consider reviewing the AI use case inventory and creating a resource of their own.

Lastly, the Department of Transportation's Pipeline and Hazardous Materials Safety Administration (PHMSA) has developed several use cases for artificial intelligence. PHMSA awarded research funding to a company developing a machine-learning predictive model for stress corrosion cracking to better pinpoint physical vulnerabilities in pipelines, and it has supported research into the use of machine learning to improve pipeline leak detection. PHMSA's Office of Hazardous Materials Safety has also supported research into using AI models in conjunction with sensors to improve hazardous material safety alerts.
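
The specifics of the PHMSA-funded models are not described here, so the sketch below only illustrates the general shape of such a predictive model: a classifier trained on pipeline segment attributes to score the likelihood of stress corrosion cracking. The feature names, simulated data, and choice of logistic regression are all assumptions for illustration, not the funded research itself.

```python
# Hypothetical sketch: scoring pipeline segments for stress-corrosion-cracking
# risk with a simple classifier. Features, data, and model choice are assumed;
# an operator would train on inline-inspection and dig-verification records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Simulated training data: [age (yrs), operating pressure (% SMYS), soil corrosivity index]
X = rng.uniform(low=[5, 40, 0.1], high=[60, 80, 1.0], size=(500, 3))
# Simulated labels: cracking found (1) / not found (0), loosely tied to age, stress, and soil
risk = 0.03 * X[:, 0] + 0.04 * X[:, 1] + 1.5 * X[:, 2]
y = (risk + rng.normal(scale=0.5, size=500) > 4.5).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)

# Score a few hypothetical segments and rank them for inspection
segments = np.array([
    [45, 72, 0.9],   # old, highly stressed, corrosive soil
    [12, 55, 0.3],
    [30, 65, 0.6],
])
for seg, p in zip(segments, model.predict_proba(segments)[:, 1]):
    print(f"age={seg[0]:.0f}y pressure={seg[1]:.0f}%SMYS soil={seg[2]:.1f} "
          f"-> cracking risk score {p:.2f}")
```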

Conclusion

Energy operators should continue to pay close attention to guidance on the risks of AI and critically analyze their tolerance for those risks. But they should pay equal attention to the offensive use of AI, which, if properly implemented, can substantially reduce infrastructure risk and legal exposure. Indeed, today's federal agency guidance may evolve into rules and regulations mandating the use of AI in critical infrastructure, and specifically in the energy sector; at that point, companies that have fallen behind on AI implementation may find themselves scrutinized against these new standards of care.

Reprinted with permission from The Legal Intelligencer, © ALM Media Properties LLC. All rights reserved.

Disclaimer: This Alert has been prepared and published for informational purposes only and is not offered, nor should be construed, as legal advice. For more information, please see the firm's full disclaimer.
