ARTICLE
5 September 2024

Cybersecurity Of AI And AI For Enhanced Cybersecurity – A Symbiotic Waltz

Steptoe LLP

Contributor

In more than 100 years of practice, Steptoe has earned an international reputation for vigorous representation of clients before governmental agencies, successful advocacy in litigation and arbitration, and creative and practical advice in structuring business transactions. Steptoe has more than 500 lawyers and professional staff across the US, Europe and Asia.

As artificial intelligence (AI) continues to revolutionize various industries and evolve at a rapid pace, its impact is becoming more pronounced, making cybersecurity an essential concern. In the dynamic digital landscape, the scale and sophistication of cyber threats are intensifying, with malicious entities exploiting AI technology to amplify their attack capabilities. Conversely, a growing number of companies are harnessing AI technologies to develop advanced cybersecurity solutions aimed at enhancing protection and resilience against these threats. The complex interplay between AI and cybersecurity is thus reshaping the digital terrain. In this article, Anne-Gabrielle Haie and Maria Avramidou, from Steptoe LLP, delve into the cybersecurity risks associated with the use of AI and how these risks are addressed by the European Union's Artificial Intelligence Act (the EU AI Act), which sets out specific cybersecurity obligations. Moreover, they analyze how AI can be a game-changer in bolstering cybersecurity.

Cybersecurity risks in using AI

Despite the efficiency gains it brings, AI is not immune to cybersecurity risks. It is therefore vital to understand these risks across AI's lifecycle, from its design, development, and deployment through to its maintenance phase.

For instance, during the design phase, vulnerabilities can stem from the lack of a robust security architecture, inadequate threat modeling, insufficient data protection safeguards, insecure authentication, inadequate consideration of security implications when selecting AI models, and poor-quality training data.

During the development phase, risks may relate to the exploitation of potential vulnerabilities in AI's source code, inadequate data protection during storage and transmission between AI components, and failure to secure AI-related components.

In the deployment phase, inadequate attention to the infrastructure components necessary for cybersecurity, as well as insufficient security measures, evaluation, and testing, can open the door to the AI system's compromise.

During the maintenance phase, cybersecurity risks can stem from delayed updates and vulnerability fixes, malicious activities, and insufficient logging of the AI system's activities, errors, and metrics.

This is a non-exhaustive list of examples of cyber risks that may arise when using AI. AI-driven cyberattacks are constantly evolving, and new risks emerge on a daily basis. It is thus critical to keep in mind that advanced technologies, including AI technologies, are not immune from potential vulnerabilities that cybercriminals could maliciously exploit, and that constant vigilance is required.

How are cybersecurity risks addressed in the EU AI Act?

The EU AI Act intends to ensure the safety and trustworthiness of AI systems. As cybersecurity risks could clearly endanger this objective, the EU AI Act imposes a number of stringent obligations related to cybersecurity. As developed below, these obligations range from risk management and the adoption of measures to ensure the accuracy, robustness, and cybersecurity of AI systems or models, to the implementation of corrective measures in case of failure. It is important to note that these cybersecurity-related obligations will nevertheless only apply to AI systems and general-purpose AI (GPAI) models that are considered to present the highest risks of potential harm to public interests and fundamental rights. The EU AI Act entered into force on August 1, 2024, while most of its provisions will apply from August 2, 2026.

Cybersecurity-related obligations for high-risk AI systems

In a nutshell, the EU AI Act foresees two main categories of high-risk AI systems:

  • AI systems intended to be used as a safety component of a product or which are themselves products covered by certain EU product laws (e.g., medical devices regulation, machinery directive, directive on the safety of toys, etc.) and subject to certain requirements; and
  • AI systems used in certain use cases (e.g., employment, workers' management, access to self-employment, critical infrastructures, etc.).

AI systems falling within the scope of these high-risk categories will be subject to onerous obligations, which include stringent cybersecurity-related requirements.

Firstly, recognizing that cybersecurity risks may emerge across AI's lifespan, Article 15(1) of the EU AI Act mandates that high-risk AI systems be designed and developed to achieve an appropriate level of accuracy, robustness, and cybersecurity, and to perform consistently in those respects throughout their lifecycle. To address cybersecurity risks stemming from the exploitation of AI-specific vulnerabilities, it requires high-risk AI systems to be resilient against attempts by unauthorized parties to alter their use, outputs, or performance by exploiting those vulnerabilities. In practice, to comply with these obligations, providers of high-risk AI systems will need to implement organizational and technical measures tailored to the specific circumstances and risks. Such technical measures can include measures to prevent, detect, respond to, resolve, and control attacks trying to manipulate training datasets (data poisoning) or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks, or model flaws. Moreover, pursuant to their risk management obligation enshrined in Article 9 of the EU AI Act, providers of high-risk AI systems must identify and analyze any known or reasonably foreseeable risks, including cyber threats, and deploy appropriate risk-mitigation measures.
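To illustrate what one such technical measure might look like in practice, the minimal sketch below screens an incoming batch of training data for statistical outliers before it is added to the training set, one possible (and by no means prescribed) safeguard against data poisoning. The function name, thresholds, and synthetic data are illustrative assumptions rather than anything mandated by the EU AI Act.

```python
# Illustrative sketch only: screening new training samples for statistical outliers
# before they are added to the training set, as one possible anti-poisoning control.
import numpy as np
from sklearn.ensemble import IsolationForest

def screen_training_batch(trusted_data, new_batch, contamination=0.05):
    """Flag samples in new_batch that look anomalous relative to trusted_data."""
    detector = IsolationForest(contamination=contamination, random_state=0)
    detector.fit(trusted_data)                  # learn the profile of trusted data
    predictions = detector.predict(new_batch)   # -1 = outlier, 1 = inlier
    accepted = new_batch[predictions == 1]
    quarantined = new_batch[predictions == -1]  # held back for human review
    return accepted, quarantined

# Synthetic example: 95 ordinary samples plus 5 that sit far from the trusted data
rng = np.random.default_rng(0)
trusted = rng.normal(0, 1, size=(1000, 8))
incoming = np.vstack([rng.normal(0, 1, size=(95, 8)),
                      rng.normal(8, 1, size=(5, 8))])

ok, held = screen_training_batch(trusted, incoming)
print(f"accepted {len(ok)} samples, quarantined {len(held)} for review")
```

In a real deployment, quarantined samples would typically be routed to human review and combined with provenance checks on data sources rather than relying on a single statistical filter.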

Additionally, the EU AI Act seeks to address cybersecurity risks arising in AI's deployment phase. Pursuant to Article 13(3) of the EU AI Act, providers of high-risk AI systems must provide deployers (users) with detailed instructions for use. These comprise information on the characteristics, capabilities, and limitations of the high-risk AI system's performance, including its accuracy, robustness, and cybersecurity levels, as well as an overview of known and foreseeable circumstances that can affect those aspects. The EU AI Act also tackles cybersecurity risks related to insufficient event logging (e.g., delays in problem detection and resolution, easier exploitation of undetected weaknesses, etc.), which often emerge during AI's maintenance phase and can result in unauthorized access to, or manipulation of, AI systems. Specifically, Articles 12 and 19 of the EU AI Act mandate providers of high-risk AI systems to ensure that their systems technically allow for the automatic recording of events over their lifetime, and require them to retain these logs, to the extent that they are under their control.
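As a rough illustration of what automatic event recording can look like at a technical level, the sketch below appends timestamped, structured records of an AI system's events to a log file. The field names, event types, and retention handling are hypothetical assumptions; the EU AI Act does not prescribe a particular logging format.

```python
# Minimal sketch of automatic event recording for an AI system, in the spirit of
# Articles 12 and 19; field names and event types are illustrative assumptions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO,
                    format="%(message)s")
logger = logging.getLogger("ai_system_events")

def record_event(event_type, details):
    """Append a timestamped, structured event record to the log file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,   # e.g. "inference", "error", "update"
        "details": details,
    }
    logger.info(json.dumps(record))

# Example: log one inference event and one anomaly event
record_event("inference", {"model_version": "1.4.2", "input_id": "req-001",
                           "confidence": 0.91})
record_event("anomaly", {"description": "input outside expected distribution",
                         "input_id": "req-002"})
```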

Finally, pursuant to Article 20 of the EU AI Act, where a high-risk AI system presents a risk to the health, safety, or fundamental rights of persons, or where such AI systems are no longer in conformity with the requirements of the EU AI Act, providers of high-risk AI systems are obligated to report such incidents to other actors in the value chain and implement corrective actions (e.g., withdraw, disable, or recall their AI system). Additionally, Article 73 of the EU AI Act mandates providers and deployers of high-risk AI systems to report any serious incidents (i.e., the death of a person or serious harm to a person's health, a serious and irreversible disruption of the management or operation of critical infrastructure, the infringement of obligations under EU law intended to protect fundamental rights, or serious harm to property or the environment) to competent authorities. It is clear that the advent of a cybersecurity risk or the occurrence of a cybersecurity incident could trigger these obligations.

Cybersecurity-related obligations for GPAI models with systemic risks

The EU AI Act distinguishes GPAI models (i.e., AI models that can be used in and adapted to a wide range of applications for which they were not intentionally and specifically designed) depending on their risks, and imposes cumbersome additional obligations on GPAI models which are considered as presenting 'systemic risk.' GPAI models are considered as presenting systemic risk if they have high-impact capabilities, or capabilities or impact equivalent to those of GPAI models with high-impact capabilities (assessed on the basis of, among other things, the number of the model's parameters, the quality and size of its dataset, and the amount of computation used for its training).

More specifically, Article 55 of the EU AI Act requires providers of GPAI models with systemic risk to perform a model evaluation, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks, which may include cybersecurity risks, and to assess and mitigate these potential systemic risks. Further, they are required to keep track of, document, and report to competent authorities any serious incidents. Finally, they must ensure an adequate level of cybersecurity protection for their GPAI models and the physical infrastructure of those models.
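By way of illustration only, the sketch below shows one widely used adversarial-testing technique, the fast gradient sign method (FGSM), which perturbs an input to try to flip a model's prediction. The toy PyTorch model and data are invented for the example; the EU AI Act does not mandate any particular testing method.

```python
# Hedged sketch of one common adversarial-testing technique (FGSM); the toy model
# and data are invented purely to make the example self-contained and runnable.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.05):
    """Return a perturbed copy of x that tries to change the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()  # nudge input along the gradient sign

# Toy classifier and input, purely for demonstration
model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)
y = model(x).argmax(dim=1)              # use the model's own prediction as the label

x_adv = fgsm_perturb(model, x, y)
print("prediction before:", y.item(),
      "after perturbation:", model(x_adv).argmax(dim=1).item())
```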

Means to comply with these cybersecurity-related obligations

The EU AI Act provides that compliance with the abovementioned cybersecurity-related obligations and other obligations imposed by the EU AI Act may be achieved through adherence to harmonized standards. Harmonized standards are developed by recognized European Standards Organizations (i.e., CEN, CENELEC, or ETSI), at the request of the European Commission. The first AI standardization request, adopted by the European Commission in May 2023, gives a formal mandate to CEN and CENELEC to develop standards required in support of the EU's AI policy, considering the EU AI Act's requirements, including those pertaining to accuracy, robustness, and cybersecurity1. Albeit optional, harmonized standards can provide a way for providers of high-risk AI systems and GPAI models with systemic risk to demonstrate compliance with the EU AI Act's cybersecurity-related requirements. Indeed, pursuant to Article 40 of the EU AI Act, high-risk AI systems or GPAI models with systemic risk that are in conformity with harmonized standards will be presumed to be in conformity with the EU AI Act's requirements, to the extent that those standards cover those requirements.

Additionally, pursuant to Article 41 of the EU AI Act, the European Commission may adopt common specifications on certain requirements of the EU AI Act, including requirements related to cybersecurity. Conformity with these common specifications creates a presumption of conformity with related requirements of the EU AI Act.

Furthermore, Article 42(2) of the EU AI Act provides that conformity of high-risk AI systems with the EU AI Act's cybersecurity requirements can be presumed when they have been certified or when a statement of conformity has been issued for them under a cybersecurity scheme pursuant to the Cybersecurity Act. Besides, pursuant to Recital (77) of the EU AI Act, high-risk AI systems that qualify as 'products with digital elements' under the Cyber Resilience Act can demonstrate compliance with certain cybersecurity requirements of the EU AI Act by complying with the essential cybersecurity requirements of the Cyber Resilience Act.

Use of AI to support cybersecurity

AI technologies, while susceptible to cybersecurity threats as we outlined above, can also be a significant asset in bolstering cybersecurity defense and resilience2. An increasing number of companies are showcasing the ways AI can enhance cybersecurity capabilities, including the prediction, prevention, detection, analysis, and mitigation of cyber threats and incidents. This is achieved through a variety of AI applications, which organizations are steadily integrating into their cybersecurity procedures.

By way of illustration, AI can significantly enhance the detection and prevention of cyber threats. For instance, it can be used to monitor systems for unusual activities or intrusions, alerting security teams when potential threats are detected. Furthermore, AI can predict potential risks, such as service outages, allowing organizations to take proactive measures to prevent them.
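A minimal sketch of such monitoring, assuming a simple statistical baseline, is shown below: requests-per-minute readings are compared against recent history, and an alert is raised when activity deviates sharply. The metric, threshold, and alerting logic are illustrative assumptions; production systems would typically use richer models and telemetry.

```python
# Sketch of anomaly-based monitoring: flag readings that deviate strongly from a
# recent baseline. The threshold and the request-rate metric are illustrative only.
from statistics import mean, stdev

def detect_unusual_activity(history, current, z_threshold=3.0):
    """Return True if the current reading deviates strongly from recent history."""
    if len(history) < 10:
        return False                      # not enough baseline data yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

requests_per_minute = [120, 118, 125, 130, 119, 122, 128, 121, 124, 127]
if detect_unusual_activity(requests_per_minute, 560):
    print("ALERT: unusual traffic volume, notifying security team")
```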

AI also plays a key role in preventing cybersecurity incidents. It can assess potential vulnerabilities in a system and identify threats such as spam emails, malware, and attempted intrusions into AI systems. With its machine learning capabilities, AI can learn from each incident, adapting and improving its threat detection and prevention strategies over time.
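The toy example below hints at the "learning from each incident" idea described above: a tiny text classifier is trained on previously labeled messages and then used to flag a new suspicious email. The handful of training messages and labels are invented for illustration; a real filter would rely on far larger, curated datasets and additional controls.

```python
# Toy illustration of an ML-based filter that learns from previously labeled messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "Your invoice is attached, please review before Friday",
    "Reset your password immediately by clicking this link",
    "Meeting moved to 3pm, agenda unchanged",
    "You have won a prize, send your bank details to claim it",
]
labels = ["legitimate", "suspicious", "legitimate", "suspicious"]

model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(emails, labels)        # learn from previously seen, labeled incidents

new_message = ["Urgent: verify your account credentials via this link"]
print(model.predict(new_message))  # expected: ['suspicious']
```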

Another critical application of AI in cybersecurity is in the analysis of malicious programs. AI can analyze the code of such programs to understand their impact on specific software, thereby helping organizations develop appropriate response strategies.

In terms of mitigating threats, AI can automate incident response capabilities, reducing the time taken to respond to an incident and minimizing the potential damage. It can also automate routine tasks, reducing the risk of human error, which is often a significant cause of cybersecurity breaches.

Risk assessments, too, can be significantly improved with AI. It can analyze vast amounts of data to identify potential risks, allowing organizations to prioritize their cybersecurity efforts effectively. With its predictive capabilities, AI can also help forecast potential cyber threats and detect patterns that might indicate a looming cybersecurity incident.

AI technologies can therefore serve as a potent tool for enhancing cybersecurity protection and resilience. By leveraging AI's predictive, preventive, detective, analytical, and mitigative capabilities, organizations can significantly strengthen their cybersecurity strategies and procedures.

The implementation of AI can be particularly relevant for companies that are bound by the rigorous cybersecurity obligations set out in the NIS2 Directive. These organizations can leverage AI to fulfill the obligations enshrined therein, which span various aspects of cybersecurity. For instance, AI can assist in conducting comprehensive risk management assessments, enabling businesses to identify, evaluate, and mitigate potential threats. Through machine learning algorithms, AI can learn from past incidents and trends to predict future risks and suggest preventive measures. In terms of cybersecurity governance procedures, AI can help streamline and automate many processes, reducing the likelihood of human error. AI can be used to continuously monitor and enforce compliance with cybersecurity policies, ensuring that all activities are in line with the standards set by the NIS2 Directive. AI can also play a critical role in incident reporting and handling.

Conclusion

While the EU AI Act's cybersecurity-related obligations are limited to high-risk AI systems and GPAI models with systemic risk, organizations can choose to apply the same cybersecurity measures to all their AI systems and models, thereby standardizing their cybersecurity practices. This proactive approach can help organizations strengthen their defense mechanisms, which is increasingly crucial given the significant increase in AI-driven cyberattacks. By adhering to robust cybersecurity practices across all AI systems or models, not just those classified as high-risk AI systems or GPAI models with systemic risk under the EU AI Act, organizations can further minimize their exposure to cybersecurity risks.

On the other hand, AI also offers potential benefits for enhancing cybersecurity. AI can be used to identify potential threats, analyze risk levels, and automate responses to security incidents. Integrating AI into cybersecurity processes can therefore help businesses to better protect their systems, data, and digital assets. For instance, AI algorithms can be trained to detect anomalies and suspicious activities that might indicate a cyberattack, allowing for faster response times. Predictive AI models can also be used to anticipate potential cybersecurity threats based on patterns and trends, enabling organizations to take preventative action. By adopting AI-driven cybersecurity measures, businesses can not only comply with the EU AI Act's cybersecurity-related requirements but also gain a competitive advantage in the increasingly digital and interconnected business environment. This dual approach of implementing stringent cybersecurity measures across all AI systems and leveraging AI capabilities for cybersecurity can help organizations stay ahead of evolving cybersecurity threats.

Originally published by OneTrust DataGuidance

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
