1 August 2024

Investing In AI

Herrington Carmichael

Herrington Carmichael is a full-service law firm offering legal advice to UK and international businesses. We work with corporate entities of all sizes, from large PLCs through to start-up businesses.

On 6 February 2024, the UK Government released its long-anticipated response to last year's White Paper consultation on regulating Artificial Intelligence (AI). As anticipated, the Government's "pro-innovation" strategy, led by the Department for Science, Innovation and Technology (DSIT), remains largely consistent with the initial proposals.

The strategy sets out a principles-based, non-statutory, cross-sector framework designed to balance innovation and safety by relying on the existing technology-neutral regulatory regime. While the UK acknowledges that legislative action will eventually be necessary, particularly for General Purpose AI systems (GPAI), it considers immediate legislation premature: the Government asserts that a deeper understanding of AI risks, regulatory gaps and effective mitigation strategies is needed before taking legislative steps.

This approach stands in contrast to other regions, such as the EU and, to a certain extent, the US, which are implementing more prescriptive legislative measures. This divergence suggests that despite international cooperation agreements, global AI regulatory approaches are likely to remain varied.

However, this does not seem to have put investors off. The wave of tech start-ups and solutions for the B2B and consumer markets that began in 2023 has continued into 2024, with much of that investment focused on AI companies. So, what should investors (and their lawyers) be concerned with when making these investments?

National Security and Investment Act 2021 (NSIA) – allows the UK Government to scrutinise and intervene in certain acquisitions or investments that could harm the UK's national security. Buyers are legally required to notify the Government of, and seek pre-closing approval for, acquisitions of, or investments in, entities active in certain sensitive areas of the economy, including AI (a "mandatory notification"). Potential investors should bear in mind that target businesses that simply make use of AI technology, as more and more businesses do, will not be caught by the NSIA's mandatory notification regime. However, specialist legal advice should be sought on whether a specific piece of AI technology falls within the scope of the NSIA.

Data protection and privacy – AI uses massive datasets to train, test and improve its algorithms and learning models. These datasets may contain personal or sensitive data that is subject to data protection or privacy laws. Those laws impose various obligations and restrictions on the collection, processing, sharing and transfer of personal data, and grant various rights and remedies to data subjects. Failure to comply can result in hefty fines, lawsuits, reputational damage and the loss of trust and customers.

Security and cybersecurity – AI programs and tools can face serious security risks, including unauthorised access, tampering with data, theft or destruction of information, and attacks on the AI system itself. These incidents can expose private information, corrupt or destroy data, and take services offline, harming users and other affected parties. Because of these risks, companies that develop AI must put strong security controls and protections in place, and must follow any laws or guidelines on keeping their AI systems safe.

Discrimination – AI systems and tools can, intentionally or not, create or amplify bias and unfair treatment. This affects the accuracy and fairness of AI decisions and outputs, and can affect people's rights and well-being, particularly for individuals from groups that are often overlooked or disadvantaged. For instance, AI could make biased choices in areas such as hiring, lending, insurance, education, healthcare or policing. Investors should therefore check carefully whether AI companies have measures in place to identify, prevent, mitigate and remedy unfair bias or discrimination in their AI technologies.

Accountability and transparency – AI programs and tools can be hard to understand or explain, especially when they rely on complex or automated processes. This opacity makes it difficult to establish who is responsible when something goes wrong, and it erodes trust in the AI. Transparency matters for the safety and rights of users and anyone else affected by AI decisions. Investors need to make sure that AI companies are open about how their systems work and have clear rules ensuring accountability and transparency.

How can investors manage these risks?

  • Conducting comprehensive due diligence – most investors conduct extensive due diligence when making an investment, and this should be especially true when contemplating investments in AI. It is advisable to conduct comprehensive legal and regulatory risk assessments of the company, its specific AI systems and applications, and the datasets and sources the company uses.
  • Conducting reviews of the AI company's policies – reviewing policies such as the data protection, privacy, cybersecurity and discrimination policies of the company you are considering investing in. These reviews will help you gain a full understanding of the potential data protection and discrimination risks, assisting in making an informed investment decision.
  • Undertaking data protection audits – investors should understand how an AI company processes and shares data. Is this outsourced? Where is the information going (is the data remaining in the EU)? How is the data being protected? Undertaking an audit will help in managing, and potentially reducing, these risks.
  • Including regulatory clauses and conditions in the investment agreement – it may be useful to include warranties, representations, indemnities, covenants and remedies, so that the AI company is legally and contractually bound to comply with the relevant laws and regulations.
  • Reviewing consents and licences – all consents and licences for each jurisdiction in which the AI company operates should be reviewed. It should be confirmed that they are valid and subsisting, that they are not likely to be suspended or cancelled, and that no stringent conditions are attached to any of them.
  • Checking insurance cover – copies of all insurance policies should be checked to ensure they are sufficient to cover errors and omissions, security and privacy incidents, cyber events, regulatory issues and media risks arising from data breaches.

Investing in AI companies presents substantial opportunities and benefits, but it also carries notable challenges and risks, particularly around legal and regulatory compliance. Investors must therefore exercise meticulous diligence in navigating these legal and regulatory waters, conducting thorough risk management and due diligence both before and after their investment. Through such proactive measures, investors can mitigate potential or existing legal and regulatory liabilities and penalties, safeguarding their investments.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
