15 November 2023

TLT's AI Forum Round-Up

TLT Solicitors

Bristol played host to techSpark's week-long Bristol Technology Festival in October.

Each day of the festival had a theme that in turn spelt out the word "SPARK". The end of the week saw a focus on "Knowledge" – presenting the perfect setting in which to launch TLT's AI Forum – a series of events which will consider how AI is changing the way we live and work, and what organisations should be thinking about when designing, deploying and exploiting AI technology.

The first AI Forum focused on the AI regulatory landscape and saw chair Emma Erskine-Fox joined by panellists Nigel Winship, AI investment specialist at the Department of Business and Trade; Sian Ashton, TLT Client Service Transformation Partner; and Karin Rudolph, founder of Collective Intelligence and ethical technology champion.

AI regulation – the current landscape

With more businesses implementing AI to create efficiencies, it is important to maintain an open dialogue about this ever-changing landscape. And because AI is developing at such a rapid pace, we also need to think about how it should be regulated.

There are different approaches to regulating the development and use of AI solutions. For example:

  • The EU's draft AI Act sets out an extensive and prescriptive regime applicable to EU-based organisations and to those outside the EU that place AI on the EU market. The draft Act bans certain types of AI (such as social credit scoring and real-time biometric identification systems) and categorises many other types as "high-risk", meaning that they are subject to stringent obligations and external assessment requirements before they can be placed on the market.
  • The UK government published its whitepaper on AI regulation in March 2023. This proposed five broad principles that organisations would be expected to take into account when developing and deploying AI. It is a non-statutory, regulator-led approach built around agility and innovation. The government consulted on the whitepaper from March to June 2023 but has not released any update since the consultation closed. However, the UK recently hosted the global AI Safety Summit (where the landmark Bletchley Declaration was signed), so we may see more activity from the UK in the coming months.
  • The US has already implemented a number of AI-specific laws, including one relating to maintaining US leadership in AI. Most recently, President Biden announced a sweeping Executive Order establishing AI safety and security standards and calling on Congress to pass data privacy legislation to protect citizens from the risks of AI.
  • China passed a law in August 2023 to regulate generative AI in particular, and has also implemented various other specific laws, such as regulations on algorithmic recommendation technologies. Earlier in 2023, China consulted on draft measures for the ethical review of AI technologies.

Is further regulation needed?

A number of existing laws (such as data protection law, competition law and consumer protection law) already cover some elements of AI use. However, there is still a need for further regulation to ensure the safety of users. The challenge in regulating this technology is balancing the protection of individuals and businesses against the need to leave space for innovation.

The current lack of AI-specific regulation has led many tech companies to police themselves to ensure that they handle AI with care. For example, GPT-4 is billed as the "safest" version of OpenAI's technology, arguably because human feedback is used to check and refine its outputs. Given the rapid development of AI, it is reasonable to assume that effective self-regulation is not sustainable. Specific regulation would mitigate the risk of this self-regulation becoming the equivalent of greenwashing.

In addition, without the correct frameworks in place there are also significant data risk implications. Considered regulation is essential, as is support for companies in understanding what regulations mean within their own contexts.

What do regulators need to think about to maintain agility and growth?

An essential part of regulating AI in a way that does not stifle innovation is ensuring that the regulation is built with input from many viewpoints, including those who are actually working with these technologies.

Whilst there is competition between the major global players in the AI regulation space (namely the EU, US and China), there is an argument that these "digital empires" need each other to protect the interests of their own tech companies abroad. Therefore, we may not see any major extremes in regulation and could instead see some form of alliance, with all markets striving towards a better technological ecosystem. Regulators certainly need to consider global approaches to ensure that businesses operating on a multi-national level have certainty as to their obligations and how potentially different regulatory frameworks affect them.

We have seen a drive towards this coming from the UK with the AI Safety Summit, aiming to put the UK at the forefront of global AI safety.

What do organisations need to be doing to build compliant and ethical AI processes into their projects?

Alongside considering the risks of AI, it is important to consider the huge potential benefits that the technology brings, such as advances in healthcare and increased productivity. At the same time, organisations need to take ethics into account when implementing AI solutions. While companies are generally keen to ensure that they are 'doing the right thing' and taking an ethical approach, without any regulation or official frameworks it will be difficult to hold AI companies accountable.

Existing human rights frameworks could be a good starting point for developing a suitable ethical framework. In addition, education and guidance for employees, alongside clear company policies, are essential to make sure that AI is used within the right ethical and regulatory parameters.

Conclusion

AI is not a new fad which is suddenly going to disappear. It is here to stay. If anything, its use and its impact on business and our personal lives are only going to increase. Therefore, developing a regulatory and associated ethical framework which protects all parties involved in the development and use of AI, without stifling innovation, is essential. To achieve this there needs to be real collaboration between the public and private sectors – not just in the UK but on an international basis.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
