AI And Data Protection: Latest Guidance From Germany

By Logan & Partners

AI technologies, especially Large Language Models (LLMs), are becoming integral to various applications, from customer service chatbots to complex analytical tools. However, their use raises significant data protection concerns. The Conference of Independent Federal and State Data Protection Supervisory Authorities in Germany (Datenschutzkonferenz, DSK) recently released a guide on AI and data protection, providing a detailed framework for using AI in compliance with data protection laws.

The guide's primary focus is on ensuring that AI applications, particularly those involving personal data, comply with the General Data Protection Regulation (GDPR). It addresses common concerns and provides practical steps for integrating AI into operations while safeguarding personal data.

Key Topics Covered

Selection of AI Applications

Before deploying AI, businesses must clearly define the intended fields of application and specific purposes. The guide emphasises the importance of ensuring that these applications are lawful and do not involve prohibited practices, such as social scoring or real-time biometric monitoring of public spaces.

The guide also advises on the choice between closed and open AI systems. Closed systems, where data processing occurs in a restricted environment with limited user access, are preferable for enhancing data protection. In contrast, open systems, often run as cloud solutions, present higher risks due to potential data exposure and unauthorised access.

Implementation of AI Applications

  • Defining responsibilities: the entity that determines the purposes and means of data processing is the controller under the GDPR. This also applies where AI applications are used on behalf of another entity, which requires clear agreements allocating roles and responsibilities.
  • Internal regulations: businesses should set up clear internal guidelines on the use of AI applications. This includes setting permissible use cases and ensuring employees understand these boundaries to avoid unauthorised use.
  • Data protection impact assessment (DPIA): for AI applications likely to result in high risks to individuals' rights, conducting a DPIA is mandatory. This assessment helps identify and mitigate potential data protection risks.
  • Employee protection: providing employees with company accounts and devices for using AI applications prevents the creation of personal profiles and enhances data security.

Use of AI Applications

When using AI, particular caution is needed in handling personal data:

  • Data entry and output: even seemingly non-personal inputs can generate outputs containing personal data. A valid legal basis is required for such processing, and data subjects must be informed accordingly.
  • Special categories of data: processing sensitive data, such as health or biometric information, requires stringent checks to ensure compliance with GDPR provisions.
  • Result accuracy and non-discrimination: AI-generated results must be verified for accuracy and checked for potential discrimination. Incorrect or biased results can lead to unlawful data processing.

Key Takeaways

  • Transparency: businesses must maintain transparency about how AI applications process data. This includes providing clear documentation and ensuring users can exclude their data from AI training.
  • Legal compliance: every step of data processing requires a valid legal basis.
  • Data subject rights: AI systems must facilitate the exercise of data subject rights, such as rectification and erasure, and ensure these rights can be implemented effectively.
  • Security measures: appropriate technical and organisational measures are mandatory to protect personal data from unauthorised access and breaches, ensuring AI applications meet GDPR requirements.

Conclusion

The latest guidance from the German data protection supervisory authorities gives organisations a detailed framework for using AI in compliance with data protection law and helps them understand how to implement AI responsibly while protecting personal data.

If you're looking to discuss data protection or need legal advice on commercial contracts for software or AI applications, book a complimentary 20-minute call with our lawyers. Our team specialises in helping businesses manage these complex areas and ensure compliance with relevant regulations.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
