9 April 2025

Data Governance In The Age Of AI And Privacy: Building Trust And Driving Innovation

FABIAN PRIVACY LEGAL GmbH, Contributor

We are a boutique law firm specializing in data, privacy and data protection laws and related issues, information security, data and privacy governance, risk management, program implementation and legal compliance. Our strengths are the combination of expert knowledge and practical in-house experience as well as a strong network with industry groups, privacy associations and experts around the world.

Introduction

As artificial intelligence (AI) fundamentally changes the way data is collected, used, managed and protected, effective data governance has never been more important. AI holds immense potential to drive innovation, streamline operations, unlock new value from data, enhance decision-making and personalise customer experiences. However, alongside these opportunities come significant new risks, such as bias, lack of transparency and accountability challenges. To navigate this landscape effectively, organisations should prioritise three fundamental principles:

  • Trust: To build trust, AI must be developed in a responsible, transparent and ethical manner.
  • Compliance: Organisations must adhere to emerging AI and data regulations across different jurisdictions, such as the EU AI Act.
  • Innovation enablement: AI governance should enable innovation by fostering responsible data practices, not hinder it.

AI systems rely on vast amounts of data for training and decision-making. Without proper governance, they can produce biased, misleading or non-compliant results. Effective data governance is therefore essential: it helps prevent AI bias, ensures transparency and explainability in AI-driven decisions, reduces regulatory, privacy and security risks, and boosts stakeholder trust by demonstrating responsible AI practices.

Stakeholder trust is essential for AI adoption: without it, stakeholders such as customers, regulators, employees and partners will push back. Trust is therefore not optional but a key driver of business success in the AI era, as it allows organisations to differentiate themselves in the market.

Data governance, as understood in this article, is the set of policies, processes, standards and roles that ensure an organisation's data is managed effectively, securely and ethically throughout its lifecycle. Data governance establishes who has access to data, how it is used, and what rules must be followed to maintain data quality, privacy and compliance. It is therefore the foundation for trustworthy AI and data-driven innovation. It is not just about compliance, but about leadership: it allows organisations to mitigate risks before they materialise, strengthen relationships with customers and regulators, and unlock new opportunities for innovation.

In this article, we analyse how strong data governance can achieve the objectives set out above: ensuring responsible AI use and reducing AI-related risks while fostering innovation and driving business success.

Data stewardship in an AI-driven world

The volume and velocity of AI-driven data processing expand the scope of traditional data stewardship and require a more proactive and adaptive approach. As AI models often rely on data from multiple sources, it is essential for organisations to have clear data accountability frameworks that regulate who is responsible for data input, processing and output. Ethical considerations must also be embedded to make sure that data is collected, labelled and used responsibly. To achieve these goals across the organisation, it is important to adopt a global approach and create cross-functional teams with key players from functions such as legal, privacy, compliance and ethics, as well as business leaders.

Depending on the organisation's structure and regulatory requirements, different AI data governance models can be used:

  • Centralised model: Typically comprises a single governance body overseeing AI policies, compliance and risk management and ensuring uniform standards across all AI initiatives. The downside is that such an approach is rigid and leaves little room for adaptation. This model might be an option for highly regulated industries such as healthcare or finance.
  • Decentralised model: Each business unit or department manages its own AI data governance, which allows for more flexibility but increases the risk of inconsistency and non-compliance. This model might be suitable for organisations with a fragmented organisational structure and diverse AI applications across departments.
  • Hybrid model: Comprises a core AI data governance framework with tailored department-specific guidelines, thus balancing control and flexibility and ensuring alignment with enterprise-wide governance policies. This approach is generally our recommendation, as it ensures a uniform approach while leaving room for adaptations.

Irrespective of the model chosen, it is recommended to establish a cross-functional AI governance committee that oversees AI-related data governance, ethical AI use and compliance monitoring and makes sure that AI aligns with regulatory, legal and compliance requirements as well as with business objectives.

Regulatory compliance and ethical AI

Navigating the increasingly complex AI, privacy and security regulatory landscape is a challenge for most organisations. In addition to existing privacy frameworks that must be complied with, more and more countries are introducing AI-specific legislation and frameworks, such as the EU AI Act, the USA's NIST AI Risk Management Framework or Singapore's Model AI Governance Framework. Most AI regulations, regardless of the jurisdiction, share some basic requirements:

  • Transparency: AI systems must disclose how they make decisions and AI-generated output must be clearly recognisable as such.
  • Accountability: Organisations must define who is responsible for AI outcomes.
  • Fairness and bias mitigation: AI systems must be tested for discriminatory behaviour.
  • Risk-based AI classification: Higher-risk AI systems require stricter oversight.
  • Human oversight: AI should support human decision-making but not replace it in critical areas.

To align AI development with legal and ethical expectations, it is recommended to adopt a proactive approach:

  • Embed compliance, privacy by design and ethical AI principles in AI systems from the outset, building them with privacy and data governance in mind instead of retrofitting them for compliance.
  • Develop clear AI governance policies by establishing company-wide policies that align with global regulations and address privacy, AI bias and transparency.
  • Ensure multi-stakeholder collaboration by creating cross-functional AI governance teams, and regularly train AI developers and business leaders so that they understand AI regulations and risks.

To monitor and mitigate AI-related risks, it is recommended to conduct AI Impact Assessments (similar to Data Protection Impact Assessments) for high-risk AI models, to carry out regular AI audits to verify compliance with legal and ethical standards, and to develop tools that make AI-driven decisions explainable to consumers and regulators.

Embedding privacy by design in AI development

Privacy by design is crucial to mitigate the privacy risks engendered by AI's heavy reliance on data. To learn, improve, and make decisions, AI requires large-scale data processing, which often involves personal data or even sensitive personal data. Without strong controls, AI can thus expose personal data to misuse, breaches or regulatory violations.

By adopting a privacy by design approach and embedding privacy safeguards from the outset, organisations can make sure that their AI systems minimise data collection, process data securely, provide transparency on how data is used and comply with all applicable privacy principles and requirements.

Key strategies for privacy-centric AI include:

  • Data minimisation: AI systems should collect only the data they need, reducing the risk of data breaches, regulatory issues and unnecessary exposure of personal data.
  • Anonymisation and pseudonymisation: These techniques allow AI to use data for insights and training while protecting the privacy of data subjects and improving privacy compliance (see the pseudonymisation sketch after this list).
  • Algorithmic transparency: AI-driven decisions must be explainable, fair and auditable so that users and regulators understand what data the AI is using, how it makes decisions and whether bias or risks exist in the model.
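
To make pseudonymisation concrete, the following is a minimal sketch in Python using a keyed hash. It is an illustration, not a prescribed implementation: the key, field names and record are hypothetical, and a production system would keep the secret key in a dedicated key management service, separate from the pseudonymised data.

```python
import hashlib
import hmac

# Hypothetical secret key; in practice this belongs in a key management
# system, stored separately from the pseudonymised data set.
SECRET_KEY = b"replace-with-a-securely-managed-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The mapping is deterministic, so records belonging to the same person
    can still be linked for analysis or model training, but it cannot be
    reversed without access to the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Example: a training record with the e-mail address replaced by a token.
record = {"email": "jane.doe@example.com", "purchases": 7}
record["email"] = pseudonymise(record["email"])
print(record)  # {'email': '<64-character hex token>', 'purchases': 7}
```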

Privacy Enhancing Technologies (PETs), such as differential privacy (i.e., adding mathematical noise to datasets to prevent the identification of individuals) or federated learning (i.e., training AI with decentralised data that remains on the users' device), can help enable the use of data by AI while protecting the privacy of individuals.
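
To illustrate differential privacy, here is a minimal sketch assuming a simple counting query (e.g., how many users opted in) whose sensitivity is 1; the function name and the epsilon value are illustrative only.

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Return a differentially private count via the Laplace mechanism.

    For a counting query, one individual changes the result by at most 1
    (sensitivity 1), so adding noise drawn from Laplace(0, 1/epsilon)
    yields epsilon-differential privacy.
    """
    # The difference of two Exp(epsilon) draws follows Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Example: publish an aggregate statistic without exposing any individual.
# A smaller epsilon means more noise and stronger privacy guarantees.
print(dp_count(true_count=1234, epsilon=0.5))
```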

When all these aspects are taken into account, privacy and AI can coexist: strong privacy safeguards ensure that AI innovation does not come at the expense of privacy, while building trust, compliance and long-term sustainability.

Managing third-party relationships in AI projects

Most organisations rely on third-party vendors when developing and using AI systems, for example for the provision of data sets, pre-trained models or algorithms, and cloud computing infrastructure. This can be a challenge: when companies involve a third party, they may lose control over key aspects of AI governance while remaining responsible and liable for the outcomes the AI produces. This entails the following key risks:

  • Compliance risks: Vendors may not adhere to regulations such as the GDPR, the EU AI Act or emerging AI laws in other countries.
  • Data security and privacy risks: Weak data protection measures from third-party vendors may lead to data breaches, unauthorised AI model training or unauthorised disclosure of personal data.
  • Bias and ethical risks: Pre-trained AI models may contain hidden biases that organisations are unaware of.
  • Lack of transparency: AI vendors may treat their models as so-called black boxes, making it difficult for companies to explain how AI decisions are made.

To mitigate AI vendor risks, it is therefore essential to ensure and monitor the vendors' compliance by implementing a structured risk management framework including the following key elements:

  • Vendor risk assessments to evaluate potential suppliers with respect to their regulatory compliance, data security practices, fairness and bias testing as well as transparency and explainability.
  • Contractual safeguards that oblige the vendor to commit to the applicable AI and privacy regulations as well as industry standards, define responsibilities and liabilities for compliance violations or harm caused to consumers, clearly outline who owns AI-generated outputs and what data the vendor may retain, and require the vendor to conduct bias audits and share the results.
  • Regular third-party audits including annual AI compliance reviews to ensure vendors stay aligned with new laws and policies, independent audits of vendor-supplied AI models for bias, fairness and robustness, and incident response plans to review how vendors handle data breaches, AI failures or compliance violations.

Training and awareness in AI and data governance

AI and data governance is not just an IT or legal issue; it is a company-wide responsibility. Everyone, from executives to frontline employees, interacts with AI-driven systems, making ongoing education essential. Business leaders and employees must be aware of:

  • AI-specific risks, such as bias, explainability, and accountability;
  • Regulatory obligations related to AI compliance, such as those under the EU AI Act, the GDPR, and emerging regulations in the US and other countries; and
  • Operational impact, i.e., how AI decisions affect business processes, customer trust and legal exposure.

Without AI and data governance training, organisations may face significant risks, including:

  • Regulatory non-compliance penalties;
  • Reputational damage from AI bias, unethical decision-making, or data misuse; and
  • Lost business opportunities due to lack of consumer trust in AI-driven services.

Education empowers proactive governance: organisations that invest in AI training are better positioned to anticipate risks, ensure compliance and drive responsible AI innovation.

To ensure effective training, it is essential to deliver general basic training to all employees and complement it with specific training for functions more closely involved (such as legal, privacy, compliance and IT) and highly specialised training for AI developers and data scientists.

The training efforts should be complemented by AI and data governance guidelines that outline clear AI policies and compliance obligations, provide practical guidance for AI model development, deployment and monitoring, and present case studies of past AI failures and successes so that employees can learn from real-world examples.

Turning data governance into a competitive advantage

In today's digital economy, trust is currency and allows companies to differentiate themselves in the market. Customers and stakeholders are more likely to engage with organisations that prioritise ethical AI, data privacy and responsible governance. Beyond ensuring regulatory compliance, strong AI and data governance thus serves as a competitive differentiator, allowing companies to build stronger customer relationships and brand loyalty.

Contrary to the widespread belief that governance and compliance hinder innovation, a well-structured data governance framework actually enables and fosters innovation by reducing uncertainties and risks.

The following strategies help align governance with corporate strategy:

  • Make data governance a board-level priority: Integrate AI risk management into boardroom discussions, just like financial or cybersecurity risks.
  • See governance as a strategic asset: Use governance to drive strategic business objectives, such as enhancing customer experience or driving operational efficiency, instead of considering it as a regulatory burden.
  • Establish cross-functional collaboration: Governance should involve departments and roles such as legal, business development and data science to create a governance framework that supports business growth.
  • Empower agility and continuous improvement: Stay ahead of evolving regulations and market demands by regularly auditing, monitoring and updating data governance.

Closing remarks and actions for organisations

Data governance is not a one-time compliance exercise – it is an ongoing commitment. As AI technologies evolve, regulations will continue to change, new risks will emerge and consumer expectations around ethical AI will rise.

Companies that proactively integrate AI, data and privacy governance into their business models will not only mitigate risks, but also build long-term trust, drive innovation and maintain a competitive edge.

To summarise, organisations can take the following steps to refine their AI and data governance:

  • Conduct an AI governance audit to assess current practices, classify AI models, identify gaps, and establish governance roadmaps;
  • Develop role-specific AI and data governance training for employees at all levels, including developers;
  • Implement AI risk assessment frameworks to evaluate the ethical, legal, and security risks of AI systems before deployment;
  • Align AI, data, privacy and governance policies with emerging regulations to avoid compliance pitfalls;
  • Adopt privacy by design frameworks in AI models to ensure responsible data usage from the outset;
  • Establish ongoing AI compliance monitoring to keep up with regulatory changes and evolving risks;
  • Foster collaborative decision-making between privacy, compliance and technical teams to ensure that AI and data governance is both legally sound and technically feasible.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
