ARTICLE
27 April 2025

Shadow AI: The Risks Of The AI You Don't See

ENS

Contributor

ENS is an independent law firm with over 200 years of experience. The firm has over 600 practitioners in 14 offices on the continent, in Ghana, Mauritius, Namibia, Rwanda, South Africa, Tanzania and Uganda.

Artificial Intelligence ("AI") is transforming modern business, but not all AI usage within an organisation is visible to its internal governance structures, IT, or risk management teams, and such unseen usage can pose a risk to the business if it is not actively monitored and controlled.

The term "Shadow AI" refers to AI tools adopted by employees or business units without the knowledge or approval of an organisation's internal governance structures or IT department. Much like shadow IT, shadow AI emerges when employees use AI tools, such as generative AI applications, outside the confines of official policies, such as an AI governance policy or a risk management policy. While these tools often enhance efficiency and productivity, they introduce significant risks to organisations if they are used without proper checks and balances.

The Risks of Shadow AI

Shadow AI presents various legal, security and operational risks that can compromise an organisation's compliance, data security and overall AI governance. Some of the key risks include:

  • Violation of Data Privacy Laws: Many AI applications process sensitive data and, without oversight, can inadvertently expose personal information or sensitive corporate information. This can lead to violations of data privacy legislation, such as the Protection of Personal Information Act, 2013, or of industry-specific compliance frameworks and regulations.
  • Violation of Intellectual Property Rights: Employees may use AI tools to generate content without understanding the ownership implications. Some AI tools retain rights over user-generated content, potentially leading to IP disputes or the loss of proprietary information or intellectual property rights. It is therefore critical for organisations to properly vet the terms and conditions of third-party AI service providers and to understand clearly who retains ownership of inputs and outputs.
  • Introduction of Security Vulnerabilities: AI tools may lack proper security protocols, making them susceptible to data breaches. When employees use AI tools outside of secure environments, they may introduce vulnerabilities that cybercriminals can exploit to gain access to their accounts and to any information shared with those accounts.
  • Bias and Other Ethical Issues: Bias plagues most AI systems: a tool trained on biased datasets can produce unfair or discriminatory outputs. Unvetted AI tools can reinforce bias in decision-making, harming customer relations and ultimately causing reputational damage should a client-facing or public-facing AI model become discriminatory in its decision-making or outputs.
  • Hallucination and Incorrect Outputs: AI tools can generate misleading or outright incorrect information, undermining the reliability and credibility of their output. For example, using AI for legal research or drafting without verifying the output could lead to liability, reputational harm and damages, as recently highlighted by the judgment in Mavundla v MEC.

Mitigating the Risks of Shadow AI

Organisations can take proactive steps to manage and mitigate the risks associated with shadow AI by implementing some of the following processes:

  • Develop a Clear AI Governance Framework: Establish policies that define acceptable AI usage, data privacy and security measures, and governance and compliance requirements. It is also important to ensure that employees are made aware of and trained on approved AI tools and their proper usage.
  • IT Oversight and Monitoring: Implement AI usage monitoring tools that can detect the use of unauthorised AI applications, and conduct regular audits to help identify instances of shadow AI and bring them under formal oversight within the AI governance framework (an illustrative detection sketch follows this list).
  • Employee Training and Awareness: Educate employees on the risks of shadow AI and encourage them to seek approval before utilising AI tools. Providing training and guidance on responsible and ethical AI usage and compliance requirements can reduce the temptation to use unauthorised AI tools.
  • Integrate AI Risk Management into Vendor Assessments: Where third-party AI tools are used, ensure that the third party's legal terms have been reviewed and vetted, conduct an AI risk and privacy impact assessment on the tool, and confirm that it meets the organisation's security, AI compliance and governance requirements.
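
By way of illustration only, the short Python sketch below shows one way an IT team might flag unapproved AI usage from web proxy logs. Everything in it is an assumption made for the example rather than a description of any particular monitoring product: the log format (CSV rows of timestamp, user, domain), the file name, and the domain lists, which each organisation would need to build and maintain itself.

    # Illustrative sketch only: flag proxy-log entries that hit known
    # AI services not on the organisation's approved list.
    # Assumed log format (an assumption, not from the article):
    # CSV rows of "timestamp,user,domain".
    import csv
    from collections import defaultdict

    # Hypothetical example lists; an organisation must define its own.
    KNOWN_AI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}
    APPROVED_AI_DOMAINS = {"copilot.microsoft.com"}

    def find_shadow_ai(log_path):
        """Return a mapping of user -> unapproved AI domains they accessed."""
        hits = defaultdict(set)
        with open(log_path, newline="") as f:
            for timestamp, user, domain in csv.reader(f):
                if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
                    hits[user].add(domain)
        return hits

    if __name__ == "__main__":
        for user, domains in sorted(find_shadow_ai("proxy.log").items()):
            print(f"{user}: {', '.join(sorted(domains))}")

Findings from a report of this kind would then feed the audit process described above, so that detected tools are either approved and brought within the governance framework or blocked.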

By addressing shadow AI proactively, organisations can harness AI's benefits while mitigating its risks. ENS' TMT Team offers a Responsible AI Toolkit, designed to establish an internal AI governance process through the deployment of policies, legal terms, checklists and other measures. The toolkit aims to strike a balance between innovation and proper governance, promoting an enabling environment while mitigating the risks of using AI in the workplace.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
