ARTICLE
23 April 2025

TMT AI Series | Part 1 | The Issue With Inputs: Why What You Feed The Machine Matters More Than You Think

ENS
Contributor

ENS is an independent law firm with over 200 years of experience. The firm has over 600 practitioners in 14 offices on the continent, in Ghana, Mauritius, Namibia, Rwanda, South Africa, Tanzania and Uganda.

Artificial intelligence ("AI") has moved well beyond hype. It is now driving real change across industries: streamlining operations, enabling faster decision-making and unlocking new forms of creativity. But as more businesses integrate AI into their workflows, an often-overlooked question surfaces: Who owns what?

At the heart of this discussion are two technical but critical concepts:

  1. Inputs: the data and materials used to prompt AI tools
  2. Outputs: the content or results AI tools produce

Understanding the implications of these inputs and outputs is essential not only from a legal standpoint but also for managing risk, protecting business value and enabling commercial scalability. In this first part of the series, we deal with inputs; in Part 2, we will deal with outputs; and in Part 3, we will deal with practical measures that companies need to adopt.

AI tools are only as good as the data they are trained on; moreover, certain AI tools constantly learn and evolve from the data contained in prompts. These inputs, ranging from internal documents and research to third-party datasets and open-source materials, form the foundation of an AI tool's capability. But the act of feeding data into a model through prompting is not without consequence.

Many organisations input data or information that they do not clearly own or have permission to use in that context. This becomes especially risky when the data comes from third-party sources or was originally acquired for a different purpose. Even where a business "owns" the raw material, questions arise as to whether it can lawfully be input into an AI tool or reused in downstream systems.

One example that gained wide attention was when a well-known electronics company's developers input proprietary code into ChatGPT to help with debugging. While this may have seemed like a quick fix, it resulted in the company's sensitive technical information being absorbed into a third-party system with unclear boundaries around reuse and retention.

The implications are serious. Once data becomes part of an external model, it is extremely difficult to extract or control. Trade secrets, commercial strategies and unique know-how can be lost, compromised or unintentionally shared through the outputs of the model itself.

The key takeaway? The inputs you feed an AI tool can reshape its behaviour and outputs in ways you may not anticipate, especially if those inputs were not meant to be shared in the first place.

If your team is exploring or adopting AI tools or already developing them, our TMT team can help you think through and structure these complex technical and legal issues before they become roadblocks.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
