Regulating Artificial Intelligence In Nigeria

AELEX

Contributor

AELEX is one of the largest and most diversified law practices in West Africa. We provide corporate and commercial legal services, including transactional, regulatory and policy matters. We also offer litigation and dispute resolution services in the corporate and commercial space, including for private clients.

Introduction

Artificial Intelligence (AI) represents one of the greatest technological advancements of mankind and, since its emergence, has witnessed widespread global adoption. From the introduction of smart assistants like Alexa, Cortana, Siri and Google Assistant, to marketing chatbots, to the more recent introduction of ChatGPT, AI has continued to evolve.

In recent years, Nigeria has shown increased interest and investment in AI-related initiatives. We have seen AI initiatives like Uniccon Group's Omeife, Africa's first humanoid robot, the deployment of AI for risk assessment by digital lenders, and the deployment of AI for diagnosis and scanning of medical records in the health sector. New search trends released by Google revealed that Nigerians have become more interested than ever in AI, with their interest growing by 100 percent in 2022 compared to 2021.1

Despite the appeal of AI, its use in its various forms comes with drawbacks and, like any novel technology, maximising its benefits will require putting adequate guardrails in place around its use.

In this respect, some jurisdictions have made more progress than others. The European Parliament, in early 2024, passed into law the EU Artificial Intelligence (AI) Act,2 a broad legislation that empowers the EU and its member states to impose restrictions on the use of high-risk systems, prohibit some systems outright, and generally engender a safer environment for the adoption of AI systems where they affect EU citizens. The State of California, USA, has introduced a similar bill that will regulate the development and training of advanced AI models to ensure they are not exploited for nefarious purposes.3 There are also similar initiatives in China, the UK, and Brazil, to name a few.

In Nigeria, while there is no specific AI legislation, this article will consider existing laws in the areas of data protection and data privacy, cybercrime, intellectual property, consumer protection, and capital markets, and the extent of their applicability to AI adoption and use in Nigeria.

AI and Data Privacy/Protection

One significant effect of the adoption of AI is the automation of several activities, with little or no human input.

The Nigeria Data Protection Act, 2023 (NDPA), the primary data protection legislation in Nigeria, restricts the exclusive use of automated decision-making processes for processing personal data, including profiling, that result in legal or similarly significant effects on the data subject. Exceptions to this restriction include obtaining the data subject's consent, fulfilment of a legal requirement, or where the processing is necessary for the performance of a contract involving the data subject. Therefore, businesses relying on AI for automated decision-making must do so only in compliance with the NDPA.

The Nigeria Data Protection Implementation Framework4 also mandates companies to adopt privacy by design, embedding data protection into technical systems from the start. For example, when developing software or applications that handle personal data, it is crucial to integrate data protection measures to ensure the safety of such personal data.

The Nigeria Data Protection Commission (NDPC), the agency primarily charged with the enforcement of the NDPA, also recently issued a draft General Application and Implementation Directive (GAID) to the NDPA. The GAID seeks to regulate the use of AI by requiring data controllers or processors who deploy, or intend to deploy, AI for processing personal data to take into consideration the provisions of the NDPA, the GAID itself, public policy, and other regulatory instruments issued by the NDPC.5 Specifically, the GAID provides that such controllers or processors, in putting in place the technical and organisational parameters for their AI deployment in data processing, should take into account the right of data subjects not to be subject to a decision based on automated processes, the right to be forgotten, safeguards for processing sensitive personal data, safeguards for the rights of children and other vulnerable groups, regulation of cross-border data flows, and privacy by design.6 Businesses deploying AI for personal data processing are also mandatorily required to conduct a Data Privacy Impact Assessment ("DPIA") on such processing activities.

AI and Intellectual Property

In May 2024, Hollywood actress Scarlett Johansson alleged that OpenAI had copied her voice for its virtual assistant, Sky, without her consent.7 Before this, OpenAI was sued by The New York Times Company for allegedly infringing its copyright by training ChatGPT on millions of its articles.8

While OpenAI may be at the center of these examples, it is not the sole AI company involved in copyright infringement disputes. Other AI companies have faced accusations of training their systems with copyrighted material available online. The defense often hinges on the argument that, as the material is publicly accessible and the AI tools do not reproduce the materials in their entirety, such usage is permissible.

The Copyright Act 2022 (CA 2022) plays a crucial role in governing the use of AI, particularly in relation to intellectual property rights. For instance, it protects original works, including literary, musical, artistic and audiovisual works, sound recordings, and broadcasts. Content that falls into these categories and is used by AI remains eligible for copyright protection, provided it is original and fixed in a tangible medium of expression.

Separately, although the CA 2022 attributes authorship and ownership to human creators, the emergence of AI challenges this concept, raising questions about who owns the copyright to works generated by AI. Currently, the CA 2022 does not explicitly address AI authorship, potentially leading to legal uncertainties. The use of AI in creating or distributing content can lead to copyright infringement issues. For instance, AI systems trained on copyrighted material without permission may infringe on existing rights. The Act provides remedies for infringement, including injunctions, damages, and accounts of profits, which could apply to unauthorized use of copyrighted works by AI.

Despite the limitations of the CA 2022 in addressing AI authorship and authors' ability to establish copyright in materials used by AI, the CA 2022 empowers the Nigerian Copyright Commission (NCC) to demand information and access any database relating to copyright, without a warrant. This means that the NCC can potentially demand that an AI deployer provide access to the underlying data used in training its model, to ascertain whether it was developed using copyrighted information.

AI and Cybercrime

In 2018, the camera of a facial recognition authentication system in China was hijacked, allowing hackers to gain access to privileged information and impersonate several users. Using this data, they were able to defraud local tax authorities of $77 million.9

This example shows the vulnerability of AI-powered systems to cyberattacks, and the importance of protections to curb bad actors. Some provisions of the Cybercrimes Act, 2015 impose sanctions for this kind of behaviour. The Cybercrimes Act provides that anyone who, without authorisation, intentionally accesses a computer system or network with the intent of obtaining computer data, or securing access to any program, commercial/industrial secrets or classified information, commits an offence and is liable to punishment.10

Similarly, if an AI system used for transmitting computer data, content, or traffic data is unlawfully intercepted by technical means, such action would constitute an offence under the Cybercrimes Act.11

AI and Consumer Protection

Increased generative AI adoption in the commercial space has also come with increased consumer risks. For instance, in January 2024, AI-generated videos of Taylor Swift were circulated, falsely claiming the singer was giving away free cookware, with consumers only needing to pay a shipping fee.

The Federal Competition and Consumer Protection Act (FCCPA) provides that 'an undertaking shall not knowingly apply to any goods a trade description that is likely to mislead consumers as to any matter implied or expressed in that trade description or alter, deface, cover, remove or obscure a trade description or trade mark applied to any goods in a manner calculated to mislead consumers'.12 Trade descriptions under the FCCPA are generally any form of branding that may inform a customer to request or order the goods.13 Trade descriptions can thus include product labels, advertisement content, product catalogues, email pitches, or business proposals.

Businesses using generative AI for marketing and branding should review AI-generated outputs to ensure they accurately represent their goods or services and do not mislead consumers. Under the FCCPA, using false, misleading, or deceptive representations, or failing to correct them, makes the responsible business liable for damages and monetary restitution to affected individuals.14

AI and Capital Market

As of October 2023, ten major investment firms utilised robo-advisors to serve over 6.7 million clients, managing assets worth more than USD 353.5 billion. In Nigeria, some investment firms have also incorporated robo-advisory into their service offerings.15

The Securities and Exchange Commission (SEC) issued the Rule on Robo-Advisory Services 2021 ('the Rules') to regulate the use of AI by capital market operators (CMOs), to automate the investment process. The Rules apply to CMOs offering digital robo-advisory services in Nigeria. Digital robo-advisory services refer to the 'provision of investment advice using automated, algorithm-based tools which are client-facing, with little or no human adviser interaction in the advisory process'.16

CMOs utilising robo-advisory services are to establish adequate governance and supervisory arrangements to mitigate algorithmic bias.17 The board and senior management of such CMOs are also to ensure that there are sufficient resources to monitor and supervise the performance of algorithms.18

Conclusion

While a specific AI legislation is yet to be passed in Nigeria, the Nigerian government has demonstrated keen interest in AI by developing a national AI strategy under the Ministry of Communications, Innovation, and Digital Economy. It is expected that this strategy will establish a clear path to the development of regulations to support and promote responsible AI practices and provide necessary protection in Nigeria.19 In the interim, issues related to AI adoption can be addressed using existing laws, particularly those discussed in this article.

Footnotes

1. Punch Newspaper, 5 February 2023

2. Caitlin Andrews, 'European Parliament approves landmark AI Act, looks ahead to implementation', (IAPP, 13 March 2024) https://iapp.org/news/a/with-eu-ai-act-on-the-books-lawmakers-look-ahead/ accessed 4 April 2024

3. Owen J. Daniels, 'California AI bill becomes a lightning rod – for safety advocates and developers alike' (Bulletin of the Atomic Scientists, 17 June 2024)

4. Paragraph 3.2(v)

5. Art. 44(1), Nigeria Data Protection Act 2023 General Application and Implementation Directive 2024

6. Art. 44(2), ibid

7. Tripp Mickle, 'Scarlett Johansson Said No, but OpenAI's Virtual Assistant Sounds Just Like Her' (New York Times, 20 May 2024) <https://www.nytimes.com/2024/05/20/technology/scarlett-johannson-openai-voice.html> accessed 24 June 2024

8. Cade Metz, Katie Robertson, 'OpenAI Seeks to Dismiss Parts of The New York Times's Lawsuit' (New York Times, 27 Feb. 2024) <https://www.nytimes.com/2024/02/27/technology/openai-new-york-times-lawsuit.html> accessed 24 June 2024

9. 'Cybersecurity risks to artificial intelligence' (GOV.UK, 15 May 2024) <https://www.gov.uk/government/publications/research-on-the-cyber-security-of-ai/cyber-security-risks-to-artificial-intelligence> accessed 24 June 2024

10. S. 6(2), Cybercrimes Act 2015 (as amended, 2024)

11. S. 12(1), ibid

12. S. 116(2), Federal Competition and Consumer Protection Act 2018

13. S. 116(1), ibid

14. S. 125(1), Federal Competition and Consumer Protection Act 2018

15. Tom Jackson, 'Nigerian startup launches crypto investment robo-advisor' (Disrupt Africa, 27 Aug. 2021) <https://disruptafrica.com/2021/08/27/nigerian-startup-launches-crypto-investment-robo-advisor/> accessed 24 June 2024; Nicholas Idoko, 'Robo-Advisors: Fit for Nigerian Markets?' (Make Money Online, 28 Jan. 2024) <https://makemoneyonline.ng/nigerian-markets-robo-advisors/> accessed 24 June 2024

16. Rule 1, Rule on Robo-Advisory Services 2021

17. Rule 6(a), ibid

18. Rule 6(b), ibid

19. Abdullah Ajibade, 'Nigeria launches first multilingual LLM after drafting initial National AI Strategy' (Techpoint Africa, 22 April 2024) <https://techpoint.africa/2024/04/22/nigeria-launches-multilingual-llm/>

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
