Bridging the AI Governance Gap: Ethical and Regulatory Imperatives for Generative AI in Nigeria

DOI:

https://doi.org/10.34190/icair.5.1.4129

Keywords:

Artificial Intelligence, Global South, AI Ethics, AI Policy, Misinformation, Deepfakes

Abstract

As generative artificial intelligence (AI) technologies—such as ChatGPT, DALL·E, and other large language and image models—become increasingly mainstream, they introduce new ethical, legal, and governance challenges that are particularly urgent in developing countries. Nigeria, Africa’s most populous nation and a regional technology hub, offers a compelling case study of how these technologies are being adopted in environments with minimal regulatory infrastructure and limited public awareness. This paper examines the ethical and societal implications of generative AI in Nigeria and interrogates the country’s preparedness to manage these risks. Despite the creation of the National Centre for Artificial Intelligence and Robotics (NCAIR) in 2020 and the recent passage of legislation such as the Nigeria Data Protection Act (2023) and the Startup Act (2022), Nigeria lacks a unified national AI strategy, formal risk classification systems, and sector-specific ethical guidelines. These gaps are significant given the widespread, unregulated use of generative AI tools in education, politics, and digital commerce. In higher education, students increasingly rely on generative AI for assignments and projects, raising concerns about academic integrity in a system already strained by infrastructural deficits. Meanwhile, in the political domain, deepfake videos and AI-generated misinformation have circulated during election periods, threatening democratic stability in a media environment prone to disinformation and weak content regulation. The paper compares Nigeria’s regulatory trajectory with global trends, particularly the European Union’s Artificial Intelligence Act and similar initiatives in Kenya, South Africa, and Rwanda. It highlights how Nigeria’s reactive approach to AI governance contrasts sharply with more proactive global models.
Sectoral analysis reveals risks including digital labour displacement, cultural misrepresentation through foreign-trained models, algorithmic bias, and the erosion of public trust. Ultimately, the study calls attention to Nigeria’s urgent need for a comprehensive, context-sensitive AI ethics and governance framework. Through an analysis grounded in local realities and informed by global comparisons, the paper contributes to broader conversations about equitable, responsible AI adoption in the Global South.

Published

2025-12-04