The startups most exposed by Nigeria’s new AI law are not the ones building AI, but the ones that assumed they weren’t.
Key takeaways
- Nigeria’s National Digital Economy and E-Governance Bill is on track to pass in March 2026, making Nigeria one of the first African countries to have a binding, risk-tiered AI compliance framework.
- The National Information Technology Development Agency (NITDA) gains explicit authority under the bill to request documentation, issue directives, and block non-compliant AI systems.
- The penalty structure lets NITDA impose either a flat ₦10 million fine or 2% of annual revenue, whichever is higher.
- Fintech credit scoring, HR tech hiring tools, and healthtech diagnostics are explicitly named as high-risk categories, placing a large slice of Nigeria’s most active startup sectors under the heaviest compliance burden from day one.
- Startups using third-party AI APIs, such as OpenAI, Google Gemini, and Anthropic, are not exempt. If the AI-powered feature makes consequential decisions about users, the compliance obligation follows the product, not the model provider.
The National Digital Economy and E-Governance Bill aims to introduce the most structured AI regulatory framework on the continent. The Nigerian AI law establishes a risk-based classification system enforced by NITDA, one that runs the full spectrum from basic transparency disclosures at the low end to mandatory impact assessments, audits, and licensing requirements for systems operating in high-stakes categories.
This is a compliance framework with a penalty structure, an enforcement agency with explicit authority to enforce it, and a timeline that’s already in place. If your product uses automation to make decisions about users (e.g., credit approvals, hiring screens, health diagnostics, identity verification), the obligations in this bill apply to you regardless of whether you built the underlying model or just plugged into an API.
This piece breaks down the framework: who it covers, what each risk tier requires, what the penalties look like in practice, and what founders should do before the bill passes.
Key obligations of the Nigerian AI law at a glance
| AI risk tier | Who it applies to | Key obligations | Penalty for non-compliance | Compliance timeline |
| --- | --- | --- | --- | --- |
| Minimal risk | Chatbots, spam filters, and recommendation engines with no consequential user impact | Basic transparency disclosure to users | No penalty, but general NITDA oversight applies | Ongoing from bill passage |
| Limited risk | AI systems that interact with users or influence non-critical decisions | Disclose AI involvement to users; maintain basic usage logs | Administrative directive from NITDA; potential operational restrictions | Within 6 months of passage |
| High risk | Fintech credit scoring, HR hiring tools, healthtech diagnostics, identity verification, law enforcement tools | Mandatory conformity assessments, algorithmic impact statements, human oversight mechanisms, registration with NITDA | ₦10 million or 2% of annual revenue, whichever is higher | Immediate upon passage; no grace period for existing systems |
| Unacceptable risk | Social scoring systems, real-time biometric surveillance in public spaces, & AI that exploits vulnerable users | Prohibited outright; no compliance path available | Full operational ban; potential criminal liability for operators | Immediate prohibition |
Note: Final penalty thresholds and timeline provisions are subject to the bill’s exact language at the time of passage. Please verify against the signed version.
What Nigeria’s AI law says
Risk tier classification
The bill classifies AI systems into four tiers based on consequentiality: the extent of harm a system could cause to rights, livelihoods, or access to services.
- Minimal risk (recommendation engines, basic chatbots): Light-touch obligations.
- Limited risk (AI systems that interact with users or influence non-critical decisions): Disclosure of AI involvement and basic usage logs.
- High risk (systems affecting finance, employment, health, movement): Mandatory assessments, human oversight, NITDA registration.
- Unacceptable risk (public biometric surveillance, social scoring): Prohibited outright.
Covered sectors include finance, public administration, critical infrastructure, law enforcement, and automated decision-making.
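To make the tiering concrete, here is a minimal sketch of how a team might map its own product categories onto the bill’s four tiers. The category names and the classification logic are assumptions for illustration, drawn from the examples the draft text names; they are not an official taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical category sets, based on the examples named in the draft bill.
HIGH_RISK_CATEGORIES = {
    "credit_scoring", "hiring_screen", "health_diagnostic",
    "identity_verification", "law_enforcement_tool",
}
PROHIBITED_CATEGORIES = {"social_scoring", "public_biometric_surveillance"}

def classify(category: str, interacts_with_users: bool) -> RiskTier:
    """Return an illustrative risk tier for a product category."""
    if category in PROHIBITED_CATEGORIES:
        return RiskTier.UNACCEPTABLE
    if category in HIGH_RISK_CATEGORIES:
        return RiskTier.HIGH
    # Systems that interact with users but make no consequential
    # decisions fall into the limited tier; everything else is minimal.
    return RiskTier.LIMITED if interacts_with_users else RiskTier.MINIMAL
```

A product audit would run every automated feature through a mapping like this and treat any `HIGH` result as triggering the full assessment, oversight, and registration obligations described below.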
Enforcement powers
NITDA can formally request documentation, issue binding directives, and, critically, block non-compliant AI systems from operating. That last power carries the most immediate business consequence.
The AI bill layers atop the Nigeria Data Protection Act 2023, not replacing it. Both require data processing documentation and user transparency. But the AI bill adds algorithmic accountability, requiring you to document how your model makes decisions and to demonstrate meaningful human oversight. NDPA compliance alone won’t cover you.
Obligations for high-risk systems
For high-risk systems, NITDA requires:

- Conformity assessments covering design, training data, decision logic, and bias evaluation.
- Annual algorithmic impact statements detailing risks and mitigation measures.
- Internal sign-off owned by a senior executive (CEO, CTO, or designated AI officer).
Transparency requirements
Users must be informed at the point of interaction (not buried in terms of service) that they’re interacting with an AI system, what data it uses, and how to contest outcomes. A vague onboarding checkbox won’t suffice.
For credit decisions, hiring screens, or health recommendations, disclosure must be immediate, in plain language, with a clear path to human review.
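One way to enforce point-of-interaction disclosure is to make it a structural part of every consequential decision your product returns, so it can’t be skipped. The sketch below is a hypothetical shape for such a payload; the field names and the `/appeal` path are assumptions, not anything the bill prescribes.

```python
from dataclasses import dataclass

@dataclass
class AIDecisionDisclosure:
    """Hypothetical disclosure attached to a consequential AI decision."""
    outcome: str                        # plain-language result shown to the user
    factors_assessed: list              # what the system weighed
    human_review_url: str = "/appeal"   # clear path to contest the outcome

    def to_user_message(self) -> str:
        # Shown at the point of interaction, not buried in terms of service.
        factors = ", ".join(self.factors_assessed)
        return (
            f"This decision was made by an automated system. "
            f"Outcome: {self.outcome}. Factors assessed: {factors}. "
            f"To request human review, visit {self.human_review_url}."
        )
```

For example, `AIDecisionDisclosure("Loan declined", ["repayment history", "income stability"]).to_user_message()` yields a plain-language notice covering all three elements: AI involvement, the data relied on, and how to contest the outcome.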
Regulatory sandbox
You can apply to NITDA for a sandbox environment to test AI systems with reduced compliance overhead. It protects against enforcement action during the testing window but not against NDPA obligations, consumer protection laws, or user-harm claims. It’s a compliance runway, not a liability waiver.
Penalties
NITDA can impose either a flat fee of ₦10 million or 2% of annual revenue (whichever is higher).
- Early stage (₦50 million revenue): the flat ₦10 million amount binds (2% would be only ₦1 million).
- Growth stage (₦500 million revenue): 2% = ₦10 million (roughly equivalent).
- Scale (₦2 billion+ revenue): 2% = ₦40 million+ (flat cap irrelevant).
Executive liability
Personal liability attaches to the individuals responsible for AI deployment, typically the CEO, CTO, or product lead. Regulatory action can follow individuals, not just the company.
Which Nigerian startups are most exposed right now?
Three startup categories sit at the top of the compliance burden stack:
- Fintech credit scoring.
- Healthtech diagnostics.
- HR tech hiring tools.
If your product operates in any of these three categories, you are in the highest compliance tier, and the obligations are active from day one of passage.
What fintech needs to do differently
If you run automated credit decisioning or AI-powered identity verification, every automated credit decision must be explainable to the user in plain language:
- What factors were assessed?
- What was the outcome?
- How can they challenge it?
Also note:
- Identity verification flows using AI must disclose that AI is being used at the point of verification, not in the terms of service.
- On the documentation side, your model’s training data sources, decision logic, and error rates need to be formally documented and made available for NITDA review upon request.
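In practice, "formally documented and available on request" means keeping a single structured record per model that can be handed over without a scramble. The record below is a minimal sketch; the field names are assumptions about what a NITDA review would reasonably cover (training data sources, decision logic, error rates), not a prescribed format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelDocumentation:
    """Hypothetical per-model record kept ready for regulator review."""
    model_name: str
    training_data_sources: tuple    # where the training data came from
    decision_logic_summary: str     # plain-language description of how scores are produced
    false_positive_rate: float      # measured error rates from your own evaluation
    false_negative_rate: float
    last_reviewed: str              # ISO date of the last internal review

# Example record for an illustrative credit-scoring model.
doc = ModelDocumentation(
    model_name="credit-score-v2",
    training_data_sources=("credit bureau data", "repayment history"),
    decision_logic_summary="Gradient-boosted score over repayment features",
    false_positive_rate=0.04,
    false_negative_rate=0.07,
    last_reviewed="2026-01-15",
)
```

Keeping the record `frozen` (immutable) and versioning it alongside the model makes it easy to show which documentation applied to which deployment.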
The third-party API problem
If your product uses OpenAI, Google Gemini, or Anthropic’s APIs to make or materially influence consequential decisions about users, the compliance obligation follows your product, not the model provider.
That means if your lending product uses a third-party model to score creditworthiness, you own the conformity assessment, the impact statement, and the disclosure obligation.
Architecturally, this means startups need to build an explainability and audit layer on top of whatever model they’re calling because the model provider won’t provide that for you, and NITDA won’t accept “we used a third-party API” as a compliance defense.
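A minimal version of that audit layer is a wrapper that records every model call on your side of the API boundary: inputs sent, output received, model version, and whether your own rules route the case to human review. The vendor response shape and the review threshold below are assumptions for illustration.

```python
import time

AUDIT_LOG = []  # in production this would be durable, append-only storage

def score_with_audit(applicant_id: str, features: dict, call_model) -> dict:
    """Call a third-party scoring model and keep the audit trail on our side.

    `call_model` stands in for whatever function wraps the vendor API;
    the response shape ({"score": ..., "model": ...}) is an assumption.
    """
    started = time.time()
    result = call_model(features)
    record = {
        "applicant_id": applicant_id,
        "features": features,                             # inputs we sent
        "score": result["score"],                         # output we received
        "model_version": result.get("model", "unknown"),
        "latency_s": round(time.time() - started, 3),
        "human_review_required": result["score"] < 0.5,   # our threshold, not the vendor's
    }
    AUDIT_LOG.append(record)
    return record
```

The point of the design is that the conformity assessment and impact statement can be built from your own log, independent of whatever the model provider does or doesn’t retain.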
Why compliance could become a competitive advantage
The GDPR precedent
When the European Union (EU) implemented the General Data Protection Regulation (GDPR) on May 25, 2018, the initial reaction from many startups was panic.
Soon, however, companies that complied early found that it was a sales asset:
- Enterprise procurement teams began requesting data processing agreements as a baseline vendor requirement.
- Cross-border expansion into regulated markets became easier because the documentation was already in place. Salesforce publicly credited its early GDPR investment with accelerating enterprise deals in European markets.
The pattern here is that in regulated environments, compliance eventually becomes a filter that separates vendors enterprises will work with from vendors they won’t.
Nigeria’s AI readiness signal
Nigeria’s 31-place jump in the Oxford Insights Government AI Readiness Index 2025, landing at 72nd out of 195 countries, signals this. It reflects a deliberate institutional push toward AI governance credibility, and that push has direct implications for startups operating in the market.
When governments signal AI readiness at that level, international partners, development finance institutions, and foreign investors read it as a stable regulatory environment worth entering. Startups that are demonstrably compliant with Nigeria’s AI framework will be the ones best positioned to benefit from that inbound interest.
What investors will now look for post-bill
Investors with exposure to regulated markets know what a compliance gap looks like in a data room. After the bill passes, an undocumented AI system becomes a high-risk category and a liability. Compliance, therefore, signals operational maturity.
Government and enterprise contract access
Post-passage, demonstrable AI compliance will serve as a procurement filter for government contracts and large-enterprise vendor relationships in Nigeria. Public sector buyers facing the pressure of the new framework will default to vendors that can show clean compliance documentation, because working with a non-compliant AI vendor creates liability.
FAQs
Does Nigeria’s AI law apply to startups using AI tools built by foreign companies?
Yes, unambiguously. The compliance obligation follows the product deployment, not the model’s origin.
Do the compliance obligations still apply if my startup is pre-revenue?
Yes. The bill’s obligations are triggered by what your product does, not what it earns.
Do we need two separate compliance programs for the new AI bill and the NDPA 2023?
Not entirely separate, but not fully consolidated either. The NDPA 2023 governs how personal data is collected, processed, and protected, and the AI bill governs how automated systems use that data to make decisions.
Conclusion
Nigeria’s AI regulatory framework isn’t perfect, but it’s fundamentally good for the tech ecosystem. Trust at scale requires structure.
The startups that will define the next decade of Nigerian tech will be the ones that can demonstrate their AI systems are accountable, auditable, and fair. That’s not a compliance burden. It’s a foundation.
Start with your product audit. Map your automated decisions. Identify your risk tier. And if you’re building in a regulated sector and need to stay ahead of Nigeria’s evolving compliance landscape, Techpoint Africa is where I’d start.
Citations
- https://techpoint.africa/news/nigeria-to-pass-ai-law/
- https://fmcide.gov.ng/wp-content/uploads/2024/07/National%20Digital%20Economy%20and%20E-Governance%20Bill%2C%202024%20-%20Draft.pdf
- https://businessday.ng/technology/article/nigeria-set-to-pass-ai-law-among-first-in-africa-to-regulate-sector/
- https://nitda.gov.ng/
- https://techpoint.africa/insight/nigerias-data-protection-bill-2023/
- https://techpoint.africa/insight/techpoint-digest-1081/
- https://techpoint.africa/insight/5-facts-google-ai/
- https://techpoint.africa/news/anthropic-and-rwanda-partner/
- https://gdpr-info.eu/
- https://www.salesforce.com/news/stories/gdpr-five-year-anniversary/
- https://oxfordinsights.com/ai-readiness/government-ai-readiness-index-2025/
- https://ndpc.gov.ng/
Disclaimer!
This publication, review, or article (“Content”) is based on our independent evaluation and is subjective, reflecting our opinions, which may differ from others’ perspectives or experiences. We do not guarantee the accuracy or completeness of the Content and disclaim responsibility for any errors or omissions it may contain.
The information provided is not investment advice and should not be treated as such, as products or services may change after publication. By engaging with our Content, you acknowledge its subjective nature and agree not to hold us liable for any losses or damages arising from your reliance on the information provided.
Always conduct your research and consult professionals where necessary.