EXCLUSIVE

How AI could disrupt Nigeria’s 2027 election 

Experts warn synthetic media and misinformation could challenge trust in electoral processes
A robotic hand casting a vote | techpoint.africa


A few days before the 2023 election, an audio recording of an alleged conversation between Atiku Abubakar, former vice president of Nigeria; Aminu Tambuwal, former governor of Sokoto State; and Ifeanyi Okowa, former governor of Delta State, in which they purportedly planned to rig the election, circulated online. The audio sent shock waves through Nigerian social media, with many users calling on the Independent National Electoral Commission (INEC) to foil the conspirators’ plans.

Fact-checkers at TheCable later found the audio to be a deepfake. But by then, the clip had travelled widely online and offline, tainting political conversations and showing how easily synthetic media could influence politics during an election season.

As another presidential election approaches in January 2027, the potential risks that artificial intelligence (AI) poses are more extensive than they were four years earlier. Nigeria’s 2023 election was characterised by widespread misinformation, including AI-generated media used to campaign for preferred candidates.

For example, in November 2022, three months before the election, a doctored video of Hollywood stars holding a placard that read ‘Yes, it makes sense to vote for Peter Obi in 2023’ went viral on social media. Videos of Elon Musk and Donald Trump appearing to endorse Peter Obi also circulated, gathering thousands of views and likes before they were eventually declared fake.

Since then, AI adoption in Nigeria has grown rapidly. A recent survey by Google and Ipsos revealed that 88% of Nigerian adults say they have interacted with an AI chatbot, and 39% admitted to using AI frequently in their work or life.

The ease of access to AI tools has contributed significantly to their growing adoption. While many AI tools, like ChatGPT, Gemini, and Claude, have paid subscription tiers, they remain largely free and within reach of the average Nigerian; about 142 million Nigerians have Internet access, and 85% have access to a smartphone.

This rapid adoption means that by the time Nigerians head to the polls again in 2027, AI could have a much bigger impact on their conversations and decisions than it did in the last elections. While AI can have positive and negative impacts on the election process, in a country like Nigeria, with limited policy and regulatory restrictions on its use, the risks of manipulation, disinformation, and erosion of public trust may outweigh the potential benefits if safeguards are not urgently established.

AI tools promise reach, but carry serious election risks

Artificial intelligence can prove beneficial in Nigeria’s complex electoral environment, home to over 250 ethnic groups speaking a multitude of languages. AI-powered tools can enable politicians to communicate in multiple local languages, personalise messages for diverse audiences, and engage voters more effectively than traditional campaigns might. At the same time, the technology carries serious risks.

“AI has the capability to enhance voter education, government efficiency, and citizen participation. However, without administration, particularly during elections, the dangers of excessive exposure and abuse are real,” Kola Ijasan, Research Director at Research ICT Africa, says.


Synthetic media, especially AI-generated videos, images, and audio, pose one of the most significant risks in electoral contexts. Beyond deepfake videos of politicians appearing to say things they never said, AI tools can fabricate voice notes, generate fake campaign posters and news screenshots, clone candidates’ voices for fake audio recordings, mass-produce propaganda articles, and flood social media with coordinated false narratives, potentially exacerbating an already charged environment.

According to Mayowa Tijani, a journalist and fact-checker at TheCable, the key shift since the 2023 elections is not the existence of AI-generated content, but the speed, quality, and accessibility of the technology. While AI was already capable of producing convincing fake videos and audio in 2023, those techniques have since become far more sophisticated and difficult to detect.

The implications of such tools extend beyond misleading campaign videos. In a politically tense environment like Nigeria’s, synthetic media could interfere with multiple stages of the electoral process. False result sheets, fabricated concession speeches, or videos depicting violence at polling units could spread rapidly online, potentially confusing voters and undermining trust in official results.

“We can expect the kind of AI use we experienced in 2023, but the sophistication has gotten a lot better. As a result, even for fact-checkers, it will be difficult to differentiate what is real and what is not,” Tijani says. “Where it gets more problematic is when we see AI being used to forge election results because of how good they’ve gotten. We can have an AI-generated result with human handwriting. This is something we’ve not experienced before that we could experience in the coming elections.”

However, while AI-generated content threatens electoral integrity, AI researchers say there are still technical and resource barriers to producing highly convincing deepfakes.

Ayomide Odumakinde, an AI researcher at Cohere, notes that producing convincing deepfakes often requires more than simply generating a video or audio clip. High-quality tools are typically behind paywalls and require some level of technical configuration and data preparation to produce realistic results. In many cases, users need to spend time refining the output before it becomes convincing.

“Most of the high-quality tools that can be used for video deepfakes are locked behind subscriptions. The open source variants are not as great,” Odumakinde says. “Audio deepfake tools are cheaper, and the output is more difficult to detect. If you have a strong understanding of how some of these audio models work, you can take an open source model and make it great, compared to a video generation model.”

Dr. Jeffery Otoibhi, a medical doctor and AI research engineer, adds that AI tools still struggle to accurately render aspects of Nigeria’s cultural context, such as traditional dress and skin tone. But in the hands of a highly skilled engineer, they can be manipulated to produce convincing results.

“It takes a lot of work, patience, and understanding of the AI system, and not everyone can do it. Many of the videos that circulate on Facebook and WhatsApp are easy to detect in the first few seconds.”

These barriers may limit the average Nigerian, but they are unlikely to stop political actors intent on spreading misinformation. Such actors often have the financial capacity and networks needed to recruit skilled individuals to produce this kind of content.

Nigeria is especially vulnerable to misinformation due to low media literacy, the widespread inability to distinguish real from fake content on social media and messaging platforms, and a polarised political environment. While fact-checking tools exist to counter the potential surge of AI-generated media, experts say they are unlikely to match the volume and speed at which this content can be produced and distributed.

Moreover, many detection tools are trained primarily on Western datasets and cannot effectively identify synthetic content within Nigeria’s cultural and language context.

“AI models for detecting deepfakes still have shortcomings, and some of those include the inability to adapt to local language or code switching when people speak,” Lois Ugbede, Assistant Editor at Dubawa, an African fact-checking organisation, explains. “While several organisations have been trying to create locally inclined tools to help with these problems, we have not found a solid way around it.”

Additionally, the manpower required to fact-check AI-generated media at scale is lacking. According to Tijani, fact-checking audio files is trickier than video because it relies heavily on human fact-checkers, since few people are building tools to detect audio. Odumakinde elaborates that video content has a visual component that makes irregularities easier to spot, while audio deepfakes often require much closer attention to detect.

Fact-checking tools are also evolving alongside AI. Many Nigerians are turning to tools like X’s AI chatbot, Grok, to fact-check and verify the credibility of online content and claims made by politicians. On Facebook, AI-generated media can also be flagged and reviewed by Meta’s detection systems or human moderators, and may be labelled as AI-generated to alert other users. 

However, once the content leaves the confines of these apps and filters into other platforms where fact-checking tools do not exist, its falsehood becomes nearly impossible to detect.

This vulnerability is most apparent on WhatsApp, the messaging app with 51 million active users in Nigeria as of April 2024. Arguably, more campaigning during the 2023 presidential elections took place on WhatsApp groups than on the streets. From broadcast messages that had been “forwarded many times” to questionable voice notes claiming to reveal conversations from closed-door government meetings, political information spreads easily through messaging platforms like WhatsApp, where content is difficult to trace or fact-check.

“It is great that people can fact-check with tools like Grok. It makes it easier to know what is real and what is not,” Charles Ekpo, a Lecturer of Peace Studies at Arthur Jarvis University, says. “The problem is that when this content is forwarded to platforms like WhatsApp, where no one can verify or track its movement, it becomes even more dangerous. We must also pay attention to people who do not have Internet access. If there is a false statement on X, people can fact-check it immediately, but if it is printed out and distributed to people, controlling it at that level becomes a big problem.”

Otoibhi echoes this concern. He notes that even when AI-generated content is easily detectable, it spreads more widely and quickly on WhatsApp than on platforms like Facebook or X because it cannot be monitored or flagged in the same way.

Moreover, the credibility of tools like Grok as fact-checkers is itself a concern because, like any AI tool, they can make mistakes and be used to spread misinformation.

“The use of tools like Grok for fact-checking can affect credibility. If Grok says a piece of content is not AI-generated and a journalist or fact-checker with more context says it is, it leads to widespread distrust in the media,” Tijani says.

Screenshot: Grok admitting it can be used to spread misinformation

The influence of artificial intelligence on elections is already visible worldwide. In 2024, when more than 60 countries held national elections, political actors increasingly experimented with AI-generated campaign content. In India’s general election, an estimated $50 million was reportedly spent on AI-generated content, including deepfakes of deceased political figures and fabricated celebrity endorsement videos. In Indonesia, the use of AI was comparatively limited; campaign teams used generative tools to cartoonise and rebrand an otherwise stern-looking candidate, making him more appealing to younger voters online.

Elsewhere, the technology has also been deployed in more disruptive ways. In Pakistan’s 2024 election cycle, AI-generated speeches were used to simulate addresses from an imprisoned political leader, enabling him to continue communicating with supporters despite his detention. In the United States, AI also surfaced in the form of deceptive robocalls that mimicked the voice of then-President Joe Biden, falsely advising voters in New Hampshire not to participate in the state’s presidential primary, an incident that triggered investigations and renewed calls for tighter regulation of AI in political communications.

In Nigeria, Ijasan highlights, the primary threat posed by AI is a wholesale erosion of trust in the electoral process.

“The actual threat is not the fact that voters are not able to distinguish between truth and fiction. It is that trust that erodes. Once the citizens begin questioning anything, including valid findings, the democracy becomes shaky.”

Expert recommendations

African fact-checkers and AI experts emphasise early funding and last-mile outreach as key measures to implement in the run-up to the elections.

“We need to understand that a bigger part of the voting population is still not online, so whatever intervention we are designing has to go beyond the Internet ecosystem. Now is the time to leverage WhatsApp and radio stations more to reach the last mile. They are the generation raised on the saying “seeing is believing,” so to convince them otherwise, we have to go beyond online platforms,” Tijani advises.

He adds that funding for fact-checking and detection systems must arrive earlier in the election cycle to counter misinformation before it spreads. Beyond that, individuals must be educated on how to detect AI-generated media.

Ijasan believes that extreme censorship or outright prohibitions are not the best way to manage the risks of AI use during the election. However, guardrails must be put in place to ensure its use does not go unchecked.

“Political actors who utilise AI-generated content during campaigns must be obliged to reveal the usage of the same. Unambiguous standards of labelling can help decrease deception without curtailing legitimate speech.”
