When a Lagos High Court ruled that Meta was liable for a video falsely claiming that prominent human rights lawyer Femi Falana, SAN, was suffering from a terminal illness, the judgment immediately raised questions across Nigeria’s tech and legal communities. Why was Meta held responsible for content it did not directly create? And why did the court treat the case as a privacy violation rather than defamation?
According to Olumide Babalola, Falana’s lawyer and a data privacy professional, the answer lies in both the nature of the harm and the absence of any identifiable third-party publisher.
At the heart of the case was a Facebook video that portrayed Falana as granting an interview about a serious illness, an entirely false claim. While such content could amount to defamation, Babalola says the legal team deliberately pursued the case as a breach of privacy, particularly because it involved false health information.
“Someone’s state of health is personal information. Once you expose that without consent, you have invaded the person’s privacy. Privacy law protects against false light, the disclosure of private facts, and the misuse of a person’s image,” Babalola told Techpoint Africa.
He further explained that Nigerian constitutional privacy protections, as interpreted by the court, cover nearly every aspect of a person’s life, including health, relationships, and personal choices. In this case, the video placed Falana in a “false light” by attributing a medical condition to him that never existed, triggering a clear privacy violation.
The more contentious issue, however, was why Meta was sued instead of the individual or entity that posted the video.
Why was the publishing entity not sued?
After discovering the video, Babalola said the legal team attempted to trace the publisher, identified on Facebook as “Afri Health Centre.” Their search yielded nothing.
“We searched everywhere: on Facebook, outside Facebook, globally. There was no such entity. No contact details, no real identity, nothing,” Babalola explained.
“And that’s why we went after the owner of the platform, thinking that in the court, the owner of the platform would produce the owner of the page. Nothing was produced because none existed.”
Since Meta was unable to produce any verifiable third party during the proceedings, the court was left with only two identifiable actors: the victim and the platform.
“If you say you’re an intermediary, there must be a third party. In this case, the third party did not exist. So if nobody else published it, then Meta published it,” Babalola said.
He also suggested that Meta had not implemented proper vetting processes when the page and account were opened, which is why the culprit could not be identified.
The court rejected Meta’s attempt to rely on intermediary immunity, concluding that the platform could not deny liability where it failed to identify the alleged publisher or demonstrate adequate safeguards.
Babalola describes the judgment as one of the most significant platform accountability rulings in Africa, even though he acknowledges that its facts are unusual.
“The idea that platforms have no liability because billions of people use them is no longer acceptable. You can limit liability, but you cannot deny it completely,” Babalola said.
For African users, the ruling opens a new pathway to holding platforms accountable, especially in cases involving anonymous accounts, fake pages, or delayed responses to reports. Babalola argues that platforms could reduce both abuse and legal exposure by making it clear that users who engage in cyberbullying or privacy violations will be identifiable.
“If platforms truly want to limit liability, they should stop shielding wrongdoers,” he said. “Freedom of expression has limits. Once you violate someone else’s rights, there must be consequences.”
While Babalola concedes that the judgment may not apply to every case, he says it has shifted the conversation in Africa’s courts: global tech platforms can no longer assume automatic immunity.