When Olayinka*, a fact-checker, was scanning Facebook for health-related misinformation, he noticed something odd. Two different wellness brands, Eco Puzzling and Laut Products, were running nearly identical ads.
He observed that both ads featured AI-generated visuals of public figures endorsing wellness brands: Laut Products, registered in India, and Eco Puzzling, which listed its address in Ajah, Nigeria. But while Meta swiftly removed Laut’s content, Eco Puzzling’s ad remained live for days, until the creator took it down themselves.
READ ALSO: AI-generated video falsely presented as Rihanna tribute to Diogo Jota


“I reported both accounts to Facebook, and I got no response. I also reported this year, but I never got any response. I noticed that Laut Products has stopped operating because it was targeting Europe, but Eco Puzzling is still operating because the website links are also operating,” Olayinka told The FactCheckHub.

It wasn’t the first time he had seen this pattern. And he’s not alone. Across Africa, digital rights advocates and fact-checkers have raised growing concerns about what they see as Meta’s gap in moderating AI-generated misinformation.
While the tech giant claims to be investing heavily in automated detection tools and human reviewers, African users say they are being left behind, with slower content takedowns, ignored flags, and scams left to circulate widely.
A review by The FactCheckHub revealed that both Eco Puzzling and Laut Products were inactive, with their last posts dating back to 2023 and 2022, respectively. Still, their content was resurfacing, often targeted at distinct audiences. Laut Products catered to a non-Nigerian base, while Eco Puzzling’s ads were directed at Nigerian users.

This moderation gap is becoming more dangerous as AI-generated content floods platforms like Facebook and Instagram, often impersonating trusted figures and institutions.
The discrepancy highlights a wider pattern of weak content moderation across Africa, where AI-generated misinformation and scam ads often go unchecked for extended periods.
Digital rights advocates say Meta’s approach points to a global moderation gap, where harmful content aimed at African users receives less scrutiny and slower response times.
Take, for instance, a Facebook video ad The FactCheckHub spotted in May (archived here). It showed a TVC News anchor endorsing a trading platform, Kai Traders, while citing CBex, a platform that had already collapsed months earlier, as a reason to trust Kai Traders. The presenter’s face and voice seemed real, but a closer look revealed signs of AI generation: unnatural eye movement, misaligned lip-syncing, and distorted transitions between frames. The FactCheckHub identified the presenter as a synthetic likeness of Olamide Odekunle.
The video, posted on May 22, garnered over 18,000 views. Comments beneath it quickly identified the video as fake and flagged it as a financial scam. Yet, as of mid-July, the post remained online, unflagged by Meta’s moderation systems. It is pertinent to note that earlier in the same month, TVC had launched its first AI news anchors as a new method of telling stories.

When The FactCheckHub reached out to Odekunle, she noted that she did not give her consent for her likeness to be used in the video in question.
“As you know, I am one of the AI anchors for TVC news, and when we launched this feat, there was immense pushback because of the fears surrounding AI. Please note that I did not give my consent to be used for the video [ad video] in question,” she stated.
In another instance, a video (archived here) featuring Tony Elumelu, the CEO of the United Bank for Africa, circulated on Facebook in late May. In the clip, Elumelu appeared to be endorsing a government investment scheme that allegedly offered ₦1 million in weekly returns on a ₦400,000 deposit, a promise too good to be true. The voice and mannerisms were synthetic, a product of AI voice cloning and facial synthesis. Yet, despite hundreds of comments calling it a scam and the link in the post returning an error page, the paid ad remained live.

The FactCheckHub and some other fact-checking organisations have previously flagged dozens of similar AI-generated ads, especially ones using the likenesses of public figures, influencers, celebrities, and government officials to promote fake investment platforms and crypto scams. Meta’s failure to act on many of these, even after reports, highlights a persistent accountability gap.
READ ALSO: How Meta’s policy change may boost fake death hoaxes on Facebook
The issue isn’t limited to financial scams
Beyond financial scams, AI-manipulated content is also being used to spread dangerous health misinformation, amplifying false medical claims under the guise of credibility. These doctored videos often feature trusted media personalities or fabricated experts, misleading vulnerable audiences.
In 2023, The FactCheckHub investigated a viral video claiming a Nigerian doctor had discovered a cure for high blood pressure. Findings revealed the footage was manipulated using AI, falsely featuring Kayode Okikiolu, a Channels TV presenter, and a fabricated testimonial.
Okikiolu said he first became aware of the manipulated content when a colleague sent him a link. According to him, the impersonation began with sex enhancement drugs but quickly spread to include hypertension treatments and even Ponzi schemes.
“It was a colleague that sent me a link to one of the posts. She said Kayode, have you seen this? She thought I had signed a deal or contract to advertise some sex enhancement drugs, and it graduated to hypertension drugs and then to games and Ponzi schemes. Quite a number of different schemes.
“A top government official called me and said ‘Kayode, I saw this thing you did about the hypertension drug,’ and he wanted it. That was when I discovered the hypertension one,” he said.
Just days after the death of former President Muhammadu Buhari on July 13, 2025, a fake X post surfaced online, allegedly from his widow, Aisha Buhari. The post falsely claimed that the former president had actually died in 2017 and that Nigerians had been deceived ever since. Despite several users calling out the post as fake in the comments, it remained visible.
The FactCheckHub’s investigation revealed that the screenshot of the post was generated by AI. The profile picture and username did not match any real accounts associated with Aisha Buhari. Searches across X, Facebook, and Instagram yielded no official pages linked to her — only parody accounts. Nonetheless, the manipulated image spread quickly, stoking confusion during an already sensitive national moment.
Experts say this failure to promptly remove harmful AI-generated misinformation is a systemic problem. While Meta says its moderation systems are improving and that it has invested in AI detection tools, critics argue these tools are not sufficiently adapted to African languages, cultural nuances, or political dynamics.
Reflecting on the first time he saw the manipulated videos, Okikiolu described a wave of confusion, fear, and anger.
“It was scary at first, and I was trying to ask myself if I said it. It looks and sounds like me. So maybe I actually said or did this. And it turned to anger that this is unfair, then someone mentioned at least people are seeing your face, there’s nothing like bad publicity, so I was almost thinking maybe it’s not bad,” Okikiolu noted.
The emotional toll deepened as he grappled with the unfairness of being used to mislead others.
“I was more afraid at some point cause you had people saying, I saw what you advertised, I tried it, tried the game, and they almost want to blame you, particularly the ones that were cheated or swindled by it.”
Speaking with The FactCheckHub, the executive director of Human Rights Journalists Network of Nigeria, Kehinde Adegboyega, stated that Meta faced backlash for scaling back fact-checking in Africa, replacing verified systems with crowdsourced notes, despite warnings that this would weaken the fight against misinformation in countries like Nigeria and Kenya.
“In many African countries, Meta had already begun phasing out its third-party fact-checking partnerships in favour of a crowdsourced ‘Community Notes’ system. That shift echoed a broader decision announced in early 2025, which experts and local coalitions warned would drastically weaken the region’s ability to counter misinformation.
In contrast, Meta’s operations in the Global North and countries like South Africa have remained comparatively robust,” Adegboyega stated.
He added that “Ahead of South Africa’s 2024 elections, Meta launched a dedicated Election Operations Centre, partnered closely with the Electoral Commission of South Africa (IEC), and backed multilingual fact-checking efforts with teams handling languages like English, Zulu, Sotho, Afrikaans, and Setswana. That infrastructure was complemented by digital literacy programmes and local civil society outreach.”
In many cases, the only barrier standing between users and digital deception is local fact-checkers and civil society groups, which are under-resourced, understaffed, and often ignored by the platforms they monitor.
Olayinka observed a noticeable gap in how Meta handles flagged content across regions. “There’s a clear disparity. Content flagged in Europe or the U.S. often gets reviewed and removed within hours. But here, we wait days or weeks, if anything happens at all.
“It often feels like Meta follows stricter laws in Europe, while in Africa, there’s little accountability for the digital manipulation happening on its platforms. I came across a report about scammers using Facebook, and it showed how authorities in Europe were able to communicate with Meta to get those pages taken down. But in Africa, it’s a constant struggle; reaching Meta or getting a response about harmful content targeting our region is incredibly difficult,” Olayinka said.
He added that the only content he sees flagged on Facebook is content reviewed by third-party fact-checkers in partnership with Meta through the third-party fact-checking programme.
Okikiolu shares the same stance. He escalated the issue to his legal team, but what followed was a frustrating and disheartening journey: he reached out to the platforms, only to be met with silence or slow responses.
“A lot of them don’t have fact-checking departments and mostly have to rely on third-party fact-checking platforms. At least a lot of my videos were debunked or flagged as fake by some of these third-party fact-checking platforms. Facebook picked that and flagged some of them, but there were dozens more videos that they didn’t do anything and I still get these videos sent to me,” he stated.
This underscores Meta’s dependence on external partners for content moderation, especially in regions like Africa, where enforcement from the platform itself is often inconsistent. Yet, despite this reliance, Meta has begun phasing out these partnerships, starting with the U.S., signaling plans to eventually discontinue its Third-Party Fact-Checking Programme globally.
Meta goes against its rules in Africa
According to Meta’s own AI Terms of Service, users are prohibited from using its AI tools to create or promote harmful, deceptive, or misleading content. Specifically, the terms ban:
- Generating or disseminating content that causes emotional harm or discrimination.
- Using AI to mislead, including scams, phishing, and disinformation.
- Violating others’ rights, including privacy and intellectual property.
Meta also outlines in its Community Standards and its Transparency Center policies on misinformation that it prohibits fake accounts, fraud, and coordinated inauthentic behavior, practices often tied to AI-driven misinformation.
With regard to manipulated media, Meta says it will remove AI-generated videos that falsely depict someone saying or doing things they never did, especially when such edits aren’t obvious to the average viewer. This includes technical deepfakes that blend or superimpose content in misleading ways.
However, a review of several AI-manipulated videos spreading scams or health misinformation on Facebook, many of which violate these terms, shows they remain online and unflagged. This raises concerns over Meta’s uneven enforcement, particularly in African countries where moderation infrastructure has been deprioritised or dismantled.
READ ALSO: How Facebook Ads fuel phishing scams in Nigeria
Adegboyega stated that Meta’s response to AI-driven misinformation in Africa has been inconsistent and reactive, unlike its more robust approach in the U.S. and Europe.
He noted that while the platform showed some readiness during South Africa’s elections, its withdrawal of funding and moderation in other regions has allowed harmful content to spread unchecked.
“Where Meta withdrew funding or outsourced moderation, misinformation in African languages and contexts slipped through the cracks, with real consequences during elections, conflicts, and social unrest. To protect African users equally, Meta must rebuild fact-checking partnerships, invest in language-aware moderation, activate proactive election-centre contingency, improve labour conditions, and open its process to local oversight and transparency,” Adegboyega concluded.
This report was produced with support from the Centre for Journalism Innovation and Development (CJID) and Luminate.
Seasoned fact-checker and researcher Fatimah Quadri has written numerous fact-checks, explainers, and media literacy pieces for The FactCheckHub in an effort to combat information disorder. She can be reached at sunmibola_q on X or fquadri@icirnigeria.org.


