AI has become the bedrock of much of the misinformation being debunked by fact-checkers today.
From artificially generated images to deepfake audio and videos, as well as subtle image and video manipulations, AI tools have made creating convincing fakes easier than ever.
This shift has raised concerns in the fact-checking community. While the world embraces AI for its potential in health, education, and other knowledge work, misinformation fueled by AI spreads faster, appears more credible, and is more difficult to detect. A fake photo of a protest, an AI-cloned politician’s voice, or a fabricated news broadcast can now travel globally within minutes.
READ ALSO: Five ways to spot AI-generated videos
While many observers suggest that AI has made it easier to create convincing fakes that spread quickly and appear believable, others point out that the same technology is also being adopted in useful ways across different sectors, from journalists using AI for transcription and research, to legal professionals sorting documents, designers experimenting with generative tools, and health workers applying AI in diagnostics.
For fact-checkers, the dilemma seems to be that the technology driving new forms of misinformation is the same one being explored as a possible solution, through tools designed to detect deepfakes and verify manipulated content. The story is still unfolding, with debates ongoing about whether AI will ultimately prove to be a greater threat or an ally in the fight against misinformation.
To better understand these complexities, The FactCheckHub spoke with fact-checkers, information disorder analysts, AI ethics researchers, and policy advocates, who shared their perspectives on how the industry is grappling with these challenges.
Editor of Cable Check, Ebunoluwa Olafusi, said that one of the major challenges with AI is bias.
“I would say AI bias is one of the major problems, as you can be fed with what has been programmed into the AI system, which in turn reinforces biases and stereotypes, leading to more misinformation,” she noted.
Limitations of AI in verification
An information disorder analyst and researcher, Ahmad Aluko, noted that many open-source AI tools often provide inaccurate data, especially when it comes to research references. According to him, AI sometimes misattributes citations, cites the wrong author, or overlooks multiple contributors, though he noted that it could, in contrast, be used for social media monitoring to source claims.
“A lot of these tools, like CrowdTangle and Meltwater, have AI functionalities embedded in them. So it is these AI tools that crawl across social media pages and find the relevant searches of your keywords. Yes, AI tools are very, very good [with] that because if you say you want to do that manually, how many handles are you going to search?”
Aluko explained that AI tools should not be relied upon for verification workflows, as they often fall short on accuracy. Instead, he suggested AI is better applied to monitoring substantive claims and social media activity. For verification, he emphasised the importance of independent research and the use of alternative tools outside AI.
This was also echoed by Olafusi as she stressed the importance of transparency in declaring when AI tools are used in fact-checking or reporting, while also emphasising that editorial oversight remains essential for providing context, cultural nuance, and ethical guidance that AI alone cannot deliver.
“AI can’t do all the work of a journalist or fact-checker. There’s always the place of editorial oversight, where you provide context, link events, and explain cultural nuances. AI may be unable to determine the sequence of events based on a narrative. It may be unable to relate how one event leads to another, especially in local contexts,” she pointed out.
READ MORE: Navigating information space: experts debate the role of community notes in fact-checking
Aluko also warned that AI tools often struggle to capture African and Nigerian contexts, sometimes labelling local perspectives as wrong while privileging others. This, he said, raises concerns about fairness and representation in fact-checking and risks reinforcing biases rather than addressing misinformation within cultural contexts.
“In Africa or the Nigerian context, there are slang terms, and there are some narratives when you are doing sentiment analysis with AI tools. They don’t see that as wrong. So context is very important,” Aluko stated.
He stressed that because AI is developed and trained by people, it inevitably inherits human biases, which can distort representation and context.
Beyond individual perspectives, the debate reflects a wider concern within the fact-checking community about how AI tools are integrated into existing workflows. While their potential is widely acknowledged, questions remain about accuracy, contextual relevance, and the risk of over-reliance without proper safeguards.
Where AI can still support workflows
Building on this, Aluko suggested that while AI tools already exist that can listen and interpret speech, there is a need for tools specifically designed to extract verifiable claims from conversations. He argued, however, that AI should not replace the verification process itself, since that remains the core responsibility of fact-checkers. Instead, AI could be more useful in supporting workflows, such as scanning and monitoring across different social media platforms to identify claims that require verification.
“There are already AI tools that can listen and can decode whatever you are saying. But there should be AI tools that can pull out the claims from the discussion. If AI tools can do verification for me, then why am I being paid to be a fact-checker?” Aluko asked.
He expressed concern that the proliferation of open-source AI tools has led to misuse, suggesting that some form of regulation may be necessary.
While concerns about misuse and the need for regulation persist, some fact-checkers acknowledge that AI still has practical benefits when applied to specific tasks.
Senior researcher and fact-checker at The FactCheckHub, Nurudeen Akewushola, noted that AI can be useful in verification, particularly for translation and transcription when a claim is embedded in a video or audio.
“AI tools are helpful for basic tasks like translation and transcription. For example, if a claim is made in a video or an audio recording, these tools can help us convert it into text or translate it into English so that it’s easier to verify,” he said.
But he added that one key limitation is AI’s struggle with accuracy in fast-changing news cycles, where timely verification requires context and judgment that the tools often lack.
Ethical safeguards and global standards
Speaking with The FactCheckHub, IFCN Director Angie Holan emphasised that ethical use of AI in fact-checking requires strict safeguards. She explained that fact-checkers must prioritise preventing AI “hallucinations” or errors from entering their work, while maintaining high standards of accuracy, diverse sourcing, and transparency so findings can be replicated. Holan added that it is equally important to disclose AI tool usage to audiences and guard against bias that could undermine fairness.
While highlighting that its existing Code of Principles already emphasises transparency of methodology, she noted that the IFCN is evaluating whether AI adoption requires more specific guidance.
“Key areas under consideration include disclosure requirements when AI tools are used, maintaining transparency about the limitations of automated systems, and ensuring AI assistance doesn’t compromise our commitment to rigorous sourcing and verification.
“Fact-checkers are actively working on these issues in a variety of contexts, and we are developing our own expertise as a community,” Holan stated.
On its approach to AI in verification, the IFCN explained that it is drafting guidance for members, stressing that AI should support rather than replace human judgment. The leadership of the network noted that fact-checkers are already experimenting with AI for tasks such as scanning content, identifying claims in large datasets, and responding to audience questions using databases of verified fact-checks.
READ: ‘Undress her’: How generative AI tools are used to violate human rights
“There should always be ‘humans in the loop’ in these processes, with fact-checking journalists maintaining editorial control,” Holan stated.
Here, Aluko and Olafusi’s point on AI being better suited for support tasks, rather than direct verification, echoes Holan’s emphasis that AI should enhance human judgment, not replace it. Both highlight the risks of over-reliance and the need for human oversight to preserve accuracy and contextual integrity.
Building safeguards and partnerships
To guard against over-reliance on technology, experts emphasise the need for safeguards that clearly define where human oversight is mandatory, with training programmes ensuring fact-checkers retain their core verification skills and know when human judgment must prevail over AI outputs.
“Organisations should establish clear guidance for their staff to define where human oversight is mandatory and to ensure staff maintain core verification skills independent of technological assistance. Training programs should emphasise when to trust AI insights and when human judgment must prevail,” Holan added.
Akewushola also stressed the urgent need for continuous training on how and why to use AI tools responsibly, noting that data shared online leaves permanent digital footprints that can be exploited. Also, building awareness and confidence in these systems is essential to prevent long-term risks and misuse.
“We must keep training ourselves on AI; otherwise, these tools could be misused, and the data we share online may expose us to risks,” Akewushola noted.
Olafusi stressed that regulators, civil society, and tech developers all have a role in ensuring responsible AI adoption in the media space, noting that while regulators and civil groups should prioritise media literacy, big tech companies must design AI tools transparently and with minimal bias.
“They should be decisive about combating disinformation by taking decisive actions against sites and accounts that are known to spread disinformation on social media and have been flagged; instead of promoting their ads, their visibility should be limited or they should be outrightly suspended until they adhere to rules and regulations,” she concluded.
At the same time, partnerships are being encouraged, with IFCN using its network meetings and GlobalFact summit to create spaces where fact-checkers can share their needs with developers, rather than adapting to tools designed for other contexts.
This report was produced with support from the Centre for Journalism Innovation and Development (CJID) and Luminate.
Seasoned fact-checker and researcher Fatimah Quadri has written numerous fact-checks, explainers, and media literacy pieces for The FactCheckHub in an effort to combat information disorder. She can be reached at sunmibola_q on X or fquadri@icirnigeria.org.