Experts working in the disinformation space on Friday identified the need for more advanced methods, beyond the traditional fact-checking process, to curb the spread of misinformation.
The experts spoke at a webinar hosted by FactCheckHub as part of activities marking the organisation’s third anniversary.
The webinar, themed “The Impact of Artificial Intelligence and Deepfake Technologies on the Information Space”, featured Christopher Guess, lead technologist at the Duke Reporters’ Lab; Allan Cheboi, a senior investigations manager at Code for Africa; and David Ajikobi, Nigeria editor of Africa Check.
During the webinar, the editor of the International Centre for Investigative Reporting, Bamas Victoria, asked how fact-checkers could build the capacity to address increasingly sophisticated online misinformation in the future.
Responding to the question, Guess raised the need for fact-checkers to start investigating disinformation campaigns rather than individual pieces of disinformation content.
He said fact-checkers should examine the source of a disinformation campaign, the motive behind it, and who is funding it.
Guess also pointed to the need to incorporate investigative journalism into fact-checking, urging fact-checkers to combine fact-checking tools with investigative research to unravel disinformation campaigns.
“Collaboration between fact-checkers and investigative journalists is going to be the next big win,” he added.
Giving tips on how to manually identify AI-generated images, Guess highlighted that a common giveaway is the way some parts of the body look distorted in the images.
He stated that AI-generated images are difficult to detect because they are generated from complete randomness.
“One way to identify AI images is to look at the hands, ears, shadows and edges. The fingers may look twisted, or there may be three or four fingers instead of five. The ears are often missing. These are all telltale signs for the next few years.”
“One of the kickers is that, as technology moves forward, a lot of these problems will be fixed,” he said.
He added that looking at the text, language, and characters, which are often gibberish, is also a way to tell an AI-generated image.
While sharing tools and techniques that can be used to fact-check AI-generated images, he stated that regular journalism techniques need to be deployed.
“The thing about artificial intelligence and AI-generated images and texts is that you have to do regular journalism; you have to treat it just like it’s a politician that is lying to you but is an actual human being talking.
“Assume and act like it’s fake. Regular journalism and fact-checking techniques: that is what you want to do. Anything that is purported to be real… how old is it? Has it appeared on the internet before? Was the person where it says they were? Does it make sense? Treat it like regular reporting and check it out,” Guess emphasised.
On his part, Cheboi charged fact-checking organisations to develop technology that makes it easier to identify the behaviours and narratives around disinformation content, as well as the actors behind its dissemination.
“When we focus on content alone, we will never beat the bad guys. Now that AI is here, they will keep churning out [false] contents that are very convincing, but you will never know,” he said.
He urged stakeholders to tackle disinformation holistically by looking at the ABC template: Actors, Behaviour and Content.
He explained how disinformation actors use emojis, deliberate misspellings and other means to evade detection, adding that technology and research are needed to fish out the actors and unravel the source of a disinformation campaign.
Cheboi also highlighted the struggle social media platforms face in containing toxic content, such as disinformation and hate speech, across Africa, largely because the narratives are couched in local languages.
“In Africa, we have more than 2,000 languages. In Nigeria alone, the number of languages can’t be estimated. In Kenya, we have about 42 languages. For the platforms to detect content that is created in all these languages, it’s really hard. That is why a lot of investment is being put into AI to enable us to do that.”
Citing examples of how AI can be used in disinformation campaigns, he shared videos created by an extremist group featuring fictitious people posing as American pan-Africanists in support of what was going on in their countries, while highlighting the influx of AI-generated content in Africa.
“We’re talking about AI, and you know, maybe people imagine that in Africa, no one really produces AI to produce content that is divisive, content that at the end of the day would be polarising to citizens, but it is here, it is being used already, and that is the reason we are having this conversation,” Cheboi said.
Guess further stated that disinformation experts need to start discussing policy, advocating and pushing for the criminalisation of misinformation, while Cheboi emphasised the need for social media platforms to invest more in the African region to counter disinformation, hate speech and polarising content on their platforms.
You can watch the full video of the webinar here.