Stakeholders at the just-concluded GlobalFact 11 conference have criticised social media platforms for their insufficient efforts in combating misinformation and disinformation, despite the platforms' financial support for the fact-checking cause globally.
They include participants from academia, the media, the tech industry and the global fact-checking community who attended the annual fact-checking conference hosted by the International Fact-Checking Network (IFCN) in Sarajevo, Bosnia and Herzegovina.
The three-day event underscored the complex relationship between fact-checkers and tech companies, highlighting the urgent need for greater transparency and collaboration to address the pervasive spread of false information on social media.
READ: GlobalFact 11: ‘Now is the time to do something’ – Ressa tells tech giants
On one hand, the tech companies often provide considerable monetary support to fact-checking organizations. They also host the monitoring tools fact-checkers use to track misinformation and disinformation.
But fact-checkers say the companies do not do enough to stem the flow of misinformation and disinformation on their platforms. Moderation decisions and algorithms are shrouded in secrecy, and fact-checkers struggle to do their jobs effectively under a barrage of falsehoods and harassment.
TikTok and Meta representatives held panels at the event on June 26 and 27, respectively, to answer questions and address concerns from the fact-checking community.
During a panel session with TikTok representatives, Tommaso Canetta, the deputy director of Pagella Politica and Facta.news, discussed Georgia’s new “foreign agent” law, which classifies organizations that receive foreign funding, such as NGOs and some media outlets, as foreign agents. Before the law’s approval, disinformation about the bill proliferated on TikTok, leading to significant harassment of Georgian fact-checkers.
Canetta noted that TikTok’s policy primarily focuses on removing misinformation rather than providing additional information and context. This approach raises freedom of speech concerns. Fact-checkers argue that offering users more context is more effective in combating and preventing misinformation than simply removing the misleading content.
Lorenzo Andreozzi, TikTok’s T&S Integrity and Authenticity Regional Programme Lead, mentioned that the company has enforcement mechanisms beyond content removal, such as labeling unverified content. He added that TikTok is conducting more internal tests with these labels but did not provide further details, including the timeline for rolling out this initiative to users.
In a panel with Meta, the moderator, Faktograf executive director Ana Brakus, questioned representatives about the company’s decision to de-emphasize news content on users’ pages amid widespread misinformation. Meta public policy manager Lara Levet explained that the decision is aimed at giving users more control over their feeds.
“We’ve received quite a lot of feedback from our users on the kind of content that they do and don’t want to see more or less of and it’s not quite news content, but it’s political content that at large, we have gotten feedback that people want to see less of.
“Meta products are rooted in personalization, so if a user wants to see less political content, they have the user controls to do that,” Levet said.
Several fact-checkers reported that journalistic content is often mistakenly flagged as misinformation on social media platforms. Konkret24 journalist Gabriela Sieczkowska noted that Palestinian fact-checkers’ content has been mislabeled as harmful on Instagram.
Similarly, Belarusian fact-checkers have encountered issues on TikTok, where the Belarus Investigative Center had multiple videos incorrectly flagged as disinformation or violence/terrorism because they referenced the original falsehoods they were debunking.
Andreozzi apologized for any errors TikTok had made in mislabeling fact-checkers’ content and urged fact-checkers to report such cases.
“There is also a way to request a second review, you can appeal a decision that a moderator took, so the content goes back to us, and we can have a secondary assessment of the content,” Andreozzi said.
Many fact-checking organizations heavily rely on funding from tech platforms, complicating their relationships.
Filipe Pardal, director of operations at Polígrafo, mentioned that 85% of its revenue came from platform partnerships last year. Polígrafo has since reduced this dependency to 50-60% and aims to bring it down to 30%.
This reliance is common. At a panel on funding independent fact-checking organizations, representatives shared similar figures. Rakesh Dubbudu, founder and CEO of Factly Media & Research, stated that 70% of their revenue comes from platform partnerships. Giovanni Zagni, director of Pagella Politica and Facta.news, reported a similar dependency, with 60-65% of their revenue from these partnerships.
The rise of artificial intelligence has caused concern among fact-checkers. During panels with Meta and TikTok, moderators sought assurance from representatives about their commitment to fact-checking programmes and human involvement in combating misinformation. Both companies confirmed their use of AI to identify policy-violating content.
However, Tom Bonsundy-O’Bryan, Meta’s head of misinformation policy for Europe, Middle East, and Africa, emphasized that the critical task of investigating claims and determining truth still relies on human fact-checkers.
“We use AI on misinformation in a different way, as you know, to help surface content based on human signals, based on technology-driven signals that could be misinformation, so the fact-checkers can then go and do 90% of the job of working out, is this misinfo or not? It is absolutely not, unequivocally, substituting for fact-checkers,” Bonsundy-O’Bryan said.
READ: Fact-checking essential to free speech, not censorship – IFCN
TikTok representatives noted that they could not guarantee that humans, rather than AI, would always have the final say in reviewing potentially problematic content.
However, Jakub Olek, TikTok’s government relations and public policy director for the Nordics and Central Europe, suggested that current moderation practices might offer insight into future approaches for fact-checkers.
“Ninety-eight percent of the content that is being removed before anyone sees it — it’s exactly because the AI is doing the moderation under clear situations, whether it’s violence, hate, nudity, etc. But whenever there’s this gray zone, those come to the human moderators, and they are moderating in local languages,” Olek said.
YouTube representatives were absent from GlobalFact 11, despite fact-checkers seeking opportunities to pose questions to them. And although TikTok and Meta hosted panels at the event, some attendees felt that the companies’ representatives did not offer any new information.
During panels with the tech representatives, the contrast in priorities between fact-checkers and tech firms was evident.
Seasoned fact-checker and researcher Fatimah Quadri has written numerous fact-checks, explainers, and media literacy pieces for The FactCheckHub in an effort to combat information disorder. She can be reached at sunmibola_q on X or [email protected].