A new report by the Center for Countering Digital Hate (CCDH) has warned that the impact of Meta’s policy changes could be significant, with as much as 97% of enforcement actions in key areas such as hate speech potentially being discontinued.
The report notes that this shift could lead to an estimated 277 million pieces of harmful content spreading unchecked each year.
Meta’s recent overhaul of its content moderation policies has raised critical questions about the safety of its platforms, the spread of harmful content, and the company’s commitment to combating misinformation.
The policy changes, announced on January 7, 2025, include halting proactive enforcement of several policies on harmful content, reducing content demotion, eliminating independent fact-checking, and revising policies on hate speech, gender identity, and immigration.
One of the most pressing concerns raised in the report is Meta’s lack of clarity on which policies will no longer receive proactive enforcement. While the company has stated that it will continue enforcing rules against terrorism, child sexual exploitation, fraud, and drug-related content, it has not explicitly confirmed whether areas such as hate speech, violence and incitement, and self-harm will still be proactively moderated.
The report states that the vast majority of Meta’s enforcement actions were previously proactive, meaning that if this system is dismantled, enforcement will rely almost entirely on user reports, which have historically resulted in far fewer actions against harmful content.
The CCDH argues that Meta must clarify the scope of this policy shift and how it plans to mitigate the risks associated with it.
The decision to demote less content that “might violate our standards” has also drawn scrutiny. Mark Zuckerberg had previously stated that limiting the reach of borderline content was an effective way to curb misinformation and prevent the spread of divisive narratives. However, Meta is now abandoning this approach without explaining why it is reversing a strategy it once described as successful.
The CCDH report questions what assessment Meta has made regarding the potential increase in misinformation and harmful content if this measure is discontinued.
Further controversy surrounds Meta’s decision to drop policies on content related to immigration, gender identity, and race. Leaked internal moderation guidelines reveal that statements such as “Black people are more violent than Whites” and “Trans people aren’t real” will now be allowed under the new rules.
The CCDH also questions whether Meta conducted any risk assessment on the impact of these changes, particularly for marginalized communities who may face increased online harassment, and asks whether the company engaged with affected groups before implementing these policy shifts.
The replacement of independent fact-checking with a crowdsourced “Community Notes” system has raised additional concerns.
The CCDH report notes that studies have found Community Notes ineffective in addressing divisive misinformation, particularly during elections or public health crises. Because a note is only displayed once raters from differing perspectives agree on it, the system often fails to reach consensus on controversial topics, leaving misinformation unchecked.
In addition, the report questions how Meta intends to address these known weaknesses, especially given its past commitments to tackling election-related disinformation.
Meta’s decision to reverse its previous policy of limiting civic content in users’ feeds has also raised questions. In 2021, the company justified reducing the visibility of political content by citing user feedback that it contributed to a negative experience.
However, the new policy will treat political content like any other, potentially amplifying misinformation and divisive rhetoric.
The CCDH report asks why Meta is making this change despite its own past research showing that civic content was more likely to be rated negatively by users.
Another key question revolves around Meta’s plan to relocate its trust and safety teams from California to Texas. Zuckerberg had suggested that this move would reduce concerns about bias in content moderation, but the CCDH report notes that Meta already has major content moderation operations in Texas.
It questions whether this move will result in a reduction of trust and safety staffing, particularly as the company shifts away from proactive enforcement.
CCDH argues that Meta has not provided adequate explanations for these sweeping changes, nor has it outlined a clear plan to prevent an increase in harmful content.
The report calls on legislators, regulators, journalists, and civil society to press Meta for answers on the real-world consequences of these policy shifts, urging greater transparency from a company that continues to shape global discourse.
Nurudeen Akewushola is a fact-checker with FactCheckHub. He has authored several fact checks which have contributed to the fight against information disorder. You can reach him via nyahaya@icirnigeria.org and on Twitter @NurudeenAkewus1.