Experts call for restructuring of social media algorithms to prioritize users' well-being

A new report from the Knight-Georgetown Institute (KGI) in the United States has called for a fundamental shift in how social media and online platforms design their content recommendation algorithms.

The report, titled Better Feeds: Algorithms That Put People First, warns that current engagement-driven systems fuel misinformation, promote divisive content, and compromise social media users' well-being. Instead of optimizing for clicks, likes, and time spent on platforms, the authors advocate a model that prioritizes long-term user satisfaction and societal benefits.

The report, by the Washington D.C.-based institute, highlights how major online platforms, including social media giants such as Meta, TikTok, and X (formerly Twitter), rely on recommendation algorithms that maximize engagement at the expense of users' welfare.

These systems prioritize content that generates immediate reactions, such as outrage, controversy, or sensationalism, without considering the long-term effects on individuals and communities.

The report states that “Maximizing the chances that users will click, like, share, and view content this week, this month, and this quarter aligns well with the business interests of tech platforms monetized through advertising.”

It stresses that this short-term focus can contribute to “the spread of low-quality or harmful information, reduced user satisfaction, problematic overuse, and increased polarization.”

To reform recommender systems, the report proposes a three-pronged framework. First, it calls for platforms to publicly disclose how their algorithms rank content, including data sources, weighting methods, and the metrics used to evaluate long-term user value.

Second, it suggests that users should be given the ability to switch between different recommendation models, with minors automatically receiving content designed to promote well-being.

Third, it recommends that platforms conduct year-long studies to analyze the long-term effects of algorithmic changes and make these results public.

Instead of relying solely on engagement-based ranking, the report explores alternative models. Bridging systems, it notes, should prioritize content that fosters constructive dialogue and diverse perspectives rather than reinforcing echo chambers.

“Platforms must offer users an easily accessible choice of different recommender systems. At least one of these choices must be optimized to support long-term value to users,” the report recommends.

It also recommends that platforms collect direct feedback from users on content quality and satisfaction to refine recommendations, adding that systems should factor credibility, informativeness, and safety into content ranking rather than relying on engagement metrics alone.
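To make that distinction concrete, the sketch below shows one hypothetical way a ranker could blend engagement predictions with the quality signals the report names. It is an illustration only, not the report's design or any platform's actual system; every field name, weight, and function here is an assumption chosen for clarity.

```python
# Illustrative sketch only: a hypothetical ranking function that blends
# engagement signals with quality signals (credibility, informativeness,
# safety), as opposed to ranking on engagement alone. All names, fields,
# and weights are assumptions, not the report's or any platform's design.
from dataclasses import dataclass


@dataclass
class ContentItem:
    predicted_clicks: float   # engagement signal, 0..1
    predicted_shares: float   # engagement signal, 0..1
    credibility: float        # quality signal, 0..1 (e.g. source rating)
    informativeness: float    # quality signal, 0..1 (e.g. survey feedback)
    safety: float             # quality signal, 0..1 (1 = no policy concerns)


def engagement_only_score(item: ContentItem) -> float:
    """Score used by a purely engagement-optimized ranker."""
    return 0.6 * item.predicted_clicks + 0.4 * item.predicted_shares


def long_term_value_score(item: ContentItem) -> float:
    """Score that also weighs credibility, informativeness, and safety."""
    engagement = engagement_only_score(item)
    quality = (item.credibility + item.informativeness + item.safety) / 3
    # Hypothetical blend: down-weight raw engagement, reward quality signals.
    return 0.4 * engagement + 0.6 * quality


def rank(items: list[ContentItem], scorer) -> list[ContentItem]:
    """Return items ordered from highest to lowest score."""
    return sorted(items, key=scorer, reverse=True)
```

Under a blend like this, a sensational but low-credibility post can fall below a less clickable but more informative one, which is the kind of trade-off the report argues platforms should expose, measure, and report on.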

The findings come at a time when governments around the world are scrutinizing the role of algorithms in shaping public discourse.

The European Union’s Digital Services Act (DSA) has already implemented transparency and risk assessment requirements for online platforms, and several U.S. states have passed laws targeting social media algorithms that disproportionately affect minors.

Despite legislative interest, many regulatory efforts have been challenged in court on First Amendment grounds. The KGI report suggests that instead of outright bans on personalization or chronological feeds, a more nuanced approach is needed—one that encourages responsible algorithmic design while preserving user experience.

The report emphasizes that the goal is not to eliminate recommender systems but to refine them to serve users and society better. 

“By following this expert working group’s guidance summarized below, platforms and policymakers can help to address the harms associated with recommender systems while preserving their potential to enhance user experiences and societal value.

“This report serves as a roadmap for any policymaker or product designer interested in promoting algorithmic systems that put users’ long-term interests front and centre,” the report concludes.

Nurudeen Akewushola is a fact-checker with FactCheckHub. He has authored several fact checks which have contributed to the fight against information disorder. You can reach him via [email protected] and @NurudeenAkewus1 via Twitter.
