Researchers recommend ways to reduce viral misinformation on Twitter


RESEARCHERS at the University of Washington Center for an Informed Public have recommended steps that could reduce viral misinformation on Twitter.

A study that examined millions of tweets found several measures to be effective in curtailing the spread of false information on the microblogging platform.

The study’s findings, published in the journal Nature, revealed that combining multiple measures, including de-platforming repeat misinformation offenders, removing false claims, and warning people about posts that contain false information, could reduce the volume of misinformation on Twitter by 53.4%.


The researchers assessed 23 million tweets related to the 2020 presidential election in the United States, posted between September 1 and December 15, 2020.

Each of these tweets was connected to at least one of 544 viral events identified by the researchers.

Using a model built from this data, the researchers were able to determine which measures Twitter could apply to curtail the spread of misinformation on its platform.


According to Jevin West, a co-author of the study, associate professor at the University of Washington Information School and director of the Center, adopting any one of those measures could be effective, but with diminishing returns.

The study found that combining multiple measures would bring about significant improvement in the results.

Identifying de-platforming and removal of misinformation as the most effective measures, the study suggests that Twitter implement a three-strike rule.

“We should take that one [de-platforming] serious, especially with discussions of free speech,” West said.

He noted that using Twitter’s algorithm to make posts or accounts spreading false information less visible on the platform would be effective.

Twitter, like other social media sites, has spent the past two years trying to stop false information about the 2020 US presidential election and COVID-19 from spreading on its platform.

Twitter already takes some measures related to this, including making tweets from offending accounts ineligible for recommendation, preventing offending posts from showing up in search, and moving replies from offending accounts to a lower position in conversations, according to Twitter’s policy page.


The study also references nudges: warnings and tags placed on tweets advising people that a post contains false information. Twitter has used these extensively throughout the COVID-19 pandemic to address misinformation about the virus, treatments and vaccines.

When asked for comment, a Twitter spokesperson said many of the measures explored in the study are already part of its misinformation policies. They also pointed to the company’s “How we address misinformation on Twitter” page.

According to West, the researchers chose Twitter first because it was the easiest platform to gather data from.

He said the next step would be to use the model on other platforms like Facebook.

