Steve Rathje's Findings Suggest Twitter's Algorithm Is More Likely To Help Right-wing Content Go Viral

JAKARTA - A post on the Twitter blog revealed that Twitter's algorithms promote right-leaning content more often than left-leaning content, though the reason remains unclear. These findings come from an internal study of the algorithmic amplification of political content on Twitter.

During the study, Twitter looked at millions of tweets posted between April 1 and August 15, 2020. These tweets came from news outlets and elected officials in Canada, France, Germany, Japan, Spain, the UK, and the US.

In all countries studied, except for Germany, Twitter found that right-leaning accounts “receive more algorithmic amplification than political leftists.” It also found that right-leaning content from news outlets benefited from the same bias.

“Negative posts about political outgroups tend to receive more engagement on Facebook and Twitter”

Twitter says it doesn't know why the data shows its algorithm supports right-leaning content, noting that it's a "much harder question to answer because it's a product of interactions between people and platforms."

However, the bias may not be specific to Twitter's algorithm, according to Steve Rathje, a Ph.D. candidate who studies social media. He published research explaining how divisive content about political outgroups is more likely to go viral.

"In our research, we were also interested in what types of content are amplified on social media and found a consistent trend: negative posts about political outsiders tend to receive more engagement on Facebook and Twitter," Rathje said as quoted by The Verge.

"In other words, if a Democrat makes negative comments about a Republican (or vice versa), this kind of content will usually receive more engagement," Rathje added.

Taking Rathje's research into account, this could mean that right-leaning posts on Twitter simply trigger more outrage and therefore more amplification. Perhaps Twitter's algorithm problem has more to do with promoting "toxic" tweets than with any particular political bias.

As mentioned earlier, Twitter research says that Germany is the only country that doesn't experience a right-leaning algorithm bias. This could be related to Germany's agreement with Facebook, Twitter and Google to remove hate speech within 24 hours. Some users even changed their country to Germany on Twitter to prevent Nazi images from appearing on the platform.

Twitter has been trying to change the way we tweet for a while now. In 2020, Twitter started testing a feature that alerts users when they're about to post a rude reply, and just this year, it started testing a message that pops up when it thinks you're about to get into a heated Twitter argument.

These are signs of just how much Twitter already knows about the problems of bullying and hateful posts on its platform.

Frances Haugen, the whistleblower who leaked internal documents from Facebook, claims that Facebook's algorithm favors hate speech and divisive content. Twitter could easily be in a similar position, but it chose to share some of its internal findings publicly rather than wait for a possible leak.

Rathje points to another study which found that moral outrage helped posts go viral from both liberal and conservative perspectives, but that the effect was stronger for content coming from conservatives.

He said that when it comes to features like algorithmic promotion leading to social media virality, "further research should be done to examine whether these features help explain the amplification of right-wing content on Twitter."

If the platform digs deeper into the issue and opens up access to outside researchers, it may be better positioned to tackle the divisive content at the heart of the problem.