Twitter’s algorithm favours the political right, a recent study finds

If you’re a Twitter user, you’ll know that when you scroll through your home feed, in between posts from accounts you follow, you’ll sometimes see tweets tagged “you might like”. In other words, Twitter is recommending content it thinks may appeal to you.

These recommendations are generated by an algorithm based on your past activity on the platform, such as the tweets you have liked or engaged with. The algorithm may also draw on the preferences set in your profile, where you can indicate topics you would like to see in your feed. “Machine learning” is used to learn automatically from these signals and to apply what has been learned to content the system hasn’t seen before.

As more and more technologies come to use machine learning, an associated challenge is bias, where an algorithm produces results that favour one set of outcomes or users over another, often reinforcing human prejudices. Twitter has on various occasions been accused of political bias, with politicians or commentators alleging Twitter’s algorithm amplifies their opponents’ voices, or silences their own.

In this climate, Twitter commissioned a study to understand whether its algorithm might be biased towards a certain political ideology. While Twitter publicised the findings of the research in 2021, the study has now been published in the peer-reviewed journal PNAS.

The study looked at a sample of 4% of all Twitter users who had been exposed to the algorithm (46,470,596 unique users). It also included a control group of 11,617,373 users who had never received any automatically recommended tweets in their feeds.

This wasn’t a manual study, whereby, say, the researchers recruited volunteers and asked them questions about their experiences. It wouldn’t have been possible to study such a large number of users that way. Instead, a computer model allowed the researchers to generate their findings.

Read more: Twitter’s ban on political ads does change the game in one way

The authors analysed the “algorithmic amplification” effect on tweets from 3,634 elected politicians from major political parties in seven countries with a large user base on Twitter: the US, Japan, the UK, France, Spain, Canada and Germany.

Algorithmic amplification refers to the extent to which a tweet is more likely to be seen on a regular, algorithmically curated Twitter feed than on a feed without automated recommendations.

So, for example, if the algorithmic amplification of a particular political group’s tweets was 100%, that group’s tweets were seen by twice as many users in feeds using the algorithm as in the feeds of the control group, who scrolled without automated recommendations.
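To make the arithmetic behind that definition concrete, here is a minimal Python sketch of how an amplification percentage could be computed from reach counts in the two groups. It is an illustration under assumptions, not the study’s actual code, and the function name and numbers are invented.

```python
# Illustrative sketch only: the study's real methodology (based on
# "linger impressions") is more involved. All names and figures here
# are hypothetical.

def amplification_pct(users_reached_algo: int, group_size_algo: int,
                      users_reached_control: int, group_size_control: int) -> float:
    """Percentage amplification of a set of tweets.

    Compares the share of users who saw the tweets in algorithmic feeds
    with the share who saw them in the control group (no recommendations).
    0% means equal reach; 100% means twice the reach under the algorithm.
    """
    share_algo = users_reached_algo / group_size_algo
    share_control = users_reached_control / group_size_control
    return (share_algo / share_control - 1) * 100


# Hypothetical example: tweets seen by 20% of users in algorithmic feeds
# but only 10% of the control group give 100% amplification.
print(amplification_pct(9_000_000, 45_000_000, 1_100_000, 11_000_000))  # 100.0
```

On this reading, the 167% figure reported below for Canada’s Conservatives would mean their tweets reached roughly 2.7 times as many users in algorithmic feeds as they did in the control group.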

The researchers computed amplification based on counting events called “linger impressions”. These events are registered every time at least 50% of the area of a tweet is visible for at least 0.5 seconds, and provide a good indication that a user has been exposed to a tweet.
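As an illustration of that definition, the sketch below shows how such an event might be detected from visibility measurements. The data structure and field names are assumptions made for this example; they are not Twitter’s internal telemetry format.

```python
# Hedged illustration: flags a "linger impression" as described above,
# i.e. at least 50% of a tweet's area visible for at least 0.5 seconds.
# The ViewportSample structure is an assumption made for this sketch.

from dataclasses import dataclass
from typing import List


@dataclass
class ViewportSample:
    timestamp: float          # seconds since the feed was opened
    visible_fraction: float   # fraction of the tweet's area on screen, 0.0 to 1.0


def is_linger_impression(samples: List[ViewportSample],
                         min_fraction: float = 0.5,
                         min_duration: float = 0.5) -> bool:
    """Return True if the tweet stayed at least min_fraction visible
    for an unbroken stretch of at least min_duration seconds."""
    run_start = None
    for sample in samples:
        if sample.visible_fraction >= min_fraction:
            if run_start is None:
                run_start = sample.timestamp
            if sample.timestamp - run_start >= min_duration:
                return True
        else:
            run_start = None
    return False


# Example: the tweet is at least half visible from t=1.0s to t=1.6s,
# so it counts as an exposure.
samples = [ViewportSample(0.9, 0.2), ViewportSample(1.0, 0.6),
           ViewportSample(1.3, 0.7), ViewportSample(1.6, 0.55)]
print(is_linger_impression(samples))  # True
```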

Image: Twitter uses an algorithm to automatically recommend personalised content to users. (astarot/Shutterstock)

The researchers found that in six out of the seven countries (Germany was the exception), the algorithm significantly favoured the amplification of tweets from politically right-leaning sources.

Overall, the amplification trend wasn’t significant for individual politicians from specific parties, but it was when their tweets were taken together as a group. The starkest contrasts were seen in Canada (the Liberals’ tweets were amplified by 43%, versus 167% for the Conservatives) and the UK (Labour’s tweets were amplified by 112%, versus 176% for the Conservatives).

Amplification of right-leaning news

Acknowledging that tweets from elected officials represent only a small portion of political content on Twitter, the researchers also looked at whether the algorithm disproportionately amplifies news content from any particular point on the ideological spectrum.

To this end, they measured the algorithmic amplification of 6.2 million political news articles shared in the US. To determine the political leaning of the news source, they used two independently curated media bias-rating datasets.

Similar to the results in the first part of the study, the authors found that content from right-wing media outlets was amplified more than content from outlets at other points on the ideological spectrum.

This part of the study also found far-left-leaning and far-right-leaning outlets were not significantly amplified compared with politically moderate outlets.

Read more: Six ways Twitter has changed the world

While this is a very large study that draws pertinent conclusions, there are some caveats to bear in mind when interpreting the results. As the authors point out, the algorithm’s behaviour might be influenced by the way different political groups operate. For example, some political groups might simply be deploying better tactics and strategies to amplify their content on Twitter.

It is pleasing to see Twitter taking the initiative to carry out this kind of research and reviewing the findings. The next steps will be to gather more detailed data to understand why its algorithm might be favouring the political right, and what can be done to mitigate the issue.

Shoaib Jameel receives funding from Innovate UK.

Source: TheConversation