Led by Germain Gauthier from Bocconi University in Italy, it is a rare, real-world randomised experimental study on a major social media platform. And it builds on a growing body of research that shows how these platforms can shape people’s political attitudes.
Two different algorithms
The researchers randomly assigned 4,965 active US-based X users to one of two groups.
The first group used X’s default “For You” feed. This feed is curated by an algorithm that selects and ranks posts it predicts users are most likely to engage with, including posts from accounts they don’t necessarily follow.
The second group used a chronological feed. This only shows posts from accounts users follow, displayed in the order they were posted. The experiment ran for seven weeks during 2023.
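To make the contrast concrete, here is a minimal, purely illustrative sketch in Python of the difference between the two feed types. It is not the platform’s actual code: the Post fields and the predicted_engagement score are hypothetical stand-ins for whatever proprietary signals X really uses.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    predicted_engagement: float  # hypothetical model score, not a real X signal

def chronological_feed(posts, following):
    """Only posts from accounts the user follows, newest first."""
    return sorted(
        (p for p in posts if p.author in following),
        key=lambda p: p.posted_at,
        reverse=True,
    )

def for_you_feed(posts, following):
    """Any post, followed account or not, ranked by predicted engagement."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)
```

The key difference is the ranking signal: in the second function, a model’s prediction of engagement, rather than the user’s own list of followed accounts, determines what appears at the top of the feed.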
Users who switched from the chronological feed to the “For You” feed were 4.7 percentage points more likely to prioritise policy issues favoured by US Republicans (for example, crime, inflation and immigration). They were also more likely to view the criminal investigation into US President Donald Trump as unacceptable.
They also shifted in a more pro-Russia direction on the war in Ukraine. For example, these users became 7.4 percentage points less likely to view Ukrainian President Volodymyr Zelenskyy positively, and scored slightly higher on a pro-Russian attitude index overall.
The researchers also examined how the algorithm produced these effects.
They found evidence that the algorithm increased the share of right-leaning content by 2.9 percentage points overall (and 2.5 points among political posts), compared with the chronological feed.
It also significantly reduced the share of posts from traditional news organisations’ accounts, while boosting posts from political activists.
One of the study’s most concerning findings relates to the longer-term effects of X’s algorithmic feed. The algorithm nudged users towards following more right-leaning accounts, and these new following patterns endured even after users switched back to the chronological feed.
In other words, turning the algorithm off didn’t simply “reset” what people see. It had a longer-lasting impact beyond its day-to-day effects.
One piece of a much bigger picture
This new study supports the findings of earlier research.
For example, a study in 2022, before Elon Musk bought Twitter and rebranded it as X, found the platform’s algorithmic systems amplified content from the mainstream political right more than the left in six of the seven countries studied.
An experimental study from 2025 re-ranked X feeds to reduce exposure to content expressing antidemocratic attitudes and partisan animosity. The researchers found this warmed participants’ feelings towards their political opponents by more than two points on a 0–100 “feeling thermometer” – a shift the authors argued would normally take about three years to occur organically in the general population.
My own research adds another piece of evidence to this picture of algorithmic bias on X. Along with my colleague Mark Andrejevic, I analysed engagement data (such as likes and reposts) from prominent political accounts during the final stages of the 2024 US election.
We found a sudden and unusual spike in engagement with Musk’s account after his endorsement of Trump on July 13 – the day of the assassination attempt on Trump. Views on Musk’s posts surged by 138%, retweets by 238% and likes by 186%. This far outstripped the increases on other accounts.
After July 13, right-leaning accounts on X gained significantly greater visibility than progressive ones. The “playing field” for attention and engagement on the platform was thereafter tilted towards right-leaning accounts – a trend that continued for the remainder of the period we analysed in that study.
Not a niche product
This matters because we are not talking about a niche product.
X has more than 400 million users globally. It has become embedded as infrastructure – a key source of political and social communication. And once technical systems become infrastructure, they can become invisible – like background objects that we barely think about, but which shape society at its foundations and can be exploited under our noses.
Think of the overpass bridges Robert Moses designed in New York in the 1930s. These seemed like inert objects. But they were designed to be very low, to exclude people of colour from taking buses to recreation areas in Long Island.
In a similar way, the design and governance of social media platforms have real consequences.
The point is that X’s algorithms are not neutral tools. They are an editorial force, shaping what people know, whom they pay attention to, who the outgroup is and what “we” should do about or to them – and, as this new study shows, what people come to believe.
The age of taking platform companies at their word about the design and effects of their own algorithms must come to an end. Governments around the world need to mandate genuine transparency over how these systems work. In Australia, for example, the eSafety Commissioner already has powers to drive “algorithmic transparency and accountability” and to require that platforms report on how their algorithms contribute to or reduce harms.
When infrastructure becomes harmful or unsafe, nobody bats an eye when governments step in to protect us. The same needs to happen urgently for social media infrastructures.
Timothy Graham, Associate Professor in Digital Media, Queensland University of Technology
This article is republished from The Conversation under a Creative Commons license. Read the original article.
