Amnesty International analysis found X’s algorithm prioritises posts that are most likely to contain lies and hatred – and says nothing has changed amid heightened tensions
Elon Musk’s X played a “central role” in fuelling last year’s racist riots as it is designed to amplify dangerous hate posts, a damning study has found.
Analysis by Amnesty International suggests the social network’s algorithm prioritises comments that are most likely to contain misinformation and hatred. This led to an enormous spread of vile lies following the Southport attack last July, researchers found – and they warned nothing has changed.
Within 24 hours, posts wrongly claiming the killer was a Muslim or had come to the UK by small boat had been seen a staggering 27 million times, the study says. And the attack was seized on by far-right agitator Tommy Robinson and notorious influencer Andrew Tate, who had previously been banned for hate speech, with their posts reaching millions of people.
Hundreds of people were arrested as violence broke out following the murder of three schoolgirls by British-born Axel Rudakubana, who was 17 at the time.
The report said X, formerly known as Twitter, “dismantled or weakened” key safeguards after Musk took over in 2022. Sacha Deshmukh, Amnesty International UK’s chief executive, said: “By amplifying hate and misinformation on such a massive scale, X acted like petrol on the fire of racist violence in the aftermath of the Southport tragedy.
“The platform’s algorithm not only failed to ‘break the circuit’ and stop the spread of dangerous falsehoods; it is highly likely to have amplified them.”
The charity’s study found X gives top priority to content that drives conversation – regardless of whether that conversation is fuelled by misinformation or hatred. And it said posts from users who pay a premium are even more visible, further ramping up the risk of “toxic, racist, and false” content.
Pat de Brún, Amnesty’s head of big tech accountability, said: “X’s algorithm favours what would provoke a response and delivers it at scale. Divisive content that drives replies, irrespective of their accuracy or harm, may be prioritised and surface more quickly in timelines than verified information.”
The report states: “In the critical window after the Southport attack, X’s engagement-driven system meant that inflammatory posts, even if entirely false, went viral, outpacing efforts to correct the record or de-amplify harmful content – some of which amounted to advocacy of hatred that constitutes incitement to discrimination or violence.”
It said this “contributed to heightened risks amid a wave of anti-Muslim and anti-migrant violence” which was seen across the UK. And it warns the platform “continues to present a serious human rights risk today”.
Amnesty’s report points to an infamous post by Lucy Connolly, who was jailed for 31 months for stirring up racial hatred. It said X’s failure to remove the post was “telling”, noting it was seen 310,000 times despite her account having fewer than 9,000 followers at the time.
The Mirror has contacted X for comment.