On August 4, 2024, in Rotherham, England, riot police drove away anti-immigration protesters outside the Holiday Inn Express hotel housing asylum seekers.
Christopher Furlong | Getty Images News | Getty Images
Three young girls were killed in July in the English town of Southport. Soon after, false claims about the attack appeared on social media.
Within hours, false information about the attacker’s name, religion and immigration status gained huge traction, triggering a wave of disinformation that fueled days of violent unrest across the UK.
A false name for the attacker racked up more than 30,000 mentions on X, Hannah Rose, a hate and extremism analyst at the Institute for Strategic Dialogue (ISD), told CNBC via email.
Other false information shared on social media claimed that the attacker was on an intelligence watch list, that he came to the UK on a small boat in 2023 and that he was known to local mental health services, according to ISD’s analysis.
Police refuted the claims the day after they first appeared, saying the suspect was born in England, but the false narratives continued to gain traction.
False information fuels prejudice and stereotypes
Joe Ondrak, UK head of research and technology at tech company Logically, which develops artificial intelligence tools to fight misinformation, said the false claims were closely tied to rhetoric that has fueled Britain’s anti-immigration movement in recent years.
“You know, it’s real catnip to them,” he told CNBC via video call. “It was the right thing to say at the right time to provoke a much angrier response than if the disinformation hadn’t been spread.”
Far-right groups soon began organizing anti-immigration and anti-Islam protests, including demonstrations at planned vigils for the murdered girls. These escalated into days of riots across Britain in which mosques, immigration centers and hotels housing asylum seekers were attacked.
Ondrak explained that false information spread online exploits pre-existing biases and stereotypes, adding that inaccurate reporting often flourishes when emotions run high.
“It’s not a case of this false claim coming out and everyone believing it,” he said. Rather, the reports served as “a way to rationalize and reinforce pre-existing prejudices and biases and speculation before any established facts surfaced.”
“Whether that’s true or not doesn’t matter,” he added.
Many right-wing protesters claim Britain’s high immigration fuels crime and violence. Immigrant rights groups dispute these claims.
How false information spreads online
Rose of the ISD said social media provided a major avenue for the spread of false information, both through algorithmic amplification and through sharing by accounts with large followings.
She explained that accounts with hundreds of thousands of followers, as well as paid blue check accounts on X, shared false information, which was then pushed to other users by the platform’s algorithms.
“For example, when you search for ‘Southport’ on TikTok, the attacker’s false name was promoted by the platform itself in the ‘Other searches’ section, which recommends similar content, including eight hours after police confirmed the information was not correct,” Rose said.
Storefronts were boarded up to protect them from damage ahead of a rally against the far right and racism.
Thabo Jayeshmi | Sopa Images | Light Rocket | Getty Images
ISD’s analysis showed the algorithms worked similarly on other platforms, such as X, where the attacker’s false name became a trending topic.
As the riots continued, X owner Elon Musk posted controversial comments about the violent demonstrations on his platform. His remarks drew pushback from the British government, with the country’s courts minister calling on Musk to “act responsibly.”
TikTok and X did not immediately respond to CNBC’s request for comment.
The false claims have also appeared on Telegram, and Ondrak said the platform plays a role in solidifying narratives and exposing more and more people to “harder beliefs.”
“All of these narratives are being funneled into what we call the post-pandemic Telegram environment,” Ondrak added, explaining that this includes channels that were initially anti-vaccination but were later exploited by far-right figures promoting anti-immigration topics.
In response to CNBC’s request for comment, Telegram denied that it helps spread misinformation. The company said its moderators were monitoring the situation and removing channels and posts that incited violence, which is not allowed under its terms of service.
At least some of the calls to participate in the protests can be traced to the far right, according to Logically’s analysis, including some linked to the banned right-wing extremist group National Action, which was classified as a terrorist organization under the UK’s Terrorism Act in 2016.
Ondrak also noted that many groups that had previously spread disinformation about the attack later began retracting their messages, calling it a hoax.
Thousands of anti-racism protesters rallied in towns across Britain on Wednesday, far outnumbering recent anti-immigration protests.
Content moderation?
The UK has an Online Safety Act intended to combat hate speech, but it will not come into force until early next year and may not protect against some forms of disinformation.
On Wednesday, UK media regulator Ofcom sent a letter to social media platforms saying they should not wait for the new law to take effect before acting. The British government has also said social media companies should do more.
Many platforms already have terms and conditions and community guidelines in place that cover and take action against harmful content to varying degrees.
A protester holds a sign reading “Racists are not welcome here” during a counter-demonstration against anti-immigration protests staged by far-right figures in the London suburb of Walthamstow on August 7, 2024.
Benjamin Kremel | AFP | Getty Images
The ISD’s Rose said the companies “have a responsibility to ensure that hate and violence are not promoted on their platforms,” but added that they need to do more to enforce their rules.
She noted that the Home Office had identified a range of content on multiple platforms that may have breached the platforms’ terms of service but remained online.
Henry Parker, vice president of corporate affairs at Logically, also noted the nuances among different platforms and jurisdictions. He told CNBC that companies invest varying amounts in content moderation and face differing laws and regulations.
“So there’s a dual role here. The platform’s role is to take on more responsibility, abide by its own terms and conditions, and work with third parties like fact-checkers,” he said.
“Then it’s the government’s responsibility to be really clear about what their expectations are … and then be very clear about what will happen if those expectations are not met. We’re not at that stage yet.”