
False news online fuels UK riots but regulators fail to take action

Kirill Kudryavtsev | AFP | Getty Images

LONDON – Britain’s media regulator, Ofcom, was appointed by the government last year to oversee harmful and illegal content online under tough new online safety rules.

But even as online disinformation related to the UK stabbings led to real-world violence, Ofcom, the UK’s online safety regulator, found itself unable to take effective enforcement action.

Last week, a 17-year-old armed with a knife attacked several children attending a Taylor Swift-themed dance class in the town of Southport, Merseyside.

Three girls were killed in the attack. Police later identified the suspect as Axel Rudakubana.

In the immediate aftermath of the attack, social media users were quick to falsely identify the perpetrator as an asylum seeker who arrived in the UK by boat in 2023.

On X, posts sharing a false name for the attacker were widely reposted and viewed by millions of people.

This in turn sparked far-right, anti-immigration protests that have since descended into violence, with shops and mosques attacked and bricks and petrol bombs hurled.

Why can’t Ofcom take action?

Ofcom, the regulator responsible for tackling misinformation and other harmful material online, is currently unable to take effective enforcement action against tech giants over harmful posts inciting the ongoing unrest, because the law’s powers have not yet fully come into force.

The Online Safety Act imposes new duties on social media platforms, requiring companies to actively identify, mitigate and manage the risk of harm from illegal and harmful content on their services, but those duties have not yet taken effect.

Once the rules come into full force, Ofcom will have the power to impose fines of up to 10% of a company’s global annual revenue and, for repeated violations, individual executives could even face jail time.

But until then, the regulator cannot punish companies for breaches of the online safety rules.


Sending false messages intended to cause substantial harm is already a punishable criminal offense under the Online Safety Act; this can include misinformation intended to incite violence.

Ofcom’s response?

An Ofcom spokesperson told CNBC on Wednesday that the regulator was working to implement the act as quickly as possible, but that the new duties requiring tech companies to proactively police harmful content on their platforms had not yet fully come into force.

Ofcom is still consulting on its risk assessment guidance and illegal harms codes of practice, which it says must be finalized before the measures in the Online Safety Act can be effectively enforced.

The spokesperson said Ofcom was urgently discussing their responsibilities with the relevant social media, gaming and communications companies.

“While platforms’ new duties under the Online Safety Act won’t come into effect until the new year, they can take action now – without waiting for new laws – to make their sites and apps safer for their users.”

Gill Whitehead, head of Ofcom’s online safety group, echoed that statement in an open letter to social media companies on Wednesday, warning of an increased risk of platforms being used to incite hatred and violence amid the recent unrest in the UK.


Whitehead said: “In a few months’ time, the new safety duties under the Online Safety Act will be in place, but you can take action now – without waiting – to make your sites and apps safer for users.”

She added that while the regulator was working to ensure businesses remove illegal content from their platforms, it still recognized “the importance of protecting free speech”.

Ofcom said it plans to publish its final codes of practice and guidance on illegal online harms in December 2024, after which platforms will have three months to conduct risk assessments of illegal content.

The codes will be subject to scrutiny by the UK Parliament, and unless lawmakers object to the drafts, the online safety duties on platforms will become enforceable shortly after that process concludes.

Rules to protect children from harmful content will come into force in spring 2025, while duties on the largest services will be enforced from 2026.
