LONDON — Britain formally implemented its sweeping online safety laws on Monday, paving the way for tighter regulation of harmful content online and the potential for hefty fines on tech giants such as Meta, Google and TikTok.
Ofcom, the UK’s media and telecommunications regulator, has released the first edition of its codes of practice and guidance for technology companies, setting out the measures they should take to address illegal harms such as terrorism, hate, fraud and child sexual abuse on their platforms.
The measures form the regulator’s first set of duties under the Online Safety Act, a sweeping law requiring tech platforms to do more to combat illegal content online.
The Online Safety Act imposes certain so-called “duties of care” on these technology companies to ensure that they are held accountable for harmful content uploaded and distributed on their platforms.
Although the act, which became law in October 2023, has not yet fully come into force, Monday’s development effectively marks the official entry into force of its safety duties.
Ofcom said tech platforms must complete illegal harms risk assessments by March 16, 2025, effectively giving them three months to bring their platforms into compliance with the rules.
Ofcom said once the deadline passes, platforms must start implementing measures to address the risk of illegal harms, including better moderation, easier reporting and built-in safety tests.
Melanie Dawes, chief executive of Ofcom, said: “We will be watching the industry closely to ensure firms meet the strict safety standards set for them under our first codes and guidance, with further requirements following swiftly in the first half of next year.”
Risk of huge fines, service suspension
Under the Online Safety Act, Ofcom can impose fines of up to 10% of a company’s global annual revenue if it is found to be in breach of the rules.
Individual senior managers could face jail time for repeated breaches, while in the most serious cases Ofcom could seek court orders to block a service’s access in the UK or restrict its access to payment providers or advertisers.
Ofcom came under pressure to strengthen the rules after far-right riots broke out in the UK earlier this year, fuelled in part by false information spread on social media.
Ofcom said the duties would cover social media companies, search engines, messaging, gaming and dating apps, as well as porn and file-sharing sites.
According to the first version of the code, reporting and complaint functions must be easier to find and use. For high-risk platforms, companies will be required to use a technology called hash matching to detect and remove child sexual abuse material (CSAM).
Hash-matching tools compare a digital fingerprint (a “hash”) of each piece of uploaded content against databases of known CSAM images held by police and child-safety organisations, helping platforms’ automated filtering systems identify and remove matching material.
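To illustrate the mechanics described above, here is a minimal sketch of hash matching. It assumes a hypothetical set of known-bad fingerprints and uses a plain SHA-256 hash as a stand-in; real deployments rely on perceptual hashing schemes (such as Microsoft’s PhotoDNA, which tolerates resizing and re-encoding) and on hash lists maintained by dedicated bodies, neither of which is shown here.

```python
import hashlib

# Hypothetical set of fingerprints ("hashes") of known illegal images,
# standing in for a database maintained by police or child-safety bodies.
KNOWN_BAD_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def fingerprint(content: bytes) -> str:
    """Compute a digital fingerprint of an uploaded file.

    SHA-256 is only a stand-in here; production systems use perceptual
    hashes that still match after an image is resized or re-encoded.
    """
    return hashlib.sha256(content).hexdigest()


def should_block(uploaded_content: bytes) -> bool:
    """Flag an upload whose fingerprint matches a known-bad hash."""
    return fingerprint(uploaded_content) in KNOWN_BAD_HASHES


# Example: an upload whose bytes exactly match a listed image is flagged
# for removal; anything else passes this particular check.
if should_block(b"example uploaded file bytes"):
    print("Match found: remove content and escalate for review.")
else:
    print("No match: content passes this check.")
```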
Ofcom stressed that the codes published on Monday were only the first set, and that the regulator would consult on further codes in spring 2025, including on blocking the accounts of those found to have shared CSAM and on the use of artificial intelligence to tackle illegal harms.
“Ofcom’s illegal content codes are a major change for online safety, meaning that from March, platforms will have to proactively remove terrorist content, child abuse and intimate image abuse, and many other kinds of illegal content, bridging the gap between the laws that protect us offline and online,” said UK Technology Secretary Peter Kyle.
Kyle added: “If platforms fail to step up, I back the regulator to use its full powers, including imposing fines and asking the courts to block access to websites.”