Online Safety Bill - everything you need to know about the UK's security legislation

Person using a laptop with a padlock symbol
(Image credit: Shutterstock)

First published on May 30, 2022. This article is regularly updated to reflect new developments in the status of the proposed legislation. 

As digital technologies keep shaping our everyday lives, governments across the world are crafting new rules to better control tools and platforms. While some of these regulations focus on how security software - like VPNs or antivirus services - handles users' data, others seek to establish a legal framework for how content posted online should be treated. 

The UK Online Safety Bill falls into the latter category. First published as a draft in May 2021, a revised version was finally introduced to Parliament in March 2022 to kick-start the review process. 

Its goal is ambitious: to make the UK the safest place in the world to be online. The bill seeks to tackle a wide range of harmful content - with particular attention to protecting children - whilst holding tech giants to account. 

However, many commentators have criticized how the stricter controls introduced by these new directives could end up undermining internet freedom. Free speech and end-to-end encryption seem to be the areas most at risk, according to civil liberties groups.

Here's everything you need to know about the Online Safety Bill.  

What is the Online Safety Bill?

Flagged as a 'world-first' law of this type, the Online Safety Bill is a huge piece of legislation that aims to regulate the digital space and protect internet users from online harm. 

The bill introduces a 'duty of care' for big tech companies, which will have to follow its regulations to ensure a safe environment for their users. This includes the responsibility to amend their Terms and Conditions in line with the new directives and to remove harmful content posted on their platforms. 

Specifically, the law applies to user-generated content platforms - including social media sites like Facebook and Twitter, online forums and messaging apps such as WhatsApp - as well as big search engines like Google. 

A group of cubes all displaying social media logos

(Image credit: Shutterstock/Bloomicon)

What does the Online Safety Bill do?

As mentioned above, big tech companies will have the responsibility to protect users from harmful content. This includes: 

  • Preventing the spread of illegal content by requiring organizations to remove it as soon as they become aware of it. Examples include posts and images related to child sexual abuse, terrorism, cyberflashing and content encouraging self-harm
  • Protecting children by ensuring they are not exposed to inappropriate content online. This includes stricter age-verification processes to access certain websites - like pornography sites - and, in some cases, the need to monitor private chats for child sexual abuse material
  • Securing adults from 'legal but harmful content' by removing such content from their platforms. This rule applies to major social media platforms like Instagram (already in the spotlight for damaging mental health) and TikTok. While the details haven't been outlined yet, these categories are likely to include abuse, harassment, self-harm and eating disorders
  • Thwarting online fraud by forcing the biggest platforms to take action against paid-for scam adverts published or hosted on their services. 

The body in charge of making sure these regulations are implemented is the UK's communications regulator, Ofcom. Among other powers, the Office of Communications will be able to gather information to support its investigations and take measures to make companies change their behavior.

In a major change from the draft version presented last year, the UK government has reduced the enforcement period from 22 months to just two - meaning companies will have just over eight weeks from the law receiving Royal Assent to make sure they are in full compliance and avoid penalties. Sanctions could reach up to two years' jail time for those found guilty of obstructing an Ofcom investigation in any way.

Companies that do not comply with their responsibilities could also face fines of up to £18 million or 10% of their global annual turnover, whichever is higher.

Top view of a little boy sitting alone on a sofa holding a tablet feeling frustrated while reading bad comments

(Image credit: Shutterstock)

The good...

The Online Safety Bill represents an important first step towards minimizing a wide range of online harms - from online fraud and cyberbullying to child abuse - by making companies more proactive and accountable in tackling these issues.

In particular, the bill will require social media and search engine platforms to be legally more transparent with their users. Their Terms and Conditions will have to set out what type of legal content is and isn't allowed in a comprehensive, clear and accessible way, so that adults can make informed decisions before joining a platform. Companies will also need to be more transparent about how they enforce these conditions. 

In an attempt to defend freedom of speech and a plurality of voices, the bill explicitly obliges these platforms to protect journalism and democratically important content.

To foster press freedom, news outlets and individuals delivering journalistic material online will be exempt from the bill's regulations. At the same time, to support the UK's political debate, online platforms will have a duty to take into account the democratic importance of the content posted whilst respecting all political opinions. 

...and the bad

The draft Online Safety Bill has received much criticism from individuals and civil liberties commentators who fear these regulations could come at the detriment of users' privacy and freedom of expression.

Particularly worrying is the directive relating to 'legal but harmful content,' given its potential to radically shape what we will be able to see online. After conducting a legal analysis of the bill's impact on free speech, the charity Index on Censorship concluded in May that it would "significantly curtail freedom of expression."

The vagueness around which categories are considered legal but harmful, together with the fact that politicians will have a say in what social media platforms will need to censor, has sparked concern among many internet users. A petition against this point has already gathered more than 50,000 signatures.

Following further pressure from Conservative MPs, the 'legal but harmful' provision was dropped at the end of November last year. However, Big Tech companies will still be required to provide users with a way of filtering such offensive material out of their feeds. 

Provisions requiring companies to actively monitor private chats for child abuse images or terrorist material remain under fire, though. As with similar proposals in the EU, critics fear this could effectively ban end-to-end encryption technology. 

Many faces creating two big faces and a red pencil writing a red cross on a mouth

(Image credit: Shutterstock)

What 2022 meant for the Online Safety Bill

As the UK government tried to fill the power vacuum left by the wave of MP resignations that culminated in Boris Johnson stepping down as UK Prime Minister, the Online Safety Bill's review stage in the House of Commons stalled over the summer. 

Then, the switch from Liz Truss' short-lived premiership to Rishi Sunak's government pushed the review process into the new year. 

Even though some amendments had already been made at that stage - such as tech companies' duty to shield users from state-sponsored misinformation - 19 campaign groups were still calling for further changes in July. At the time, they claimed the Online Safety Bill was "on the verge of being unworkable." 

The House of Commons Committee also raised additional issues to be addressed in a report published on July 4.

In November, the new government announced a series of changes to the Bill aimed at addressing these issues. More on this below.

As expected, 2022 showed how hard it is to regulate the online world. Branded 'illiberal and impractical,' the new law will surely change the internet as we know it.

New PM, a much-changed Online Safety Bill in 2023

As mentioned above, the Online Safety Bill has undergone a few substantial changes since Rishi Sunak came to power. 

As the proposed law finally returned to the House of Commons on January 17, here's how the Online Safety Bill has changed so far.

Child safety online: Following an internal political rebellion, the government announced on January 16 that big tech firms' executives will be liable for criminal charges if they breach their duty of care towards children. 

While digital activists have already expressed some concerns about such a move, according to Magdalena Zima, Criminal Associate at law firm Kingsley Napley, this change will give the legislation additional teeth. 

She said: "Even though the Bill is likely to face a long journey through the House of Lords, companies should start thinking about having a robust legal back-up, not only to ensure they act lawfully when the Bill is enacted, but also that they do not put their employees at risk of prosecution."

Under further changes already introduced in November, companies will also be required to publish risk assessments and enforcement notices around child safety breaches.

Harmful communications: While the controversial 'legal but harmful' clause has been scrapped from the Bill, new directives for Big Tech companies have been announced. These include transparent terms and conditions detailing the content moderation strategies employed by the platforms, the right to appeal account bans and/or content removal, as well as tools allowing users to have more control over the content they see and engage with.

Additional online crimes: The new version of the Online Safety Bill also includes some new criminal offences. Sharing pornographic deepfakes, for example, will now be a crime. The same goes for 'downblousing' - the unsavory act of taking non-consensual photos down someone's top.

Commenting on the future of the Bill, Umar Zeb, Senior Partner at London-based criminal solicitors firm JD Spicer Zeb, said: "For the first time ever, social media platforms and search engines will have a legal duty of care to regulate content and protect users.

"Arguably one of the most highly contested issues concerning the Bill was finding a balance between protecting individuals online from harm while simultaneously not intercepting their privacy and respecting their freedom of expression.” 

Chiara Castro
Senior Staff Writer

Chiara is a multimedia journalist committed to covering stories to help promote the rights and denounce the abuses of the digital side of life—wherever cybersecurity, markets and politics tangle up. She mainly writes news, interviews and analysis on data privacy, online censorship, digital rights, cybercrime, and security software, with a special focus on VPNs, for TechRadar Pro, TechRadar and Tom’s Guide. Got a story, tip-off or something tech-interesting to say? Reach out to chiara.castro@futurenet.com