After a long journey and much controversy, the Online Safety Act received Royal Assent on Thursday (October 26), bringing the new rules into effect, with major consequences for the way people access online content.
The act also aims to keep children safe online, giving parents more control over the content their children see, such as bullying and self-harm material.
Technology Secretary Michelle Donelan said it meant the UK would become “the safest place in the world to be online”. According to the Government, the legislation achieves this by regulating how tech platforms must handle harmful content, such as child sexual abuse imagery, cyberbullying, and misinformation.
Since it was published in 2021, the bill has been mired in political turmoil, with privacy campaigners and tech firms labelling it a threat to free speech and people’s data.
Following its clearance from Parliament earlier this year, Ms Donelan said the bill was “a game-changing piece of legislation”.
She continued: “Our common-sense approach will deliver a better future for British people, by making sure that what is illegal offline is illegal online. It puts protecting children first, enabling us to catch keyboard criminals and crack down on the heinous crimes they seek to commit.”
“Tech companies have a moral duty to ensure they are not blinding themselves and law enforcement to the unprecedented levels of child sexual abuse on their platforms,” a Government official told the BBC.
What is the Online Safety Bill?
The Online Safety Bill is a new set of laws to protect children and adults online. It is intended to make social media companies more responsible for their users’ safety on their platforms.

Now enshrined in law, it forces social media companies to remove illegal content.
The Online Safety Bill worked its way through Parliament after being published in draft form in May 2021. The Government says it puts the media regulator Ofcom in charge of checking whether platforms are protecting their users. Firms that break the rules on harmful content face large fines.
Which measures were scrapped in the Online Safety Bill?
In November, controversial measures that would have forced big technology platforms to take down “legal but harmful” material were scrapped from the bill. Critics of the section in the bill claimed it posed a risk to free speech and gave big tech companies too much power.
Ministers axed the provision on regulating “legal but harmful” material accessed by adults — offensive content that does not constitute a criminal offence. They are instead requiring platforms to enforce their terms and conditions for users.
If those terms explicitly prohibit content that falls below the threshold of criminality — such as some forms of abuse — Ofcom will then have the power to ensure they police them adequately.
Culture Secretary Michelle Donelan denied weakening laws protecting social media users. She said adults would have more control over what they saw online.
She told the BBC the bill was not being watered down, and that tech companies had the expertise to protect people online.
“These are massive, massive corporations that have the money, the know-how, and the tech to be able to adhere to this,” she said in December.
She warned that those who did not comply would face significant fines and “huge reputational damage”.
But some criticised the changes, including Labour and the Samaritans, who called it a hugely backwards step.
What does ‘legal but harmful’ mean?
The bill previously included a section that required “the largest, highest-risk platforms” to tackle some “legal but harmful” material accessed by adults. This referred to offensive content that does not constitute a criminal offence.
It meant that the likes of Facebook, Instagram, and YouTube would have been tasked with preventing people from being exposed to content about topics such as self-harm and eating disorders, as well as misogynistic posts.
Why is the Online Safety Bill controversial?
The bill has been controversial since its introduction to Parliament in May 2021. It has drawn criticism from two main camps: online safety campaigners and free speech advocates.
Advocates demand tougher protections
Although the changes relate only to material accessed by adults, safety campaigners argue the legislation does not go far enough to protect people from the swathes of harmful content online.
The boss of charity the Samaritans, Julie Bentley, said “the damaging impact that this type of content has doesn’t end on your 18th birthday.
“Increasing the controls that people have is no replacement for holding sites to account through the law and this feels very much like the Government snatching defeat from the jaws of victory.”
In August, children’s charity the NSPCC warned that 34,000 online grooming crimes had been recorded during the wait for the updated online safety laws.
Concerns over mass surveillance
The bill will result in the mass surveillance of every private online message and will ruin London’s reputation as a place to do business, encrypted messaging firms WhatsApp, Signal, and Element have previously warned the Standard.
Meredith Whittaker, president of not-for-profit secure messaging app Signal, previously told the Standard that if the bill doesn’t amend its language: “It will not only create a significant vulnerability that will be exploited by hackers, hostile nation states, and those wishing to do harm, but effectively salt the earth for any tech development in London and the UK at large.”
Encrypted messaging services WhatsApp, Session, Signal, Element, Threema, Viber, and Wire previously signed a letter calling on the Government to “urgently rethink” the proposed bill.
To allay those fears, ministers assured tech firms in September that they could not be forced to pry open private messages, which they cannot currently access due to a security technology called end-to-end encryption.
End-to-end encryption is a security method in which messages are encrypted on the sender’s device and can only be decrypted by the intended recipient, so that no one in between, not even the service provider, can read them.
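The principle can be sketched in a toy Python example (illustrative only, not real cryptography; apps like WhatsApp and Signal use vetted protocols such as the Signal Protocol). The point is that the two users’ devices hold a shared secret the relay server never sees, so the server can only pass on unreadable ciphertext:

```python
# Toy end-to-end encryption sketch: only the endpoints hold the key,
# so the relay server sees nothing but ciphertext.
# NOT production crypto -- for illustration of the concept only.
import secrets
import hashlib

def derive_keystream(shared_secret: bytes, length: int) -> bytes:
    # Stretch the shared secret into a keystream of the needed length
    # by hashing it with a counter (a simplified construction).
    stream = b""
    counter = 0
    while len(stream) < length:
        stream += hashlib.sha256(
            shared_secret + counter.to_bytes(4, "big")
        ).digest()
        counter += 1
    return stream[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR the plaintext with the keystream.
    return bytes(a ^ b for a, b in zip(derive_keystream(key, len(plaintext)), plaintext))

decrypt = encrypt  # XOR is its own inverse

# The two users' devices agree on this secret; the server never has it.
shared = secrets.token_bytes(32)

msg = b"meet at noon"
ciphertext = encrypt(shared, msg)   # this is all the relay ever handles
assert ciphertext != msg
assert decrypt(shared, ciphertext) == msg
```

This is why the scanning provisions alarmed the messaging firms: any mechanism that lets a third party read the plaintext necessarily breaks the property that only the endpoints hold the key.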
At the eleventh hour, the Government appeared to back down by confirming in the Lords that it would only require messaging firms to scan for illegal content when it became “technically feasible”.
Crucially, however, that phrase didn’t make it into the amended legislation, leading many privacy advocates and industry bodies to worry that the can has simply been kicked down the road.
Accusations of broad-brush censorship
Free speech campaigners have argued the bill opened the door for technology companies to censor legal speech.
The “legal but harmful” provision was especially seen as restrictive and critics feared too much content would be removed. They have argued that social media sites would have been pressured into taking down material that people had a right to see.
It was “legislating for hurt feelings”, former Conservative leadership candidate Kemi Badenoch said.
In July, nine senior Conservatives, including former ministers Lord Frost, David Davis, and Steve Baker, who has since returned to Government, wrote a letter to then Culture Secretary Nadine Dorries. They said the provision could be used to clamp down on free speech by a future Labour government.
Following the outcry, Ms Donelan ditched the “legal but harmful” restrictions. These would have imposed potential multi-million-pound fines on sites if they failed to prevent adults and children from seeing material that fell under this category, such as content on suicide and self-harm.
Defending the decision, she said the measure would have harmed free speech and created a “quasi-legal category”.
“It had (a) very, very concerning impact, potentially, on free speech,” Ms Donelan told Sky News. “There were unintended consequences associated with it. It was really the anchor that was preventing this bill from getting off the ground.
“It was a creation of a quasi-legal category between illegal and legal. That’s not what a Government should be doing. It’s confusing. It would create a different kind of set of rules online to offline in the legal sphere.”
Age verification deemed ‘unreliable’
Age checks are another part of the Online Safety Bill that have prompted a backlash. The bill puts the onus on website owners that carry content that “may be harmful to children” to check the age of visitors using an “age verification or age estimation” system.
Alongside the so-called “spy clause” on message scanning, this provision has also prompted an outcry from privacy advocates and companies. Experts say these unproven systems cannot be relied upon and will lead to children getting access to sites they shouldn’t, and to adults being wrongly blocked.
Meanwhile, Wikipedia has refused to carry out the age checks, despite concerns that non-compliance could see it blocked in the UK.