Tough new UK online safety laws now in force to shield children from harmful content
- Love Ballymena

- Jul 30, 2025

Major new laws aimed at protecting under-18s from the most harmful content on the internet are now in force, meaning children in the UK should experience a safer online environment.
The legislation, which took effect on 25 July, introduces legally enforceable requirements for online platforms to verify users’ ages, restrict access to harmful material, and minimise exposure to toxic algorithms.
The sweeping reforms will force digital platforms – including those hosting pornography and content promoting self-harm, suicide, or eating disorders – to implement secure age checks using facial age estimation, photo ID verification, or credit card checks.
Over 1,000 platforms, including PornHub, have confirmed to Ofcom that they now meet the legal standards.
UK Government Technology Secretary Peter Kyle said:
“Our lives are no longer split between the online and offline worlds – they are one and the same. What happens online is real. It shapes our children’s minds, their sense of self, and their future.
“We cannot – and will not – allow a generation of children to grow up at the mercy of toxic algorithms… The time for tech platforms to look the other way is over. They must act now to protect our children.”
What do the new online safety laws require?
Under the new framework, platforms are legally obliged to take the following actions:
Block access to harmful content:
- Introduce highly effective age assurance measures.
- Restrict access to primary priority content like pornography and material promoting self-harm, suicide, and eating disorders.
- Block or restrict access for under-18s, replicating standards already seen for alcohol or gambling.

Reduce toxic algorithm exposure:
- Tackle algorithmic recommendations that push harmful content to children, even if not actively searched for.
- Reduce the likelihood of children being drawn into “risky rabbit holes.”

Take rapid action on harmful content:
- Platforms must use robust content moderation to swiftly remove harmful material once identified.
- Search engines should enforce child-safe search filters that cannot be disabled.

Improve user support and reporting:
- Clear, accessible safety tools must be available for children and guardians.
- Easy-to-use complaints procedures, safety information, and online safety resources are now mandatory.
What content is being targeted?
Primary priority content (requires age assurance or blocking):
- Pornography
- Content promoting:
  - Suicide
  - Self-harm
  - Eating disorders
Priority content (requires protective measures for under-18s):
- Hate speech
- Bullying
- Violent or abusive material
- Content encouraging:
  - Dangerous stunts
  - Ingestion of harmful substances
- Depictions of serious injury or violence
What happens if platforms don’t comply?
Non-compliant platforms risk serious penalties from Ofcom, the UK’s communications regulator, including fines of up to £18 million or 10% of global annual revenue, whichever is higher.
Peter Kyle added:
“If they fail to do so, they will be held to account. I will not hesitate to go further and legislate to ensure that no child is left unprotected.”
Why these changes are needed
Recent figures from Ofcom reveal that:
- Children as young as eight have accessed online pornography.
- 16% of teenagers reported encountering content that stigmatises body image or promotes eating disorders in the past month.
The UK Government’s response aims to reclaim the digital space for young people and create a safer, more humane online world.