
UK Social Media Ban for Under-16s: The Plans, the Pilot, and the Debate

The UK government is consulting on restricting social media access for under-16s, including overnight curfews, age verification requirements, and banning infinite scroll. Here is what's being proposed and what it means.

Technology Correspondent · 8 April 2026 · 7 min read
[Image: Teenager looking at a smartphone screen in a dark room]


A 300-teenager pilot programme is underway. A government consultation called Growing Up in the Online World has launched. And the Ofcom deadline for social media platforms to demonstrate compliance with child safety requirements falls on 30 April 2026.

Britain is moving towards some form of restriction on social media access for under-16s — but the precise form it will take, how it will be enforced, and whether it will actually work remain genuinely contested questions.

UK Social Media Restrictions — What's Being Proposed

  • 300-teenager pilot testing social media restrictions underway, with results due before summer
  • Proposed overnight curfew: no access between 10pm and 7am for under-16s
  • Age verification requirements for all major social media platforms
  • Ban on infinite scroll and algorithmic amplification features for under-16s
  • Ofcom deadline of 30 April 2026: platforms must show compliance with the Online Safety Act
  • Australia has already banned social media for under-16s; the UK is watching closely

The 300-Teenager Pilot

The most concrete piece of evidence currently being gathered is a six-week pilot programme involving 300 teenagers across the UK, testing the effects of structured social media restrictions on wellbeing, sleep, and academic performance.

Participating teenagers have agreed to:

  • No social media between 10pm and 7am (the proposed overnight curfew)
  • No use of algorithmic recommendation feeds (feeds limited to content from accounts they follow)
  • Weekly check-ins measuring sleep quality, anxiety levels, and self-reported wellbeing

The results of the pilot are expected before the end of the school summer term and will inform whether the government proceeds to legislation. Early indicators from similar programmes in other countries have been encouraging — sleep improvement in particular has been significant.

UK Social Media Pilot Programme

  • Participants: 300 teenagers across the UK
  • Duration: 6 weeks
  • Curfew tested: 10pm–7am, no social media
  • Ofcom deadline: 30 April 2026 for compliance

What Restrictions Are Being Proposed?

The government consultation Growing Up in the Online World sets out several distinct types of restriction that could be applied:

1. Overnight Curfews (10pm–7am)

This is the most widely discussed measure. Research consistently links evening and late-night social media use with poor sleep quality in teenagers, which in turn affects mental health, academic performance, and physical health. A curfew would prevent platforms from serving content to verified under-16s during these hours.

The Sleep Evidence

The science on social media and teenage sleep is unusually consistent. Multiple large-scale studies across different countries find that teenagers who use social media after 9pm have significantly worse sleep quality, take longer to fall asleep, and are more likely to report symptoms of anxiety and depression. The blue light question is part of the picture, but researchers believe the social comparison and notification anxiety effects are more significant than the optical ones.

2. Age Verification

Platforms would be required to implement robust age verification — not just a tick box confirming you are 18, but verification that actually works. This could include:

  • Document-based verification (uploading a passport or driving licence)
  • Payment card verification (assuming under-16s don't have credit cards)
  • Parental consent systems

The challenge is that robust age verification either creates significant friction that reduces sign-ups (platforms will resist this) or creates privacy risks through the collection of identity documents (civil liberties groups will resist this).

3. Algorithmic Features Banned for Under-16s

The proposal to ban infinite scroll and algorithmic amplification for under-16s targets the specific design features that behavioural scientists believe are most harmful. Infinite scroll removes natural stopping points. Algorithmic amplification shows teenagers content optimised to maximise engagement — which research suggests disproportionately surfaces emotionally charged and harmful content.

The Design of Addiction

Social media platforms were explicitly designed to be addictive. Internal documents from Meta and others, disclosed through litigation, show that engineers knowingly built features — infinite scroll, like counts, notification systems — to maximise time spent on the platform by exploiting dopamine reward mechanisms. The argument for restricting these features for children is essentially the same as the argument for restricting other deliberately addictive products.

The Australia Precedent

Australia passed legislation in November 2024 banning social media for under-16s outright — the most stringent restriction adopted by any major country. The Australian law places the enforcement burden on platforms rather than parents or children, with significant financial penalties for platforms that fail to prevent underage users from accessing their services.

The early evidence from Australia is mixed:

  • Usage among under-16s has fallen, but enforcement is imperfect
  • Some teenagers have migrated to less regulated platforms
  • There is no clear evidence yet of significant wellbeing improvements at population level

The UK government has described the Australian approach as "instructive" — a useful data point but not a model it is certain to replicate.

Why a Full Ban Is Difficult

A complete ban on social media for under-16s, on the Australian model, faces several practical challenges in the UK. Enforcement requires platforms to implement effective age verification at scale, which raises privacy and friction concerns. It also requires international cooperation: platforms based outside the UK cannot easily be compelled through domestic law alone, so enforcement may depend on leverage over UK app store distribution, which brings Apple and Google into the equation.

Ofcom's Role

The Online Safety Act 2023 gave Ofcom significant new powers to regulate social media platforms' treatment of children. Ofcom has been developing codes of practice that platforms must implement by 30 April 2026.

These include:

  • Age-appropriate design requirements
  • Proactive content moderation for harmful material in children's feeds
  • Clear and accessible reporting mechanisms for children encountering harmful content
  • Transparency about algorithmic systems and their effect on children's mental health

Platforms that fail to meet Ofcom's standards face fines of up to £18 million or 10% of global annual turnover, whichever is higher. For TikTok, Meta, and YouTube, that represents potential penalties in the hundreds of millions.

What Parents Think

Polling consistently shows majority support among British parents for stronger restrictions on children's social media use. But parental views diverge sharply on how restrictions should work:

  • Most parents support overnight curfews and age verification
  • Many parents are opposed to a complete ban, citing children's autonomy and the difficulty of enforcement at home
  • Almost all parents support more transparency about how algorithms work and what content platforms serve to their children

The consultation is specifically designed to capture these nuanced views before legislation is drafted.

The Counterarguments

Not everyone supports more restriction. The counterarguments include:

Digital literacy over restriction: Teaching children to use social media safely is more valuable than trying to prevent access, which determined teenagers will always find ways around.

Enforcement is a mirage: Short of biometric verification of every device used by every child, age restrictions are circumventable by any teenager motivated to circumvent them. Legislation may be more symbolic than effective.

Benefits of online community: For LGBTQ+ teenagers, young people with disabilities, and those in socially isolated circumstances, online communities provide genuine support that offline restriction would cut off.

Privacy risks of verification: Any system that verifies ages creates a database of children's identity documents — a significant and attractive target for hackers.

What Is Likely to Actually Happen

Based on the current political trajectory, the UK is likely to implement a combination of: mandatory age verification for social media platforms, algorithmic restrictions for verified under-16s, overnight curfew requirements, and significantly higher fines for platforms that fail to enforce their own age restrictions. A complete ban on the Australian model remains possible but is not the current direction of travel.

The Bigger Question

Behind the specific policy debate is a more fundamental question that British society is only beginning to grapple with: what kind of childhood do we want children in Britain to have?

The smartphone arrived in children's lives with extraordinary speed and without the kind of societal deliberation that normally precedes the introduction of transformative technologies. The question of whether we made the right choices — and whether we can correct them — is the most important one the consultation is really asking.


Follow UK technology and children's online safety news at UK News Live.

Tags: social media, children's online safety, under-16 ban, TikTok, Online Safety Act, UK government

