The UK officially began enforcing online age checks on July 25, 2025, under the Online Safety Act 2023, which aims to keep minors away from harmful content online. Websites and apps that host adult or otherwise risky material must now verify that users are 18 or older, or risk being blocked or fined up to £18 million or 10% of global annual revenue, whichever is greater.
Platforms including Reddit, X (formerly Twitter), Bluesky, Discord, and Grindr have started rolling out verification systems. Options include uploading a government ID, AI-powered selfie age estimation, verification through mobile providers, and confirming age via email, all of which, the platforms say, work without storing users' identities.
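To make that data-minimization claim concrete, here is a rough sketch of how such a check might be structured: the raw evidence (an ID scan, a selfie, a phone number) goes to a verifier, and the platform keeps only a pass/fail result. Everything here is hypothetical, including the names `verifyAge` and `providerConfirmsOver18`; this is not any platform's actual API.

```typescript
// Illustrative sketch only: gating content on an over-18 check while
// retaining no identity data, as the platforms describe. All names and
// the stubbed provider call are hypothetical.

type VerificationMethod =
  | "government_id"      // user uploads an ID document
  | "selfie_estimation"  // AI estimates age from a selfie
  | "mobile_operator"    // carrier confirms the account holder is an adult
  | "email_signal";      // age inferred from email-linked signals

interface AgeCheckRecord {
  passed: boolean;             // the only fact retained after the check
  method: VerificationMethod;
  checkedAt: string;           // ISO timestamp for audit purposes
}

// Stand-in for a third-party verifier: in practice this would be an API
// call that returns an over-18 boolean and nothing else.
async function providerConfirmsOver18(
  method: VerificationMethod,
  evidence: Uint8Array,
): Promise<boolean> {
  void method;
  void evidence; // the evidence never leaves the verification step
  return true;   // stubbed response for the sketch
}

async function verifyAge(
  method: VerificationMethod,
  evidence: Uint8Array,
): Promise<AgeCheckRecord> {
  const passed = await providerConfirmsOver18(method, evidence);
  // `evidence` is discarded here; only the boolean result, the method,
  // and a timestamp are stored, matching the data-minimization approach
  // the platforms describe.
  return { passed, method, checkedAt: new Date().toISOString() };
}

// Usage: gate access on the stored record, never on the raw evidence.
verifyAge("selfie_estimation", new Uint8Array()).then((record) => {
  console.log(record.passed ? "access granted" : "access denied");
});
```

The design point the sketch illustrates is that the sensitive input and the stored record are separated by construction: nothing in `AgeCheckRecord` can identify the user.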
The law targets any platform where users could be exposed to content involving self-harm, suicide, eating disorders, hate, violence, or illegal activity. Ofcom has launched enforcement investigations and says it can block platforms, cut off payment systems, or limit services if companies don’t comply.
Following the rollout, VPN usage in the UK reportedly surged by anywhere from 500% to 1,800% as users tried to dodge the age checks by masking their locations.
While child safety advocates have welcomed the changes, critics warn of risks to online privacy and freedom of expression. Concerns remain that the rules could restrict access to sensitive but legal content, such as mental health and LGBTQ+ support resources. Despite data-minimization safeguards, civil rights groups are pushing for greater transparency.
Still, the message from UK regulators is clear: platforms must build the web with age-appropriate boundaries, or prepare to face the consequences.