U.K. regulators are urging major social media companies to strengthen protections for children on their platforms after lawmakers rejected a proposal for a blanket social media ban for users under 16.
Media regulator Ofcom and the data protection watchdog, the Information Commissioner’s Office, said on Thursday that they had written to platforms including YouTube, TikTok, Facebook, Instagram, and Snapchat. The regulators called on the companies to address a wide range of child safety concerns, including stronger age verification systems and measures to prevent child grooming.
The move follows U.K. lawmakers’ rejection earlier this month of an amendment that would have introduced a social media ban for children under 16 as part of child welfare legislation.
At the same time, the U.K. government has begun consulting parents and young people to assess whether restricting children’s access to social media could be an effective solution.
Across Europe, governments are increasingly considering tougher regulations on teen social media use. The debate intensified after Australia became the first nation to implement a nationwide ban on social media access for under-16s in December. Countries such as Spain, France, and Denmark are now exploring similar policies.
Push for stronger age verification
Ofcom said it has formally asked social media platforms to explain the steps they are taking to prevent children from accessing their services, giving companies until April 30 to respond.
The regulator’s demands include stricter enforcement of minimum age rules, blocking strangers from contacting children, improving content safety for teenagers, and stopping companies from testing products — including artificial intelligence tools — on young users.
Tech companies are “failing to put children’s safety at the heart of their products” and are “falling short on promises to keep children safe online,” said Ofcom CEO Melanie Dawes.
“Without the right protections, like effective age checks, children have been routinely exposed to risks they didn’t choose, on services they can’t realistically avoid,” Dawes said.
The Information Commissioner’s Office also issued an open letter stating that social media firms should adopt technologies such as facial age estimation, digital identity verification, or one-time photo matching to improve age checks.
Currently, many platforms rely heavily on users simply declaring their age when signing up. According to regulators, this “self-declaration” method is “easily circumvented” and ineffective.
“This puts under-13s at risk by allowing their information to be collected and used unlawfully, without the protections they are entitled to,” ICO Chief Executive Paul Arnold said in the letter.
“With ever-growing public concern, the status quo is not working, and industry must do more to protect children. You should act now to identify and implement current viable technologies to prevent children under your minimum age from accessing your service,” Arnold added.
Industry response and legal scrutiny
Some companies have already taken action in response to stricter global regulations. Meta complied with Australia’s under-16 social media ban by blocking more than 500,000 accounts believed to belong to minors from Instagram, Facebook, and Threads during the first days of enforcement.
However, Meta urged the Australian government to reconsider the blanket restriction, arguing that such bans may push teenagers to bypass the rules and access platforms without proper safety protections.
Instagram also said it will begin notifying parents if teenagers repeatedly search for topics such as suicide or self-harm within a short period.
Meanwhile, a major lawsuit involving Meta and Alphabet began in January. The case centers on a young woman and her mother, who claim that design features on Instagram and YouTube contributed to the young woman’s addiction to the platforms.
Meta CEO Mark Zuckerberg and Instagram head Adam Mosseri have already testified at the trial, and a verdict is expected in mid-March. The outcome could help determine the extent of social media companies’ responsibility for protecting young users.
Regulators are also increasing pressure elsewhere in Europe. The European Commission launched an investigation in January into X, owned by Elon Musk, over claims that its AI chatbot Grok spread sexually explicit material involving children.
Separately, the ICO imposed a £14 million ($18 million) fine on Reddit in February for unlawfully processing children’s personal data.
What tech companies say
A spokesperson for Meta told CNBC that the company already uses several of the measures highlighted by regulators, including artificial intelligence tools that estimate a user’s age based on activity and facial age estimation technology.
The company also said it provides a dedicated teen account with built-in safety features. “With teens using on average 40 apps per week, we believe the most effective way to complement our own age assurance approach is to verify age centrally at the app store level,” the spokesperson said.
TikTok said it has introduced enhanced technology across Europe since January to detect and remove accounts belonging to users under its minimum age requirement of 13, supported by specialized moderators.
The platform added that it uses methods such as facial age estimation, credit card authorization, or government-issued identification to verify users’ ages.
Snapchat and YouTube did not immediately respond to requests for comment from CNBC.