Tech giants are warned they will be punished if they break the new rules to keep children safe online as Ofcom prepares to bring in new guidance for social media firms

Social media giants could be ordered to use facial recognition to check children’s ages under plans to be set out next spring.

Millions of kids would have their profiles taken down from online sites with tech giants warned they will be punished if they break the new rules. Ofcom, the online regulator, will announce guidance social media firms must follow to make sure their users are not under age.

As many as 60% of eight to 11-year-olds have social media profiles, according to Ofcom’s estimates, despite apps such as Facebook, TikTok, Instagram and Snapchat having a minimum age of 13.

Jon Higham, Ofcom’s head of online safety policy, said kids were creating adult profiles to get onto apps. He said: “It doesn’t take a genius to work out that children are going to lie about their age. So we think there’s a big issue there.”

Ofcom will finalise its guidance for pornography providers on age assurance in January and its wider children’s safety guidance in April, including age assurance for social media sites.


“The sort of thing that we might look to in that space is some of this facial age estimation technology that we see companies bringing in now, which we think is really pretty good at determining who is a child and who is an adult,” Mr Higham told the Telegraph. “So we’re going to be looking to drive out the use of that sort of content, so platforms can determine who’s a child and who isn’t, and then put in place extra protections for kids to stop them seeing toxic content.”

The Online Safety Act finally became law in 2023 after years of political chaos and division. But Ofcom cannot use its new powers to hold social media giants to account until lengthy consultations to update its rules are complete.

Ofcom’s final guidance for tech firms may not be published until summer 2025, after which companies will have three months to assess it. Parliament will also need to approve the code, which could take further time.

Under the new laws, tech firms could be fined up to 10% of their global turnover or have their services blocked in the UK if they fail to protect kids online.

A Government spokeswoman said: “Under the Online Safety Act, services which are likely to be accessed by children must have highly effective age assurance. It is for the independent regulator to decide how to implement the Act, but the government is clear that services should be taking proactive action to keep children safe including when it comes to age verification, not waiting for measures to come into force.”
