Social media firms abandon fight against Australia law banning under-16 users
October 28, 2025
Platforms expect to monitor a range of signals, but age detection will be spotty.
Social media platforms have agreed to comply with Australia’s social media ban for users under 16 years old, begrudgingly embracing the world’s most restrictive online child safety law.
On Tuesday, Meta, Snap, and TikTok confirmed to Australia’s parliament that they’ll start removing and deactivating more than a million underage accounts when the law’s enforcement begins on December 10, Reuters reported.
Firms risk fines of up to $32.5 million for failing to block underage users.
Age checks are expected to be spotty, however, and Australia is still “scrambling” to figure out “key issues around enforcement,” including detailing firms’ precise obligations, AFP reported.
An FAQ managed by Australia’s eSafety regulator noted that platforms will be expected to find the accounts of all users under 16.
Those users must be allowed to download their data easily before their account is removed.
Alternatively, some platforms may let users simply deactivate their accounts and retain their data until they reach age 17. Meta and TikTok expect to go that route, but Australia’s regulator warned that “users should not rely on platforms to provide this option.”
Additionally, platforms must prepare to catch kids who skirt age gates, the regulator said, and must block anyone under 16 from opening a new account. Beyond that, they’re expected to prevent “workarounds” to “bypass restrictions,” such as kids using AI to fake IDs, deepfakes to trick face scans, or virtual private networks (VPNs) to make it appear they’re located somewhere with less restrictive child safety policies.
Kids discovered inappropriately accessing social media should be easy to report, too, Australia’s regulator said.
Tech companies have slammed Australia’s social media ban as “vague,” “problematic,” and “rushed,” AFP reported.
Each platform is expected to detect age based on a range of signals, such as “how long an account has been active,” whether the user engages with content geared toward younger audiences, how many friends “who appear to be under 18” they have, or whether they appear to be underage in their profile pictures or image uploads. The eSafety regulator said platforms can also rely on audio analysis to detect age based on users’ voices.
Platforms could also analyze users’ activities for clues and may dig through users’ interactions to analyze “the language level and style” of both the user and their friends, the official guidance said. Or they could detect that a suspected child user’s posting seems to align with “school schedules.”
These signals are not expected to perfectly detect users’ ages, but platforms simply need to show they took “reasonable steps” to block banned users, Australia’s law says. Officials have recommended that platforms use a “layered” approach to overcome attempts at circumvention, the BBC reported.
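None of this guidance prescribes a specific algorithm. As a rough illustration only, a “layered” approach could be sketched as a weighted combination of weak signals, where a strong combined score triggers removal and a borderline score prompts a further age check. The signal names, weights, and thresholds below are purely hypothetical assumptions for illustration, not anything the eSafety regulator or the platforms have published.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    """Hypothetical per-account features loosely modeled on the signals described above."""
    account_age_days: int             # how long the account has been active
    youth_content_ratio: float        # share of engagement with content aimed at younger users (0-1)
    underage_friend_ratio: float      # share of friends who appear to be under 18 (0-1)
    image_minor_probability: float    # estimated probability from profile/image analysis (0-1)
    school_hours_posting: float       # share of posts aligning with school schedules (0-1)

def underage_likelihood(s: AccountSignals) -> float:
    """Combine weak signals into one score; the weights here are illustrative, not real."""
    score = 0.0
    if s.account_age_days < 365:          # newer accounts get a small bump
        score += 0.15
    score += 0.25 * s.youth_content_ratio
    score += 0.20 * s.underage_friend_ratio
    score += 0.25 * s.image_minor_probability
    score += 0.15 * s.school_hours_posting
    return min(score, 1.0)

def review_decision(score: float) -> str:
    """Layered handling: only strong scores trigger removal; mid scores trigger an age check."""
    if score >= 0.7:
        return "flag for removal/deactivation (with an appeal path)"
    if score >= 0.4:
        return "request additional age assurance"
    return "no action"

# Example: a roughly year-old account with mostly youth-oriented activity
signals = AccountSignals(300, 0.8, 0.6, 0.5, 0.7)
print(review_decision(underage_likelihood(signals)))
```

The point of layering, as the guidance suggests, is that no single signal is decisive; weak indicators only accumulate toward action, and borderline cases escalate to an explicit age check rather than an automatic ban.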
Everybody accepts that age checks won’t be perfect
When the ban takes effect in December, many kids will likely go undetected, and some adult users will inevitably be falsely flagged as being underage.
A study commissioned by Australia’s regulator found that all methods for detecting kids—including “formal verification using government documents, parental approval, or technologies to determine age based on facial structure, gestures,” or behaviors—were “technically possible.” But there is no “single ubiquitous solution that would suit all use cases, nor did we find solutions that were guaranteed to be effective in all deployments,” the BBC reported.
Perhaps most glaringly, face scans have a notably higher error rate when attempting to distinguish between a 16- and 17-year-old, the study showed.
Many platforms are concerned about enforcement risks, despite the regulator noting that compliance won’t be perfect—directly acknowledging in the FAQ that “no solution is likely to be 100 percent effective all of the time.” To shield adult users from any unintended censorship, the law requires platforms to provide a simple way for users to challenge underage account bans.
Meta’s policy director for Australia and New Zealand, Mia Garlick, told AFP that removing underage accounts will pose “significant new engineering and age assurance challenges.” Nevertheless, Meta plans to comply with the law and remove all users under 16 once the law kicks in later this year.
Australia’s law is supposed to reduce harms by keeping harmful content out of reach and reducing social media “pressures.” But experts have warned that kids can still access harmful content on platforms not impacted by the ban, and there’s no clear evidence that the ban will reduce kids’ screentime.
Instead, critics worry the ban will push kids to darker corners of the Internet while removing an important tool that allows some users, like kids with disabilities, to connect with others. Some advocates have pushed the government to consider exemptions for kids with disabilities. But Australia’s regulator backs the law as a necessary “delay” of all minors’ social media use, insisting that under the new regulations, “no under-16s have to feel like they’re ‘missing out’” since none of their peers will have social media.
Rachel Lord, an Australian spokesperson for YouTube, told AFP that “the legislation will not only be extremely difficult to enforce, it also does not fulfil its promise of making kids safer online.” YouTube is among the ban’s loudest critics.
Australia has proposed reviewing the law’s impacts after two years. In the meantime, other countries could adopt similar legislation, as concerns over child safety have only heightened. Age-check laws have become more popular, and artificial intelligence features that have alarmed parents and lawmakers are increasingly embedded in social media.