Australia’s internet regulator has accused the world’s largest social media companies of failing to properly enforce the country’s ban on under-16s using their platforms, despite legislation that came into force in December. The eSafety Commissioner, Julie Inman Grant, has raised “serious concerns” about compliance from Facebook, Instagram, Snapchat, TikTok and YouTube, citing poor practices including allowing banned users to repeatedly attempt age verification and inadequate safeguards to stop new account creation. In its first compliance assessment since the ban took effect, the regulator found numerous deficiencies and has now shifted from observation to active enforcement, warning that platforms must show they have put in place “appropriate systems and processes” to prevent children under 16 from accessing their services.
Compliance Failures Revealed in First Major Review
Australia’s eSafety Commissioner has outlined a worrying pattern of non-compliance among the world’s biggest social media platforms in her inaugural review after the ban took effect on 10 December. The report demonstrates that Meta, Snap, TikTok and YouTube have jointly neglected to establish adequate safeguards to prevent minors from accessing their services. Julie Inman Grant raised significant concerns about systemic weaknesses in age verification processes, highlighting that some platforms have allowed children who initially declared themselves to be under 16 to later claim they were older, effectively circumventing the law’s intent.
The findings indicate a notable intensification in the regulatory response, with the eSafety Commissioner transitioning from monitoring towards direct enforcement. The regulator has stressed that simply showing some children still hold accounts is insufficient; platforms must instead furnish substantive proof that they have established robust systems and processes designed to prevent under-16s from opening accounts in the first place. This shift demonstrates the government’s determination to hold tech giants accountable, with potential penalties looming for companies that fail to meet the legal requirements.
- Allowing previously banned users to re-verify their age and regain account access
- Enabling repeated attempts at the same age assurance method with no repercussions
- Insufficient safeguards to prevent under-16s from opening new accounts
- Limited complaint mechanisms for families and the wider community
- Absence of clear information about enforcement efforts and account deletions
The Scope of the Issue
The substantial scale of social media usage amongst Australian young people highlights the compliance challenge confronting both the authorities and the platforms in question. With millions of accounts already restricted or removed since the ban’s implementation, the figures paint a picture of widespread initial non-compliance. The eSafety Commissioner’s findings suggest that the operational and technical barriers to implementing age restrictions have proven far more complex than anticipated, with platforms struggling to differentiate authentic age confirmations from fraudulent ones. This intricacy has left enforcement authorities grappling with the fundamental question of whether existing age verification systems are adequate to the task.
Beyond the operational challenges lies a wider issue about the readiness of companies to place compliance ahead of user growth. Social media companies have long resisted stringent age verification measures, citing privacy concerns and the genuine difficulty of confirming age online. However, the regulatory report suggests that some platforms may not be making sufficient effort to deploy the infrastructure required by law. The shift towards active enforcement represents a pivotal moment: either platforms will substantially upgrade their compliance infrastructure, or they stand to incur significant fines that could reshape their business models in Australia and possibly affect compliance frameworks internationally.
What the Data Shows
In the first month following the ban’s launch, Australian officials stated that 4.7 million accounts had been restricted or deleted. Whilst this number initially seemed to suggest regulatory success, subsequent analysis reveals a more nuanced picture. The sheer volume of account deletions indicates that many under-16s had initially managed to create accounts, demonstrating that protective safeguards were insufficient. Furthermore, the data raises questions about whether suspended accounts reflect genuine enforcement or simply users voluntarily deleting their accounts in response to the new restrictions.
The minimal transparency surrounding these figures has troubled independent observers attempting to evaluate the ban’s actual effectiveness. Platforms have disclosed little data about their compliance procedures, success rates, or the demographics of the deleted accounts. This absence of transparency makes it hard for regulators and the public to determine whether the ban is working as intended or whether teenagers are merely discovering alternative ways to reach social media. The Commissioner’s insistence on comprehensive proof of systematic compliance measures reflects mounting dissatisfaction with platforms’ reluctance to disclose full information.
Sector Reaction and Opposition
The major tech platforms have responded to the regulatory enforcement measures with a combination of assurances of compliance and doubts regarding the practical feasibility of the ban. Meta, which operates Facebook and Instagram, emphasised its dedication to adhering to Australian law whilst simultaneously arguing that precise age verification remains a significant industry-wide challenge. The company has advocated for a different approach, proposing that robust age verification and parental approval mechanisms implemented at the app store level would be more effective than platform-level enforcement. This position reflects wider concerns across the industry that the current regulatory framework places an unrealistic burden on individual platforms.
Snap, the creator of Snapchat, has adopted a more assertive public position, stating that it had suspended 450,000 accounts following the ban’s implementation and asserting it continues to suspend additional accounts each day. However, industry observers question whether such figures demonstrate genuine compliance or simply represent reactive account management. The core conflict between platforms’ commercial structures—which traditionally depended on maximising user engagement and expansion—and the statutory obligation to systematically remove an entire age group remains unresolved. Companies have consistently opposed rigorous age verification methods, pointing to privacy concerns and technical limitations, creating a standoff between regulators and platforms over who bears responsibility for execution.
- Meta maintains age verification ought to take place at app store level rather than on individual platforms
- Snap claims to have suspended 450,000 accounts following the ban’s implementation in December
- Industry groups cite privacy issues and technical challenges as impediments to effective age verification
- Platforms maintain they are doing their best whilst challenging the ban’s overall effectiveness
Broader Questions About the Ban’s Effectiveness
As Australia’s under-16 online platform ban moves into its implementation stage, key concerns remain about whether the law will achieve its intended goals or merely push young users towards unregulated platforms. The regulator’s initial compliance assessment reveals that following implementation, substantial gaps remain—children keep discovering ways to bypass age verification mechanisms, and platforms have struggled to stop new underage accounts from being created. Critics argue that the ban’s success depends not merely on regulatory oversight but on whether young people will genuinely abandon major social networks or simply shift towards other platforms, secure messaging apps, or virtual private networks designed to mask their age and location.
The ban’s global implications add further complexity to assessments of its success. Countries such as the United Kingdom, Canada, and various European states are monitoring Australia’s approach closely, exploring similar legislation for their respective populations. If the ban fails to reduce children’s digital engagement or to protect them from dangerous online content, it could damage the case for similar measures elsewhere. Conversely, if enforcement becomes sufficiently robust to genuinely restrict underage participation, it may inspire other nations to adopt comparable measures. The outcome will likely influence worldwide regulatory patterns for the foreseeable future, ensuring Australia’s regulatory efforts are scrutinised far beyond its borders.
Who Gains and Who Is Disadvantaged
Mental health campaigners and child safety organisations have backed the ban as an essential measure to counter algorithmic manipulation and contact with harmful content. Parents and educators argue that taking young Australians off platforms built to maximise engagement could lower anxiety levels, improve sleep patterns, and reduce exposure to cyberbullying. Tech companies’ own research has acknowledged the mental health risks associated with social media use amongst adolescents, lending credibility to these concerns. However, the ban also removes legitimate uses of social media for young people—maintaining friendships, accessing educational content, and participating in online communities around shared interests. The regulatory approach assumes harm exceeds benefit, a calculation that some young people and their families dispute.
The ban’s practical implications extend beyond individual users to affect content creators, small businesses, and community organisations reliant on social media platforms. Young people who might have pursued creative careers through platforms like TikTok or Instagram now face legal barriers to participation. Small Australian businesses that depend on social media marketing lose access to younger demographic audiences. Community groups, charities, and educational organisations struggle to reach young people through channels they previously used effectively. Meanwhile, the ban unexpectedly advantages large technology companies with the resources to build age verification infrastructure, potentially strengthening their market dominance rather than reducing it. These unintended consequences suggest the ban’s effects extend well beyond the simple goal of child protection.
What Comes Next for Enforcement
Australia’s eSafety Commissioner has announced a notable transition from passive monitoring to proactive enforcement, marking a key milestone in the implementation of the youth access ban. The authority will now gather evidence to determine whether platforms have failed to take “reasonable steps” to restrict child participation, a regulatory requirement that goes beyond simply documenting that young people remain on these platforms. This approach demands tangible verification that platforms have established proper safeguards and protocols designed to exclude minors. The regulator has indicated it will pursue investigations methodically, building evidence that could result in considerable sanctions for non-compliance. This move from observation to intervention reflects growing dissatisfaction with the platforms’ current efforts and suggests that voluntary cooperation alone will no longer suffice.
The enforcement stage raises critical questions about the sufficiency of sanctions and the practical mechanisms for ensuring platform accountability. Australia’s legislation provides enforcement mechanisms, but their success hinges on the eSafety Commissioner’s willingness to pursue formal proceedings and the platforms’ ability to adapt effectively. International observers, notably regulators in the United Kingdom and European Union, will carefully track Australia’s implementation tactics and their consequences. A robust enforcement effort could establish a template for other jurisdictions contemplating comparable restrictions, whilst inadequate results might undermine the entire regulatory framework. The coming months will determine whether Australia’s groundbreaking legislation produces genuine protection for teenagers or remains largely symbolic in its influence.
