Canberra: Australia’s federal government is pressing ahead with its plan to ban children under 16 from using social media, with a new report revealing that the technologies available to enforce the law all carry risks and shortcomings.
The ban, due to take effect on December 10, will require major platforms including Facebook, Instagram, Snapchat and YouTube to take reasonable steps to stop under-16s from creating accounts and deactivate existing ones. Companies that fail to comply could face penalties of up to $32.5 million (A$50 million).
The government commissioned the UK-based Age Check Certification Scheme to assess how the ban could be implemented. The final report examined several approaches. Verification through official identity documents was found to be the most accurate but raised concerns that platforms could store sensitive data for longer than necessary or share it with regulators, leaving users at risk of breaches.

Facial recognition and behavioural analysis were 92 percent accurate for those aged 18 and over, but were unreliable for people close to the 16-year-old threshold, producing both false approvals and false rejections. Parental approval systems were also flagged as problematic due to privacy and accuracy concerns.
The report concluded that no single method was foolproof. It recommended that verification systems be layered, combining different technologies to build a more reliable approach while also addressing the risk of circumvention through forged documents or the use of VPNs.
Communications Minister Anika Wells said the findings showed age checks could be private, efficient and effective if tech companies deployed them responsibly. Wells added that the wealthiest platforms had no excuse for failing to meet the requirements, pointing to their existing use of artificial intelligence and user data for commercial purposes.
Polling indicates that most Australians support the ban, particularly parents worried about the harmful impacts of social media. However, mental health advocates and digital rights groups caution that the policy could push young people towards less regulated corners of the internet and isolate them from valuable social connections. They argue the government should instead focus on stronger regulation of harmful content and improved digital education.
Despite the debate, the government maintains that platforms must have age assurance systems in place by December 10, setting the stage for what it describes as a world-first in online safety regulation.