Published October 10, 2025
Last updated January 12, 2026

How to prepare for Australia's Social Media Minimum Age requirements

Compare age assurance methods, review eSafety's guiding principles, and design your compliant age assurance system.
Louis DeNicola
Brandon Chen
7 min
Key takeaways
Australia's Social Media Minimum Age (SMMA) requires many social media platforms to prevent Australians who are under 16 years old from creating or having accounts. eSafety, the primary regulator, will begin enforcement on December 10, 2025.
You don’t need to use specific age assurance methods or systems to comply with the law, but eSafety requires platforms to take "reasonable steps" to assess users’ ages and preserve their privacy. Waterfalling several methods based on risk could be a reasonable approach.  
Use eSafety’s six guiding principles to inform your age assurance strategy. Regularly review your methods and processes to ensure you stay compliant as your platform and user base change.

Australia's Social Media Minimum Age (SMMA) framework requires many social media platforms to prevent Australians younger than 16 from creating or having accounts. 

eSafety, Australia’s online safety regulator, is set to start enforcing the SMMA on December 10, 2025. In the meantime, it commissioned a technology trial and released regulatory guidance to help social media platforms prepare. 

As an identity verification and age assurance platform involved in the trial, we’ve been fielding the same questions from organizations: How do I know if the law applies to me? How do I prepare? Which age assurance methods can I use? How will this impact our conversion rates?

We cover the basic SMMA requirements in a separate blog post, including which platforms are exempt, the age assurance and data privacy requirements, and the penalties for non-compliance. Below, we’ll discuss creating a compliant age assurance system and methods for assessing users’ ages. 

The Age Assurance Technology Trial’s influence

The Australian government commissioned the Age Check Certification Scheme to conduct an Age Assurance Technology Trial (AATT) to help guide policy decisions. The trial assessed whether organizations can use existing age assurance technology without compromising Australians’ privacy or security. 

The final results were published in August 2025, and the answer was yes — with some caveats. The technology exists, but there isn’t a one-size-fits-all solution. Organizations also need to consider the context when choosing age assurance methods, configuring tools, and setting up fallbacks for users. For example, platforms that are likely to attract children or host harmful content may need a different approach than platforms that rarely have harmful content. 

Although the AATT wasn’t an accuracy test and didn’t rank participants, Persona was a top-performing vendor for the age verification and age estimation portions. 

Persona achieved the highest readiness rating for a range of age verification methods in the AATT. We also recorded the second-best (lowest) Mean Absolute Error among all providers when an age prediction was available — a critical accuracy metric for evaluating age estimation models.

How to build compliant age assurance systems

After the trial results were published, eSafety released regulatory guidance clarifying that age-restricted social media platforms must take "reasonable steps" to prevent Australian children who are under 16 from creating or having accounts. 

According to the guidance, relying solely on user self-declaration isn't sufficient. However, eSafety doesn’t require platforms to use specific technologies or methods to assess users’ ages. 

Review the different age assurance methods

The guidance takes a principles-based approach and outlines three general age assurance methods. Depending on the situation, a platform might want to use one or several of them: 

  • Age inference: Infer the user’s age or age range based on information you know or collect about the user. For example, how long the user has had an account, what types of content they interact with, and whether they have credit accounts, such as a credit card.  

  • Age estimation: Analyze physical or behavioral characteristics, such as a user’s face or hand gestures, to predict a likely age or age range without the user’s identity documents or date of birth. 

  • Age verification: Verify the user’s age by collecting official documents and records, such as the user’s government ID, and tying them to the user. 

Age inference and estimation methods often require a "buffer zone" (an error threshold) to account for inaccuracies. Age verification methods can be the most accurate, but they also introduce more friction and data collection than many age estimation or inference methods. Additionally, eSafety forbids platforms from requiring government ID verification as the only option for users. 
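The buffer-zone idea can be sketched in a few lines. The two-year buffer and the helper name below are illustrative assumptions for this post, not values set by eSafety:

```python
MIN_AGE = 16        # SMMA minimum age
BUFFER_YEARS = 2    # hypothetical buffer zone to absorb estimation error

def passes_with_buffer(estimated_age: float) -> bool:
    """Return True only when an estimated age clears the minimum age
    plus the buffer zone; borderline users should get a stronger check."""
    return estimated_age >= MIN_AGE + BUFFER_YEARS

print(passes_with_buffer(30.0))  # True: comfortably above the buffered threshold
print(passes_with_buffer(17.0))  # False: inside the buffer zone, escalate
```

In practice, the buffer width would be tuned to the measured error of the specific estimation model.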

| | Age verification | Age estimation | Age inference |
|---|---|---|---|
| Example data sources | A government ID combined with a selfie | A selfie or recording | A user’s metadata or interaction history |
| Technique | Collect and verify information from the user | Analyze a user’s selfie or recording | Analyze data about the user |
| Accuracy | Highest with binary outcomes | Moderate with a suggested buffer zone | Variable depending on signal quality |

Use a waterfall approach 

eSafety encourages platforms to use a waterfall approach, which it also calls successive validation. The strategy layers multiple age assurance methods, which can help platforms keep checks transparent and proportionate as they scale. 

In some cases, you can combine several low-assurance methods to increase overall assurance. Or, you might use a risk-based approach and require certain users to go through methods that offer higher certainty. 

For example, you might analyze a selfie — or selfies — to estimate if a user is 16 or older, but use a two-year buffer because the technology isn’t perfect. If the selfie estimate says the user is 30, you let them pass. However, if the selfie estimate returns 17, you require the user to verify their age using a different method. 
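The example above can be sketched as simple decision logic. Everything here — the thresholds, the `User` fields, and the `assure_age` helper — is a hypothetical illustration of a risk-based waterfall, not part of eSafety's guidance or any specific product:

```python
from dataclasses import dataclass
from typing import Optional

MIN_AGE = 16       # SMMA minimum age
BUFFER_YEARS = 2   # hypothetical buffer for estimation methods

@dataclass
class User:
    account_age_years: float
    selfie_estimated_age: Optional[float]  # None if no selfie was analyzed
    verified_age: Optional[int]            # from an ID check, if completed

def assure_age(user: User) -> str:
    """Waterfall: try low-friction signals first and escalate only when
    the outcome is uncertain. Returns 'pass', 'block', or 'escalate'."""
    # Step 1: age inference — a very old account implies the holder is well over 16.
    if user.account_age_years >= 10:
        return "pass"

    # Step 2: age estimation with a buffer zone to absorb model error.
    if user.selfie_estimated_age is not None:
        if user.selfie_estimated_age >= MIN_AGE + BUFFER_YEARS:
            return "pass"

    # Step 3: higher-assurance verification (e.g., an ID check) settles it.
    if user.verified_age is not None:
        return "pass" if user.verified_age >= MIN_AGE else "block"

    # No method gave a confident answer: ask for a stronger method.
    return "escalate"

print(assure_age(User(12, None, None)))  # 'pass' — long-standing account
print(assure_age(User(1, 30.0, None)))   # 'pass' — estimate clears the buffer
print(assure_age(User(1, 17.0, None)))   # 'escalate' — inside the buffer zone
print(assure_age(User(1, 17.0, 18)))     # 'pass' — verified age settles it
```

A real system would add more inference signals and route "escalate" outcomes to a menu of verification options, since eSafety forbids requiring government ID as the only choice.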

Follow eSafety’s six guiding principles

eSafety recognizes that what’s “reasonable” can depend on the platform, the types of content it hosts, its users’ demographics, and its overall risk profile. There is no one-size-fits-all approach to complying with the SMMA.

However, eSafety’s six guiding principles should inform your age assurance processes and systems. Here are examples of how you might be able to align with each: 

  1. Reliable, accurate, robust, and effective: Use independently certified age assurance methods, set error and buffer thresholds, conduct regular red-team testing, and track effectiveness metrics.

  2. Privacy-preserving and data-minimizing: Collect only the minimum information needed, use non-personal information wherever possible, and don't retain personal information.

  3. Accessible and fair: Test your methods across different demographics in Australia, clearly explain the methods you use, and offer multiple options so users can choose what works best for them. 

  4. Transparent: Explain when and why you require age assurance, describe what information you collect and how you use it, explain what can happen to age-restricted users’ accounts, and clearly explain what your process looks like so users can better detect scams. 

  5. Proportionate: Balance your age assurance methods against your risk profile and actively avoid unnecessarily blocking access.

  6. Evidence-based and responsive to emerging technology and risk: Update your processes and systems based on changes in user demographics, user behavior, or your platform. Be prepared to prove that you continuously monitor and try to improve your systems. 

Your next steps

eSafety’s guidance leaves a lot of decisions up to platforms, but here are some of the steps you can take to prepare: 

  • Assess your current state: Audit your existing age assurance measures, data collection abilities, and effectiveness metrics to understand your baseline capabilities.

  • Identify gaps: Map where your current measures fall short of eSafety's expectations and determine what additional systems, processes, or technologies you need.

  • Prioritize existing accounts: Focus on detecting and deactivating accounts held by users under 16 while you create measures for blocking new underage users. 

  • Build your layered approach: Design a successive, risk-based system that combines multiple methods across the user journey.

  • Document everything: Keep records of your systems, processes, testing, effectiveness metrics, and ongoing evaluations. 

How Persona can help

Persona helps platforms around the world navigate complex age assurance, data protection, and know your customer (KYC) regulations. With December 10 approaching, our team can help you design and implement a strategy tailored to your platform's needs. 

Persona's verified identity platform provides the building blocks for creating compliant, user-friendly flows that align with eSafety's expectations and guiding principles. Our privacy controls also help you limit data collection, automatically redact sensitive information, and maintain audit trails without storing underage users' personal information.

Our comprehensive library of age assurance methods includes: 

  • Government ID verification with selfie comparison

  • Government ID verification with database validations

  • Mobile document verifications

  • Digital identity verifications

  • Selfie age estimation

  • Database checks

Then, with Persona's no-code Dynamic Flow, you can configure age assurance requirements during account creation, detect underage users through existing signals, and automate ongoing monitoring. You can also waterfall and layer methods based on user signals and risk, creating proportionate flows that balance compliance with user experience. 

Want to learn more about the flows we recommend and their trade-offs? Book a consultation to discuss your SMMA compliance strategy, or explore our age assurance solutions to see how we can help you meet eSafety's requirements.

The information provided is not intended to constitute legal advice; all information provided is for general informational purposes only and may not constitute the most up-to-date information. Any links to other third-party websites are only for the convenience of the reader.

FAQs

What counts as "reasonable steps" under the SMMA?


eSafety doesn't define reasonable steps because it acknowledges that what’s considered reasonable will vary depending on the circumstances. Instead, it expects platforms to demonstrate they follow the six guiding principles and consider using a layered approach to age assurance at different points in the user journey.

Can platforms rely on self-declaration?


No. eSafety explicitly states that relying entirely on self-declaration doesn't constitute reasonable steps. You need to implement age verification, age estimation, or age inference methods to estimate or verify users’ ages confidently.

Do platforms need to verify every existing user?


No. You can use existing data and signals to infer age for many users. Age verification should be targeted based on risk signals and uncertainty. For example, if an account has existed for 10+ years, you can reasonably infer the user is over 16.

What age assurance methods does eSafety allow?


eSafety doesn't require specific age assurance methods. However, it expects platforms to offer multiple options and never require government ID as the only choice. Methods include selfie age estimation, government ID verification, database checks, and age inference from behavioral signals.

When does SMMA enforcement begin?


eSafety will start enforcing the Social Media Minimum Age requirements on December 10, 2025. It offers guidance on preparing for enforcement, and it expects platforms to initially focus on detecting and deactivating accounts held by users who are under 16.

Louis DeNicola
Louis DeNicola is a content marketing manager at Persona who focuses on fraud and identity. You can often find him at the climbing gym, in the kitchen (cooking or snacking), or relaxing with his wife and cat in West Oakland.
Brandon Chen
Originally from Taiwan, Brandon Chen is a California resident who loves to go fishing. By day, he works on the product marketing team.