Meta’s New AI Scans Bone Structure to Spot Underage Users

By Sazzad Yousuf, Review Editor


Meta is turning to advanced AI to solve a decades-old problem: kids lying about their age to get onto Facebook and Instagram. In a major update announced Tuesday, the company revealed it is now using AI to analyze physical traits like bone structure and height to identify and remove users under the age of 13.

How the Visual Analysis Works

Last week, Meta also introduced a feature that lets parents monitor their children's chats. In this update, rather than just checking the birthdate on a profile, Meta's new AI scans photos and videos for physical indicators of age.

  • Not Facial Recognition: Meta explicitly states this is not facial recognition and does not identify specific individuals.
  • Physical Cues: The system looks for general indicators, such as height and bone development, to estimate whether a user is a child or an adult.
  • Contextual Scanning: Beyond visuals, the AI also hunts for text-based context clues in bios, captions, and comments, such as mentions of school grades or birthday celebrations.

If the AI flags an account as being under 13, the profile is deactivated. To get back online, the user must provide proof of age or the account will be permanently deleted.

Automatic Teen Account Protections

Facebook is starting to catch underage users and convert their profiles into Teen Accounts. | Image from Meta

Meta isn’t just looking for kids under 13; it is also proactively moving older children into safer environments.

  • Proactive Enrollment: The company is expanding technology that identifies users who might be teens even if they claim to be adults and automatically places them into Teen Accounts.
  • Expanded Reach: This tech is launching on Facebook in the US for the first time, and for Instagram it is expanding across the EU's 27 member countries and Brazil.
  • Built-in Safety: Teen Accounts automatically restrict who can contact the user, limit sensitive content, and (on Facebook) prevent users under 16 from livestreaming.

This sudden surge in safety measures follows a massive legal blow for Meta. A New Mexico jury recently ordered the company to pay $375 million for misleading the public about child safety and failing to protect minors from predators.

Perhaps because of this pressure, Meta is doubling down on its argument that individual apps shouldn't be the age police. The company continues to push for legislation that would require app stores (like Apple's and Google's) to verify a user's age at the device level, creating a single, consistent safety standard for every app a teen downloads.


Editor's Take by Sazzad Yousuf

My younger brother tried to open TikTok last year while underage, but their AI flagged him almost instantly based on his behavior. It set a high bar for detection. I expect Meta’s new adult classifier to be even more aggressive given their data. Hopefully, it actually cleans up the platform rather than just being a legal shield. I just worry about the messy middle ground where adults get accidentally locked into Teen Accounts due to false flags.
