💥 Discover this trending post from TechCrunch 📖
📂 **Category**: AI, Apps, Social, Facebook, Instagram, Meta
📌 **What You’ll Learn**:
Meta announced on Tuesday that it will begin using artificial intelligence to scan photos and videos for visual clues indicating that a user is under 13 and should be removed from Facebook and Instagram. The company said these visual clues include a person’s height or bone structure.
“We want to be clear: this is not facial recognition,” Meta explained in its blog post. “Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone’s general age; it does not identify the specific person in the photo. By combining these visual insights with our analysis of text and interactions, we can dramatically increase the number of underage accounts we identify and remove.”
The visual analysis system is live in select countries, and Meta says it plans to roll it out more widely.
Meta says this system is part of its efforts to keep children under 13 off its platforms. These efforts include using artificial intelligence to analyze entire profiles for contextual clues, such as birthday celebrations or references to school grades. The company looks for these mentions across different formats, such as posts, comments, bios, captions, and more. Meta plans to expand this technology to more parts of its apps, including Instagram Live and Facebook Groups, in the future.
If Meta determines that someone may be underage, their account will be deactivated, and the user will need to prove their age using the company’s age verification process in order to prevent their account from being deleted.
The announcement comes weeks after a New Mexico jury ordered Meta to pay $375 million in civil penalties for misleading consumers about the safety of its platforms and endangering children. The company was also ordered to implement fundamental changes to its platforms. Meta has since threatened to shut down social media services in the state.
It’s worth noting that this case is one of many lawsuits that Meta and other major tech companies are facing over child safety.
Meta also announced Tuesday that it is expanding the technology that automatically places teens in Instagram’s stricter “teen accounts” to the 27 countries of the European Union and to Brazil. Teen accounts give users a more restricted experience with additional safeguards, such as receiving direct messages only from people they already follow or are connected to, hiding harmful comments, and setting accounts to private by default.
Additionally, Meta said it is expanding the technology to Facebook in the US for the first time, followed by the UK and EU in June.
💬 **What’s your take?**
Share your thoughts in the comments below!
#️⃣ **#Meta #artificial #intelligence #analyze #height #bone #structure #determine #users #underage**
🕒 **Posted on**: May 5, 2026
