💥 Discover this awesome post from TechCrunch 📖
📂 Category: Apps, Instagram, Meta, teen safety
✅ Here’s what you’ll learn:
In an effort to protect its underage users from harmful content, Instagram is imposing new restrictions on teen accounts. By default, users under 18 will only see content that aligns with PG-13 movie ratings, which excludes themes such as extreme violence, sexual nudity, and graphic drug use.
Users under the age of 18 will not be able to change this setting without explicit consent from their parents or guardians.
Instagram is also introducing a stricter filter setting, called Limited Content, which, when turned on, will prevent teens from seeing or posting comments on posts.
The company said that starting next year, teens with the Limited Content filter turned on will also face additional restrictions on the types of chats they can have with AI bots. Instagram is already applying the new PG-13 content settings to AI conversations.

The move comes as chatbot makers like OpenAI and Character.AI face lawsuits alleging harm to users. Last month, OpenAI rolled out new restrictions for ChatGPT users under the age of 18 and said it was training the chatbot to refrain from “flirty talk.” Earlier this year, Character.AI also added new limits and parental control tools.
Instagram, which has been building teen safety tools across accounts, direct messages, search, and content, is expanding controls and restrictions in other areas for underage users. The social media service will not allow teens to follow accounts that share age-inappropriate content, and if they already follow such accounts, they will not be able to see or interact with content from those accounts, and vice versa. The company is also removing these accounts from recommendations, making them harder to find.

The company will also prevent teens from seeing age-inappropriate content sent to them in direct messages.
Meta already restricts teen accounts from discovering content related to eating disorders and self-harm. The company is now blocking terms like “alcohol” or “blood,” and says it is also making sure teens can’t find content in those categories by misspelling those terms.

The company said it is also testing a new way for parents to use its moderation tools to flag content they believe should not be recommended to teens; flagged posts will be sent to its review team.
Instagram is rolling out these changes in the US, UK, Australia and Canada starting today, and globally next year.
💬 Share your opinion below!
#️⃣ #Instagram #defaults #PG13 #content #teens #adds #parental #controls
🕒 Posted on October 14, 2025
