Mark Zuckerberg Lied to Congress. We Can’t Trust His Testimony.


WHAT HE SAID 

WHAT THE EVIDENCE PROVES 

“No one should have to go through the things that your families have suffered and this is why we invest so much and are going to continue doing industry leading efforts to make sure that no one has to go through the types of things that your families have had to suffer,” Zuckerberg said directly to families who lost a child to Big Tech’s products in his now-infamous apology.

Source: US Senate Judiciary Committee Hearing on “Big Tech and the Online Child Sexual Exploitation Crisis” (2024)

Despite Zuckerberg’s claims during the 2024 US Senate Judiciary Committee hearing, Meta’s post-hearing investment in teen safety measures (i.e., Teen Accounts) amounts to a PR stunt. A report conducted a comprehensive study of teen accounts, testing 47 of Instagram’s 53 listed safety features, and found that:

  •  64% (30 tools) were rated “red” — either no longer available or ineffective.

  • 19% (9 tools) reduced harm but had major limitations.

  • 17% (8 tools) worked as advertised, with no notable limitations.

The results make clear that despite public promises, the majority of Instagram’s teen safety features fail to protect young users.

– Source: Teen Accounts, Broken Promises: How Instagram is Failing to Protect Minors  (Authored by Fairplay, Arturo Bejar, Cybersecurity for Democracy, Molly Rose Foundation, ParentsSOS, and The Heat Initiative)

“I don’t think that that’s… my job is to make good tools,” Zuckerberg said when Senator Josh Hawley asked whether he would establish a fund to compensate victims.

Source: US Senate Judiciary Committee Hearing on “Big Tech and the Online Child Sexual Exploitation Crisis” (2024)

Expert findings in ongoing litigation directly challenge that claim. An expert report filed by Tim Estes, Founder and CEO of AngelQ AI, concluded that the defendants’ platforms were not designed to be safe for kids, citing broken child-safety features including weak age verification, ineffective parental controls, infinite scroll, autoplay, notifications, and appearance-altering filters, among others.

The report was filed after Mark Zuckerberg appeared before the US Senate Judiciary Committee in 2024 (published May 16, 2025).

– Source: Expert Report from Tim Estes 

“I think it’s important to look at the science. I know people widely talk about [social media harms] as if that is something that’s already been proven and I think that the bulk of the scientific evidence does not support that.” 

Source: US Senate Judiciary Committee Hearing on “Big Tech and the Online Child Sexual Exploitation Crisis” (2024)

The 2021 Facebook Files investigation by WSJ revealed that both external studies and Meta’s own internal research consistently linked Instagram use to worsened teen mental health—especially around body image, anxiety, depression, and social comparison. 

Internal findings showed harms were platform-specific, with evidence that the app amplified self-esteem issues and eating-disorder risk among adolescents, particularly girls, while design features encouraged prolonged engagement despite those risks.

– Source: The Facebook Files 

“We don’t allow sexually explicit content on the service for people of any age.”

Source: US Senate Judiciary Committee Hearing on “Big Tech and the Online Child Sexual Exploitation Crisis” (2024)

Meta knowingly allowed sex trafficking on its platform, and had a 17-strike policy for accounts known to engage in trafficking. “You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended…by any measure across the industry, [it was] a very, very high strike threshold,” said Instagram’s former Head of Safety and Well-being Vaishnavi Jayakumar. 

– Source: Meta’s Unsealed Internal Documents Prove Years of Deliberate Harm and Inaction to Protect Minors 

  • 79% of all child sex trafficking in 2020 occurred on Meta’s platforms.

  • 22% of minors on Instagram reported a sexually explicit interaction.

“We don’t allow people under the age of 13 on our service. So if we find anyone who’s under the age of 13, we remove them from our service.”

Source: US Senate Judiciary Committee Hearing on “Big Tech and the Online Child Sexual Exploitation Crisis” (2024)

An internal document states Meta’s goal of becoming the most relevant social product for kids worldwide. To do so, Meta would focus on “each youth life stage, ‘Kid’ (6-10), ‘Tween’ (10-13), and ‘Teen’ (13+).”

– Source: Internal Document from Meta – Exhibit 45 

– Source: February 2020 Facebook presentation titled, ‘Tweens Competitive Audit’

– Source: Internal Facebook memo titled, ‘Youth Privacy: Defining five groups to guide age-appropriate design’

“The research that we’ve seen is that using social apps to connect with other people can have positive mental-health benefits,” CEO Mark Zuckerberg said at a congressional hearing in March 2021 when asked about children and mental health.

Source: Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show (2021)

Internal messages show that it was company policy to delete Meta’s Bad Experiences & Encounters Framework (BEEF) research, which cataloged negative experiences including social comparison-promoting content, self-harm-promoting content, bullying, and unwanted advances (Adam Mosseri’s testimony, 2/11).

“We make body image issues worse for one in three teen girls,” read one slide from 2019, summarizing internal research on teen girls experiencing those issues.

– Source: 2019 Instagram slide presentation called ‘Teen Mental Health Deep Dive’ 

“We are on the side of parents everywhere working hard to raise their kids.”

Source: US Senate Judiciary Committee Hearing on “Big Tech and the Online Child Sexual Exploitation Crisis” (2024)

“If we tell teens’ parents and teachers about their live videos, that will probably ruin the product from the start (…) My guess is we’ll need to be very good about not notifying parents.” 

– Source: Meta Internal Evidence Exhibit 15 

Another internal email reads: “One of the things we need to optimize for is sneaking a look at your phone under your desk in the middle of Chemistry :)”. 

– Source: Meta Internal Evidence Exhibit 373

Under federal law (the Children’s Online Privacy Protection Act), companies must install safeguards for users under 13; the company broke the law by pursuing aggressive “growth” strategies to hook “tweens” and children aged 5-10 on its products.

“We put special restrictions on teen accounts on Instagram. By default, accounts for under sixteens are set to private, have the most restrictive content settings and can’t be messaged by adults that they don’t follow or people they aren’t connected to.”

Source: US Senate Judiciary Committee Hearing on “Big Tech and the Online Child Sexual Exploitation Crisis” (2024)

An internal 2022 audit allegedly found that Instagram’s Accounts You May Follow feature recommended 1.4 million potentially inappropriate adults to teenage users in a single day. 

– Source: Meta’s Unsealed Internal Documents Prove Years of Deliberate Harm and Inaction to Protect Minors 

Meta only began rolling out privacy-by-default features in 2024, seven years after identifying dangers to minors.

– Source: Meta’s Unsealed Internal Documents Prove Years of Deliberate Harm and Inaction to Protect Minors 

“Mental health is a complex issue and the existing body of scientific work has not shown a causal link between using social media and young people having worse mental health outcomes.”

Source: US Senate Judiciary Committee Hearing on “Big Tech and the Online Child Sexual Exploitation Crisis” (2024)

According to internal documents, Meta designed a “deactivation study,” which found that users who stopped using Facebook and Instagram for a week showed lower rates of anxiety, depression, and loneliness. Meta halted the study and did not publicly disclose the results, citing potentially harmful media coverage as the reason for shelving it.

An unnamed Meta employee said this about the decision, “If the results are bad and we don’t publish and they leak, is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”

“We’re deeply committed to doing industry-leading work in this area. A good example of this work is Messenger Kids, which is widely recognized as better and safer than alternatives.”

– Source: Mark Zuckerberg Facebook Post 

Despite Facebook’s promises, a flaw in Messenger Kids allowed thousands of children to be in group chats with users who hadn’t been approved by their parents. Facebook tried to quietly address the problem by closing the violating group chats and notifying individual parents. The problem was only made public when it was covered by The Verge.

– Source: Facebook design flaw let thousands of kids join chats with unauthorized users 

“We want everyone who uses our services to have safe and positive experiences (…) I want to recognize the families who are here today who have lost a loved one or lived through some terrible things that no family should have to endure,” Zuckerberg told parents who have lost children to Big Tech’s product designs.

Source: US Senate Judiciary Committee Hearing on “Big Tech and the Online Child Sexual Exploitation Crisis” (2024)

An internal email from 2018 titled “Market Landscape Review: Teen Opportunity Cost and Lifetime Value” states that the “US lifetime value of a 13 y/o teen is roughly $270 per teen.”

The email also states “By 2030, Facebook will have 30 million fewer users than we could have otherwise if we do not solve the teen problem.”

– Source: Meta Internal Evidence Exhibit 71
