Coalition calls for Grok’s federal ban over nonconsensual sexual content

A coalition of nonprofits is urging the US government to immediately suspend the deployment of Grok, the chatbot developed by Elon Musk’s xAI, in federal agencies including the Department of Defense.

The open letter, shared exclusively with TechCrunch, tracks a slew of troubling behaviors from the large language model over the past year, including, most recently, a trend of X users asking Grok to turn images of real women, and in some cases children, into sexual images without their consent. According to some reports, Grok created thousands of explicit, nonconsensual images every hour, which were then widely disseminated on X, Musk’s social media platform owned by xAI.

“It is deeply concerning that the federal government continues to deploy an AI product with system-level failures that result in the generation of non-consensual sexual images and child sexual abuse material,” reads the letter, which was signed by advocacy groups such as Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America. “Given the administration’s executive orders and directives and the recently passed Take It Down Act, which the White House supported, it is concerning that [the Office of Management and Budget] has not yet directed federal agencies to shut down Grok.”

xAI reached an agreement last September with the General Services Administration (GSA), the government’s procurement arm, to sell Grok to federal agencies under the executive branch. Two months ago, xAI — along with Anthropic, Google, and OpenAI — was awarded a contract worth up to $200 million with the Department of Defense.

The letter’s authors argue that Grok has proven inconsistent with the administration’s own requirements for artificial intelligence systems. Under Office of Management and Budget guidance, systems that present severe and foreseeable risks that cannot be adequately mitigated should be discontinued.

“Our primary concern is that Grok has consistently shown itself to be an unsafe large language model,” JB Branch, a Big Tech accountability advocate at Public Citizen and one of the letter’s authors, told TechCrunch. “But there is also a long history of Grok’s meltdowns, including anti-Semitic rants, sexist rants, and sexualized images of women and children.”

Several governments have expressed an unwillingness to engage with Grok following its behavior in January, which caps a string of incidents including creating anti-Semitic posts on X and calling itself “MechaHitler.” Indonesia, Malaysia, and the Philippines have all blocked access to Grok (though they later lifted those bans), and the European Union, the United Kingdom, South Korea, and India are actively investigating xAI and X over data privacy and the distribution of illegal content.

The letter also comes a week after Common Sense Media, a nonprofit that reviews media and technology for families, published a risk assessment that found Grok to be among the most unsafe products for children and teens. One could argue, based on the report’s findings (including Grok’s tendency to give unsafe advice, share information about drugs, generate violent and sexual images, spread conspiracy theories, and produce biased output), that Grok isn’t exactly safe for adults either.

“If you know that a large language model has been declared unsafe by AI safety experts, why would you want it handling our most sensitive data?” Branch said. “From a national security standpoint, this makes absolutely no sense.”

Using closed-source LLMs at all is problematic, especially for the Pentagon, says Andrew Christianson, a former NSA contractor and current founder of Gobbi AI, a no-code AI agent platform for classified environments.

“Closed weights mean you can’t see inside the model and can’t scrutinize how decisions are made,” he said. “Closed source means you can’t inspect the software or control where it runs. With Grok, the Pentagon gets both, which is the worst possible combination for national security.”

“These AI agents are not just chatbots,” Christianson added. “They can take actions, access systems, and transmit information. You have to be able to see exactly what they’re doing and how they’re making their decisions. Open source gives you that. Proprietary cloud AI doesn’t.”

The risks of using biased or unsafe AI systems extend beyond national security use cases. Branch noted that an LLM shown to produce biased and discriminatory outputs can also lead to disproportionately negative outcomes for people, especially if it is used in departments that handle housing, employment, or justice.

While the Office of Management and Budget has not yet published its consolidated federal inventory of AI use cases for 2025, TechCrunch reviewed several agencies’ use cases — most of which either do not use Grok or do not disclose their use of Grok. Aside from the Department of Defense, the Department of Health and Human Services also appears to be actively using Grok, primarily to schedule and manage social media posts and create first drafts of documents, briefings, or other communication materials.

Branch pointed to what he sees as a philosophical affinity between Grok and the administration as a reason it may be willing to overlook the chatbot’s shortcomings.

“Grok’s brand is the anti-woke large language model, and that lends itself to the philosophy of this administration,” Branch said. “If you have an administration that has had multiple issues with people accused of being neo-Nazis or white supremacists, and then they use a large language model associated with that type of behavior, I imagine they might have a tendency to use it.”

This is the third letter the coalition has sent, after it expressed similar concerns in August and October of last year. In August, xAI launched Grok Imagine’s “spicy mode,” which generated large numbers of sexually explicit, nonconsensual deepfake videos. TechCrunch also reported in August that Grok’s private chats were being indexed by Google Search.

Ahead of the October letter, Grok was accused of providing misleading information about elections, including false deadlines for ballot changes, and of producing political deepfakes. xAI also launched Grokipedia, which researchers found legitimizes scientific racism, HIV/AIDS denialism, and vaccine conspiracy theories.

Beyond the immediate suspension of Grok’s federal deployment, the letter demands that the Office of Management and Budget formally investigate Grok’s safety failures and whether appropriate oversight of the chatbot was conducted. It also asks the agency to publicly state whether Grok has been evaluated for compliance with Trump’s executive order requiring LLMs to be truth-seeking and impartial, and whether it meets OMB’s risk mitigation standards.

“The administration needs to pause and reevaluate whether or not Grok meets these thresholds,” Branch said.

TechCrunch has reached out to xAI and OMB for comment.
