📖 **Source**: TechCrunch
📂 **Category**: AI, Government & Policy, Social, CSAM, Elon Musk, Grok, nonconsensual sexual imagery, X, xAI
Elon Musk said on Wednesday that he was “not aware of any nude images of minors created by Grok,” hours before California’s attorney general opened an investigation into xAI’s chatbot over the “spread of non-consensual sexually explicit material.”
Musk’s denial comes as pressure mounts from governments around the world – from the UK and Europe to Malaysia and Indonesia – after X users began asking Grok to turn images of real women, and in some cases children, into sexual images without their consent. Copyleaks, an AI detection and content management platform, estimated that roughly one such photo was posted every minute on X. A separate sample, collected over 24 hours from January 5 to 6, found 6,700 such photos per hour. (X and xAI are part of the same company.)
“These materials…were used to harass people online,” California Attorney General Rob Bonta said in a statement. “I urge xAI to take immediate action to ensure this does not go ahead.”
The Attorney General’s Office will investigate whether, and how, xAI violated the law.
Several laws exist to protect targets of non-consensual sexual images and child sexual abuse material (CSAM). Last year, the federal Take It Down Act was signed into law; it criminalizes the intentional distribution of non-consensual intimate images, including deepfakes, and requires platforms like X to remove such content within 48 hours. California also has its own set of laws, signed by Gov. Gavin Newsom in 2024, to crack down on sexually explicit deepfakes.
Grok began fulfilling user requests on X to produce sexual images of women and children at the end of last year. The trend appears to have taken off after some adult content creators asked Grok to create sexual images of themselves as a form of marketing, which led other users to make similar requests. In a number of public cases, including those involving well-known figures such as Stranger Things actress Millie Bobby Brown, Grok has responded to requests to alter real photos of real women by changing their clothing, posture, or physical features in sexually explicit ways.
According to some reports, xAI has begun implementing safeguards to address the issue. Grok now requires a premium subscription before responding to certain image-creation requests, and even then the image may not be generated. April Kozin, VP of marketing at Copyleaks, told TechCrunch that Grok may fulfill such requests in a more generic or toned-down way. She added that Grok appears to be more lenient with adult content creators.
“Overall, these behaviors suggest that X is experimenting with multiple mechanisms to reduce or control the generation of problematic images, although inconsistencies remain,” Kozin said.
Neither xAI nor Musk has publicly addressed the issue directly. A few days after these cases began surfacing, Musk appeared to highlight the issue by asking Grok to create a photo of himself wearing a bikini. On January 3, X’s safety account said the company was taking “actions against illegal content on X, including [CSAM],” without specifically addressing Grok’s apparent lack of guardrails against creating manipulated sexual images of women.
That framing mirrors what Musk himself posted today, which emphasizes illegality and user behavior.
Musk wrote that he was “not aware of any nude images of minors created by Grok. Literally zero.” The statement does not deny the existence of the bikini photos, or of sexualized alterations more broadly.
Michael Goodyear, an assistant professor at New York Law School and a former litigator, told TechCrunch that Musk would likely focus narrowly on CSAM because the penalties for creating or distributing synthetic sexual images of children are greater.
“For example, in the United States, a creator or distributor of CSAM could face up to three years in prison under the Take It Down Act, compared to two years for non-consensual adult sexual images,” Goodyear said.
He added that the “bigger point” is Musk’s attempt to draw attention to problematic user content.
“Clearly, Grok does not generate images automatically. It only does so at the user’s request,” Musk wrote in his post. “When asked to create images, it will refuse to produce anything illegal, as Grok’s operating principle is to adhere to the laws of any given country. There may be times when an adversarial hack of Grok’s prompts causes something unexpected to happen. If that happens, we will fix the bug immediately.”
Taken together, the post describes these incidents as rare, attributes them to user requests or adversarial prompts, and presents them as technical issues that can be resolved through fixes. It stops short of acknowledging any flaws in Grok’s basic safety design.
“Regulators, with an interest in protecting freedom of expression, may consider requiring AI developers to take proactive measures to block such content,” Goodyear said.
TechCrunch reached out to xAI to ask how often Grok had generated sexually manipulated images of women and children, what specific guardrails had changed, and whether the company had notified regulators of the issue. TechCrunch will update this article if the company responds.
The California AG isn’t the only regulator trying to hold xAI accountable on this issue. Indonesia and Malaysia have temporarily blocked access to Grok; India asked X to make immediate technical and procedural changes to Grok; the European Commission ordered xAI to retain all documents related to its Grok chatbot, a prelude to opening an investigation; and the UK online safety watchdog Ofcom opened a formal investigation under the UK Online Safety Act.
xAI has come under fire for sexualized Grok images before. As AG Bonta noted in his statement, Grok includes a “spicy” mode for creating explicit content. In October, an update made it easier to bypass the few existing safety guardrails, leading many users to create explicit pornography with Grok, as well as violent sexual imagery.
Many of the pornographic images Grok produced then were of AI-generated people – something many may still consider morally questionable, but perhaps less harmful, since no real individuals appear in the images and videos.
“When AI systems allow images of real people to be manipulated without explicit consent, the impact can be immediate and highly personal,” Alon Yamin, co-founder and CEO of Copyleaks, said in an emailed statement to TechCrunch. “From Sora to Grok, we’re seeing a rapid rise in AI’s capability to create manipulated media. To that end, detection and governance are needed now more than ever to help prevent abuse.”
🕒 **Posted on**: January 14, 2026
