🔥 Explore this awesome post from The Verge 📖
📂 **Category**: AI, Elon Musk, News, Tech, xAI
💡 **What You’ll Learn**:
xAI’s Grok is removing clothes from people’s photos without their consent, after a feature rolled out this week that lets X users instantly edit any photo with the bot, no permission from the original poster required. Not only is the original poster not notified when their photo is edited, but Grok appears to have few guardrails in place to prevent anything short of full nudity. Over the past few days, X has filled with photos of women and children edited to appear pregnant, stripped of their skirts, put into bikinis, or placed in other sexualized poses. Images of world leaders and celebrities have also been run through Grok.
The trend of removing clothes from photos started when adult content creators asked Grok for sexy photos of themselves after the release of a new photo editing feature, AI detection company Copyleaks reported. Users then began applying similar prompts to photos of other users, mostly women, who did not consent to the edits. Women have flagged the rapid rise of deepfakes on X to various news outlets, including Metro and PetaPixel. Grok was already able to edit photos in sexualized ways when tagged in a post on X.
In one post on X, since removed from the platform, Grok edited a photo of two young girls to put them in tight clothing and sexually suggestive poses. (Realistic, sexually explicit images of identifiable adults or children can be illegal under US law, though it is not clear whether the images created by Grok would meet that standard.) In another exchange, Grok suggested the user report the images to the FBI as CSAM, saying that it was “urgently fixing” the “gaps in safeguards.”
But Grok’s statement is nothing more than an AI-generated response to a user who requested a “sincere apology note”; it does not indicate that Grok “understands” what it is doing, nor does it necessarily reflect the actual views and policies of its operator, xAI. When Reuters asked for comment on the situation, xAI responded with just three words: “Legacy media lies.” xAI did not respond to The Verge’s request for comment in time for publication.
Elon Musk himself appears to have sparked the wave of bikini edits after asking Grok to replace actor Ben Affleck in a meme with himself wearing a bikini. Days later, North Korean leader Kim Jong Un’s leather jacket was swapped for a multicolored spaghetti-strap bikini, with US President Donald Trump standing nearby in a matching swimsuit. (Cue the jokes about nuclear war.) A photo of British politician Priti Patel, originally posted by a user with a sexually suggestive message in 2022, was turned into a bikini photo on January 2. Responding to the wave of bikini photos on his platform, Musk jokingly reposted a photo of a toaster in a bikini, with a caption to the effect that Grok can put a bikini on anything, even your puppy.
While some of the images, such as the toaster, were clearly intended as jokes, others were plainly designed to produce borderline pornographic results, including specific instructions for Grok to use skimpy bikini styles or to remove a skirt entirely. (The chatbot did remove skirts, but did not depict full, uncensored nudity in the responses The Verge saw.) Grok also complied with requests to replace toddlers’ clothes with bikinis.
Musk’s AI products have prominently featured sexualized offerings before, such as Ani, xAI’s flirtatious AI companion. The Verge reporters Victoria Song and Jess Weatherbed found that Grok’s video generator easily produced fake topless images of Taylor Swift, despite xAI’s acceptable use policy prohibiting depicting “likenesses of persons in a pornographic manner.” By contrast, Google’s Veo and OpenAI’s Sora video generators have guardrails around creating NSFW content, though Sora has also been used to produce sexualized videos of children as well as sex videos. Deepfake imagery is spreading rapidly, according to a report by cybersecurity firm DeepStrike, and much of it is non-consensual sexual imagery; a 2024 survey of American students found that 40% were aware of a deepfake of someone they knew, while 15% were aware of an explicit or intimate deepfake made without consent.
When a user asked why photos of women were being turned into bikini photos, Grok denied editing the photos without consent, saying: “These are artificial intelligence creations based on requests, not real photo edits without consent.”
Take the AI bot’s denial as you will.
🕒 **Posted on**: January 4, 2026
