
Grok AI is creating explicit images of women, children. They want answers.

On Dec. 31, xAI’s Grok was prompted by a user on X to write a “heartfelt apology note” after the chatbot had generated and shared an image of two young girls in sexualized attire based on a user’s prompt. In the apology, Grok attributed this to a “failure in safeguards.” But in the days since, a growing number of women and girls have been digitally “undressed” by the bot.

Ashley St. Clair, a conservative influencer who shares a child with Elon Musk, was a target of the digital attacks, she wrote. People on X, formerly Twitter, used Grok to generate sexual images of her, including one using a photo of St. Clair at 14 years old. Other users reported that Grok had edited their photos to “put them into a bikini.”

One of those women is Bella Wallersteiner, a U.K.-based content creator, who posted a selfie to X on Dec. 31 to wish her nearly 100,000 followers a happy New Year. She scrolled through the replies, liking tweets that returned her well-wishes. Then, she saw a photo of herself in a “Hello Kitty micro bikini.” The photo had been edited and published without her consent, Wallersteiner told USA TODAY on Jan. 6.

This trend is part of a growing problem experts call image-based sexual abuse, in which deepfake nonconsensual intimate imagery (NCII) is used to degrade and exploit another person. While anyone can be victimized, 90% of the victims of image-based sexual abuse are women.

Wallersteiner’s initial reaction was “embarrassment and shame.” It wasn’t her first time being harassed online, but this felt different: it was her first experience with deepfake sexual imagery. She started to blame herself, wondering if she should have been more careful about posting selfies and personal content on the internet. But then she saw another creator in the U.K. post that it had happened to her as well.

“I thought it was a ‘me’ problem. I didn’t realize that hundreds of other women had been impacted,” she says. Seeing how widespread the issue was gave her the confidence to speak out. Now, she hopes her story will help stop this from happening to other women.

“This is not only about sexualized images of girls and women, it’s broader than that,” Leora Tanenbaum, author of “Sexy Selfie Nation,” told USA TODAY after numerous women had their photos suggestively altered by Grok in July. “This is all about taking control and power away from girls and women.”

xAI has not responded to USA TODAY’s request for comment.

Bella Wallersteiner, a U.K.-based content creator, was among countless women who had their photos suggestively altered by xAI’s chatbot Grok.

AI’s ability to flag inappropriate prompts can falter. Grok’s ‘Spicy mode’ allows them.

This isn’t the first time Grok has come under this type of scrutiny; similar incidents were reported in July. Despite that, xAI released Grok’s “spicy mode” in August as part of Grok Imagine, its image and video generation feature.

USA TODAY asked Grok on Jan. 6 if “spicy mode” can be used to alter images of real people in a conversation with the bot on X: “Yes, Spicy Mode in Grok Imagine can be used to alter or edit images of real people in provocative or NSFW ways, such as removing clothing, adding suggestive elements, or creating sexualized versions,” the bot replied, acknowledging that this feature has been “controversial.”

When asked how the bot gets consent from the individual having their photo altered, it replied, “I don’t get consent from anyone – because I’m an AI tool, not a person who can ask for or obtain permission on behalf of users.”

You can block or disable Grok, but doing so doesn’t always prevent modifications to your content. Another user could tag Grok in a reply, request an edit to your photo, and you wouldn’t know it because you have Grok blocked.

The more effective solution is to make your profile private, but not all users want to take that step.

The ‘Take It Down Act’ aims to combat nonconsensual sexual imagery. Is it working?

In May 2025, the Take It Down Act was signed into law to combat nonconsensual intimate imagery, including deepfakes and revenge porn.

While most states have laws protecting people from nonconsensual intimate images and sexual deepfakes, victims have struggled to have images removed from websites, increasing the likelihood that images will continue to spread and retraumatize them. The law requires websites and online platforms to take down nonconsensual intimate imagery within 48 hours of a verified request from the victim.

However, scrolling through Grok’s replies on X, the bot’s page is littered with rapidly generated explicit, doctored photos of women.

AI-powered programs that digitally undress women − sometimes called “nudify” apps − have been around for years, but until now they were largely confined to the darker corners of the internet, such as niche websites or Telegram channels, and typically required a certain level of effort or payment.

Grok’s integration into X has lowered that barrier to entry. And removing a photo still doesn’t eliminate the harm done to the victim.

Musk said on Jan. 3 that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” In a separate post, he re-shared an image of a toaster with a bikini on it, with the caption, “Grok can put a bikini on everything,” and a laughing emoji.

Users affected want to hold X accountable, and see real change

In June, Evie, a 21-year-old Twitch streamer and photographer, was among a group of women who had their images sexualized on X. After posting a selfie to her page, an anonymous user asked Grok to edit the image in a highly sexualized way, using language that got around filters the bot had in place. Grok then replied to the post with the generated image attached.

“It was just a shock seeing that a bot built into a platform like X is able to do stuff like that,” she told USA TODAY over video chat in July, a month after the initial incident.

Evie, 21, was among a group of women who had their images non-consensually sexualized on the social media platform X. Here is one of the selfies she posted that trolls converted into a sexual image and reposted on the platform.

In response to Grok’s recent controversy, Evie posted on Jan. 5: “I have over 100 examples of Grok creating sexually explicit images of me and some even include me naked … don’t let this be something everyone moves on from in a weeks time. Hold everyone involved accountable.”

Wallersteiner shared her story on LinkedIn and was pleasantly surprised by the support from her colleagues and others in her professional network.

“Going forward, I hope more women feel like they can talk about it when they are the victim of this type of activity,” she says.

Many of the photos of Wallersteiner have been taken down, she says, but new requests keep popping up, especially as she continues to speak out. She doesn’t plan on taking legal action against X or xAI, but she wants the U.K. to create legislation around deepfake NCII that protects victims from this sort of abuse and holds tech companies accountable.

For now, she’s still on X, but is questioning that choice. “X has become an increasingly hateful platform that is not a brilliant place to be for women,” she says.

Evie also wants to see tangible change and has remained on X. She said she wants to believe that it “didn’t really get to her,” but noticed that she’s more thoughtful about the photos she posts, such as wondering if she’s showing too much skin to the point where an AI bot can more easily undress her. “I always think, ‘Is there a way that someone could do something to these pictures?'”

Contributing: AJ Vicens and Raphael Satter, Reuters

This article originally appeared on USA TODAY: Grok ‘undresses’ women in a series of posts. It’s not the first time.


