When Siri Dahl first heard about people using the AI chatbot Grok to undress pictures of women and girls on X, she checked her own replies on the social media platform to see if people were doing it to her. Dahl, a content creator and adult performer, stopped checking X as much after the last election, but she occasionally promotes her OnlyFans page there. Sure enough, underneath some recent photos, she found people asking Grok to “take her clothes off” and “put her in a micro bikini.”
In recent days, as widespread criticism and foreign government investigations into Grok’s “undressing” capabilities reached a fever pitch, X began scaling back how users could interact with the chatbot. Users now aren’t supposed to be able to edit an image to show revealing swimsuits or lingerie. On January 14, xAI said it had limited Grok’s image editing capabilities, eliminating the “editing of images of real people in revealing clothing such as bikinis.”
When people previously tried to get Grok to undress Dahl, the chatbot didn’t return any edited images—but at the same time, other users started engaging in the opposite tactic. One asked Grok to put Dahl “in a nun’s habit.” Others requested Grok put adult creators in burqas, sarees, and even hot dog costumes. These types of deepfakes still appear frequently in Grok’s output. In Dahl’s opinion, whether you’re dressing someone up or down, the underlying factor is control.
“The point of this is to degrade and remove autonomy and control from the person in the photo,” Dahl told Playboy. “This is what people in my industry have dealt with, on some level, since we’ve been around. Now, anyone can be a target.”
X made deepfake abuse more accessible
Dahl, who first joined Twitter in 2012, has seen creepy reply guys from the very start, the kind who would steal and leak adult content and label performers in ways they didn’t appreciate. AI presents a new way to automate and scale the kind of online sexual harassment that women and sex workers have already experienced for decades.
Earlier this month, posts about the Grok trend went viral, unleashing a tidal wave of AI editing that exacerbated an already existing problem. Deepfake researcher Genevieve Oh found that Grok was used to create more than 7,700 sexualized images per hour on January 7. Two days later, X began allowing only paying premium users to publicly interact with Grok and request edited images. But Oh found that Grok was still being used to create more than 1,500 sexualized images an hour on January 9, twenty times more than the top five deepfake websites combined.
“[Elon Musk] replies to these deepfakes Grok is generating with laughing emojis,” Dahl said. “It used to just be ignored, it would be some random Twitter follower or troll yelling into the void.”
X didn’t reply to Playboy’s request for comment, and xAI’s press email automatically responds “Legacy Media Lies.” Advertisers, investors, and federal government officials in the US have been largely quiet about Grok, even as other countries have challenged X to curb the production of sexually explicit deepfakes. The UK’s online safety watchdog, Ofcom, launched a formal investigation on Monday to determine whether X has violated the country’s Online Safety Act, which could lead to fines and, in the most serious case, the platform being blocked in the UK. On January 14, X reportedly told UK officials it was working to comply with the country’s laws. Meanwhile, Indonesia and Malaysia have already temporarily blocked X, French prosecutors are investigating it, and India has ordered it to comply with the country’s obscenity laws. On Wednesday, California launched an investigation into xAI over potential violations of state laws against AI-generated depictions of minors engaging in sexual conduct.
Jess Davies, a journalist and campaigner in the UK, has argued that laws there still aren’t strict enough to account for many of the prominent trends in AI sexual abuse, like editing what appears to be semen onto a victim’s face by prompting the generator to cover her in “donut glaze” or a similar substance. That’s exactly what happened to Davies after she posted on X on New Year’s Day.
“The fact that Grok is still creating non-consensual images of women is a choice. They can stop this, but instead they’re normalising the exploitation of women,” Davies, who recently published the book “No One Wants to See Your D*ck: A Handbook for Survival in the Digital World,” wrote. Her post got nearly 9,000 likes and was viewed more than 300,000 times. And in the replies, someone asked Grok to “put her in a bikini made of cling film.” The bot generated an image that Davies said was “essentially trying to get around the very loose guardrails that existed to make me look as naked as possible.” That image was later deleted from the Grok account, but the user’s prompt remains, alongside other replies calling Davies a “whore” and telling her to “cry about it.”
“Men use these kinds of tools to try and silence and humiliate women and ultimately stop us from speaking out and calling out this kind of harmful behavior,” Davies told Playboy. “And I think it’s no coincidence that this is happening in this time where we’re seeing this epidemic of male violence towards women and girls, but we’re also seeing this pushback from women no longer accepting this kind of behavior.”
Gray legal boundaries and deepfake abuse
In the US, AI depictions of semen or other bodily fluids do technically fall under many existing laws against nonconsensual intimate imagery. Last year, Donald Trump signed the Take It Down Act into law, meaning criminal provisions against sexually explicit deepfakes are already in effect. Starting in May, platforms will also be required to institute a process through which victims can report violating content and expect a takedown within 48 hours. But so far, perpetrators using Grok, and xAI itself, have avoided federal accountability, while critics of the Take It Down Act fear it will be weaponized for censorship more than it will help victims.
Nonconsensual deepfakes, usually featuring the likenesses of female celebrities, have repeatedly gone viral on X for years. The platform has often been slow to respond. But even before this latest wave of deepfakes, Musk’s platform had openly embraced sexualizing women with Grok. There’s “Ani,” Grok’s flirty anime-inspired avatar with blonde pigtails. Users can also select “sexy” and “spicy” settings for their chatbot conversations and media generation. And xAI employees who train Grok told Business Insider last year that users were already prompting the chatbot to create child sexual abuse material.
That didn’t stop the continued public rollout of Grok, which is now integrated into nearly every feature on X, as well as the federal US government, thanks to Musk’s lucrative deal to integrate Grok into the Pentagon. Musk’s proximity to the Trump administration could help explain why X has so far avoided government scrutiny at home. Instead, conservatives in the US and the UK have characterized blocking X over Grok as an attack on free speech.
“They have been weaponizing for months now, ‘Protect our women,’ ‘We need to keep women safe,’ then as soon as something like this happens that doesn’t correlate with their political views or who they’re supporting, all of a sudden they don’t want to protect women. They’re not interested in women’s safety,” Davies said. “Women actually don’t have free speech online, because if we speak out or if we even post an image of ourselves online, we experience misogyny on such a huge scale.”
The psychological impacts of deepfakes
With a growing library of examples of people using Grok to add clothes or remove specific religious garments like hijabs, perpetrators of this kind of harassment are skirting the boundaries of what’s considered illegal. Given the rise of this kind of material, “deepfake porn” is a misnomer: much of it isn’t even sexually explicit. It becomes purely about misrepresenting the target’s likeness. But it still carries consequences.
“I think the biggest thing that the online abuse can do that it’s hard to have a vocabulary for is that it really challenges our self-concept,” said Alia Dastagir, a journalist and author of the book “To Those Who Have Confused You to Be a Person: Words as Violence and Stories of Women’s Resistance Online.” Dastagir’s work explores the underappreciated effects of online violence against marginalized people, which now includes deepfake abuse.
“One of the more unique features of online abuse is this vicious kind of permanence. It becomes this record online of you, and a lot of it is rife with disinformation and lies,” Dastagir told Playboy. “A lot of this image-based abuse gets replicated and copied, and it becomes this impossible situation where you’re trying to get something down and you can’t. It becomes a prolonged, life-altering experience.”
As a popular adult creator, Dahl has more experience than most women in monitoring and trying to take down content depicting her likeness online. A couple of times a month, she does a general search for images of herself. She also pays for a Digital Millennium Copyright Act (DMCA) takedown service, which manages the “notice-and-takedown” process when someone steals and reuploads her copyrighted work. At the end of each month, Dahl reviews the takedown requests. Sometimes, she sees something that looks like her, but stranger.
“It’s uncanny valley, it doesn’t look right in some way, so I question whether it’s a deepfake,” Dahl said. She added that AI has emerged as one of the latest fads in the adult industry, but that AI models she’s previously tried out have been too unsophisticated to actually look like her. Plus, Dahl isn’t interested in being replaced by an AI version of herself.
“It’s a little bit of a personal offense to imagine that I would deliver this content that has no personal touch,” Dahl said. She said it would feel like scamming her fans to use an AI version of her likeness, even if it’s consensual. And with that in mind, Dahl doesn’t think that AI images will ever be able to replace her or the adult industry. “What so many of my peers do is about human connection at the end of the day. And that can’t be accurately reproduced.”