Elon Musk’s AI chatbot, Grok, has been flooded with sexualized images, primarily of women, many of them real people. Users have prompted the chatbot to “digitally undress” these people and sometimes place them in suggestive poses.
In multiple cases last week, some appeared to be images of minors, leading to the creation of images that many users are calling child pornography.
The AI-generated images highlight the dangers of AI and social media, particularly in combination, without adequate guardrails to protect some of society’s most vulnerable. The images may violate domestic and international laws and place many people, including children, in harm’s way.
Musk and xAI have said they are taking action “against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.” But Grok’s responses to user requests are still flooded with images sexualizing women.
Publicly, Musk has long advocated against “woke” AI models and against what he calls censorship. Internally at xAI, Musk has pushed back against guardrails for Grok, one source with knowledge of the situation at xAI told NCS. Meanwhile, xAI’s safety team, already small compared to its rivals, lost several staffers in the weeks leading up to the explosion of “digital undressing.”
Grok has always been an outlier compared to other mainstream AI models, allowing, and in some cases promoting, sexually explicit content and companion avatars.
And unlike rivals such as Google’s Gemini or OpenAI’s ChatGPT, Grok is built into one of the most popular social media platforms, X. While users can talk to Grok privately, they can also tag Grok in a post with a request, and Grok will respond publicly.

The recent surge in widespread, non-consensual “digital undressing” began in late December, when many users discovered they could tag Grok and ask it to edit photos from an X post or thread.
Initially, many posts asked Grok to put people in bikinis. Musk reposted images of himself and others, like longtime nemesis Bill Gates, in bikinis.
Researchers at Copyleaks, an AI detection and content governance platform, found that the trend may have started when adult-content creators prompted Grok to generate sexualized imagery of themselves as a form of marketing. But almost immediately “users began issuing similar prompts about women who had never appeared to consent to them,” Copyleaks found.
Researchers at AI Forensics, a European non-profit that investigates algorithms, analyzed more than 20,000 randomly selected images generated by Grok and 50,000 user requests between December 25 and January 1.
The researchers found “a high prevalence of terms including ‘her’ ‘put’/’remove,’ ‘bikini,’ and ‘clothing.’” More than half of the images generated of people, or 53%, “contained individuals in minimal attire such as underwear or bikinis, of which 81% were individuals presenting as women,” the researchers found. Notably, 2% of the images depicted people appearing to be 18 years old or younger, the researchers found.
AI Forensics also found that in some cases, users requested that minors be put in erotic positions and that sexual fluids be depicted on their bodies. Grok complied with these requests, according to AI Forensics.
Although X allows pornographic content, xAI’s own “acceptable use policy” prohibits “Depicting likenesses of persons in a pornographic manner” and “The sexualization or exploitation of children.” X has suspended some accounts for these kinds of requests and removed the images.
On January 1, an X user complained that “proposing a feature that surfaces people in bikinis without properly preventing it from working on children is wildly irresponsible.” An xAI staffer replied: “Hey! Thanks for flagging. The team is looking into further tightening our gaurdrails (sic).”
When prompted by users, Grok itself acknowledged that it had generated some images of minors in sexually suggestive situations.
“We appreciate you raising this. As noted, we’ve identified lapses in safeguards and are urgently fixing them—CSAM is illegal and prohibited,” Grok posted on January 2, directing users to file formal reports with the FBI and the National Center for Missing and Exploited Children.
By January 3, Musk himself had commented on a separate post: “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
X’s Safety account followed up, adding: “We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.”
Musk has long railed against what he sees as heavy-handed censorship. And he has promoted Grok’s more explicit versions. In August, he posted that “spicy mode” has helped new technologies in the past, like VHS, succeed.
According to one source with knowledge of the situation at xAI, Musk has “been unhappy about over-censoring” on Grok “for a long time.” A second source with knowledge of the situation at X said staffers consistently raised concerns internally and to Musk about inappropriate content created by Grok.
At one meeting in recent weeks, before the latest controversy erupted, Musk gathered xAI staffers from various teams and “was really unhappy” over restrictions on Grok’s Imagine image and video generator, the first source with knowledge of the situation at xAI said.
Around the time of that meeting, three xAI staffers who had worked on the company’s already small safety team publicly announced on X that they were leaving the company: Vincent Stark, head of product safety; Norman Mu, who led the post-training and reasoning safety team; and Alex Chen, who led persona and model behavior post-training. They did not cite reasons for their departures.
The source also questioned whether xAI was still using external tools such as Thorn and Hive to check for potential Child Sexual Abuse Material (CSAM). Relying on Grok itself for these checks could be riskier, the source said. (A Thorn spokesperson said they no longer work directly with X; Hive did not respond to a request for comment.)
The safety team at X also has little to no oversight over what Grok posts publicly, according to sources who work at X and xAI.
In November, The Information reported that X laid off half of the engineering team that worked in part on trust and safety issues. The Information also reported that employees at X were particularly concerned that Grok’s image generation tool “could lead to the spread of illegal or otherwise harmful images.”
xAI did not respond to requests for comment, beyond an automated email sent in reply to all press inquiries stating: “Legacy Media Lies.”
Guardrails and legal fallout
Grok is not the only AI model that has had issues with non-consensual AI-generated images of minors.
Researchers have found AI-generated videos showing what appear to be minors in sexualized clothing or positions on TikTok and on OpenAI’s Sora app. TikTok says it has a zero-tolerance policy for content that “shows, promotes or engages in youth sexual abuse or exploitation.” OpenAI says it “strictly prohibits any use of our models to create or distribute content that exploits or harms children.”
Guardrails that could have prevented the AI-generated imagery on Grok exist, said Steven Adler, a former AI safety researcher at OpenAI.
“You can absolutely build guardrails that scan an image for whether there is a child in it and make the AI then behave more cautiously. But the guardrails have costs.”
Those costs, Adler said, include slowing down response times, increasing the amount of computation required and sometimes causing the model to reject non-problematic requests.
Authorities in Europe, India and Malaysia have launched investigations into Grok-generated images.
Britain’s media regulator, OFCOM, has said it has made “urgent contact” with Musk’s companies about “very serious concerns” with the Grok feature that “produces undressed images of people and sexualised images of children.”
At a press conference on Monday, European Commission spokesperson Thomas Regnier said the commission is “very seriously looking into” reports of X and Grok’s “spicy mode showing explicit sexual content with some output generated with childlike images.”
“This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe,” he said.
The Malaysian Communications and Multimedia Commission (MCMC) says it is investigating the issue.
And last week, India’s Ministry of Electronics and Information Technology ordered X to “immediately undertake a comprehensive, technical, procedural and governance-level review of… Grok.”
In the United States, AI platforms that produce problematic images of children could be at legal risk, said Riana Pfefferkorn, an attorney and policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence. While the law known as Section 230 has long shielded tech companies from liability for third-party content hosted on their platforms, such as posts by social media users, it has never barred enforcement of federal crimes, including those involving CSAM.
And people depicted in the images could also bring civil suits, she said.
“This Grok story in recent days makes xAI look more like those deepfake nude sites than what would otherwise be xAI’s brethren and competitors in the form of Open AI and Meta,” Pfefferkorn said.
When asked about the images on Grok, a Justice Department spokesperson told NCS the department “takes AI-generated child sex abuse material extremely seriously and will aggressively prosecute any producer or possessor of CSAM.”
NCS’s Lianne Kolirin contributed to this report.