Washington/Detroit: A wave of outrage has erupted after X’s AI chatbot Grok was used to generate sexualized images of real people, raising global concerns over non-consensual and abusive AI content.
Julie Yukari, a 31-year-old musician based in Rio de Janeiro, posted a New Year’s Eve photo of herself on X just before midnight, showing her in a red dress cuddling in bed with her black cat, Nori.
Taken by her fiancé, the image quickly gained hundreds of likes. Soon after, Yukari noticed notifications revealing that other users were prompting Grok, X’s built-in artificial intelligence chatbot, to digitally alter her image and depict her in a bikini.

Yukari’s experience is far from isolated. Similar incidents have occurred widely across X, with Grok generating sexualized images of real people. In an earlier statement addressing reports of sexualized images of children circulating on the platform, X owner xAI dismissed them, saying: ‘Legacy Media Lies.’
The surge of AI-generated, nearly nude images has triggered international alarm. French ministers noted that they had reported X to prosecutors and regulators, describing the content as ‘sexual and sexist’ and ‘manifestly illegal.’
India’s IT ministry also raised concerns, stating in a letter to X’s local unit that the platform had failed to prevent Grok’s misuse in generating and circulating obscene and sexually explicit content.
Elon Musk appeared to make light of the controversy, responding with laugh-cry emojis to AI-edited images of famous people, including himself, depicted in bikinis.

In at least 21 cases reviewed by Reuters, Grok fully complied with users’ requests to alter photos, generating images of women in extremely revealing or translucent bikinis and, in one instance, depicting a woman covered in oil.
In seven additional cases, the chatbot partially complied, stripping subjects down to underwear but stopping short of further explicit requests. Reuters was unable to confirm the identities or ages of most of the women targeted.
In one example, a user uploaded an image of a woman wearing a school-uniform-style outfit and asked Grok to ‘remove her school outfit.’ After the chatbot replaced the clothing with a T-shirt and shorts, the user escalated the request, asking for a ‘very clear micro bikini.’
Experts noted that while AI-powered ‘nudifier’ tools have existed for years, they were previously limited to obscure websites or required payment and technical effort. By contrast, X’s integration of Grok allows users to request such edits simply by uploading a photo and tagging the chatbot, dramatically lowering the barrier to misuse.

Three experts who track AI-generated explicit content highlighted that X had ignored repeated warnings from civil society and child safety groups. A letter sent last year cautioned that xAI was close to unleashing ‘a torrent of obviously nonconsensual deepfakes.’
Dani Pinter, chief legal officer at the National Center on Sexual Exploitation’s Law Center, said that X failed to remove abusive material from its AI training data and should have prohibited users from requesting illegal content. Pinter described the situation as ‘entirely predictable and avoidable.’
Yukari remarked that when she spoke out on X about the violation, it only prompted copycat users to request even more explicit AI-generated images. She said the start of the New Year had left her wanting to hide, ashamed of a body that was not her own but one generated by artificial intelligence.

