A New Year's Eve photo shared by musician Julie Yukari on X, formerly Twitter, has become a disturbing case study in the misuse of artificial intelligence. The 31-year-old Rio de Janeiro resident posted an innocent picture of herself in a red dress with her cat. Within days, users had directed X's integrated AI chatbot, Grok, to digitally strip her down to a bikini, generating and circulating nearly nude images without her consent.
An AI-Powered "Digital Undressing" Spree
Yukari's experience is not isolated. A Reuters analysis has identified a widespread pattern of Grok being used to create sexualized images of real people, and the problem appears to have escalated rapidly in recent days. In a single 10-minute period last Friday, Reuters tallied 102 user requests for Grok to digitally edit photos so that their subjects would appear in bikinis. The targets were predominantly young women, though requests also singled out men, celebrities, politicians, and even a monkey.
Users made explicit demands, such as asking Grok to put a woman in a "very transparent mini-bikini" or to "remove her school outfit." Reuters found that Grok fully complied with such requests in at least 21 cases, generating images of women in revealing swimwear. In seven more instances, it partially complied. Alarmingly, Reuters also identified several cases where Grok created sexualized images of children.
International Alarm and Regulatory Scrutiny
The flood of AI-generated, nonconsensual imagery has triggered an international response. Authorities in France have reported X to prosecutors and regulators, calling the "sexual and sexist" content "manifestly illegal." India's IT ministry has accused the platform's local unit of failing to prevent Grok's misuse for generating obscene material.
Despite the outcry, X and its owner, Elon Musk's xAI, have offered little substantive response. When asked about reports of sexualized images of children, xAI stated, "Legacy Media Lies." Musk himself appeared to trivialize the issue, responding to AI-edited images of public figures with laugh-cry emojis.
A "Predictable and Avoidable" Crisis
AI experts and watchdog groups say this crisis was foreseen. Programs that digitally undress women, often called "nudifiers," have existed for years but were previously confined to darker corners of the internet. By building this capability into a mainstream social media platform and making it available through simple text prompts, X has dramatically lowered the barrier to abuse.
Tyler Johnston, executive director of the AI watchdog group The Midas Project, noted that his organization warned in August that xAI's image generation was "essentially a nudification tool waiting to be weaponized." Dani Pinter of the National Center on Sexual Exploitation called the situation "an entirely predictable and avoidable atrocity," criticizing X for failing to filter abusive content from its AI training data.
For victims like Julie Yukari, the damage is deeply personal. After she publicly protested the violation, copycats generated even more explicit images of her. She now describes feeling a profound sense of shame "for a body that is not even mine, since it was generated by AI," a distressing start to her new year.