X Didn't Solve Grok's “Undressing” Problem. It Just Makes People Pay for It

    After Grok created thousands of “undressing” photos of women and sexualized images of apparent minors, Elon Musk's X has apparently limited who can generate images with the chatbot. But despite the change, Grok is still being used to create sexualized “undressing” images on the platform.

    On Friday morning, the Grok account on X was responding to image requests with a message indicating that image generation is now a paid feature; the post also includes a link that pushes people to the social media platform's $395 annual subscription tier. In one test in which WIRED asked Grok to create an image of a tree, the system returned the same message.

    The apparent change comes after days of growing outrage and scrutiny of Musk's X and xAI, the company behind the Grok chatbot. The companies are facing an increasing number of investigations by regulators around the world over the creation of non-consensual explicit images and alleged sexual images of children. British Prime Minister Keir Starmer has not ruled out a ban on X in the country, saying the actions have been “unlawful.”

    Neither X nor xAI, the Musk-owned company behind Grok, has confirmed that it has made image generation and editing a paid feature. A spokesperson for X acknowledged WIRED's inquiry but did not comment prior to publication. X has previously said it is “taking action against illegal content on X,” including cases of child sexual abuse material. Although Apple and Google have previously banned apps with similar “nudify” features, X and Grok remain available on their respective app stores. xAI did not immediately respond to WIRED's request for comment.

    For more than a week, users on X have prompted Grok to generate these “undressing” images of women. While a public feed of images created by Grok on Friday contained far fewer of these results, the chatbot still created sexualized images when requested by X users with paid “verified” accounts.

    “We observe the same kind of prompt, we observe the same kind of outcome, just less than before,” Paul Bouchaud, principal investigator at the Paris-based nonprofit AI Forensics, tells WIRED. “The model can continue to generate bikini [images],” they say.

    A WIRED review of some Grok posts on Friday morning found that Grok generated images in response to user requests to “put her in latex lingerie” and “put her in a plastic bikini and covered her in a white donut glaze.” The images appear behind a “content warning” box stating that adult material is shown.

    On Wednesday, WIRED revealed that Grok's standalone website and app, which is separate from the version on X, has also been used in recent months to create highly graphic and sometimes violent sexual videos featuring celebrities and other real people. Bouchaud says it's still possible to use Grok to create these videos. “I was able to generate a video with sexually explicit content from an unverified account without any restrictions,” they say.

    While WIRED was unable to generate images with Grok on X using a free account, a free account on Grok's standalone app and website could still generate images.

    The change on X could immediately limit the amount of sexually explicit and harmful material the platform creates, experts say. But it has also been criticized as a minimal step, a band-aid that fails to address the real harm caused by non-consensual intimate images.

    “The recent decision to restrict access to paying subscribers is not only inadequate – it represents monetization of abuse,” Emma Pickering, head of technology-facilitated abuse at British domestic violence charity Refuge, said in a statement. “While limiting AI image generation to paying users may marginally reduce volume and improve traceability, it has not stopped the abuse. It has simply been put behind a paywall, allowing X to profit from the harm.”