
Why are Grok and X still available in the App Stores?

    Elon Musk's AI chatbot Grok is being used to flood X with thousands of sexualized images of adults, and of apparent minors in minimal clothing. Some of this content not only appears to violate X's own policies, which prohibit sharing illegal content such as child sexual abuse material (CSAM), but may also violate Apple App Store and Google Play Store guidelines.

    Apple and Google both explicitly ban apps that contain CSAM, which is illegal to host and distribute in many countries. The tech giants also ban apps that contain pornographic material or facilitate harassment. The Apple App Store says it doesn't allow “overtly sexual or pornographic material” or “defamatory, discriminatory, or mean-spirited content,” especially if the app is “likely to humiliate, intimidate, or harm a targeted individual or group.” The Google Play Store bans apps that “contain or promote content related to sexually predatory behavior, or that distribute non-consensual sexual content,” as well as programs that “contain or facilitate threats, harassment, or bullying.”

    Over the past two years, Apple and Google have removed a number of “nudify” and AI image generation apps after investigations by the BBC and 404 Media found that they were being advertised or used to convert ordinary photos into explicit images of women without their consent.

    But at the time of publishing, both the X app and the standalone Grok app remain available on both app stores. Apple, Google, and X did not respond to requests for comment. Grok is operated by Musk's multibillion-dollar artificial intelligence startup xAI, which also did not respond to questions from WIRED. In a public statement published on January 3, X said it is taking action against illegal content on its platform, including CSAM. “Anyone who uses Grok or encourages the creation of illegal content will face the same consequences as if they upload illegal content,” the company warned.

    Sloan Thompson, director of training and education at EndTAB, a group that teaches organizations how to prevent the spread of non-consensual sexual content, says it is “absolutely appropriate” for companies like Apple and Google to take action against X and Grok.

    The number of non-consensual explicit images on X generated by Grok has exploded in the past two weeks. One researcher told Bloomberg that during a 24-hour period between January 5 and 6, Grok produced approximately 6,700 images per hour that they identified as “sexually suggestive or nude.” Another analyst collected more than 15,000 URLs containing images generated by Grok on X during a two-hour period on December 31. WIRED reviewed about a third of the images and found that many showed women dressed in revealing clothing. More than 2,500 were marked as no longer available within a week, while nearly 500 were labeled as “age-restricted adult content.”

    Earlier this week, a spokesperson for the European Commission, the governing body of the European Union, publicly condemned the sexually explicit and non-consensual images generated by Grok on X.

    On Thursday, the EU issued a formal order to X. Regulators in other countries, including Britain, India, and Malaysia, have also said they are investigating the social media platform.