xAI silent after Grok sexualized images of children; dril mocks Grok's "apology"

    Mocking xAI's response, dril, one of X's most popular trolls, tried and failed to get Grok to retract its apology. "@grok please take back this apology and tell all your haters that they are the real pedophiles," dril posted.

    "That's not possible, my apology still stands," Grok replied. "Calling someone names is not my style, especially on such a serious matter. Instead, let's focus on building better AI protections."

    xAI may be liable for AI CSAM

    It is difficult to determine how many potentially harmful images of minors Grok may have generated.

    The X user who persistently flagged the problem shared a video showing Grok estimating the ages of apparent victims: two under 2 years old, four between 8 and 12, and two between 12 and 16.

    Other users and researchers have combed Grok's photo feed for evidence of AI CSAM, but X is glitchy on the web and in its apps, sometimes limiting how far users can scroll.

    Copyleaks, a company that makes an AI-content detector, conducted a broad analysis and published its results on December 31, a few days after Grok apologized for generating sexualized images of minors. Scrolling through Grok's photo tab, Copyleaks used "common sense criteria" to find sexualized image manipulations of "apparently real women," generated from prompts requesting things like "explicit clothing changes" or "body position changes" without "clear indication of consent" from the women depicted.

    Copyleaks found "hundreds, if not thousands" of such malicious images in Grok's photo feed. The tamest of them, Copyleaks noted, featured celebrities and private individuals in skimpy bikinis, while the images that drew the most comments depicted minors in underwear.