Cops lure pedophiles with AI photos of teenage girl. Ethical triumph or new disaster?


    Law enforcement officials are now using AI to generate images of fake children to help them track down child abusers online, according to a lawsuit filed this week by the state of New Mexico against Snapchat.

    According to the complaint, the New Mexico Department of Justice launched an undercover investigation in recent months to prove that Snapchat is “a primary social media platform for sharing child sexual abuse material (CSAM)” and sextortion of minors because its “algorithm presents children to adult child molesters.”

    As part of the investigation, a detective set up a fake account for a 14-year-old girl named "Sexy14Heather."

    Despite the fake minor's profile being set to private, with no followers added to the account, "Heather" was soon being recommended to "dangerous accounts, including accounts named 'child.rape' and 'pedo_lover10,' among others that are even more explicit," the New Mexico Department of Justice said in a press release.

    And after “Heather” accepted a follow request from just one account, the recommendations got even worse. “Snapchat suggested over 91 users, including numerous adult users whose accounts contained or attempted to exchange sexually explicit content,” the New Mexico complaint alleged.

    “Snapchat is a breeding ground for child sex offenders who collect sexually explicit images of children and track, manipulate, and extort them,” New Mexico’s complaint reads.

    The investigator posed as “Sexy14Heather” and exchanged messages with adult accounts, including users who “sent inappropriate messages and explicit photos.” In one exchange with a user identified as “50+ SNGL DAD 4 YNGR,” “the fake teen noted her age, sent a photo, and complained that her parents were making her go to school,” prompting the user to send “his own photo,” as well as sexually suggestive chats. Other accounts asked “Heather” to “swap allegedly explicit content,” and several “attempted to coerce the underage persona into sharing CSAM,” the New Mexico DOJ said.

    “Heather” also tested Snapchat's search tool, finding that “although she did not use sexually explicit language, the algorithm must have determined that she was searching for CSAM” when she searched for other teenage users. It “began recommending users associated with trading” CSAM, including accounts with usernames such as “naughtypics,” “addfortrading,” “teentr3de,” “gayhorny13yox,” and “teentradevirgin,” the complaint said, “suggesting that these accounts were also involved in the distribution of CSAM.”

    This new use of AI was sparked after Albuquerque police charged a man, Alejandro Marquez, who pleaded guilty and was sentenced to 18 years for raping an 11-year-old girl he met through Snapchat’s Quick Add feature in 2022, according to the New Mexico complaint. More recently, the New Mexico complaint says, an Albuquerque man, Jeremy Guthrie, was arrested and convicted this summer of “raping a 12-year-old girl he met and groomed through Snapchat.”

    In the past, police have posed as children online to catch child molesters, using photos of younger-looking adult women or even younger photos of police officers. Using AI-generated images could be seen as a more ethical way to carry out these stings, sex crimes attorney Carrie Goldberg told Ars, because “an AI decoy profile is less problematic than using images of an actual child.”

    But using AI could also complicate investigations and raise ethical concerns, Goldberg warned, at a time when child safety and law enforcement experts say the Internet is increasingly awash with AI-generated child sexual abuse imagery.

    “In terms of AI used for entrapment, defendants can defend themselves if they say the government induced them to commit a crime they weren't already inclined to commit,” Goldberg told Ars. “Of course, it would be ethically concerning if the government were to create deepfake AI child sexual abuse material (CSAM), because those images are illegal and we don't want more CSAM in circulation.”

    Experts warn that AI image generators should never be trained on datasets that combine images of real children with explicit content, in order to prevent the creation of AI-generated CSAM, which is especially harmful when it appears to depict a real child or an actual victim of child abuse.

    New Mexico’s complaint includes only one AI-generated image, so it’s unclear how widely the state’s Justice Department is using AI or whether officers might be using more sophisticated methods to generate multiple images of the same fake child. It’s also unclear what ethical concerns were weighed before officers began using AI decoys.

    The New Mexico Department of Justice did not respond to Ars' request for comment.

    Goldberg told Ars that “there should be standards within law enforcement for how AI can be deployed responsibly.” She warned that “we're likely to see more entrapment defenses focused on AI, if the government is using the technology in a manipulative way to pressure someone into committing a crime.”