OpenAI's child exploitation reports have risen sharply this year

    During the first half of 2025, the number of CyberTipline reports OpenAI sent was about the same as the number of pieces of content those reports covered: 75,027 reports on 74,559 pieces of content. In the first half of 2024, it sent 947 CyberTipline reports on 3,252 pieces of content. Both the number of reports and the amount of content reported rose sharply between the two periods.

    Content can mean several things in this context. OpenAI has said it reports all instances of CSAM, including uploads and requests, to NCMEC. In addition to the ChatGPT app, which allows users to upload files (including images) and generate text and images in response, OpenAI also offers access to its models via API. The most recent count would not include reports related to the video generation app Sora, because its September release fell after the time frame covered by the update.

    The spike in reports follows a pattern NCMEC has observed more broadly on the CyberTipline with the rise of generative AI. The center's analysis of all CyberTipline data found that reports involving generative AI increased 1,325 percent between 2023 and 2024. NCMEC has not yet released data for 2025, and while other major AI labs like Google publish statistics on the NCMEC reports they have filed, they do not specify what percentage of those reports are AI-related.

    OpenAI's update comes at the end of a year in which the company and its competitors faced growing scrutiny over child safety issues extending beyond CSAM. Over the summer, 44 attorneys general sent a joint letter to multiple AI companies, including OpenAI, Meta, Character.AI, and Google, warning that they would “use every facet of our authority to protect children from exploitation by predatory artificial intelligence products.” Both OpenAI and Character.AI have faced multiple lawsuits brought by families, or on behalf of individuals, who claim the chatbots contributed to the deaths of their children. In the fall, the U.S. Senate Judiciary Committee held a hearing on the harms of AI chatbots, and the U.S. Federal Trade Commission launched a market study on AI companion bots, with questions about how companies are mitigating negative impacts, especially on children. (I was previously at the FTC and was assigned to work on market research before leaving the agency.)