Open the website of one explicit deepfake generator and you're presented with a menu of horrors. With just a few clicks, it offers the ability to turn a single photo into an explicit eight-second video clip, placing women in realistic-looking graphic sexual situations. “Transform any photo into a nude version with our advanced AI technology,” the website says.
The potential for abuse is extensive. Among the 65 video templates on the website are a series of undressing videos in which the featured women remove their clothing, but there are also explicit video scenes called “fuck machine deepthroat” and several “cum” videos. Each video costs a small fee to generate; adding AI-generated audio costs more.
The website, which WIRED is not naming to limit further exposure, includes warnings that people should only upload photos that they have permission to transform with AI. It is unclear whether there are controls to enforce this.
Grok, the chatbot created by Elon Musk's xAI, has been used to create thousands of nonconsensual images of women in bikinis and to “undress” or “nudify” them – further industrializing and normalizing digital sexual harassment. But it is only the most visible tool – and far from the most explicit. For years, a deepfake ecosystem of dozens of websites, bots, and apps has grown, making it easier than ever to automate image-based sexual abuse, including the creation of child sexual abuse material (CSAM). This “nudification” ecosystem, and the harm it causes to women and girls, is likely more advanced than many people realize.
“It's no longer a very rough synthetic strip,” says Henry Ajder, a deepfake expert who has been monitoring the technology for more than half a decade. “We are talking about a much higher degree of realism in what is actually generated, but also a much wider range of functionality.” Combined, the services likely generate millions of dollars per year. “It's a societal scourge, and it's one of the worst and darkest parts of this AI revolution and the synthetic media revolution that we're seeing,” he says.
Over the past year, WIRED has tracked how multiple explicit deepfake services have introduced new functionality and quickly expanded to offer malicious video creation. Image-to-video models now typically require only one photo to generate a short clip. A WIRED investigation of more than 50 “deepfake” websites, likely receiving millions of views per month, shows that almost all now offer high-quality explicit video production and often list dozens of sexual scenarios in which women can be depicted.
Meanwhile, dozens of sexual deepfake channels and bots on Telegram have regularly released new features and software updates, such as different sexual poses and positions. For example, in June last year, a deepfake service promoted a “sex mode,” advertising it alongside the message: “Try different clothes, your favorite poses, age and other settings.” Another posted that “more styles” of images and videos were coming soon and that users could “create exactly what you envision with your own descriptions” using custom prompts for AI systems.
“It's not just, ‘You want to undress someone.’ It's like, ‘Here are all these different fantasy versions of it.’ It's the different poses. It's the different sexual positions,” says independent analyst Santiago Lakatos, who, together with the publication Indicator, has investigated how “nudify” services often rely on the infrastructure of large technology companies and likely make significant money in the process. “There are versions where you can make someone [appear] pregnant,” says Lakatos.
A WIRED assessment found that more than 1.4 million accounts had signed up for 39 deepfake creation bots and channels on Telegram. After WIRED asked Telegram about the services, the company removed at least 32 of the deepfake tools. “Non-consensual pornography – including deepfakes and the tools used to create them – is strictly prohibited under Telegram's terms of service,” a Telegram spokesperson said, adding that the company removes such content when it is detected and that it removed 44 million pieces of policy-violating content last year.
