Google takes tougher action against explicit deepfakes

    A couple of weeks ago, a Google search for “deepfake nudes jennifer aniston” returned at least seven top results claiming to contain explicit, AI-generated images of the actress. Now, they’re gone.

    Google product manager Emma Higham says new changes to the way the company ranks results, rolled out this year, have already reduced exposure to fake explicit images by more than 70 percent in searches looking for that content about a specific person. Where problematic results once appeared, Google's algorithms now promote news articles and other non-explicit content instead. The Aniston search now returns articles such as “How Taylor Swift's Deepfake AI Porn Poses a Threat,” along with links such as a warning from the Ohio attorney general about “deepfake celebrity endorsement scams” targeting consumers.

    “These changes will allow people to read about the impact deepfakes are having on society, rather than seeing pages with actual, non-consensual fake images,” Higham wrote in a company blog post Wednesday.

    The change in rankings comes after a WIRED investigation this month found that Google management had rejected numerous ideas proposed by employees and outside experts in recent years to address the growing problem of intimate images of people being shared online without their consent.

    While Google has made it easier to request the removal of unwanted explicit content, victims and advocates have pushed for more proactive steps. The company has been wary, however, of becoming too much of an internet regulator or of harming access to legitimate porn. In response to the investigation, a Google spokesperson said multiple teams were working diligently to strengthen protections against what it calls nonconsensual explicit imagery (NCEI).

    The growing availability of AI image generators, including some with few restrictions on their use, has led to a rise in NCEI, victim advocates say. The tools have made it easy for virtually anyone to create spoofed explicit images of an individual, whether that person is a high school classmate or a mega-celebrity.

    In March, a WIRED analysis found that Google had received more than 13,000 requests to remove links to a dozen of the most popular websites hosting explicit deepfakes. Google removed results in about 82 percent of cases.

    As part of Google’s new crackdown, Higham said the company will begin extending three measures it already uses to limit the discoverability of real but unwanted explicit images so that they also cover images that are synthetic and unwanted. After Google honors a removal request for a sexualized deepfake, it will attempt to keep duplicates out of results. It will also filter explicit images from results in searches that are similar to those cited in the removal request. And finally, websites that receive “a high volume” of successful removal requests will be demoted in search results.
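    To make the three measures concrete, here is a deliberately simplified sketch of how such a post-ranking filter could work. It is an illustration only, not a description of Google's actual systems, which are not public: every name here (record_removal, SIMILARITY_THRESHOLD, DEMOTION_FACTOR, and so on) is invented for this example, and a production system would rely on perceptual hashing and learned classifiers rather than exact SHA-256 matches and string similarity.

```python
# Hypothetical sketch only -- Google's real ranking pipeline is not public.
# Illustrates the three measures described above with invented names and
# stand-in logic: exact-hash duplicate suppression, query-similarity
# filtering, and domain demotion based on granted removal requests.
import hashlib
from difflib import SequenceMatcher
from urllib.parse import urlparse

SIMILARITY_THRESHOLD = 0.8   # assumed cutoff for "similar" queries
HIGH_VOLUME_CUTOFF = 100     # assumed removal count that triggers demotion
DEMOTION_FACTOR = 0.1        # assumed score multiplier for demoted sites

removed_hashes: set[str] = set()          # hashes of images already delisted
removed_queries: list[str] = []           # queries cited in granted requests
removals_per_domain: dict[str, int] = {}  # granted removals per website


def record_removal(image_bytes: bytes, query: str, url: str) -> None:
    """Log a granted removal so duplicates and repeat offenders are caught."""
    removed_hashes.add(hashlib.sha256(image_bytes).hexdigest())
    removed_queries.append(query.lower())
    domain = urlparse(url).netloc
    removals_per_domain[domain] = removals_per_domain.get(domain, 0) + 1


def is_similar_query(query: str) -> bool:
    """Measure 2: does this query resemble one cited in a removal request?"""
    q = query.lower()
    return any(
        SequenceMatcher(None, q, prev).ratio() >= SIMILARITY_THRESHOLD
        for prev in removed_queries
    )


def rank(results, query):
    """Filter and re-score (url, image_bytes, is_explicit, score) tuples."""
    filter_explicit = is_similar_query(query)
    ranked = []
    for url, image_bytes, is_explicit, score in results:
        # Measure 1: suppress duplicates of images already removed.
        if hashlib.sha256(image_bytes).hexdigest() in removed_hashes:
            continue
        # Measure 2: drop explicit results when the query is similar
        # to one cited in a successful removal request.
        if filter_explicit and is_explicit:
            continue
        # Measure 3: demote sites with a high volume of granted removals.
        domain = urlparse(url).netloc
        if removals_per_domain.get(domain, 0) >= HIGH_VOLUME_CUTOFF:
            score *= DEMOTION_FACTOR
        ranked.append((url, score))
    return sorted(ranked, key=lambda r: r[1], reverse=True)
```

    In this toy version, each granted request feeds all three signals at once: the image hash blocks duplicates, the cited query seeds the similarity filter, and the hosting domain accrues toward demotion.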

    “These efforts are intended to give people greater peace of mind, particularly if they are concerned about similar content about them surfacing in the future,” Higham wrote.

    Google has acknowledged that the measures aren’t perfect, and former employees and advocates for victims have said the company could go much further. The search engine explicitly warns people in the U.S. who search for nude images of children that such content is illegal. The warning's effectiveness is unclear, but advocates see it as a potential deterrent. Yet despite laws against sharing NCEI, no similar warning appears for searches for sexual deepfakes of adults. A Google spokesperson confirmed that this won’t change.