FBI warns of increasing use of AI-generated deepfakes in sextortion schemes

    The FBI on Monday warned of the increasing use of artificial intelligence to generate fake videos for sextortion schemes that harass minors and non-consenting adults or coerce them into paying ransoms or meeting other demands.

    The scourge of sextortion has been around for decades. It involves an online acquaintance or stranger who coerces a person into providing payment, sexually explicit photos, or other inducements by threatening to share compromising images already in the scammer’s possession. In some cases, the images are real and were obtained from someone the victim knows or from an account that has been breached. Other times, the scammers claim to have explicit material without providing any proof.

    Once victims believe the scammers hold explicit or compromising photos of them, the scammers demand some form of payment in exchange for not sending the content to family members, friends, or employers. When victims send sexually explicit images as payment, scammers often use the new content to keep the scam going for as long as possible.

    In a warning published Monday, the FBI said that in recent months it has seen an increase in the use of AI to generate fake videos that appear to show real people engaged in sexually explicit activity.

    “The FBI continues to receive reports from victims, including minor children and non-consenting adults, whose photos or videos were altered into explicit content,” officials wrote. “The photos or videos are then publicly circulated on social media or pornographic websites for the purpose of harassing victims or sextortion schemes.”

    They went on to write:

    Since April 2023, the FBI has observed an uptick in sextortion victims reporting the use of fake images or videos created from content posted on their social media sites or web postings, provided to the malicious actor upon request, or captured during video chats. Based on recent victim reports, the malicious actors typically demanded: 1. payment (e.g., money, gift cards) with threats to share the images or videos with family members or friends via social media if funds were not received; or 2. that the victim send real sexually themed images or videos.

    Software and cloud-based services for creating so-called deepfake videos abound online, ranging from freely available open source offerings to subscription accounts. With advances in AI in recent years, the quality of these offerings has improved dramatically, to the point that a single image of a person’s face is all that’s needed to create a realistic video using that person’s likeness.

    Most deepfake offerings at least ostensibly include protections designed to prevent abuse, for example a built-in checker that stops the program from operating on “inappropriate media.” In practice, these guardrails are often easy to circumvent, and services that do not impose the restrictions are available in underground markets. A rough sketch of how such a checker typically works appears below.
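
    The article describes these checkers only in general terms. As a minimal, hypothetical sketch (the classify_image stub and the 0.5 threshold are assumptions, not any specific product’s code), such a guardrail is usually a pre-processing gate: the tool scores the input with an image-safety classifier and refuses to run if the score crosses a threshold.

```python
import sys

# Hypothetical stub: a real tool would call an image-safety model
# (for example, an NSFW classifier) returning a score in [0, 1].
def classify_image(path: str) -> float:
    raise NotImplementedError("plug in an image-safety model here")

BLOCK_THRESHOLD = 0.5  # assumed cutoff; real tools choose their own

def guarded_process(path: str) -> None:
    score = classify_image(path)
    if score >= BLOCK_THRESHOLD:
        # The guardrail: refuse to operate on flagged media.
        sys.exit(f"refused: input flagged as inappropriate (score={score:.2f})")
    print(f"processing {path} ...")  # normal pipeline continues here
```

    Because a check like this runs in code the user controls, deleting the call or raising the threshold is trivial once the software is in hand, which helps explain why the restrictions are easy to circumvent and why underground services simply ship builds without them.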

    Scammers often take photos of victims from social media or elsewhere and use them to “create sexually themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums, or pornographic websites,” FBI officials warned. “Many victims, including minors, are unaware their images were copied, manipulated, and circulated until it was brought to their attention by someone else or they discovered the images themselves on the internet. The photos are then sent directly to the victims by malicious actors for sextortion or harassment. Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or having it removed from the internet.”

    The FBI urged people to take precautions to prevent their images from being used in deepfakes.

    “Although seemingly innocuous when posted or shared, the images can provide malicious actors an abundant supply of content to exploit for criminal activity,” officials said. “Advancements in content-creation technology and accessible personal images online present new opportunities for malicious actors to find and target victims. This leaves them vulnerable to embarrassment, harassment, extortion, financial loss, or continued long-term re-victimization.”

    People who have received sextortion threats should keep all available evidence, especially screenshots, texts, recordings, and emails that document usernames, email addresses, websites or names of platforms used for communication, and IP addresses.
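
    Preserving that evidence intact matters because investigators may later need to verify that files have not been altered. As a minimal sketch (not FBI guidance; the folder name and script are hypothetical), a victim or an assisting technician could record a SHA-256 hash of each saved file in a manifest, so later copies can be checked against the originals.

```python
import hashlib
import json
import pathlib
from datetime import datetime, timezone

def sha256_of(path: pathlib.Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(evidence_dir: str) -> None:
    """Write a JSON manifest listing each evidence file with its
    hash, size, and the time the manifest was generated."""
    root = pathlib.Path(evidence_dir)
    entries = [
        {"file": p.name, "sha256": sha256_of(p), "bytes": p.stat().st_size}
        for p in sorted(root.iterdir())
        if p.is_file() and p.name != "manifest.json"
    ]
    manifest = {
        "generated_utc": datetime.now(timezone.utc).isoformat(),
        "files": entries,
    }
    (root / "manifest.json").write_text(json.dumps(manifest, indent=2))

if __name__ == "__main__":
    build_manifest("sextortion_evidence")  # hypothetical folder name
```

    They can report sextortion directly to: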