The ACLU is fighting for your constitutional right to create deepfakes

    On January 29, during testimony before the Georgia Senate Judiciary Committee, Hunt-Blackwell urged lawmakers to remove the bill’s criminal penalties and add exemptions for news media organizations that wish to republish deepfakes as part of their reporting. The Georgia legislative session ended before the bill could proceed.

    Federal deepfake legislation will also face resistance. In January, lawmakers in Congress introduced the No AI FRAUD Act, which would grant property rights to people’s likenesses and voices. This would allow anyone depicted in a deepfake, as well as their heirs, to sue those who participated in creating or distributing it. Such rules are intended to protect people from both pornographic deepfakes and artistic imitations. Weeks later, the ACLU, the Electronic Frontier Foundation, and the Center for Democracy and Technology filed written opposition.

    They, along with several other groups, argued that the laws could be used to suppress far more than just illegal speech. The mere prospect of a lawsuit, the letter argues, could deter people from using the technology for constitutionally protected acts such as satire, parody, or opinion.

    In a statement to WIRED, the bill’s sponsor, Rep. María Elvira Salazar, noted that “the No AI FRAUD Act includes explicit recognition of First Amendment protections for speech and expression in the public interest.” Rep. Yvette Clarke, who sponsored a parallel bill that would require deepfakes depicting real people to be labeled, told WIRED that it had been amended to include exceptions for satire and parody.

    In interviews with WIRED, policy experts and advocates at the ACLU noted that they’re not opposed to narrowly tailored regulations targeting nonconsensual deepfake pornography. But they pointed to existing anti-harassment laws as a strong, if imperfect, framework for addressing the problem. “There may be problems that you can’t address with existing laws, of course,” Jenna Leventoff, a senior policy advisor at the ACLU, told me. “But I think the general rule is that existing laws are sufficient to address many of these problems.”

    That’s far from a consensus among legal scholars, however. As Mary Anne Franks, a law professor at George Washington University and a leading advocate for strict anti-deepfake regulations, told WIRED in an email, “The obvious flaw in the ‘We already have laws to deal with this’ argument is that if it were true, we wouldn’t be seeing an explosion of this abuse without a corresponding increase in the filing of criminal charges.” Generally, Franks said, prosecutors in a harassment case must prove beyond a reasonable doubt that the alleged perpetrator intended to harm a specific victim — a high bar to clear when the perpetrator may not even know the victim.

    Franks added: “One of the recurring themes among victims who experience this abuse is that there are no obvious legal solutions for them – and they are the ones who should know.”

    No government has yet been sued by the ACLU over its generative AI regulations. The organization’s representatives declined to say whether they were preparing a case, but both the national office and several affiliates said they were keeping a close eye on the legislation. Leventoff assured me, “We usually act quickly when something happens.”