Digital rights groups such as the Electronic Frontier Foundation are urging lawmakers to leave deepfake policing to tech companies, or to rely on existing legal frameworks that already address issues such as fraud, copyright infringement, obscenity and libel.
“That’s the best remedy for harm, rather than government regulation, which in its implementation will almost always capture material that isn’t harmful, and which chills legitimate, productive speech,” says David Greene, a civil liberties attorney at the Electronic Frontier Foundation.
Several months ago, Google began banning people from using its Colaboratory platform, a data analytics tool, to train AI systems to generate deepfakes. In the fall, the company behind Stable Diffusion, an image-generating tool, launched an update that restricts users from creating nude and pornographic content, according to The Verge. Meta, TikTok, YouTube and Reddit ban deepfakes that are intended to mislead.
But laws and prohibitions can struggle to contain a technology designed to constantly adapt and improve. Last year, researchers at the RAND Corporation demonstrated how difficult deepfakes can be to identify when they showed a series of videos to more than 3,000 subjects and asked them to pick out the manipulated ones (such as a deepfake of climate activist Greta Thunberg denying the existence of climate change).
More than a third of the time, the group was wrong. Even a subset of a few dozen students studying machine learning at Carnegie Mellon University was wrong more than 20 percent of the time.
Initiatives from companies such as Microsoft and Adobe are now trying to authenticate media and train moderation technology to recognize the inconsistencies that mark synthetic content. But they are in a constant struggle to keep pace with deepfake creators, who often discover new ways to fix defects, remove watermarks, and alter metadata to cover their tracks.
“There is a technological arms race between deepfake makers and deepfake detectors,” said Jared Mondschein, a physicist at RAND. “Until we start figuring out ways to better detect deepfakes, it’s going to be very difficult for any amount of legislation to have teeth.”