The use of AI in elections is causing a battle over guardrails

    In Toronto, a candidate for this week’s mayoral election vowing to clear out homeless camps has released a series of campaign promises illustrated by artificial intelligence, including fake dystopian images of people camping on a downtown street and a fabricated image of tents set up in a park.

    In New Zealand, a political party posted a realistic-looking depiction on Instagram of fake robbers rampaging through a jewelry store.

    In Chicago, the runner-up in April’s mayoral vote complained that a Twitter account posing as a news outlet had used AI to clone his voice in a way that suggested he condoned police brutality.

    What started a few months ago as a slow trickle of fundraising emails and promotional images composed by AI for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections across the world.

    Increasingly, political consultants, election researchers and legislators are saying that setting up new guardrails, such as legislation to rein in synthetically generated advertising, should be an urgent priority. Existing defenses, such as social media rules and services claiming to detect AI content, haven’t done much to stem the tide.

    As the 2024 U.S. presidential race begins to heat up, some campaigns are already testing the technology. The Republican National Committee released a video featuring artificially generated images of doomsday scenarios after President Biden announced his re-election bid, while Florida Governor Ron DeSantis posted fake images of former President Donald J. Trump with Dr. Anthony Fauci, the former health official. The Democratic Party experimented with fundraising messages crafted by artificial intelligence in the spring — and found they were often more effective at encouraging engagement and donations than texts written entirely by humans.

    Some politicians see artificial intelligence as a way to reduce campaign costs, using it to provide instant answers to debate questions or attack ads, or to analyze data that would otherwise require expensive experts.

    At the same time, the technology has the potential to spread disinformation to a wide audience. An unflattering fake video, an email blast full of false narratives churned out by a computer, or a fabricated image of urban decay can reinforce prejudice and widen the partisan divide by showing voters what they expect to see, experts say.

    The technology is already much more powerful than manual manipulation – not perfect, but improving quickly and easy to learn. In May, OpenAI CEO Sam Altman, whose company helped spark an artificial intelligence boom last year with its popular ChatGPT chatbot, told a Senate subcommittee he was nervous about election season.

    He said the technology’s ability “to manipulate, to persuade, to provide a kind of one-to-one interactive disinformation” was “a major concern.”

    Representative Yvette D. Clarke, a New York Democrat, said in a statement last month that the 2024 election cycle “is poised to be the first election in which AI-generated content predominates.” She and other congressional Democrats, including Minnesota Senator Amy Klobuchar, have introduced legislation requiring political ads that use artificially generated material to carry a disclaimer. A similar bill in Washington State was recently signed into law.

    The American Association of Political Consultants recently condemned the use of deepfake content in political campaigns as a violation of its code of ethics.

    “People will be tempted to push the boundaries and see where they can take things,” said Larry Huynh, the group’s new president. “As with any tool, there can be bad use and bad actions in using them to lie to voters, to mislead voters, to create a belief in something that doesn’t exist.”

    The technology’s recent intrusion into politics came as a surprise in Toronto, a city that supports a thriving ecosystem of artificial intelligence research and start-ups, where the mayoral election takes place on Monday.

    A conservative candidate in the race, Anthony Furey, a former news columnist, recently laid out his platform in a document that was dozens of pages long and filled with synthetically generated content to help him make the case for his tough-on-crime positions.

    A closer look clearly revealed that many of the images weren’t real: One lab scene featured scientists looking like alien blobs. A woman in another view wore a pin on her vest with illegible letters; similar markings appeared in an image of construction site warning tape. Mr. Furey’s campaign also used a synthetic portrait of a seated woman with two arms folded and a third arm touching her chin.

    The other candidates used that image to get a laugh during a debate this month: “We actually use real photos,” said Josh Matlow, who showed a photo of his family and added that “no one in our photos has three arms.”

    Still, the shoddy renderings were used to bolster Mr. Furey’s argument. He gained enough momentum to become one of the most recognizable names in an election with over 100 candidates. In the same debate, he acknowledged the use of the technology in his campaign, adding that “we’ll have a few laughs here as we continue to learn more about AI.”

    Political pundits worry that artificial intelligence, when misused, could have a corrosive effect on the democratic process. Misinformation is a constant risk; one of Mr. Furey’s rivals said in a debate that even though members of her staff used ChatGPT, they always checked the output.

    “If someone can make noise, create uncertainty or develop false narratives, that can be an effective way to influence voters and win the race,” Darrell M. West, a senior fellow at the Brookings Institution, wrote in a report last month. “Since the 2024 presidential election could come down to tens of thousands of voters in a few states, anything that can push people in one direction or another could be decisive.”

    Increasingly sophisticated AI content is appearing more often on social networks that are largely unwilling or unable to monitor it, said Ben Colman, the CEO of Reality Defender, a company that offers services to detect AI. That weak oversight allows synthetic content to do “irreversible damage” before it is addressed, he said.

    “Explaining to millions of users that the content they had already seen and shared was fake, long after the fact, is too little, too late,” Mr. Colman said.

    For several days this month, a Twitch livestream ran a nonstop, not-safe-for-work debate between synthetic versions of Mr. Biden and Mr. Trump. Both were clearly identified as simulated “AI entities,” but if an organized political campaign were to create such content and distribute it widely without any public disclosure, it could easily erode the value of real material, disinformation experts said.

    Politicians could shrug off responsibility and claim that authentic images of compromising actions were not real, a phenomenon known as the liar’s dividend. Ordinary citizens could create their own counterfeits, while others could burrow deeper into polarized information bubbles, believing only the sources they wanted to believe.

    “If people can’t trust their eyes and ears, they might just say, ‘Who knows?'” Josh A. Goldstein, a research fellow at Georgetown University’s Center for Security and Emerging Technology, wrote in an email. “This could promote a shift from a healthy skepticism that encourages good habits (such as reading laterally and looking for reliable sources) to an unhealthy skepticism that it’s impossible to know what’s true.”