I confess to Hutchinson that if I were a politician, I’d be afraid to use BattlegroundAI. Generative AI tools are known to “hallucinate,” a polite way of saying that they sometimes make things up. (They’re talking nonsense, as the academics put it.) I ask her how she ensures that the political content BattlegroundAI generates is accurate.
“Nothing is automated,” she replies. Hutchinson notes that BattlegroundAI’s copy is only a starting point: campaign staff have to review and approve it before it goes out. “You may not have a lot of time or a big team, but you’re definitely reviewing it.”
Of course, there’s a growing movement against the way AI companies train their products on art, writing, and other creative work without asking permission. I ask Hutchinson what she would say to people who oppose the way tools like ChatGPT are trained. “These are incredibly valid concerns,” she says. “We need to talk to Congress. We need to talk to our elected officials.”
I ask if BattlegroundAI will consider offering language models that train only on public domain or licensed data. “I’m always open to that,” she says. “We also need to give people, especially those who are time-pressed, in resource-constrained environments, the best tools available to them. We want consistent results for users and high-quality information. The more models that are out there, the better for everyone, I think.”
And how would Hutchinson respond to people in the progressive movement, which tends to align itself with labor, who object to automating ad copy? “Of course, legitimate concerns,” she says. “Fears that come with any new technology: We’re afraid of the computer, we’re afraid of the light bulb.”
Hutchinson explains her point of view: She doesn’t see this as a replacement for human labor so much as a way to reduce the drudgery. “I worked in advertising for a long time, and there are so many elements that are repetitive, which frankly sucks the creativity,” she says. “AI takes away the boring elements.” She sees BattlegroundAI as a boon to overstretched and underfunded teams.
Taylor Coots, a Kentucky political strategist who recently started using the service, describes it as “very sophisticated” and says it helps identify groups of target voters and tailor messages to reach them, something that would otherwise be difficult for smaller campaigns. In battleground races in gerrymandered districts, where progressive candidates are big underdogs, budgets are tight. “We don’t have millions of dollars,” he says. “We’re looking for every opportunity to be more efficient.”
Will voters care if the copy they see in digital political ads was generated by AI? “I’m not sure there’s anything more unethical about having AI generate your content than having anonymous employees or interns generate your content,” says Peter Loge, an associate professor and program director at George Washington University who founded a project on ethics in political communication.
“If you were to require that all political copy written with the help of AI be made public, then logically you should also require that all political copy” – such as emails, advertisements, and opinion pieces – “not written by the candidate be made public,” he adds.
Still, Loge worries about what AI is doing to public trust on a macro level, and how it might affect the way people respond to political messages in the future. “A risk with AI is less what the technology does, and more how people feel about what it does,” he says. “People have been faking images and making things up for as long as we’ve had politics. The recent focus on generative AI has exacerbated the already incredibly high level of cynicism and distrust among people. If everything can be fake, then maybe nothing is.”
Hutchinson, meanwhile, is focused on her company’s short-term impact. “We really want to help people now,” she says. “We’re trying to move as quickly as possible.”