In Brief:
The word “artificial” might fit, but it’s worth asking whether “intelligence” is the right term for computer-generated election misinformation. Election officials have seen how much trouble can come from falsehoods and conspiracies created by humans. AI brings the prospect of even greater challenges, arriving at unprecedented speed.
Wesley Wilcox, supervisor of elections for Marion County, Fla., isn’t aware of AI-generated content that has already filtered into social media traffic about elections in his state. He’s not sure he’d be able to spot it if it did, though, and he doesn’t think the state will be able to invest in developing such expertise.
The public has already had a taste of how AI might be used in politics, from fake images of the former president hugging Anthony Fauci to a deepfake video of the Ukrainian president telling soldiers to lay down their arms. Rick Claypool, research director for the president’s office at the nonprofit Public Citizen, is concerned about the unexpected things that might lie ahead.
AI tools have been released with little regard for how they might be used, Claypool says. The ways they might add to distrust of elections and election officials are uncharted territory, as is what it could take to respond effectively. “It shouldn’t be up to people at the election official level to sort this out.”
Marek Posard, a RAND Corporation researcher who focuses on countering disinformation, doesn’t see AI as a singular threat. He’s most concerned about its potential to act as an accelerant to forces already undermining trust in elections.
“AI is part of the equation, but the bigger issue is a ‘lollapalooza’ event where multiple attacks happen simultaneously from foreign actors, domestic actors and candidates,” Posard says.
A few states have passed bills to address AI and elections, and legislators at both state and federal levels are working on proposals to keep things in check. How could AI create new problems for election officials?
The release of the latest version of ChatGPT included a demonstration in which the program generated working code for a website from a photo of a hand-drawn sketch of its elements.
Language, Corrupted
The term “generative AI” refers to AI technology that can create new content. “Large language” generative AI models draw on unimaginably large data sets of text to generate new text. The output from large language models can include anything from poetry to computer code.
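To give a sense of how little effort such generation now takes, here is a minimal sketch that asks a large language model for article-style text. It assumes OpenAI’s Python client (version 1.x) and an API key set in the environment; the model name and prompt are illustrative choices, not details from this article.

# Minimal sketch: generating article-style text with a large language model.
# Assumes the openai Python package (v1.x) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name; any chat-capable model works
    messages=[{
        "role": "user",
        "content": "Write a 200-word local-news story about a town farmers market.",
    }],
)

print(response.choices[0].message.content)

A dozen lines like these, run in a loop with varied prompts, could turn out an endless stream of unique-looking text, which is what makes the scale concerns below more than hypothetical.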
Spammers already use AI to generate blog posts and news stories, hoping to attract ad revenue. AI can also generate the code to build websites, including fake “local newspapers,” and then populate them with content.
AI can increase the scale of misinformation posted online or spread through email and social media campaigns. Such things take effort, but not as much as you might think, says Renee DiResta, technical research manager at the Stanford Internet Observatory.
She gives the example of a fake website a security researcher built as a sort of “proof of concept.” The site automatically created news articles in response to postings from Russian state media. It featured nonexistent writers with manufactured personas and generated tweets about their “stories.”
It wasn’t that hard to make it, DiResta says, “But just because you can create something doesn’t mean that it’s going to have an impact or that it will be effective.”
Fake sources still need to attract followers, a big obstacle to overcome in the crowded and chaotic world of digital influencing. And people will need to find AI-manufactured content convincing and persuasive.
There are already cable news, social media and web-based purveyors of election misinformation with very large and loyal audiences. It remains to be seen how much worse things will get if novel, non-human sources pump up the quantity.
Gideon Cohn-Postar, legislative director for Issue One, points to another AI possibility, one that goes beyond what humans are already doing. Since 2020, election officials have been deluged by public information requests, which they are legally obliged to answer.
Many who make such requests simply cut and paste boilerplate language created by election deniers.
While this increases the volume of records requests, the repetitive sameness of the requests has made it possible to streamline responses. “AI would give the possibility of creating many slightly different information requests that could be sent to many different election officials,” says Cohn-Postar.
Piling complex, unpredictable, time-intensive work on people and resources already stretched to their limits is a recipe for trouble, he says. “It’s an area that I think deserves more attention.”
In March, a deepfake video in which Ukraine’s president appeared to yield to Russia circulated on social media. The characteristics of the sound and image would likely lead most viewers to question its authenticity, but this “reality” gap is closing.
Phantom Sounds and Sights
AI can create sights and sounds that never really existed. This capability is all the more dangerous in an era when, on average, Americans spend more than seven hours a day in front of screens and 30 minutes outdoors, in the “real world.”
AI could generate robocall messages from trusted figures, giving seeming legitimacy to election misinformation of any sort, including messages that directly suppress voting behavior. (Robocalls were used in 2020 in an attempt to disenfranchise Black voters in Ohio by warning them not to use mail ballots, and to tell voters in swing states to “stay safe and stay home” on election day.)
“So far, we haven’t heard of a specific instance of AI-generated robocalls,” says Cohn-Postar.
Deepfake videos are another area where advancing technology and its wide availability are raising significant concerns. It’s still possible to tell the difference between an AI avatar and an actual person, but the gap is closing so quickly that film and television actors are genuinely concerned that they will be replaced.
A deepfake video wouldn’t have to depict an event that never occurred or feature a famous person to be damaging. It could simply add dimension and reality, and thus a greater degree of plausibility, to any sort of misinformation.
Misleading video content is another example of something humans are already doing, whether through editing, misleading narration or airing and posting video footage of inflammatory falsehoods uttered by real people. But again, generative AI tools could mean there will be much more of it, produced more quickly, with as-yet unknown consequences. The same worries exist as AI-generated photos become more believable.
This video reports results from a “disinformation experiment” in which a researcher used AI to create a self-populating fake news outlet, down to reporters with manufactured personas and tweets based on its content.
Exponential Change
Suspicion about election results isn’t new, Posard says, citing recent examples such as Bush v. Gore in 2000. But in the past, claims of fraud have popped up, become popular for a time and then faded away.
Posard is worried that AI could generate “evidence” that keeps controversies that aren’t based in fact circulating on social media platforms and television networks. “The technology might create content, but the key is how it might tie content into some kind of cohesive narrative that has a longer time span.”
In a similar vein, Claypool does not want to see the volume of misinformation become so great that people become even more distrustful of the entire information ecosystem than they already are. “That can make it hard for democracy to function.”
Wilcox is struck by what seems to be an exponential increase in the capacity and capabilities of GenAI since it became a major focus at the beginning of the year. “I think we’re just on the front side of what those capabilities are,” he says.
Even if states manage to pass legislation to punish those who use AI to disrupt elections, he says, criminals don’t follow the law. And neither will rogue nation-states or stateless rogues that want to cause trouble for American democracy.
“If somebody says, ‘How would you solve this?’ I don’t really know,” Wilcox says.
“That’s a short answer — I don’t know what to do.”
Coming Next: This article is the first in a two-part series. Check back soon to read the next installment, "The Search for Legislative and Industry Guardrails."