Concerns About AI Election Impacts Are Overblown (So Far)

AI did less damage to the information environment and to election administration in 2024 than predicted. New laws meant to combat political deepfakes, meanwhile, went largely unenforced.

During the campaign, President Trump's Truth Social account shared fake AI-generated images depicting an endorsement from Taylor Swift. (Image: Truth Social)
In the lead-up to the 2024 election, the potential for artificial intelligence to disrupt the political process received a great deal of attention. Concerns primarily centered on the technology’s ability to accelerate the creation and distribution of misinformation that could deceive the public or increase the risk of cyberattacks on election infrastructure. At the same time, some local election officials and political consultants were optimistic about AI’s potential to help streamline processes and increase the efficiency of election administration and political campaign management.

As I've outlined before, lawmakers at both the federal and state levels primarily focused on the risks presented by AI and responded by proposing new laws and regulations aimed at mitigating potential harms to the information environment. These proposals gained traction, particularly in state capitals, and 20 states now have laws on the books restricting the use of AI in certain political communications.

Now that the election has passed, many experts agree that AI’s negative impacts were far less extreme than originally feared. This article explores lessons learned, with a particular focus on key factors that minimized AI’s effectiveness as a tool for disruption.

New State Restrictions Go Untested

Twenty states now have laws targeting the use of AI to generate deceptive election content. Fifteen were passed in 2024 alone. Specifics vary from state to state, but the most common approach is to pair disclosure requirements with civil penalties. In most states, the 2024 election was the first since these laws went into effect. The impacts of these restrictions remain ambiguous, as there were no documented cases of the laws being enforced and little evidence showing whether they served as a deterrent.

The closest example occurred in the campaign for governor of Indiana. The eventual winner, Republican Sen. Mike Braun, ran a television advertisement featuring manipulated footage of his Democratic opponent, Jennifer McCormick, speaking at a rally surrounded by supporters holding signs opposing gas stoves. The ad originally aired without a disclaimer indicating that the video had been altered, as required by the state law enacted in March 2024. Statutory penalties for violating this law include damages and attorney fees; in this case, the campaign simply added the disclaimer and continued airing the ad without penalty.

There is no clear evidence that unlabeled deepfakes were any less likely to occur in states with a restriction than in those where their use remained unregulated. Campaigns were generally slow to employ AI tools in 2024, and the differing labeling rules across jurisdictions may have played a role in decisions to avoid exposure to legal liability. On the other hand, foreign creators of AI-generated deepfakes are beyond the reach of any state law on deceptive AI in election communications, giving them little incentive to comply.

While the impact of new state restrictions on the use of AI-generated election content remains unclear, the issue is expected to remain top of mind in 2025. Given the limited impact of these laws so far, state lawmakers should think twice—or perhaps wait for additional research—before imposing further speech restrictions on American citizens.

Free Speech and Public Awareness

AI undoubtedly played a role in the 2024 election, primarily through the creation and distribution of deepfake images, audio, video and text. Deceptive AI-generated content mostly targeted political candidates, although AI was also used to convey false information about the election process. Fortunately, these attempts at deception failed to generate any widespread disruption.

A variety of factors contributed to minimizing AI’s impact on the election. These include voluntary efforts by technology companies to enforce guardrails on the use of their AI tools; advances in deepfake detection technology; and competing influences (both online and offline) that make it difficult for even the highest-quality deepfake to move voters one way or the other.

Public awareness of the risk was high heading into the election, thanks to extensive focus on the issue among media, policymakers, and election officials. As a result, the public was ready to consume information with a healthy dose of skepticism while the media and government officials were poised to push back on false narratives as they arose.

Two examples illustrate this dynamic. In August, Donald Trump posted a collection of deepfake images falsely showing Taylor Swift and her fans endorsing his campaign. Online users promptly questioned the authenticity of the images, and the media was quick to correct the record that Swift had not, in fact, endorsed Trump. Weeks later, Swift endorsed Kamala Harris, noting that the deepfakes contributed to her decision to publicly share how she intended to vote and that “[t]he simplest way to combat misinformation is with the truth.” Swift certainly has a larger platform than most from which to respond to false claims, but the principle of true speech as the best remedy for false speech proved its worth once again.

Separately, a deepfake video of an election worker unlawfully destroying ballots in Bucks County, Pa., circulated online in late October. The County Board of Elections and local law enforcement quickly identified the video as fraudulent and communicated that to the public.

This incident highlights an important distinction between campaign-related misinformation and misinformation about the election process. In political disputes, the competitive nature of campaigning gives both sides an incentive to quickly identify and respond to false claims, and the public tends to be wary of claims made about politicians in the first place. These dynamics make it unnecessary for the government to act as the arbiter of truth in political disputes; public discourse can be trusted to resolve them.

Meanwhile, the Bucks County video struck at perceptions of the legitimacy of the election process itself. Because administrative matters lack the competitive dynamics of campaigns, there is no natural constituency to push back on false information. As a result, the steps the government took to respond to the video and counter the false claims were necessary and appropriate.

Overall, election-related AI deepfakes played a minor role in the 2024 election. While it is likely that a certain number of individuals were fooled by specific pieces of content, there is no evidence to suggest a widespread effect that generated a meaningful impact on election results. Additionally, public confidence in the election process rebounded strongly in 2024, with the percentage of voters believing the election was run and administered well rising from 59 percent in 2020 to 88 percent today. While this improvement was largely attributable to renewed faith in the process among Republicans who supported Trump, it still underscores the point that any measurable AI impacts were overshadowed by other factors with greater influence.

Cybersecurity and Election Administration

While most public attention around AI focused on potential harms to the information environment, some experts were worried the technology would impact other aspects of the election ecosystem, including cybersecurity and election administration. AI’s impact was not especially disruptive on these fronts, however.

In early 2024, the federal Cybersecurity and Infrastructure Security Agency published guidance noting that AI tools would “likely not introduce new risks” but “may amplify existing risks to election infrastructure.” The agency specifically noted how AI could improve the effectiveness of traditional tactics like distributed denial-of-service (DDoS) attacks and phishing emails, exactly the types of cyber incidents that occurred during the 2024 election season.

For example, a recent report from cybersecurity firm Cloudflare documented a significant uptick in the volume of DDoS attacks on election-related websites in 2024 compared to the last presidential election in 2020. These attacks seek to overwhelm websites with traffic, forcing them offline and making them inaccessible to the public. As expected, websites for campaigns, political parties, and election offices were targeted in the lead-up to Election Day; however, the sites stayed online thanks to cybersecurity improvements that allowed them to withstand the attacks’ increased scale.
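To make that mechanism concrete, here is a minimal sketch of the kind of per-client rate limiting used to shed abusive traffic. The token-bucket class and its rate and capacity parameters are hypothetical illustrations for this article, not Cloudflare's actual defenses, which operate at the network edge and at vastly larger scale.

```python
import time
from collections import defaultdict

class TokenBucket:
    """Minimal per-client token-bucket rate limiter (illustrative only)."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst a client may send
        self.tokens = defaultdict(lambda: capacity)
        self.last_seen = defaultdict(time.monotonic)

    def allow(self, client_ip: str) -> bool:
        """Return True if the request should be served, False if shed."""
        now = time.monotonic()
        elapsed = max(0.0, now - self.last_seen[client_ip])
        self.last_seen[client_ip] = now
        # Refill the client's bucket for the time elapsed, up to capacity.
        self.tokens[client_ip] = min(
            self.capacity, self.tokens[client_ip] + elapsed * self.rate
        )
        if self.tokens[client_ip] >= 1.0:
            self.tokens[client_ip] -= 1.0
            return True
        return False  # bucket empty: shed this request

# Hypothetical policy: sustain 5 requests/second, tolerate bursts of 20.
limiter = TokenBucket(rate=5.0, capacity=20.0)
if not limiter.allow("203.0.113.7"):
    pass  # e.g., respond with HTTP 429 Too Many Requests
```

Production defenses layer this idea with IP reputation, fingerprinting, and challenge pages, but the core principle is the same: refuse traffic beyond a sustainable rate so legitimate visitors can still get through.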

Hackers targeted both presidential campaigns; notably, Iran was able to gain access to internal documents from the Trump campaign. The hackers relied on familiar techniques that can be conducted with or without the assistance of AI, including spearphishing. This suggests that traditional cyber defenses—including multifactor authentication, strong passwords, email authentication protocols, and cybersecurity training—remain useful for curtailing AI-generated phishing and social engineering attacks.
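To show why one of those defenses holds up even against AI-assisted phishing, the snippet below sketches how a time-based one-time password (TOTP, RFC 6238) is computed, using only the Python standard library: a stolen password alone is useless without the rotating code. The shared secret shown is a made-up example, not real credential material.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password per RFC 6238."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret for demonstration only; never hardcode real secrets.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and derives from a secret the attacker never sees, a spearphishing email that harvests a password still fails at login.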

While AI did not substantially impact election administration in 2024, opportunities remain to use the technology as an efficiency tool going forward. Given the speed at which AI burst onto the scene (and governments’ traditionally slow adoption of new technology), it is no surprise that election administrators focused on the immediate task of running a smooth election before attempting to incorporate AI into their operations. However, the growing adoption of chatbots in state and local government presents an opening for election offices to experiment with AI. Beyond chatbots, AI could help with mundane, repetitive, or detail-oriented tasks like ballot proofing, signature verification, and responding to public records requests, as the sketch below illustrates.
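As a flavor of that detail-oriented category, here is a minimal sketch of automated ballot proofing: comparing names extracted from a ballot proof against the certified candidate list and flagging near-misses for human review. The names and the 0.85 similarity cutoff are hypothetical; a real proofing workflow would also verify offices, ballot order, and rotation rules.

```python
import difflib

# Hypothetical certified candidate list and names pulled from a ballot proof.
certified = ["Alice Delgado", "Robert Chen", "Maria Kowalski"]
proof = ["Alice Delgado", "Robert Chen", "Maria Kowalsky"]

for name in proof:
    match = difflib.get_close_matches(name, certified, n=1, cutoff=0.85)
    if not match:
        print(f"FLAG: '{name}' matches no certified candidate")
    elif match[0] != name:
        print(f"FLAG: '{name}' may be a misspelling of '{match[0]}'")
```

The point is not that a dozen lines of Python solve the problem, but that this class of error-prone manual checking is exactly where software assistance, AI-powered or otherwise, can free up election staff.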

The 2024 election was the first presidential contest held since AI entered the mainstream. While the technology triggered a rapid legislative response over fears it would be weaponized to deceive American voters, its actual effects were more benign. Still, as AI continues to evolve, lawmakers will face continued pressure to act in defense of the integrity of the election process.

Chris McIsaac is a resident fellow at the R Street Institute, which originally published this article.

Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.