OPINION: Don’t panic over political campaigns using AI. It could be a good thing

There’s something unnerving about the idea of candidates using artificial intelligence to influence voters, but we should all take a deep breath before panicking too much about machines taking over our politics.
For starters, our political era is so bereft of intelligent discourse that the inclusion of any sort of intelligence — even the artificial variety — should probably be considered a welcome development.
But risks clearly exist, which is why Nevada is preparing to implement a law (AB73) that would require political campaigns to disclose any AI-generated content in their ads. Nevada joins 24 other states that now require such disclosures, and two more have placed an outright ban on political “deepfake” AI content.
The impetus for such regulatory oversight is understandable. The risk of deepfakes, for example, should be taken seriously in an era when partisan conspiracists are seemingly everywhere and online information isn’t always trustworthy. Indeed, much like the fake news and misinformation already peddled regularly, a few convincing pictures, a brief video or even faked audio of a particular candidate could cause serious confusion among voters.
In fact, that’s precisely what happened in New Hampshire in January of 2024.
Just ahead of the Democratic primary in that state, voters began receiving robocalls from someone who sounded like then-President Joe Biden, instructing them not to vote in the upcoming primary contest. The voice did not actually belong to Biden; it was created by AI as part of a misinformation campaign commissioned by a rogue Democratic political consultant.
The consultant was fined $6 million by the Federal Communications Commission and indicted on criminal charges for the stunt. Regardless of his legal woes, the consultant succeeded in making his intended point — this technology opens up new ways for misinformation to be spread during important elections.
However, the incident also demonstrates an important point about AI in political campaigns that many policymakers seem to overlook: Such abuse is not likely to come from candidates themselves. For most serious political campaigns, the reputational, civil and legal cost of such bad behavior is already well beyond what most candidates are willing to risk, especially in local elections. Instead, most serious abuse will come from independent bad actors who are already trying to take advantage of people’s growing distrust in the media, government or “official” sources of information.
The sort of AI content we’re going to see emanating from official campaigns will likely be far more benign in nature.
What we should expect from candidates is the same sort of graphically enhanced content we already see plastered across campaign mailers, television advertisements and email solicitations. As Reason satirized years ago, every political ad seems to follow the same basic graphic-design formula anyway: monochromatic images of political rivals in unflattering poses, juxtaposed with brightly colored footage of friendly candidates kissing babies and shaking hands with local community members.
Indeed, much of modern campaign content is already digitally altered in such ways — with staged photo-ops, stock images and photoshopped pictures of political opponents used ad nauseam to incite certain feelings among targeted voters.
AI’s primary usefulness for political candidates will be making such digital creations more accessible, not pouring deepfake misinformation into people’s news feeds. It will effectively democratize content creation, allowing candidates who are short on funds to produce material that rivals the production quality of their cash-flush opponents.
And even if it means we’ll likely have to endure more uninspired political messaging, that’s actually a good thing for anyone who believes money gives wealthy candidates an unfair advantage during campaign season. From curating voter lists to identifying target audiences to churning out visual content, AI promises to let cash-strapped candidates scale up their communications efforts even when big donors and special interests aren’t throwing donations their way.
And so far, that’s largely what we’ve seen from campaigns, which is why AI’s effect on recent elections was far less negative than some experts had originally predicted. In Nevada, campaigns seem to be acting responsibly: From a candidate for Reno City Council humorously shooting down spaceships with lasers to a candidate for Congress theatrically portraying her political rivals as old-school mobsters, AI creations have primarily been used to supplement creative visions rather than manufacture deepfake hoaxes.
In other words, the real misinformation threat AI poses to the public isn’t necessarily going to come from candidates or their super PACs pushing their usual partisan nonsense — it will come from the same bad actors who have long peddled social media rumors, driven false narratives or incited distrust among extremists with outlandish conspiracy theories. And that’s a threat that will continue to exist regardless of what campaign transparency laws or updated disclosure requirements are put on the books, because such malevolent actors already operate without much concern for the law, common decency or even (in some cases) American sovereignty.
As for candidates, AI will merely allow them to generate more of the typical partisan nonsense they peddle every election — the sort of nonsense that makes some of us think most of the intelligence in politics is already pretty “artificial.”
Michael Schaus is a communications and branding expert based in Las Vegas, Nevada, and founder of Schaus Creative LLC — an agency dedicated to helping organizations, businesses and activists tell their story and motivate change. He has more than a decade of experience in public affairs commentary, having worked as a news director, columnist, political humorist, and most recently as the director of communications for a public policy think tank. Follow him on Twitter @schausmichael or on Substack @creativediscourse.