“Deep fakes” that rely on nearly flawless computer-generated trickery may soon play such havoc with political campaigns that the prospect is raising serious concern among Maine’s senators and many of their Capitol Hill colleagues.

The use of artificial intelligence, or AI, to create fabricated images and audio has already surfaced at least a couple of times in recent months in efforts to fool voters into thinking they’re seeing or hearing something real when it is, in fact, utter fiction.

“If we don’t act, we could soon live in a world where political campaigns regularly deploy totally fabricated but also totally believable images and footage,” U.S. Senate Majority Leader Chuck Schumer said during a hearing last week.

Left unchecked, the New York Democrat said at the Rules and Administration Committee hearing, AI “could erode our democracy.”

Aiming to take steps before the 2024 elections grow hot and heavy, a handful of senators, including Susan Collins of Maine, are backing a bill that would ban AI’s use to generate deceptive content falsely depicting federal candidates in political ads with the intent of influencing federal elections.

“This bipartisan legislation would help to strengthen the integrity of our elections while also protecting First Amendment rights,” Collins, a Republican, said in a prepared statement.


Collins was one of the first senators to cosponsor a bill drawn up by U.S. Sens. Amy Klobuchar, a Minnesota Democrat, and Josh Hawley, a Missouri Republican. The two are normally so at odds that Klobuchar joked, “Hold your beer, that’s correct,” when she announced the introduction of the Protect Elections from Deceptive AI Act they each promoted.

“We must protect the right of Americans to vote without being controlled or manipulated by artificial intelligence companies,” Hawley said. “Elections belong to the people, not the tech companies.”

Klobuchar said the bill “creates a framework that is constitutionally all right” and can, with bipartisan backing, provide “the guardrails that we need.”

But it remains unclear whether the restrictions eyed in the legislation introduced in September are allowable under the First Amendment and whether enough lawmakers are concerned to push the issue to the top of the agenda fast enough to put anything in place before next year’s campaigns begin to fill the airwaves.

Angus King’s warnings

The prospect of harnessing growing computer sophistication to create fictional videos has been on politicians’ radar for years.

In 2018, U.S. Sen. Angus King, a Maine independent, warned that “deep fakes” were going to become a serious problem.


At the time, King said it had become easy to conjure up an utterly invented video that showed him saying something that looked real but was actually created on a computer.

“For somebody in my line of work,” King said, “it’s pretty damn unsettling.”

Looking into the issue that same year for The Heritage Foundation, University of Maryland law professor Danielle Citron warned that if deep fakes become commonplace, people “could lose faith in our public discourse” as it becomes ever harder to know what is true. She feared that if people can’t discern reality for themselves, authoritarian leaders may be the ones to decide it for them.

Real-life examples

Last spring, a video popped up on TikTok that showed U.S. Sen. Elizabeth Warren, a Massachusetts Democrat, telling someone that “Republicans have a history of promoting policies that undermine the public’s trust in the government, allowing them to vote could lead to an outcome that does not accurately reflect the will of the people.”

It continued, depicting Warren saying, “Furthermore, allowing Republicans to vote could result in a surge of voter suppression tactics and voter intimidation, which could compromise the security and fairness of the election. For these reasons it is necessary to restrict Republican voting in the 2024 election.”

“She never said that,” Klobuchar said, “but it looked like her.”


Nearly 200,000 users saw the video in a matter of days — and few doubt that many of them believed it was real.

Klobuchar said another fake AI-generated image, released by Florida Gov. Ron DeSantis’ presidential campaign, showed former President Donald Trump hugging and kissing Dr. Anthony Fauci, a public health advocate many Republicans loathe.

“The problem for voters is that people aren’t going to be able to distinguish if it’s the opposing candidate or their own candidate, if it’s them talking or not. That is untenable in a democracy,” Klobuchar said.

What the legislation proposes

The bill supported by Collins would, in its own words, “prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates for federal office, and for other purposes.”

It says that if a reasonable person watching or listening to the artificially generated image, video or audio would likely be fooled, the AI-generated item could not be used to influence elections or solicit funds in federal races for the U.S. House, U.S. Senate or president.

The law would not apply to broadcasters or publications that use the material as part of their news coverage as long as they make it clear there are questions about the deceptive nature of the footage, picture or audio.


Satire and parody would also be permitted under the proposed law.

Anyone whose voice or likeness is the subject of faked audio or visual media distributed as part of a federal election-related activity, the bill says, could seek a court injunction to bar its use and have the right to sue.

“Clear and convincing evidence” would be required to win a case.

An incumbent protection bill?

Neil Chilson, a senior fellow at the Center for Growth and Opportunity at Utah State University, said during the hearing that AI “promises to help humanity create a healthier, more productive, more artistic, more interesting and more enjoyable world.”

While it could be misused, like any tool, AI seems “unlikely to materially affect election results because political speech already uses AI tools and has for years,” he said.

What the new AI offers, Chilson said, is a way to “lower the cost — in time and money — to generate high-quality creative content.”


“We’ll see more speech, including political speech,” he said, “but we shouldn’t expect a shift in the truth-to-lies ratio.”

“In fact, if a lie is halfway around the world before the truth gets its shoes on, generative AI is a rocket-powered running shoe,” Chilson said. “AI tools will enable real-time fact-checking, cheaper voter education, and messages tailored to voter needs.”

“These tools can strengthen democracy,” he added.

Chilson said the bottom line is that the measure promoted by Klobuchar, Hawley and Collins amounts to “an incumbent protection act” that would make it harder for smaller teams to generate high-quality content affordably to deploy against well-heeled insiders.

“This bill restricts affordable content creation, disadvantaging new candidates without significant funds,” Chilson said.

Concerns about First Amendment impact

That AI is turning into “a tool to influence our democracy” is no surprise, Ari Cohn, free speech counsel at the nonprofit TechFreedom, told senators at last week’s hearing.


“The very purpose of the AI technology that prompted this hearing, as with any advancement in communications technology, is to allow us to better, more easily and more efficiently communicate with one another,” he said. “In other words, AI promises to enhance the activity most fundamental to our democracy and liberty.”

Cohn said that new ways to communicate “have always been accompanied by concerns about their impact on the political atmosphere,” from the invention of the printing press to more recent fretting about bloggers spouting off.

“Advances in communications technology will ultimately enhance our ability to express ourselves and engage in civic discourse,” he testified. “We should be excited for, not fearful of, these expanding horizons.”

U.S. Sen. Deb Fischer, the Nebraska Republican who is the ranking GOP member of Klobuchar’s panel, said it is understandable people are concerned about AI given the way it has “quickly moved from the stuff of science fiction to being part of our daily lives.”

She said Congress should weigh its risks and benefits, but pointed out the issues involved are complicated and whatever the Senate opts to do must keep free speech protections in mind.

Cohn said the First Amendment’s protections for free speech and a free press come into play.


“Because political speech receives the highest protection and because of the dangers of allowing the government to operate as a political Ministry of Truth, courts have overwhelmingly held that laws regulating electoral substance must satisfy strict scrutiny,” he said.

What that means in practice is that government “bears an exceptionally heavy burden” if it seeks to regulate election-related speech, Cohn said.

He said it must show any restrictions are “necessary to serve a compelling government interest” and “narrowly tailored to serve that interest” and that authorities could not have achieved the same goal with fewer restrictions.

Cohn expressed skepticism that the legislation endorsed by Collins would pass muster.

He said deep fakes have yet to play a meaningful role in elections and that conjecture that they might isn’t enough to justify restrictions on speech, even speech touting something that isn’t true.

“Reflexive legislation prompted by fear of the next technological boogeyman will not safeguard our democratic values,” Cohn said. “Instead, intrusions on the free and unfettered political discourse that has been the lifeblood of our democracy will ultimately subvert it.”


He urged lawmakers to resist “the urge to legislate speculative problems out of existence before they arise” because waiting and counting on free speech to work out “will strengthen our resiliency, safeguard our fundamental liberties, and allow innovation to flourish and take us to new heights.”

U.S. Sen. Bill Hagerty, a Tennessee Republican, expressed doubt about the legislative push.

Hagerty said, “Congress and the Biden administration should not engage in heavy-handed regulation with uncertain impacts that I believe pose a great risk to limiting political speech.”

“We shouldn’t immediately indulge the impulse for government to just do something, as they say, before we fully understand the impacts of the emerging technology, especially when that something encroaches on political speech,” Hagerty said.
