Deepfakes and Democracy: The Case for Federal Legislation Regulating Political AI
- UCSB ULJ Newsletter
- Jun 22
Updated: Jun 23
Spring 2025
Riley Kimont
Edited By: Braylen Hill
As artificial intelligence (AI) has proliferated in recent years, controversy surrounding its legality and potential impact on US elections has inevitably arisen. Hyper-realistic AI-generated audio and video, known as “deepfakes,” hold a unique ability to defame candidates and deceive voters through illusory messages. Despite the issue’s urgency, US election law remains largely unequipped to manage the burgeoning technology. In September 2023, Senator Amy Klobuchar introduced Senate Bill 2770, the Protect Elections from Deceptive AI Act, intended “to prohibit the distribution of materially deceptive AI-generated audio or visual media relating to candidates for Federal office” [1]. The bill ultimately died in the 118th Congress, never receiving a vote. As of March 2025, Senator Klobuchar has reintroduced the bill as S.B. 1213 in the new session of Congress, with bipartisan support [2]. Among older legislation, the Federal Election Campaign Act of 1971 imposes disclaimer requirements identifying the financial origins of campaign communications, requirements that were extended to internet communications as of March 2023 [3]. While this is an important measure, it remains entirely unable to regulate AI-generated content.
Given this federal legal gap, several states have codified laws addressing AI in their own statutes. California, Texas, and Minnesota have all enacted laws governing AI usage within a specific time frame before an election. This is a step in the right direction; however, the absence of federal law on this issue should not be ignored. While federal legislation on AI usage in elections may conflict with constitutional protections for political free speech, this hindrance does not negate its necessity. By analyzing state legislation addressing political AI and evaluating its potential effectiveness on a national scale, this article will argue for federal legislation establishing mandatory disclaimers on AI-generated content and narrowly tailored restrictions on election-related deepfakes. Without these regulations, AI will continue to outpace the law, threatening voter trust and the integrity of US democratic processes.
The lack of federal legislation surrounding AI and elections is likely due in part to constitutional complications: if creating and distributing AI-generated content is considered protected free speech, any new law must take the First Amendment into account [4]. In McIntyre v. Ohio Elections Commission (1995), the Supreme Court nullified an Ohio law criminalizing the distribution of anonymous campaign literature, effectively protecting anonymous political speech under the First Amendment [5]. This inherently complicates regulating AI-generated political content: any disclosure requirement must comply with this legal right, and given the ease with which the internet allows individuals to remain anonymous, a law requiring the origins of AI content to be disclosed may be difficult to sustain.
Constitutional safeguards surrounding false speech form an added layer of complexity. In United States v. Alvarez (2012), the Supreme Court ruled that the Stolen Valor Act, a law prohibiting false claims of having received a military decoration, violated the First Amendment [6]. Ultimately, the Court voided the act because the Constitution “demands that content-based restrictions on speech be presumed invalid... and that the Government bear the burden of showing their constitutionality” [7]. Essentially, the case establishes that the falsity of speech alone is not enough to exempt it from First Amendment protection; the speech must fall within specific categories deemed harmful, such as defamation or true threats. In the context of AI-created political content, this poses a unique conflict. Deepfakes of candidates may be entirely false, but unless the content can be proven to fall under such an exception, it may still be constitutionally protected. Thus, laws regulating AI must take this ruling into account and require a showing of actual harm or malice caused by the false content.
Moreover, these constitutional concerns are not unfounded. California Assembly Bill No. 2839, signed in September 2024 by Gov. Gavin Newsom, outlaws the malicious distribution of materially deceptive campaign content. The bill defines this as media portraying a candidate “doing or saying something that the candidate did not do or say if the content is reasonably likely to harm the reputation or electoral prospects of a candidate” [8]. The law has already been challenged as a First Amendment infringement: U.S. District Judge John A. Mendez granted a preliminary injunction in October 2024, temporarily blocking it on free speech grounds [9]. Given the relevance of and contention surrounding AI usage in politics, it comes as no surprise that laws attempting to regulate it have drawn heightened scrutiny.
Free speech protections may be an obstacle to passing legislation on AI, but the need for these laws remains urgent. Eighteen states have enacted laws responding to the potential of ‘deceptive’ and ‘manipulated’ media to interfere with elections, manipulate voters, and defame candidates [10]. These laws vary in their definitions, punishments, and effectiveness; examining them in greater detail reveals the strengths and shortcomings of the current legal landscape.
California Election Code § 20010 was amended in both 2019 and 2022 to combat the use of deepfakes and AI to interfere with political elections. The statute criminalizes the distribution of materially deceptive audio or visual media with “actual malice” and “the intent to injure the candidate’s reputation or to deceive a voter into voting for or against the candidate” [11]. However, this prohibition does not apply if the communication is accompanied by a disclaimer acknowledging that the media has been manipulated. Materially deceptive media is defined as “an image or an audio or video recording of a candidate’s appearance, speech, or conduct that has been intentionally manipulated... [which] would cause a reasonable person to have a fundamentally different understanding or impression” than that person would have from the original [12]. As one of the first state laws regulating AI use in elections, the California statute has many strengths. By including an actual malice clause aligned with the standard set by New York Times Co. v. Sullivan [13], the law makes clear that only intentionally harmful falsehoods fall within its reach. Moreover, the law makes explicit exemptions for satire and parody, allowing humorous political interpretations to continue in AI form. Still, while the section is a proactive measure, its ambiguities could ultimately lead to misinterpretation.
Texas was also among the first states to regulate political AI: Election Code Chapter 225 includes a section focused on political deepfake videos. Amended in 2019, § 255.004 makes it a Class A misdemeanor to publish and distribute a deepfake video within thirty days of an election with “intent to injure a candidate or influence the result of an election” [14]. Texas defines a deepfake video as one “created with the intent to deceive, [a video] that appears to depict a real person performing an action that did not occur in reality.” This definition disregards the possibility of deceptive AI audio or images, a gap that could easily be exploited. Subsection B of the law has further come under fire on First Amendment grounds: in Ex parte Stafford, the Texas Court of Criminal Appeals ruled that the statute was not sufficiently “narrowly tailored” and was therefore unconstitutional, since it could encroach on protected political speech [15].
Minnesota’s AI legislation is more recent, with § 609.771 added to its 2024 Statutes. The law criminalizes the dissemination of a deepfake with “the intent to injure a candidate or influence the result of an election” if “the person knows or acts with reckless disregard about whether the item being disseminated is a deep fake” [16]. The law includes direct definitions in Subdivision 1, defining a deepfake to include video, audio, and images “the production of which was substantially dependent upon technical means, rather than the ability of another individual to physically or verbally impersonate such individual” [17].
Senator Klobuchar’s bill was reintroduced in the 119th Congress on March 31, 2025, underscoring how pressing this issue truly is. S.B. 1213, unlike the state laws analyzed earlier, explicitly states that it aims to restrict deceptive media that is “the product of artificial intelligence technology that uses machine learning” [18].
On May 22, the House passed President Trump’s “One Big Beautiful Bill Act,” which mainly addresses tax reductions and federal spending. However, slipped into Subtitle C of the bill is a provision that would impose a ten-year moratorium on state laws governing AI [19]. This only underscores the urgent need for federal regulation: if the fate of individual state laws is uncertain, overarching federal law will better safeguard US democratic processes. By combining the most effective aspects of existing state legislation and the Protect Elections from Deceptive AI Act, it is possible to craft a law that preserves First Amendment rights while protecting both candidates and electoral processes from AI threats.
Technology, media, and politics have always been undeniably intertwined. As radio and TV rose to prominence, laws were created that held the delicate balance between managing new platforms and preserving the right to free political discourse. AI should be treated no differently; in fact, implementing laws to govern it should be an even higher priority. Given AI’s generative nature, the technology will likely improve exponentially in the coming years, producing ever more realistic and convincing media and posing a greater risk of voter misinformation. Regulations can and should be put in place for this new form of media; even so, AI’s rapid advancement has the capacity to outpace legal frameworks and leave critical gaps in oversight. This creates a dangerous imbalance in which technology evolves faster than our ability to control its misuse, ultimately endangering the core principles of trust and accountability in our democratic systems. Nevertheless, it is in the utmost national interest to implement legislation governing AI and to educate voters on its capabilities; even if AI eventually outgrows these regulations, we must do what we can in the present to aid democracy.
References
[1] U.S. Senate, Protect Elections from Deceptive AI Act, S. 2770, 118th Cong., 1st sess., introduced September 12, 2023. https://www.congress.gov/bill/118th-congress/senate-bill/2770
[2] U.S. Senate, Protect Elections from Deceptive AI Act, S. 1213, 119th Cong., 1st sess., introduced March 31, 2025. https://www.congress.gov/bill/119th-congress/senate-bill/1213/cosponsors
[3] U.S. Congress, Congressional Research Service, Campaign Finance Law, CRS Report R45320, updated May 17, 2023. https://www.congress.gov/crs-product/R45320
[4] U.S. Const. amend. I. https://constitution.congress.gov/constitution/amendment-1/
[5] McIntyre v. Ohio Elections Comm’n, 514 U.S. 334 (1995). https://www.law.cornell.edu/supct/html/93-986.ZS.html
[6] United States v. Alvarez, 567 U.S. 709 (2012). https://www.oyez.org/cases/2011/11-210
[7] United States v. Alvarez, 567 U.S. 709 (2012). https://www.law.cornell.edu/supremecourt/text/11-210
[8] California State Legislature, Elections: Deceptive Media in Advertisements, AB 2839 (2024). https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240AB2839
[9] Carlos E. Castañeda, “Judge Blocks New California Law Cracking Down on Election Deepfakes,” CBS News, October 3, 2024. https://www.cbsnews.com/sanfrancisco/news/california-election-deepfake-law-ab2839-blocked-by-judge/
[10] C.J. Larkin, “Regulating Election Deepfakes: A Comparison of State Laws,” TechPolicy.Press, January 8, 2025. https://www.techpolicy.press/regulating-election-deepfakes-a-comparison-of-state-laws/
[11] California Elections Code § 20010 (2019). https://leginfo.legislature.ca.gov/faces/codes_displaySection.xhtml?sectionNum=20010.&lawCode=ELEC
[12] Cal. Elec. Code § 20010 (2019).
[13] New York Times Co. v. Sullivan, 376 U.S. 254 (1964). https://www.oyez.org/cases/1963/39
[14] Texas Election Code § 255.004 (2019). https://statutes.capitol.texas.gov/docs/el/htm/el.255.htm
[15] Daniel Ortner, “Texas Court Confronts Misleading Political Communications and the First Amendment,” Federalist Society, November 11, 2024. https://fedsoc.org/scdw/texas-court-confronts-misleading-political-communications-and-the-first-amendment
[16] Minnesota Statutes § 609.771 (2024). https://www.revisor.mn.gov/statutes/cite/609.771
[17] Minn. Stat. § 609.771 (2024).
[18] Protect Elections from Deceptive AI Act, S. 1213, 119th Cong. (2025).
[19] U.S. House of Representatives, One Big Beautiful Bill Act, H.R. 1, 119th Cong., 1st sess., introduced May 16, 2025. https://www.congress.gov/bill/119th-congress/house-bill/1