
The Voice Gap: Why Copyright Law Fails Musicians in the Age of AI Deepfakes

Winter 2026


Emma Miller

Edited by: Eden Kipnis


As generative artificial intelligence rapidly advances, copyright law struggles to keep up with its evolving capabilities. Traditional copyright frameworks protect works such as studio recordings or live performances, but AI voice cloning operates differently, learning vocal patterns to generate entirely new performances that evade these protections [1]. Within this nascent legal landscape, no federal law protects musicians' voices from AI-generated deepfakes. A "deepfake" is "an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said" [2]. In music, this includes AI-generated vocals produced without the original singer's permission, mimicking speech patterns, cadence, and vocal style from recordings [3]. When AI accurately replicates a musician's voice, listeners cannot distinguish authentic recordings from AI-generated imitations, diverting revenue from artists to unauthorized creators. This brand confusion undermines artists' reputations and strips them of control over their creative identity [4].


Several states have attempted to address AI-generated music, but the resulting fragmented, state-by-state approach is inadequate for regulating a technology that operates globally. Tennessee's ELVIS Act protects both living and deceased performers by penalizing unauthorized voice copying, while California's Assembly Bill 1836 specifically addresses musicians' posthumous rights, preventing unauthorized use of their voices after death [5, 6]. However, state laws remain inconsistent, and many states still offer no protection. This patchwork fails to provide comprehensive solutions because creators can generate AI content in one state, host it on platforms in another, and distribute it nationally [7]. The recently proposed NO FAKES Act of 2025 is Congress's first attempt at federal protection. The Act would establish a "digital replication right" granting individual artists control over AI-generated reproductions of their voices, create a takedown procedure for platforms, and impose civil penalties of up to $750,000 per work [8]. However, the Act contains critical flaws.


The NO FAKES Act's vague licensing provisions could allow record labels to claim that existing contracts already confer AI voice rights on labels, transferring control from artists to corporations. Specifically, Section 2(a)(6) defines "right holder" broadly to include "any other individual or entity that has acquired, through a license, inheritance, or otherwise, the right to authorize the use of the voice" [9]. This language provides no safeguard against labels arguing that standard recording contracts already encompass AI voice replication rights. If labels control these rights, they can license AI reproductions of artists' voices to third parties and collect the revenue, effectively monetizing an artist's vocal identity without the artist's ongoing consent or fair compensation. Additionally, the Act's First Amendment exemptions for "bona fide commentary, criticism, scholarship, satire, or parody" lack clear standards for distinguishing protected uses from infringement [10]. If streaming platforms like Spotify and YouTube cannot reliably distinguish legitimate parody from unauthorized AI replication, they may remove lawful content to avoid the Act's severe penalties of up to $750,000 per work, thereby suppressing legitimate fair use. To address these problems, Congress should enact targeted federal legislation that protects artists' voices from unauthorized AI replication, prevents third parties like record labels from claiming those rights through existing contracts, and sets clear fair use standards to protect both artists and free expression. This article examines current state laws to illustrate why copyright law fails to protect musicians from AI voice cloning, analyzes the NO FAKES Act's critical flaws, and proposes a comprehensive federal solution.


Musicians today face an "AI doppelganger dilemma," in which AI systems replicate voices with such accuracy that digital replicas saturate markets, compete with authentic work, and damage reputations, all without consent or compensation [11]. These AI-generated vocals attract consumers who would otherwise stream legitimate recordings, causing revenue loss for the original creators. The 2023 viral success of "Heart on My Sleeve" exemplifies these harms. Created by anonymous TikTok user Ghostwriter977, the track used AI to generate vocals mimicking Drake and The Weeknd performing an entirely original song that neither artist authorized [12]. The AI vocals were so convincing that many listeners believed the track was an authentic collaboration; it accumulated over 600,000 Spotify streams and 15 million TikTok views before removal [13].


Universal Music Group succeeded in removing the track only by issuing DMCA takedown notices. The takedown succeeded not because the AI vocals violated copyright, since voices themselves cannot be copyrighted, but because the song inadvertently included a copyrighted "producer tag" by Metro Boomin, giving UMG a technical copyright violation to cite [14]. Without this incidental inclusion, the legal basis for removal would have been significantly weaker. AI-generated songs easily saturate platforms because consumers, faced with accurate voice replication, assume authenticity [15]. This dilutes artists' brand identity by making authentic and AI-generated performances indistinguishable, devaluing genuine artistic creation and undermining the music industry's ability to recognize and reward human artistry. The confusion poses serious reputational risks when AI-generated content makes artistic choices inconsistent with an artist's established persona.


Recent federal court decisions affirm that copyright law cannot protect the voice itself. In Lehrman v. Lovo, Inc., the U.S. District Court for the Southern District of New York held that "the Copyright Act protects only the original sound recordings, not the abstract qualities of a voice or new recordings that merely imitate or simulate the original" [16]. The case involved voice actors Paul Lehrman and Linnea Sage, whose voices the AI company Lovo cloned without authorization after they recorded audiobook narrations through online platforms. Lovo used these recordings to train its AI voice synthesis system, which then generated synthetic versions of their voices for commercial licensing. The plaintiffs alleged that Lovo created over 200 AI-generated voice clones from their recordings and licensed these synthetic voices to third parties, generating substantial revenue [17]. The court dismissed the copyright claims, explaining that while copyright protects specific recordings, it does not protect "an individual's voice or vocal style" as an abstract quality. Copyright protects "original works of authorship fixed in any tangible medium of expression," including musical compositions and sound recordings [18]. It does not extend to an individual's vocal style unless that style is embedded in a copyrighted work: copyright protects the particular expression captured in a recording, not the voice that produced it [19]. By analyzing recordings and extracting vocal characteristics, AI systems can generate entirely new performances of original songs without technically copying any specific copyrighted work, thereby exploiting the limits of copyright law.


Under 17 U.S.C. § 114(b), copyright in a sound recording explicitly "does not extend to the making or duplication of another sound recording that consists entirely of an independent fixation of other sounds, even though such sounds imitate or simulate those in the copyrighted sound recording" [20]. Congress enacted this provision long before AI voice cloning existed, in order to protect human impersonators and cover artists. It now creates a loophole that benefits AI systems and harms original artists: as long as an AI generates a new, independent recording rather than copying an existing one, it falls outside copyright's scope even if it perfectly replicates the artist's voice.


Congress introduced the NO FAKES Act in 2025 as the first federal framework specifically protecting voice and likeness from AI replication. The Act establishes a "digital replication right" granting artists control over AI reproductions, with civil penalties ranging from $5,000 to $750,000 per violation [21]. While this is fundamental progress toward federal protection against AI music replication, the Act contains critical flaws. First, Section 2(a)(6) defines "right holder" broadly to include "any other individual or entity that has acquired, through a license, inheritance, or otherwise, the right to authorize the use of the voice." This vague language could allow record labels to claim that existing recording contracts already convey AI voice rights. Under that interpretation, labels could create new songs, advertisements, or collaborations featuring an artist's voice without the artist's approval, collecting licensing fees from these AI-generated performances while the artist receives no compensation and retains no control. This transforms voice rights into a revenue stream controlled by corporations rather than artists, undermining artists' control over how their vocal identity is used commercially.


Second, the Act exempts content created for "bona fide commentary, criticism, scholarship, satire, or parody" but provides no clear standards for distinguishing protected parody from infringing impersonation [22]. Automated content moderation systems cannot reliably make nuanced legal distinctions between legitimate fair use and infringement [23]. Fiesler et al.'s research on copyright takedowns demonstrates that algorithmic content moderation produces high error rates, removing substantial amounts of lawful transformative content, because algorithms cannot assess context, artistic intent, or the transformative nature of works, all of which require human judgment. Their study found that automated systems frequently misclassify parody, criticism, and remixes as infringement, chilling creative expression. Constitutional analysis requires carefully distinguishing between uses of identity in public discourse, which merit strong First Amendment protection, and uses of identity in commercial speech, which receive less protection. The NO FAKES Act fails to codify this distinction [24]. Furthermore, when platforms face liability of up to $750,000 per work, as the NO FAKES Act proposes, they will over-remove content, suppressing legitimate parody and transformative works such as sampling, satire, and remixes.


Finally, the Act relies on platform-based enforcement through takedown procedures, placing unrealistic burdens on musicians while incentivizing platform censorship. Musicians must constantly monitor the internet for unauthorized replicas and send detailed notifications, while platforms must remove content "as soon as is technically and practically feasible" or face severe penalties. This creates a "heckler's veto," under which anyone can submit a takedown notice claiming content is unauthorized, effectively forcing platforms to remove the content immediately without any judicial determination of whether the use is actually infringing or protected fair use. Section 230 of the Communications Decency Act shields platforms from liability for user-generated content, incentivizing platforms to maintain a neutral stance [25]. The NO FAKES Act's liability provisions, however, would override this protection for AI-generated voice content, pressuring platforms toward over-removal. When platforms face direct financial penalties for hosting allegedly infringing content, they predictably adopt risk-averse policies that remove content first and adjudicate legitimacy later, a system that disproportionately harms fair use.


To remedy the NO FAKES Act's critical flaws, Congress should enact targeted federal legislation establishing three core protections. First, artists must retain permanent control over their voices. The law should establish voice rights as inalienable, meaning artists cannot sell or permanently license their voices to anyone during their lifetimes. The statute should explicitly state that recording contracts and other industry agreements cannot transfer the right to authorize AI-generated voice replications, preventing record labels from creating and profiting from AI-generated performances the artist never actually recorded, such as new songs, features on other artists' tracks, or commercial endorsements. Because these rights would remain exclusively with the individual artist and could not be assigned to any third party during the artist's lifetime, labels could not claim that existing contracts already give them control over AI voice rights. Artists could still grant permission for specific AI projects through limited agreements, but they would retain ultimate control. This approach extends California's protections for deceased celebrities to living artists [26].


To directly address unauthorized AI voice replications, the statute should prohibit any person or entity from creating, distributing, or publicly performing AI-generated vocal performances that replicate an artist's voice without express written authorization for that specific use. Violations should trigger statutory damages between $5,000 and $50,000 per unauthorized work, with enhanced damages for willful commercial exploitation. This creates clear liability for AI developers, platforms hosting infringing content, and individuals creating deepfake performances, establishing that an artist's voice cannot be replicated for new performances without explicit permission, regardless of whether existing copyright law applies.


Second, the law must protect free speech through clear fair use exceptions. Right of publicity laws protect individuals' control over commercial exploitation of their identity, including voice, name, and likeness [27]. As Post and Rothman explain, effective publicity rights must distinguish between different uses: public discourse, such as uses of identity in news, commentary, and criticism, occupies the highest rung of First Amendment protection, while commercial speech, such as promotional uses and purely commercial products, receives less protection. Congress should explicitly define lawful uses: (1) news reporting, commentary, or criticism; (2) parody of the individual or their work; (3) biographical or historical works; (4) nonprofit educational materials; and (5) transformative uses that do not commercially substitute for the artist's authentic voice. Courts have successfully developed similar standards in copyright law. In Campbell v. Acuff-Rose Music, Inc., the Supreme Court held that 2 Live Crew's 1989 rap parody of Roy Orbison's "Oh, Pretty Woman" was fair use because it transformed the original work for purposes of commentary and criticism [28]. The Court established that transformative works, those adding new expression, meaning, or message rather than merely copying, deserve protection even when commercial, provided they do not serve as market substitutes for the original. The same principles, evaluating whether a use is transformative or instead acts as a substitute for the original, should apply to AI voice replication. In AI voice cases, courts should assess whether the AI-generated content comments on, criticizes, or transforms the artist's work or persona, which is protected, or whether it simply replicates the artist's voice to create commercial products that compete with the artist's authentic performances, which is infringement.


Human review, rather than automated systems, should be required to evaluate fair use before a takedown occurs, and platforms must notify users and implement contestation and dispute protocols prior to removal. As research has shown, algorithms cannot reliably distinguish fair use from infringement [29]. Specifically, Fiesler et al. found that automated content moderation systems lack the contextual understanding necessary to evaluate artistic commentary and parody, resulting in systematic takedowns of lawful creative expression.


Third, the law should establish penalties proportionate to the harm caused. The current NO FAKES proposal's severe fines of up to $750,000 create a harmful incentive for platforms to over-remove content to avoid liability rather than carefully distinguishing legitimate uses from actual infringement. A more effective approach would tie penalties to the harm actually suffered. Courts should award compensation reflecting tangible financial losses, such as lost licensing fees and demonstrated reputational harm, along with disgorgement of any profits the infringer earned from the unauthorized voice use. Courts should reserve the upper end of the statutory damages range for willful violations, where the infringer acted with knowledge that they lacked authorization.


This proposed legislation offers key advantages. First, it establishes uniform national standards that apply consistently across all states, eliminating the current disparity in which different states offer different levels of protection. This consistency benefits both artists seeking to protect their voices and AI developers seeking clear rules to follow. Second, making voice rights inalienable, so they cannot be sold or permanently licensed away, prevents record labels from exploiting artists: musicians would permanently control their voices even while granting labels rights to specific recordings under contract, separating the voice itself from particular recorded performances. Third, explicit fair use protections, enforced through mandatory human review, balance content removal on music platforms: artists receive protection from brand exploitation, while legitimate uses like parody, news commentary, and criticism remain protected under the First Amendment.

However, global distribution of AI replications poses enforcement challenges that domestic legislation cannot fully resolve. When an infringer creates AI-generated content in a foreign jurisdiction, hosts it on servers abroad, and distributes it to American audiences, determining applicable law and enforcing judgments becomes complex. Under principles of personal jurisdiction, U.S. courts typically apply the law of the state where the plaintiff is domiciled or where the harm occurred. For AI voice replications distributed nationally, this could create conflicts when multiple states claim jurisdiction. The proposed federal statute would establish national uniformity for domestic cases, but international enforcement would still require cooperation through treaties or blocking orders against foreign platforms, as discussed below.


Nonetheless, the proposed solution is not without challenges. First, courts must develop standards for distinguishing human impersonators from AI clones, determining when similarity crosses into infringement. An impersonator lawfully performing in a tribute band, for example, should be treated differently from an AI-generated replication of the same voice. Fortunately, courts have navigated similar issues before. Copyright law's "substantial similarity" test determines when one creative work unlawfully copies another through a two-part analysis [30]: first, whether an ordinary observer would perceive substantial similarities between the works; and second, whether those similarities relate to protectable expression rather than unprotectable ideas. Courts examine both quantitative copying (how much was taken) and qualitative copying (whether the "heart" of the original was appropriated). Clear frameworks like this translate readily to voice rights, where similar tests of perceptual similarity and distinctive vocal characteristics could determine unlawful replication.


Second, foreign AI platforms pose enforcement challenges because companies operating in other countries are not automatically subject to U.S. law. A company based in Asia might operate beyond U.S. legal jurisdiction while distributing AI voice replications globally. However, if such platforms want access to American users, they must comply with American law. The proposed statute could authorize courts to issue blocking orders against foreign services that systematically violate musicians' rights. This enforcement mechanism already exists under copyright law, where courts have ordered providers to block access to foreign piracy websites [31]. The same approach can apply to foreign AI platforms that repeatedly exploit musicians' voices.


Generative AI's ability to accurately replicate musicians' voices demands immediate action from lawmakers. Current copyright law's structural limitations leave musicians vulnerable to brand dilution and the financial losses that follow, and risk ceding control of their voices to labels. While the NO FAKES Act represents Congress's first federal attempt to address AI voice cloning, its critical flaws render it insufficient without substantial revision. Congress must instead enact targeted legislation establishing inalienable voice and likeness rights, fair use protections with human review, and balanced enforcement mechanisms that empower artists over corporations.

The NO FAKES Act awaits consideration by the Senate Judiciary Committee, where lawmakers must decide whether to pass the legislation or send it back for comprehensive reform. Meanwhile, AI voice cloning technology grows increasingly sophisticated, and platforms struggle to identify and remove unauthorized deepfakes before they go viral, with some AI tracks accumulating millions of streams before detection. The next few years will determine whether musicians retain control over their most fundamental asset, their voice, or whether vocal identity becomes a commodity to be exploited.


References


[1] Mathilde Pavis, "Rebalancing Our Regulatory Response to Deepfakes with Performers' Rights," Convergence: The International Journal of Research into New Media Technologies 27, no. 4 (2021): 974–998.

[2] Merriam-Webster, s.v. "deepfake," accessed January 22, 2026, https://www.merriam-webster.com/dictionary/deepfake.

[3] Harnoorvir Singh Josan, "AI and Deepfake Voice Cloning: Innovation, Copyright and Artists' Rights," Centre for International Governance Innovation, February 20, 2024, https://www.cigionline.org/publications/ai-and-deepfake-voice-cloning-innovation-copyright-and-artists-rights/.

[4] Erica Shields, "The AI Doppelganger Dilemma: Cloned Voices in the Music Industry," Seattle University Law Review 48 (2024): 761–810.

[5] Tenn. Code Ann. § 47-25-1105 (2024).

[6] Cal. Civ. Code § 3344.1 (West 2024).

[7] Megan C. Parker, "Fighting AI and Deepfake Misuse in Music," GPSolo Magazine 42, no. 6 (2025).

[8] NO FAKES Act of 2025, S. 1367, 119th Cong. (2025).

[9] NO FAKES Act of 2025, S. 1367, 119th Cong. § 2(a)(6) (2025).

[10] NO FAKES Act of 2025, S. 1367, 119th Cong. § 3(b) (2025).

[11] Shields, "The AI Doppelganger Dilemma," 761–810.

[12] Bobby Allyn, "The Song 'Heart on My Sleeve' Was Made Using AI Trained on Drake and The Weeknd," NPR, April 19, 2023, https://www.npr.org/2023/04/19/1170836219/drake-the-weeknd-ai-heart-on-my-sleeve.

[13] Goldmedia, AI and Music: Market Development of AI in the Music Sector and Impact on Music Creators in Australia and New Zealand, commissioned by APRA AMCOS (August 2024), https://www.goldmedia.com/wp-content/uploads/2024/09/GOLDMEDIA_Studie_KI-und-Musik-in-Australien-und-Neuseeland_24-08.pdf.

[14] Melissa Torres, "(A.I.) Drake, The Weeknd, and the Future of Music," Washington Journal of Law, Technology & Arts, May 3, 2023, https://wjlta.com/2023/05/03/a-i-drake-the-weeknd-and-the-future-of-music/.

[15] Shields, "The AI Doppelganger Dilemma," 761–810.

[16] Lehrman v. Lovo, Inc., No. 23-cv-7748, 2024 WL 1235453 (S.D.N.Y. Mar. 22, 2024).

[17] Lehrman v. Lovo, Inc., 2024 WL 1235453.

[18] 17 U.S.C. § 102(a).

[19] Edward Lee, "AI and the Sound of Music," Yale Law Journal Forum 134 (2024): 187–210.

[20] 17 U.S.C. § 114(b).

[21] NO FAKES Act of 2025, S. 1367, 119th Cong. (2025).

[22] NO FAKES Act of 2025, S. 1367, 119th Cong. § 3(b) (2025).

[23] Casey Fiesler, Joshua Paup, and Corian Zacher, "Chilling Tales: Understanding the Impact of Copyright Takedowns on Transformative Content Creators," Proceedings of the ACM on Human-Computer Interaction 7, no. CSCW2 (2023): 1–21.

[24] Robert C. Post and Jennifer E. Rothman, "The First Amendment and the Right(s) of Publicity," Yale Law Journal 130 (2020): 86–180.

[25] 47 U.S.C. § 230.

[26] Cal. Civ. Code § 3344.1 (West 2024).

[27] Restatement (Third) of Unfair Competition § 46 (Am. L. Inst. 1995).

[28] Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994).

[29] Fiesler, Paup, and Zacher, "Chilling Tales," 1–21.

[30] Arnstein v. Porter, 154 F.2d 464 (2d Cir. 1946).

[31] 17 U.S.C. § 512(j).
