The High Stakes of Deepfakes: The Growing Necessity of Federal Legislation to Regulate This Rapidly Evolving Technology

From its nascent development in the 1990s to the introduction of a widely available app in 2018, deepfake technology has become both increasingly sophisticated and readily accessible to the general population.[1] Deepfake—a portmanteau of “deep learning” and “fake”—is a form of synthetic media in which artificial intelligence is used to create a digital copy of a person’s likeness or voice. Deepfake technology can be applied in a myriad of ways: in education to create interactive content, in the film industry to swap a lead actor’s face onto a stunt double or to align the dubbing of a foreign-language film, and in the retail sector to enhance a prospective buyer’s experience. Despite these beneficial uses, however, deepfake technology has also been applied with deleterious consequences, including the disparagement of political foes, the creation of deepfake pornography, and the perpetration of fraudulent business schemes.

Since this technology became more widely available, 90-95% of deepfake videos are now nonconsensual pornography, and of those videos, 90% target women—mostly underage.[2] One popular online streaming personality, known as QTCinderella, who streams video content on the website Twitch, was the subject of hundreds of unauthorized deepfake pornography videos; however, she was advised that legal action against the person or persons who had stolen her likeness would not succeed due to the lack of existing legislation.[3] The current lack of regulation of deepfake technology is a threat not only to celebrities but to any person with an image of themselves posted on the internet, privately or publicly, and it has the potential to wreak havoc on the personal and professional lives of many without repercussion.

In addition to creating synthetic videos, deepfake technology is also capable of vocal mimicking. The CEO of a UK-based energy firm fell prey to a scam using this auditory form of deepfake technology. Believing that the head of his parent company had requested by phone that he wire €200,000 to Hungary, he complied; however, the German-accented voice on the line actually belonged to a scammer using deepfake voice-mimicking technology.[4] Although the fraud was soon recognized, the company was unable to recover the funds or bring the perpetrators to justice because of the difficulty of tracing a deepfake’s origins.

Deepfakes have also been used increasingly within the political sphere, including to spread misinformation about political candidates and to circulate false statements attributed to world leaders. In 2018, a now-viral video showed a deepfake-generated President Obama uttering words that were not his own; the video was, in fact, intended to demonstrate how realistic these manufactured images and voices have become.[5] Used in these ways, deepfakes have the potential to undermine elections and to create global conflict. If left unregulated, this rapidly developing technology could ruin lives, destroy businesses, and spread misinformation.

Current Legislation

No federal legislation currently exists to address the potential threats of deepfake technology within the United States. However, in December 2019, Congress passed the National Defense Authorization Act (NDAA), which, in Section 5709, requires the Director of National Intelligence to report on the use of deepfakes by foreign governments, their capacity to spread disinformation, and their potential impact on national security. Although this action is a promising step toward addressing the growing threat of deepfakes originating outside the country, it does not address the harms that deepfakes may cause within the United States.

Since the emergence of this technology, only five states have passed legislation addressing deepfakes. In 2019, Texas passed S.B. 751[6] and California passed A.B. 730,[7] both banning the use of deepfakes to influence upcoming elections. In the same year, California’s A.B. 602,[8] Georgia’s S.B. 337,[9] and Virginia’s S.B. 1736[10] were passed to prohibit the creation and dissemination of nonconsensual deepfake pornography. In 2020, New York passed S. 6829A,[11] establishing a right of legal action against the unlawful publication of deepfakes.[12] More recently, several states proposed or enacted bills to regulate artificial intelligence (AI) in 2022; however, most states still lack legislation on AI in general, and only the five aforementioned states have deepfake-specific laws.

Conversely, the Chinese government has recently enacted strict regulations, known as the Deep Synthesis Provisions, which prohibit the creation of deepfakes of individuals without their consent and require that AI-generated content be clearly identified as such.[13] Despite the widely held view that China does not respect its people’s personal freedoms, it is currently the only country to impose a strict ban on certain uses of deepfakes.[14] However, a few other countries do offer some legal protection against the unlawful use of deepfake technology. In Germany, for example, the general right of personality enshrined in the Basic Law grants individuals the right to their own image; although it does not address deepfake technology directly, this right effectively renders nonconsensual deepfakes of individuals illegal in Germany.[15] The United Kingdom has also sought to protect individuals through amendments to the country’s Online Safety Bill, including specific bans on deepfakes used for nonconsensual pornography.[16] Yet just as many see value in legislation to limit and regulate deepfake technology, others are concerned that proposed regulations could violate creators’ First Amendment rights.

Are Deepfakes Protected by the First Amendment?

The increasing sophistication and accessibility of deepfake creation have made this technology a growing threat to the integrity of elections and the safety of millions of Americans; however, many argue that restrictions on its use would impinge on creators’ freedom of speech and expression and therefore violate the First Amendment. Because deepfakes are, technically, forms of expression, banning all of them would be unconstitutional; the First Amendment, however, does not protect certain categories of speech. These exceptions include libel (written defamation), slander (spoken defamation), and obscenity.[17] While not all deepfakes are used for these purposes, many, such as nonconsensual pornographic videos, would fall into these categories. Because these forms of deepfakes would not be protected speech, imposing federal bans or restrictions on them would be entirely legal.

First Amendment protection for deepfakes can also be supported through copyright law. Because deepfakes are often built from recycled, copyrighted content, creating them without permission may constitute copyright infringement. While unauthorized use of copyrighted content is generally infringement, the fair use doctrine permits limited use of copyrighted materials without permission for certain works.[18] Under § 107 of the Copyright Act of 1976, fair use is determined through a four-factor test:

  1. Purpose and character of use
  2. Nature of original work
  3. Amount and substantiality of content borrowed
  4. Effect of such use on the potential market[19]

Because the purpose of a deepfake is inherently different from that of the original work, it clearly meets the first criterion. U.S. courts have also held, in Campbell v. Acuff-Rose,[20] that a work incorporating a significant portion of copyrighted material may still qualify as fair use if the new work is transformative. In most cases, deepfakes may be considered transformative works, and their creators therefore cannot be held liable for copyright infringement. In such cases, taking down deepfakes on the basis of copyright infringement would be a clear obstruction of free speech, because the works would be protected by fair use.

Finally, while the creator of a deepfake that constitutes slander or libel is not protected by the First Amendment, the third-party website that hosts the content cannot be held liable. Section 230 of the Communications Decency Act states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[21] Although this rightfully shields websites from responsibility for the actions of their users, it also means there is no way to prevent offensive deepfakes from being reposted on the same or other sites.

Is Defamation Law Enough?

Another argument against new legislation limiting deepfakes is that legal recourse already exists in the form of defamation law, which imposes liability on those who damage a person’s reputation through false public statements.[22] However, defamation law in the United States operates on a state-by-state basis and is a tort (a civil offense) rather than a criminal offense. Defamation law also requires a clear distinction between statements of fact and statements of opinion: the falsehood must have been presented as truth in order to constitute libel or slander.[23] Defamation law often overlaps with false light law, a tort under the category of invasion of privacy that concerns the spreading of malicious falsehoods about a person. False light law focuses on the emotional distress caused by damage to reputation through public speech, rather than on the reputational damage itself.[24] Both could provide grounds for action against the dissemination of targeted deepfakes; however, both are subject to interpretation, given the requirement that the falsehood be presented as fact rather than opinion and the subjectivity of what constitutes damage to reputation. And even where defamation and false light claims do provide legal recourse for the targets of deepfakes, monetary compensation cannot undo reputational damage and may not adequately restore emotional well-being. Legislation is therefore necessary both to prevent new deepfakes from being created and disseminated with the power to ruin lives under the guise of truth and to allow criminal consequences for those who misuse the technology.

Conclusion

Rapidly evolving deepfake technology—with the ability to convincingly mimic voices and digitally alter videos in order to present falsehoods as fact—has already been used to create nonconsensual pornography, spread political misinformation, and defraud businesses; without new government regulation, deepfakes may soon become undetectable, with the potential to cause irreversible damage. Although the technology has many positive applications, such as creating immersive language experiences or enhancing the retail customer experience, its accessibility has allowed individuals to harness it for nefarious purposes.

Could legislation targeting the malicious use of this technology curb its misuse while preserving its positive functions in society? With its power to blur the line between fact and fiction, deepfake technology, if left unchecked, could drastically alter the political, economic, and social climate of the United States, with consequences that are unfortunately all too real.

[1] “The History of Deepfake Technology: How Did Deepfakes Start?,” Deepfake Now, last modified April 21, 2020, accessed April 30, 2023, https://deepfakenow.com/history-deepfake-technology-how-deepfakes-started/.

[2] Karen Hao, “Deepfake Porn Is Ruining Women’s Lives. Now the Law May Finally Ban It.,” MIT Technology Review, last modified February 12, 2021, accessed April 24, 2023, https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/.

[3] Bianca Britton, “They appeared in deepfake porn videos without their consent. Few laws protect them.,” NBC News, last modified February 14, 2023, accessed April 30, 2023.

[4] Meredith Somers, “Deepfakes, Explained,” MIT Sloan School of Management, last modified July 21, 2020, accessed April 24, 2023, https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained.

[5] Kaylee Fagan, “A viral video that appeared to show Obama calling Trump a ‘dips—‘ shows a disturbing new trend called ‘deepfakes,’” Insider, last modified April 17, 2018, accessed April 30, 2023, https://www.businessinsider.com/obama-deepfake-video-insulting-trump-2018-4.

[6] S. 751, 2019 Leg. (Tex. Apr. 17, 2019). Accessed April 30, 2023. https://capitol.texas.gov/tlodocs/86R/billtext/pdf/SB00751F.pdf.

[7] A. 730, 2019 Leg. § 20010 (Cal. Oct. 3, 2019). Accessed April 30, 2023. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=201920200AB730.

[8] A. 602, 2021 Leg. (Cal. Sept. 28, 2021). Accessed April 30, 2023. https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202120220AB602.

[9] S. 337, 2019 Gen. Assem. (Ga.). Accessed April 30, 2023. https://www.legis.ga.gov/api/legislation/document/20192020/194806.

[10] S. 1736, 2019 Gen. Assem. (Va. Mar. 7, 2019). Accessed April 30, 2023. https://lis.virginia.gov/cgi-bin/legp604.exe?191+sum+SB1736.

[11] S. 6829A, 2021 Leg. (N.Y.). Accessed April 30, 2023. https://www.nysenate.gov/legislation/bills/2021/S6829.

[12] “Legislation Related to Artificial Intelligence,” National Conference of State Legislatures, last modified August 26, 2022, accessed April 24, 2023, https://www.ncsl.org/technology-and-communication/legislation-related-to-artificial-intelligence.

[13] Asha Hemrajani, “China’s New Legislation on Deepfakes: Should the Rest of Asia Follow Suit?,” The Diplomat, last modified March 8, 2023, accessed April 24, 2023.

[14] Laura Silver, Kat Devlin, and Christine Huang, “Large Majorities Say China Does Not Respect the Personal Freedoms of Its People,” Pew Research Center, last modified June 30, 2021, accessed April 30, 2023.

[15] Maureen Cohen, “Deepfakes and German Law,” Dayside Stories, last modified January 3, 2020, accessed April 30, 2023, https://mj-cohen.com/2020/01/03/deepfakes-and-german-law/.

[16] Tiffany Hsu, “As Deepfakes Flourish, Countries Struggle With Response,” The New York Times, last modified January 22, 2023, accessed April 30, 2023, https://www.nytimes.com/2023/01/22/business/media/deepfake-regulation-difficulty.html.

[17] Marc Jonathan Blitz, “Deepfakes and Other Non-Testimonial Falsehoods: When Is Belief Manipulation (Not) First Amendment Speech,” Yale Journal of Law & Technology 23 (Fall 2020): 214.

[18] Shannon Reid, “The Deepfake Dilemma: Reconciling Privacy and First Amendment Protections,” abstract, University of Pennsylvania Journal of Constitutional Law, June 26, 2020, 214-223, accessed April 30, 2023.

[19] Akhil Satheesh, “Deepfakes and the Copyright Connection: Analysing the Adequacy of the Present Machinery,” University of Richmond Journal of Law and Technology, last modified January 25, 2022, accessed April 30, 2023.

[20] Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994).

[21] Abigail Loomis, “Deepfakes and American Law,” Davis Political Review, last modified April 20, 2022, accessed April 24, 2023, https://www.davispoliticalreview.com/article/deepfakes-and-american-law.

[22] Lourdes Vasquez, “Recommendations for the Regulation of Deepfakes in the U.S.: Deepfake Laws Should Protect Everyone Not Only Public Figures” (unpublished manuscript, 2021), 7.

[23] Lara Guest and Molly Reynolds, “A Short Take on Deepfakes,” TORYS, accessed April 24, 2023, https://www.torys.com/Our%20Latest%20Thinking/Publications/2019/05/a-short-take-on-deepfakes/.

[24] “False Light,” Cornell Law School Legal Information Institute, last modified December 2022, accessed April 30, 2023.