AI-Generated Content and Deepfakes Pose a Host of New Legal Challenges in 2024

Feb 5, 2024 | Blog

“Don’t believe everything you see on the internet” is probably one of the most consistently relevant aphorisms of the modern age. This is in no small part thanks to the expansion of generative artificial intelligence (AI) and its ability to create realistic, fake content, commonly referred to as “deepfakes.”

While the means to fabricate and alter content have certainly existed in the past, artificial intelligence is a more sophisticated tool for conjuring up digital material. AI provides enhanced capabilities in creating not only static images, but also extremely realistic audio and video using a real person’s likeness. Additionally, AI platforms are relatively cheap, accessible, and simple to use; they can generate large quantities of content almost instantaneously; and they are still largely unregulated.

In light of the current AI climate, below are some of the most pressing ethical and legal issues, along with the legislation being proposed in response.

Election Interference

Concerns around AI-generated deepfakes are rightfully amplified in the lead-up to the 2024 primary and presidential elections considering the critical role that this type of content could play in proliferating misinformation, disinformation (i.e., misinformation with express intent to mislead), and voter suppression.

We have already begun to see harbingers of this new reality. In June 2023, Florida Governor Ron DeSantis’ campaign published an attack ad against former President Donald Trump containing AI-generated images depicting Trump embracing and kissing former Chief Medical Advisor Dr. Anthony Fauci. The following month, a DeSantis PAC also utilized an AI simulation of Trump’s voice in another attack ad.

More recently, in late January 2024, voters in New Hampshire received robocalls with an AI-generated version of President Joe Biden’s voice discouraging them from going to the polls for the state’s primary election. This incident in particular has given a disturbing glimpse of one way that AI could be used to suppress and disenfranchise voters.

Publicity Rights

Artificial intelligence has also stirred up questions regarding the right of publicity, which protects a person from having their name, image, or likeness (NIL) misappropriated for commercial use. However, the right of publicity is not guaranteed at the federal level, and its application varies from state to state.

This very issue was a contentious point in the 2023 negotiations between the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) and the Alliance of Motion Picture and Television Producers (AMPTP). Fears that studios could scan actors’ likenesses and use them in the future without obtaining consent or providing financial compensation prompted SAG-AFTRA to include terms related to the use of digital likenesses in their final deal with AMPTP.

Some actors are still concerned about studios’ ability to use generative AI technology to create “synthetic performers” based on the characteristics of real, human actors. Not only would such a “synthetic performer” replace the need for a human actor, but it would also make it difficult for an actor to prove that their specific likeness had been misappropriated.

Opportunities for Abuse

Another alarming (if predictably inevitable) facet of AI-generated content is the potential for abuse through the creation of graphic, explicit, or compromising content using a real person’s likeness. This type of abuse can cause reputational harm, as well as significant emotional harm, to the injured party. A particularly nefarious aspect of AI used in this way is that the material causes damage regardless of whether it is widely known to be fake.

Virtually anyone could find themselves the victim of such abuse. In January 2024, explicit AI-generated images of Taylor Swift were circulated on X (formerly Twitter), causing outrage amongst users and forcing X to remove the images from the site. The Verge reported that one post of the images remained live for 17 hours before X took it down, during which time it was viewed 45 million times and was reposted, liked, and bookmarked. X eventually had to block all searches for “Taylor Swift” while it attempted to scrub the rest of the images from the platform. Microsoft Designer, the platform on which the images were created, announced that it had tightened its safety systems to help prevent such misuse of its software in the future.

Slow-Moving Legislation

As AI continues to evolve rapidly, legislation is moving at a reactionary pace. While several states have attempted to introduce AI-related legislation, much of it has stalled. The beginning of this year has seen more action, however: NBC News reported that in the first three weeks of 2024 alone, legislators from 14 states introduced bills focused on curbing the harmful effects of AI-generated misinformation and deepfakes on elections.

Much of this legislation either outright bans disseminating election-related content fabricated with AI or, at a minimum, requires disclosures denoting that AI has been used to alter the content. However, this assumes that the person sharing the content knows it has indeed been fabricated, which is often not the case.

There have been several efforts at the federal level to protect individuals’ NIL rights. In October 2023, a bipartisan group of senators released a discussion draft of the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act, which would help protect the image, voice, and visual likeness of all individuals from unauthorized AI-generated replicas.

Later that same month, President Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence outlining a “coordinated, Federal Government-wide approach” to the safe and responsible development of AI.

On January 10, 2024, Representative María Elvira Salazar (R-FL) introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act, which “establishes a federal framework to protect Americans’ individual right to their likeness and voice against AI-generated fakes and forgeries.”

And most recently, on January 30, just days after the incident involving Taylor Swift, another bipartisan group of senators introduced the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act of 2024. The bill seeks to “hold accountable those who are responsible for the proliferation of nonconsensual, sexually-explicit ‘deepfake’ images and videos.”

The passage and implementation of these measures are still pending. Part of the difficulty in creating AI regulations lies in the fact that AI is developing and changing so rapidly that the law can only play catch-up as lawmakers attempt to understand this new technology and its full implications. A successful AI regulatory framework will also rely on the cooperation of large media companies and the AI platforms themselves.

In the United States, an additional wrinkle is that any legislation limiting the uses of AI must also contend with individuals’ First Amendment rights. AI laws will have to account for individuals’ rights to use others’ likenesses for news reporting, commentary, or transformative creative purposes such as artwork and parody. Future legislation on artificial intelligence must strike an increasingly tricky balance between upholding long-held rights to freedom of expression and fighting the consequences of disinformation and exploitation at the hands of this new technology.

In Europe, proposed legislation at the EU level has come along much more quickly. The European Commission has already established a framework for regulating AI tools based on the level of risk a given tool poses. Tools must be approved prior to entering the market and evaluated throughout their lifetime. Some high-risk tools, such as those used for facial recognition, may be prohibited entirely. Companies that use AI tools involving automated decision-making may need to provide regular impact assessments. Tools used substantially for creating “deepfakes” are also likely to be banned. But with information circulating globally in seconds and content being created everywhere with a variety of AI tools, it remains to be seen whether even the EU regulations can stop the worldwide spread of this new brand of “fake news.”

If you have further questions regarding the legal implications of artificial intelligence tools, please contact Tara Aaron-Stelluto.