Litigation Minute: The Generative AI Litigation Landscape
18 May 2024
What You Need To Know In A Minute Or Less
Beginning in 2023, courts across the United States have grappled with a wave of lawsuits challenging the legality of the development and use of generative artificial intelligence (AI) systems and tools. While courts have yet to definitively rule on the myriad questions these lawsuits raise, the cases may well signal what’s on the horizon for generative AI litigation.
This three-part series will discuss:
- Current trends in litigation regarding generative AI and strategies to mitigate litigation risk for companies considering deploying or currently using such systems or tools;
- Privacy, consumer protection, and other generative AI-specific litigation concerns that may impact business; and
- Generative AI’s impact on record-keeping and document storage, use, and exchange in litigation.
In a minute or less, here is what you need to know about the current trends in generative AI litigation.
What Is Generative Artificial Intelligence?
Generative AI refers to a type of AI that generates new content based on the patterns it has learned from its training1 and in response to a user’s prompt. This new content can be text, code, images, audio, video, or a combination of these outputs.
Many generative AI systems are built on large language models (LLMs). LLMs are designed to understand context, infer meaning, and generate coherent, contextually appropriate responses. LLMs give generative AI systems the ability to interact with users through natural language inputs and outputs.
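For readers who want a concrete picture of the prompt-and-response pattern described above, the brief sketch below uses the open-source Hugging Face transformers library with the small, publicly available GPT-2 model. This is purely an illustrative assumption on our part; the commercial systems at issue in the litigation discussed below are far larger and operate behind proprietary interfaces.

```python
# Minimal illustration of a language model generating new text from a prompt.
# Requires: pip install transformers torch
from transformers import pipeline

# Load a small, publicly available language model (GPT-2) for demonstration.
generator = pipeline("text-generation", model="gpt2")

# The user's prompt; the model continues it based on patterns
# learned from its training data.
prompt = "Generative AI systems can help businesses"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```

The key point for the legal analysis that follows is visible even in this toy example: the model’s output is a function of both the user’s prompt and the training data the model learned from, and it is that training data that has been the focus of the early lawsuits.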
Generative AI promises transformative benefits and opportunities for businesses across all sectors. Whether to enhance creativity and innovation, improve efficiency, personalize or customize experiences, reduce costs, or analyze data to make better decisions, many companies have either deployed generative AI systems in their businesses or are considering how best to leverage this technology.
What Has Been the Focus of the Initial Generative AI Litigation?
Early litigation regarding generative AI has primarily centered on the data used to train those systems. These cases have been brought against generative AI developers by authors, artists, publishers, and consumers, and tend to focus on the plaintiffs’ intellectual property or privacy rights.2 These suits are in the early stages of litigation and include claims for copyright infringement, invasion of privacy, violation of consumer protection acts, theft, and misappropriation of data.3
Although courts have expressed skepticism about plaintiffs’ claims in some early rulings, these cases continue to work their way through the legal process and still have the potential to significantly impact the rapidly growing market of generative AI systems and tools.
For example, one of the earliest and best-known cases in this space, Andersen v. Stability AI Ltd.,4 involves a group of artists who allege that certain image-generating AI tools used their online artwork for training purposes without their consent, and that the images the tools create in response to user prompts therefore infringe their copyrighted works, constitute unfair competition, and breach their contracts.
Initially, the Court dismissed nearly all of the claims, albeit with leave to file an amended complaint addressing the Court’s concerns. In that decision, the Court noted that the final images produced by the AI did not appear to copy any particular artist’s work and found that, because the AI was trained on billions of online images, it was unlikely that the tool had copied any particular artist or meaningfully harmed any artist individually. Plaintiffs subsequently filed an amended complaint, which added new parties and a new claim for unjust enrichment.
Earlier this month, in a tentative ruling on the defendants’ motions to dismiss the amended complaint, the Court indicated that it would this time allow plaintiffs’ copyright infringement claims to proceed on two alleged theories:
- AI-developer defendants used plaintiffs’ images for training purposes without permission; and
- Plaintiffs’ artwork is “stored as mathematical information” in the generative AI models themselves.5
The Court noted that, given the dispute between the parties regarding how the generative AI systems operate, plaintiffs’ claims should be “tested at summary judgment.”6
Although it was inclined to allow some claims to proceed, the Court tentatively dismissed plaintiffs’ claim under the Digital Millennium Copyright Act, their breach of contract claim, and their unjust enrichment claim (with leave to attempt to re-allege the unjust enrichment claim).
What Litigation Risks Are on the Horizon?
A shift in litigation focus from developers to users has already begun and is likely to accelerate as adoption of generative AI systems and tools becomes more commonplace. As more companies deploy and use generative AI in their businesses, litigation risk for these users will only continue to grow. In addition to claims similar to those alleged against generative AI developers, claims against generative AI users have revolved, and will continue to revolve, around allegations that a plaintiff suffered injury either because a company (or its personnel) misused a generative AI system or tool, or because the system or tool made an autonomous error that the company (or its personnel) failed to catch or correct.
Companies using generative AI may also face claims for failing to adequately protect, or failing to obtain appropriate consent to use, consumers’ personal data that makes its way into the generative AI systems or tools the company uses.
What Steps Can Companies Take to Protect Themselves from Liability?
Companies can, however, take steps when deploying and using generative AI in their businesses to help reduce the risk of liability in this next wave of litigation. These steps include:
- Understanding the system or tool that is being deployed. Companies should invest time and effort, on a cross-disciplinary basis, to understand the capabilities, limitations, and intended use cases of the generative AI system or tool they intend to deploy. Importantly, companies should pay close attention to the ownership and sources of training data, fine-tuning, prompts, and outputs.
- Securing adequate and appropriate contractual protections. Companies should carefully review and negotiate the contractual terms in licenses and other agreements with generative AI system providers to address risks specific to the generative AI system or tool under consideration. These risks can revolve around data protection and privacy and liability for outputs, among others. Representations and warranties, indemnities, and limitations of liabilities should be closely examined and tailored to the system or tool and its intended use cases.
- Creating clear acceptable use policies (AUPs) and guidelines for personnel. Clear AUPs and guidance can help ensure that generative AI is used appropriately, responsibly, ethically, and in compliance with applicable laws and regulations. Setting policy specific to generative AI systems and tools provides personnel with clear boundaries and expectations, which can help limit the risk of liability.
- Establishing a risk management framework, which includes continuously assessing, monitoring, and managing potential risks. Companies should adopt a proactive framework to identify and address potential legal, operational, and reputational risks associated with the use of generative AI. Regular audits and updates to this risk management framework keep mitigation strategies aligned with new legal and technological developments.
- Educating and training personnel on safe and responsible use of generative AI. Regular and ongoing training programs can familiarize employees with best practices for using generative AI, as well as the risks of its use. These programs can mitigate risk by preventing inadvertent or unintended misuse and by encouraging safe and responsible use.
The ultimate resolution of many of the legal questions raised in this litigation remains to be seen. Companies exploring or using generative AI should arm themselves with an understanding of the current state of litigation, as well as of the technology itself, to ensure that any use includes appropriate safeguards to protect against potential liability.