Litigation Minute: State Statutes and the Private Right of Action
18 June 2024
What You Need To Know In A Minute Or Less
Class action litigation challenging generative artificial intelligence (AI) has rapidly become a familiar feature of the legal landscape. While early, headline-grabbing complaints were largely based on traditional theories of recovery, many of these have been dismissed, with courts commenting that the lawsuits presented “policy grievances that are not suitable for resolution by federal courts.”1
At the same time, multiple states have enacted statutes addressing the development and deployment of generative AI. These differing statutory regimes present a familiar question: whether the new statutory requirements can be enforced through a private right of action (PRA).
In a minute or less, we provide an overview of different state approaches, as well as early suggestions for companies deploying generative AI for consumer or customer-facing uses.
How Are States Addressing Enforcement of Generative AI Statutes?
Broadly speaking, those states that have enacted generative AI statutes have provided exclusive enforcement authority to designated state agencies, consistent with a deliberate, considered approach to evaluating AI risks and enforcement priorities. Two prominent examples are Utah and Colorado, discussed below.
The exceptions to this trend are state statutes, either proposed2 or enacted, that authorize a PRA. While no class action litigation has been launched to date, early signs suggest that any private litigation will necessarily be limited in scope, subject to multiple defenses, and uniquely unsuited for class litigation.
Utah’s AI Enforcement Regime and Regulatory Sandbox
With an effective date of 1 May 2024, Utah’s Artificial Intelligence Policy Act (UAIPA) now requires companies in regulated industries (such as accounting and healthcare) to prominently disclose that a consumer is interacting with AI; non-regulated companies must disclose the use of AI if directly asked. Further, companies deploying AI cannot disclaim responsibility for the content of responses provided by AI tools.
The UAIPA commits enforcement solely to Utah’s Division of Consumer Protection (UDCP), while expanding the UDCP’s enforcement authority to include administrative fines, declaratory and injunctive relief, and monetary disgorgement. Notably, algorithmic disgorgement3 is not among the expanded remedies provided by the UAIPA. The UAIPA also creates an Office of AI Policy and AI Lab, through which companies can apply for regulatory mitigation (such as reduced fines and cure periods) while they develop and deploy AI tools.
ELVIS Has Left the Building, But Not Entered the Courthouse
Tennessee is the first state to prohibit the unauthorized use of artificial intelligence to replicate an individual’s likeness, image, and voice. The Ensuring Likeness, Voice, and Image Security Act (known as the ELVIS Act), which goes into effect on 1 July 2024, creates three separate civil PRAs. As it relates to AI, the ELVIS Act authorizes individuals to sue when defendants employ an “algorithm, software, tool, or other technology service, or device,” the primary purpose of which is the unauthorized reproduction of the plaintiff’s “photograph, voice, or likeness.” The PRA is subject to certain fair use exceptions, while remedies include injunctive relief, actual damages (but not statutory damages), and court orders requiring the destruction of materials made in violation of the statute.
Colorado’s Approach
Colorado’s Artificial Intelligence Act, SB 205 (CO AI Act), effective 1 February 2026, regulates high-risk AI systems by establishing multiple requirements on developers and deployers of such systems, including notice to consumers, impact assessments, and anti-discrimination duties.
A violation of the CO AI Act is designated a “deceptive trade practice” under Part 1 of the Colorado Consumer Protection Act (CCPA). Although the CCPA generally provides for a PRA, that PRA is carved out of the CO AI Act: the act not only grants the Attorney General exclusive authority to enforce it and to promulgate rules under it, but also explicitly states that it does not create a PRA. Developers or deployers can assert an affirmative defense based on discovery and cure of an alleged violation.
Takeaways
The deliberate approach taken by Utah, including the opportunity to mitigate generative AI risks through the Utah AI Lab’s regulatory sandbox, is a promising sign that generative AI will be regulated in the first instance through tailored agency action rather than by private litigants. Even under Tennessee’s ELVIS Act, the PRA by definition appears limited to claims by specific individuals, rather than serving as the basis for putative class action litigation. Other states will continue to enact statutes or promulgate regulations in this area, including California through its ongoing assessment of automated decision technology regulations.
Against this evolving backdrop, companies considering deploying generative AI should focus compliance efforts on an appropriate disclosure regime, the development of internal AI policies, and internal training programs. Ongoing assessment of the company’s terms and policies applicable to consumer interaction with generative AI tools may also be warranted, particularly for companies subject to new or pending state statutes.