A recently filed federal court complaint tests the enforceability of restrictive terms in a data license against the use of licensed data for generative AI purposes. The outcome of this case may turn on how a court interprets broad terms such as “training,” “internal research,” “distribution” and “publication.”

UPDATE: On December 18, 2025, the court denied defendant Alexi Technologies Inc.’s (“Alexi”) motion for a temporary restraining order (TRO), which was based on Alexi’s contention that, without access to Fastcase’s updated data, it would suffer irreparable harm, including lost revenue, lost customers and reputational damage. In a brief order, the court found that Alexi’s claimed harm is “too speculative and is not sufficiently corroborated to merit emergency relief at this time.”

On November 26, 2025, legal research platform Fastcase, Inc. (“Fastcase”) sued legal AI company Alexi Technologies Inc. (“Alexi”) in the U.S. District Court for the District of Columbia, alleging that Alexi used licensed “Fastcase Data” (i.e., a sophisticated, extensively tagged legal research database) to train and power a commercial AI legal research product in violation of a 2021 data license agreement. (Fastcase, Inc. v. Alexi Technologies Inc., No. 25-04159 (D.D.C. filed Nov. 26, 2025)).

The complaint is particularly interesting because it involves a 2021 agreement – entered into before generative AI was a generally known technology – and thus presents the age-old question of applying the terms of an agreement to a technology that was not known at the time the agreement was entered into.

Regardless of the outcome of this case, the complaint highlights the importance of precise license drafting – for both licensors and licensees – in the age of AI. Parties should focus on well-defined understandings of how data can be used, shared and commercialized at a time when every company is seeking to leverage AI. As this complaint illustrates, vague terms can lead to uncertainty and legal disputes.[1]

Of course, we do not know if this litigation will proceed to summary judgment or trial, but if it does, we may have an opportunity to see how a court will interpret and potentially enforce broad and general restrictions in a data license (e.g., prohibitions on publication, distribution, commercial use or competitive use, or limitations of use to internal research purposes only) in the context of generative artificial intelligence.[2]

In Cody v. Jill Acquisition LLC, No. 25-937 (S.D. Cal. June 30, 2025), the Southern District of California declined to enforce a retail site’s terms of use and compel arbitration, holding that the plaintiff, who used guest checkout to place an online order at the retail clothing site, did not have adequate notice of the terms and the arbitration clause. This case should serve as a wake-up call for online entities to reexamine their electronic contracting processes. It exemplifies how, even if a website’s visual design and its placement of the hyperlinked Terms of Use during checkout are comparable to presentations that have been deemed enforceable, a court could still decline to enforce online terms where the transaction is not the typical e-commerce transaction between a registered customer and a retail site. Here, the court found that by checking out as a guest without creating an account, the user was less likely to expect a continuing relationship; therefore, the site’s notice and presentation of the terms below the “Place Order” button were not conspicuous enough in this instance to bind the plaintiff.

In May 2024, we released Part I of this series, in which we discussed agentic AI as an emerging technology enabling a new generation of AI-based hardware devices and software tools that can take actions on behalf of users. It turned out we were early – very early – to the discussion; several months elapsed before agentic AI became as widely known and discussed as it is today. In this Part II, we return to the topic to explore legal issues concerning user liability for agentic AI-assisted transactions and open questions about existing legal frameworks’ applicability to the new generation of AI-assisted transactions.

Background: Snapshot of the Current State of “Agents”[1]

“Intelligent” electronic assistants are not new—the original generation, such as Amazon’s Alexa, has been offering narrow capabilities for specific tasks for more than a decade. However, as OpenAI’s CEO Sam Altman commented in May 2024, an advanced AI assistant or “super-competent colleague” could be the killer app of the future. Later, Altman noted during a Reddit AMA session: “We will have better and better models. But I think the thing that will feel like the next giant breakthrough will be agents.” A McKinsey report on AI agents echoes this sentiment: “The technology is moving from thought to action.” Agentic AI represents not only a technological evolution, but also a potential means to further spread (and monetize) AI technology beyond its current uses by consumers and businesses. Major AI developers and others have already embraced this shift, announcing initiatives in the agentic AI space. For example:

  • Anthropic announced an updated frontier AI model in public beta capable of interacting with and using computers like human users;
  • Google unveiled Gemini 2.0, its new AI model for the agentic era, alongside Project Mariner, a prototype leveraging Gemini 2.0 to perform tasks via an experimental Chrome browser extension (while keeping a “human in the loop”);
  • OpenAI launched a “research preview” of Operator, an AI tool that can interface with computers on users’ behalf, and launched beta feature “Tasks” in ChatGPT to facilitate ongoing or future task management beyond merely responding to real time prompts;
  • LexisNexis announced the availability of “Protégé,” a personalized AI assistant with agentic AI capabilities;
  • Perplexity recently rolled out “Shop Like a Pro,” an AI-powered shopping recommendation and buying feature that allows Perplexity Pro users to research products and, for those merchants whose sites are integrated with the tool, purchase items directly on Perplexity; and
  • Amazon announced Alexa+, a new generation of Alexa that has agentic capabilities, including enabling Alexa to navigate the internet and execute tasks, as well as Amazon Nova Act, an AI model designed to perform actions within a web browser.

Beyond these examples, other startups and established tech companies are also developing AI “agents,” both in the United States and overseas (including the invite-only release of Manus AI by Butterfly Effect, an AI developer in China). As a recent Microsoft piece speculates, the generative AI future may involve a “new ecosystem or marketplace of agents,” akin to the current smartphone app ecosystem. Although early agentic AI device releases have received mixed reviews and seem to still have much unrealized potential, they demonstrate the capability of such devices to execute multistep actions in response to natural language instructions.

Like prior technological revolutions—personal computers in the 1980s, e-commerce in the 1990s and smartphones in the 2000s—the emergence of agentic AI technology challenges existing legal frameworks. Let’s take a look at some of those issues – starting with basic questions about contract law.

On September 17, 2024, Governor Gavin Newsom signed AB 2602 into California law (to be codified at Cal. Lab. Code §927).  The law addresses the use of “digital replicas” of performers.  As defined in the law, a digital replica is:

a computer-generated, highly realistic electronic representation that is readily identifiable

On May 9, 2024, a California district court dismissed, with leave to amend, the complaint brought by social media platform X Corp. (formerly Twitter) against data provider Bright Data Ltd. (“Bright Data”) over Bright Data’s alleged scraping of publicly available data from X for use in data products sold to third parties.

Generative AI has been most synonymous in the public mind with “AI” since the commercial breakout of ChatGPT in November 2022. Consumers and businesses have seen the fruits of impressive innovation in various generative models’ ability to create audio, video, images and text, analyze and transform data, perform Q&A chatbot

On January 23, 2024, a California district court released its opinion in a closely watched scraping dispute between the social media platform Meta and data provider Bright Data Ltd. (“Bright Data”) over Bright Data’s alleged scraping of publicly available data from Facebook and Instagram for use in data products sold to third parties.

Last week, OpenAI rolled out ChatGPT Team, a flexible subscription structure for small-to-medium-sized businesses (with two or more users) that are not large enough to warrant the expense of a ChatGPT Enterprise subscription (which requires a minimum of 150 licensed users). Despite being less expensive than its Enterprise counterpart, ChatGPT Team provides access to the latest OpenAI models with the robust privacy, security and confidentiality protections that previously applied only to the ChatGPT Enterprise subscription and that are far more protective than the terms governing ordinary personal accounts. This development could be the proverbial “game changer” for smaller businesses: for the first time, they can access tools previously available only to OpenAI Enterprise customers, under OpenAI’s more favorable Business Terms and the privacy policies listed on the Enterprise Privacy page, without making the financial or technical commitment required under an Enterprise relationship.

Thus, for example, ChatGPT Team customers would be covered by the Business Terms’ non-training commitment (OpenAI’s Team announcement states: “We never train on your business data or conversations”), by other data security controls, and by OpenAI’s “Copyright Shield,” which offers indemnity for customers in the event that a generated output infringes third-party IP.[1] Moreover, under the enterprise-level privacy protections, customers can also create custom GPT models that are for in-house use and not shared with anyone else.

As noted above, until now, the protections under the OpenAI Business Terms were likely beyond reach for many small and medium-sized businesses, either because of the financial commitment required by OpenAI’s Enterprise agreement or because of the unavailability of the technical infrastructure necessary to implement the OpenAI API Service. In the past, such smaller entities might resort to having employees use free or paid OpenAI products under individual accounts, with internal precautions (like restrictive AI policies) in place to avoid confidentiality and privacy concerns.[2]

As we’ve seen over the last year, one generative AI provider’s rollout of a new product, tool or contractual protection often results in other providers following suit. Indeed, earlier this week Microsoft announced that it is “expanding Copilot for Microsoft 365 availability to small and medium-sized businesses.” With businesses of all sizes using, testing or developing custom generative AI products to stay abreast of the competition, we will watch for future announcements from other providers about more flexible licensing plans for small-to-medium-sized businesses.