A recently filed federal court complaint tests the enforceability of restrictive terms in a data license against the use of licensed data for generative AI purposes. The outcome of the case may turn on the interpretation of broad terms such as “training,” “internal research,” “distribution” and “publication.”

UPDATE: On December 18, 2025, the court denied defendant Alexi Technologies Inc.’s (“Alexi”) motion for a temporary restraining order (TRO), which was based on Alexi’s contention that, without access to Fastcase’s updated data, it would suffer irreparable harm, including lost revenue, lost customers and reputational damage. In a brief order, the court found that Alexi’s claimed harm is “too speculative and is not sufficiently corroborated to merit emergency relief at this time.”

On November 26, 2025, legal research platform Fastcase, Inc. (“Fastcase”) sued legal AI company Alexi Technologies Inc. (“Alexi”) in the U.S. District Court for the District of Columbia, alleging that Alexi used licensed “Fastcase Data” (i.e., a sophisticated, extensively tagged legal research database) to train and power a commercial AI legal research product in violation of a 2021 data license agreement. (Fastcase, Inc. v. Alexi Technologies Inc., No. 25-04159 (D.D.C. filed Nov. 26, 2025)).

The complaint is particularly interesting because it involves a 2021 agreement – entered into before generative AI was a generally known technology – and thus presents the age-old question of how to apply the terms of an agreement to a technology not contemplated at the time the agreement was entered into.

Regardless of the outcome of this case, the complaint highlights the importance of precise license drafting – for both licensors and licensees – in the age of AI. Parties should focus on well-defined understandings of how data can be used, shared and commercialized at a time when every company is seeking to leverage AI. As this complaint illustrates, vague terms can lead to uncertainty and legal disputes.[1]

Of course, we do not know if this litigation will proceed to summary judgment or trial, but if it does, we may have an opportunity to see how a court will interpret and potentially enforce broad and general restrictions in a data license (e.g., no publication, distribution, commercial use, competitive use, use for internal research purposes only) in the context of generative artificial intelligence.[2]

In May 2024, we released Part I of this series, in which we discussed agentic AI as an emerging technology enabling a new generation of AI-based hardware devices and software tools that can take actions on behalf of users. It turned out we were early – very early – to the discussion, with several months elapsing before agentic AI became as widely known and discussed as it is today. In this Part II, we return to the topic to explore legal issues concerning user liability for agentic AI-assisted transactions and open questions about existing legal frameworks’ applicability to the new generation of AI-assisted transactions.

Background: Snapshot of the Current State of “Agents”[1]

“Intelligent” electronic assistants are not new – the original generation, exemplified by Amazon’s Alexa, has been offering narrow capabilities for specific tasks for more than a decade. However, as OpenAI’s CEO Sam Altman commented in May 2024, an advanced AI assistant or “super-competent colleague” could be the killer app of the future. Later, Altman noted during a Reddit AMA session: “We will have better and better models. But I think the thing that will feel like the next giant breakthrough will be agents.” A McKinsey report on AI agents echoes this sentiment: “The technology is moving from thought to action.” Agentic AI represents not only a technological evolution, but also a potential means to further spread (and monetize) AI technology beyond its current uses by consumers and businesses. Major AI developers and others have already embraced this shift, announcing initiatives in the agentic AI space. For example:

  • Anthropic announced an updated frontier AI model in public beta capable of interacting with and using computers like human users;
  • Google unveiled Gemini 2.0, its new AI model for the agentic era, alongside Project Mariner, a prototype leveraging Gemini 2.0 to perform tasks via an experimental Chrome browser extension (while keeping a “human in the loop”);
  • OpenAI launched a “research preview” of Operator, an AI tool that can interface with computers on users’ behalf, and launched beta feature “Tasks” in ChatGPT to facilitate ongoing or future task management beyond merely responding to real time prompts;
  • LexisNexis announced the availability of “Protégé,” a personalized AI assistant with agentic AI capabilities;
  • Perplexity recently rolled out “Shop Like a Pro,” an AI-powered shopping recommendation and buying feature that allows Perplexity Pro users to research products and, for those merchants whose sites are integrated with the tool, purchase items directly on Perplexity; and
  • Amazon announced Alexa+, a new generation of Alexa that has agentic capabilities, including enabling Alexa to navigate the internet and execute tasks, as well as Amazon Nova Act, an AI model designed to perform actions within a web browser.

Beyond these examples, other startups and established tech companies are also developing AI “agents,” both in the U.S. and overseas (including the invite-only release of Manus AI by Butterfly Effect, an AI developer in China). As a recent Microsoft piece speculates, the generative AI future may involve a “new ecosystem or marketplace of agents,” akin to the current smartphone app ecosystem. Although early agentic AI device releases have received mixed reviews and still seem to have much unrealized potential, they demonstrate the capability of such devices to execute multistep actions in response to natural language instructions.

Like prior technological revolutions—personal computers in the 1980s, e-commerce in the 1990s and smartphones in the 2000s—the emergence of agentic AI technology challenges existing legal frameworks. Let’s take a look at some of those issues – starting with basic questions about contract law.

After several weeks of handwringing about the fate of SB 1047 – the controversial AI safety bill that would have required developers of powerful AI models and entities providing the computing resources to train such models to put appropriate safeguards and policies into place to prevent critical harms – California

On September 17, 2024, Governor Gavin Newsom signed AB 2602 into California law (to be codified at Cal. Lab. Code §927).  The law addresses the use of “digital replicas” of performers.  As defined in the law, a digital replica is:

a computer-generated, highly realistic electronic representation that is readily identifiable

In an ongoing dispute commenced in 2016, the Eleventh Circuit considered – for the second time in the life of the litigation – trade secret misappropriation and related copyright claims in a scraping case between direct competitors.

The case involved plaintiff Compulife Software, Inc. (“Plaintiff” or “Compulife”) – in the business of

Since the commercial breakout of ChatGPT in November 2022, generative AI has been nearly synonymous with “AI” in the public mind. Consumers and businesses have seen the fruits of impressive innovation in various generative models’ ability to create audio, video, images and text, analyze and transform data, perform Q&A chatbot

Last week, OpenAI rolled out ChatGPT Team, a flexible subscription structure for small- to medium-sized businesses (with two or more users) that are not large enough to warrant the expense of a ChatGPT Enterprise subscription (which requires a minimum of 150 licensed users). Despite being less expensive than its Enterprise counterpart, ChatGPT Team offers access to the latest OpenAI models with the robust privacy, security and confidentiality protections that previously applied only to the ChatGPT Enterprise subscription and that are far more protective than the terms governing ordinary personal accounts. This development could be the proverbial “game changer” for smaller businesses: for the first time, they can access tools previously available only to OpenAI Enterprise customers, under OpenAI’s more favorable Business Terms and the privacy policies listed on the Enterprise Privacy page, without making the financial or technical commitment an Enterprise relationship requires.

Thus, for example, ChatGPT Team customers would be covered by the Business Terms’ non-training commitment (OpenAI’s Team announcement states: “We never train on your business data or conversations”), by other data security controls, and by OpenAI’s “Copyright Shield,” which offers indemnity to customers in the event that a generated output infringes third-party IP.[1] Moreover, under the enterprise-level privacy protections, customers can also create custom GPT models for in-house use that are not shared with anyone else.

As noted above, until now the protections under the OpenAI Business Terms were likely beyond reach for many small- and medium-sized businesses, either because of the financial commitment required by OpenAI’s Enterprise agreement or because they lacked the technical infrastructure necessary to implement the OpenAI API Service. In the past, such smaller entities might resort to having employees use free or paid OpenAI products under individual accounts, with internal precautions (like restrictive AI policies) in place to address confidentiality and privacy concerns.[2]

As we’ve seen over the last year, one generative AI provider’s rollout of a new product, tool or contractual protection often results in other providers following suit. Indeed, earlier this week Microsoft announced that it is “expanding Copilot for Microsoft 365 availability to small and medium-sized businesses.” With businesses of all sizes using, testing or developing custom GAI products to stay abreast of the competition, we will watch for future announcements from other providers about more flexible licensing plans for small- to medium-sized businesses.

On December 19, 2023, AI research company Anthropic announced that it had updated and made publicly available its Commercial Terms of Service (effective Jan. 1, 2024) to, among other things, indemnify its enterprise Claude API customers from copyright infringement claims made against them for “their authorized use of our services

In a previous post, we highlighted three key items to look out for when assessing the terms and conditions of generative artificial intelligence (“GAI”) tools: training rights, use restrictions and responsibility for outputs. With respect to responsibility for outputs specifically, we detailed Microsoft’s shift away, through its Copilot Copyright Commitment (discussed in greater detail below), from the blanket disclaimer of all responsibility for GAI tools’ outputs that we initially saw from most GAI providers.

In the latest expansion of intellectual property protection offered by a major GAI provider, OpenAI’s CEO Sam Altman announced to OpenAI “DevDay” conference attendees that “we can defend our customers and pay the costs incurred if you face legal claims around copyright infringement, and this applies both to ChatGPT Enterprise and the API.”