After several weeks of handwringing about the fate of SB 1047 – the controversial AI safety bill that would have required developers of powerful AI models and entities providing the computing resources to train such models to put appropriate safeguards and policies into place to prevent critical harms – California Governor Gavin Newsom vetoed the bill on September 29, 2024.

On September 17, 2024, Governor Gavin Newsom signed AB 2602 into California law (to be codified at Cal. Lab. Code §927).  The law addresses the use of “digital replicas” of performers.  As defined in the law, a digital replica is:

a computer-generated, highly realistic electronic representation that is readily identifiable as the voice or visual likeness of an individual

On March 21, 2024, in a bold regulatory move, Tennessee Governor Bill Lee signed the Ensuring Likeness Voice and Image Security (“ELVIS”) Act (Tenn. Code Ann. §47-25-1101 et seq.) – a law which, as Gov. Lee stated, covers “new, personalized generative AI cloning models and services that enable human impersonation.”

Generative AI has been all but synonymous with “AI” in the public mind since the commercial breakout of ChatGPT in November 2022. Consumers and businesses have seen the fruits of impressive innovation in various generative models’ ability to create audio, video, images and text, analyze and transform data, and perform Q&A chatbot functions.

Last week, OpenAI rolled out ChatGPT Team, a flexible subscription structure for small- to medium-sized businesses (with two or more users) that are not large enough to warrant the expense of a ChatGPT Enterprise subscription (which requires a minimum of 150 licensed users).  Despite being less expensive than its Enterprise counterpart, ChatGPT Team provides access to the latest OpenAI models with the robust privacy, security and confidentiality protections that previously applied only to the ChatGPT Enterprise subscription and which are far more protective than the terms that govern ordinary personal accounts. This development could be the proverbial “game changer” for smaller businesses: for the first time, they can access tools previously available only to OpenAI Enterprise customers, under OpenAI’s more favorable Business Terms and the privacy policies listed on the Enterprise Privacy page, without making the financial or technical commitment required under an Enterprise relationship.

Thus, for example, ChatGPT Team customers would be covered by the Business Terms’ non-training commitment (OpenAI’s Team announcement states: “We never train on your business data or conversations”), by other data security controls, and by OpenAI’s “Copyright Shield,” which offers indemnity for customers in the event that a generated output infringes third-party IP.[1] Moreover, under the enterprise-level privacy protections, customers can also create custom GPT models for in-house use that are not shared with anyone else.

As noted above, until now, the protections under the OpenAI Business Terms were likely beyond reach for many small and medium-sized businesses, either because of the financial commitment required by OpenAI’s Enterprise agreement or because they lacked the technical infrastructure necessary to implement the OpenAI API Service. In the past, such smaller entities might resort to having employees use free or paid OpenAI products under individual accounts, with internal precautions (like restrictive AI policies) in place to avoid confidentiality and privacy concerns.[2]

As we’ve seen over the last year, one generative AI provider’s rollout of a new product, tool or contractual protection often results in other providers following suit. Indeed, earlier this week Microsoft announced that it is “expanding Copilot for Microsoft 365 availability to small and medium-sized businesses.” With businesses of all sizes using, testing or developing custom GAI products to stay abreast of the competition, we will watch for future announcements from other providers about more flexible licensing plans for small- to medium-sized businesses.

On December 19, 2023, AI research company Anthropic announced that it had updated and made publicly available its Commercial Terms of Service (effective January 1, 2024) to, among other things, indemnify its enterprise Claude API customers from copyright infringement claims made against them for “their authorized use of our services or their outputs.”

In a previous post, we highlighted three key items to look out for when assessing the terms and conditions of generative artificial intelligence (“GAI”) tools: training rights, use restrictions and responsibility for outputs. With respect to responsibility for outputs specifically, we detailed how Microsoft, through its Copilot Copyright Commitment (discussed in greater detail below), shifted away from the blanket disclaimer of all responsibility for GAI tools’ outputs that we initially saw from most GAI providers.

In the latest expansion of intellectual property protection offered by a major GAI provider, OpenAI’s CEO Sam Altman announced to OpenAI “DevDay” conference attendees that “we can defend our customers and pay the costs incurred if you face legal claims around copyright infringement, and this applies both to ChatGPT Enterprise and the API.”

In the first half of 2023, a deluge of new generative artificial intelligence (“GAI”) tools hit the market, with companies ranging from startups to tech giants rolling out new products. In the large language model space alone, we have seen OpenAI’s GPT-4, Meta’s LLaMA, Anthropic’s Claude 2, Microsoft’s Bing AI, and others.

A proliferation of tools has meant a proliferation of terms and conditions. Many popular tools have both a free version and a paid version, each subject to different terms, and several providers also offer “enterprise”-grade tools available to their largest customers. For businesses looking to trial GAI, the number of options can be daunting.

This article sets out three key items to check when evaluating a GAI tool’s terms and conditions. Although determining which tool is right for a particular business is a complex question that requires an analysis of the terms and conditions in their entirety – not to mention nonlegal considerations like pricing and technical capabilities – the items below can give prospective customers a starting place, as well as a bellwether to help spot terms and conditions that are more or less aggressive than the market standard.

On October 30, 2023, President Biden issued an “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” (Fact Sheet, here) designed to spur new AI safety and security standards, encourage the development of privacy-preserving technologies in conjunction with AI training, address certain instances of algorithmic discrimination, advance the responsible use of AI in healthcare, study the impacts of AI on the labor market, support AI research and a competitive environment in the industry, and issue guidance on the use of AI by federal agencies. This latest move builds on the White House’s previously released “Blueprint for an AI Bill of Rights” and its announcement this past summer that it had secured voluntary commitments from major AI companies focused on what the White House termed “three principles that must be fundamental to the future of AI – safety, security, and trust.”