In the first half of 2023, a deluge of new generative artificial intelligence (“GAI”) tools hit the market, with companies ranging from startups to tech giants rolling out new products. In the large language model space alone, we have seen OpenAI’s GPT-4, Meta’s LLaMA, Anthropic’s Claude 2, Microsoft’s Bing AI, and others.

A proliferation of tools has meant a proliferation of terms and conditions. Many popular tools have both a free version and a paid version, each subject to different terms, and several providers also offer ‘enterprise’-grade tools for their largest customers. For businesses looking to trial GAI, the number of options can be daunting.

This article sets out three key items to check when evaluating a GAI tool’s terms and conditions. Although determining which tool is right for a particular business is a complex question that requires an analysis of the terms and conditions in their entirety – not to mention nonlegal considerations like pricing and technical capabilities – the items below can give prospective customers a starting point, as well as a bellwether for spotting terms and conditions that are more or less aggressive than the market standard.

On October 30, 2023, President Biden issued an “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” (accompanied by a White House Fact Sheet) designed to spur new AI safety and security standards, encourage the development of privacy-preserving technologies in conjunction with AI training, address certain instances of algorithmic discrimination, advance the responsible use of AI in healthcare, study the impacts of AI on the labor market, support AI research and a competitive environment in the industry, and issue guidance on the use of AI by federal agencies. This latest move builds on the White House’s previously released “Blueprint for an AI Bill of Rights” and its announcement this past summer that it had secured voluntary commitments from major AI companies focusing on what the White House termed “three principles that must be fundamental to the future of AI – safety, security, and trust.”

In recent years there has been great demand for information about job listings, company reviews, and employment data. Recruiters, consultants, analysts, and employment-related service providers, amongst others, are aggressively scraping job-posting sites to extract this type of information. Recall, for example, the long-running, landmark hiQ litigation over the scraping of public LinkedIn data.

The two most recent disputes over the scraping of employment and job-related data were brought by Jobiak LLC (“Jobiak”), an AI-based recruitment platform. Jobiak filed two nearly identical scraping suits in California district court alleging that competitors unlawfully scraped its database and copied its optimized job listings without authorization. (Jobiak LLC v. Botmakers LLC, No. 23-08604 (C.D. Cal. filed Oct. 12, 2023); Jobiak LLC v. Aspen Technology Labs, Inc., No. 23-08728 (C.D. Cal. filed Oct. 17, 2023)).

In the rapidly evolving AI space, the last few days of this week saw significant developments arrive perhaps even faster than usual. For example, seven AI companies agreed to voluntary guidelines covering AI safety and security, and ChatGPT rolled out a custom instructions feature to streamline usage. Relatedly, Microsoft issued a transparency note for the Azure OpenAI Service. On top of that, this week saw announcements of a number of generative AI commercial ventures that are beyond the scope of this particular post.

On July 12, 2023, Nikhil Rathi, the CEO of the UK’s Financial Conduct Authority (“FCA”), delivered a speech on the FCA’s regulatory approach to Big Tech and Artificial Intelligence (“AI”). Below are some of the key points discussed at the event:

  • AI and Market Integrity
One of the many legal questions swirling around in the world of generative AI (“GenAI”) is to what extent Section 230 of the Communications Decency Act (CDA) applies to the provision of GenAI. Can CDA immunity apply to GenAI-generated output and protect GenAI providers from potential third-party liability?

On June 14, 2023, Senators Richard Blumenthal and Josh Hawley introduced the “No Section 230 Immunity for AI Act,” bipartisan legislation that would expressly remove most immunity under the CDA for a provider of an interactive computer service if the conduct underlying the claim or charge “involves the use or provision of generative artificial intelligence by the interactive computer service.” While the bill would eliminate “publisher” immunity under § 230(c)(1) for claims involving the use or provision of generative artificial intelligence by an interactive computer service, immunity for so-called “Good Samaritan” blocking under § 230(c)(2)(A), which protects service providers and users from liability for claims arising out of good faith actions to screen or restrict access to “objectionable” material from their services, would not be affected.

Within the rapidly evolving artificial intelligence (“AI”) legal landscape (as explored in Proskauer’s “The Age of AI” Webinar series), there is an expectation that Congress may come together to draft some form of AI-related legislation. The focus is on how generative AI (“GenAI”) has, in the last six months or so, already created new legal, societal, and ethical questions.

Intellectual property (“IP”) protection – and, in particular, copyright – has been a forefront issue. Given the boom in GenAI, some content owners and creators have lately begun to feel that AI developers have been free riding by training GenAI models on vast swaths of web content (some of it copyrighted) without authorization, license, or reasonable royalty. Regardless of whether certain GenAI tools’ use of web-based training data and the tools’ output to users could be deemed infringement (such legal questions do not have simple answers), it is evident that the rollout of GenAI has already begun to affect the vocations of creative professionals and the value of IP for content owners, as AI-created works (or hybrid works of human/AI creation) already compete with human-created works in the marketplace. In fact, one of the issues in the Writers Guild of America strike currently affecting Hollywood concerns provisions that would govern the use of AI on projects.

On May 17, 2023, the House of Representatives Subcommittee on Courts, Intellectual Property, and the Internet held a hearing on the interoperability of AI and copyright law. There, most of the testifying witnesses agreed that Congress should consider enacting careful regulation in this area that balances innovation and creators’ rights in the context of copyright. The transformative potential of AI across industries was acknowledged by all, but the overall view was that AI should be used as a tool for human creativity rather than a replacement for it. In his opening remarks, Subcommittee Chair Darrell Issa stated that one of the purposes of the hearing was to “address properly the concerns surrounding the unauthorized use of copyrighted material, while also recognizing that the potential for generative AI can only be achieved with massive amounts of data, far more than is available outside of copyright.” The Subcommittee’s Ranking Member, Representative Henry Johnson, expressed openness to finding middle-ground solutions that balance IP rights with innovation, but voiced one of the quandaries raised by many copyright holders as to GenAI training methods: “I am hard-pressed to understand how a system that rests almost entirely on the works of others, and can be commercialized or used to develop commercial products, owes nothing, not even notice, to the owners of the works it uses to power its system.”

Back in October 2022, the Supreme Court granted certiorari in Gonzalez v. Google, an appeal that challenged whether YouTube’s targeted algorithmic recommendations qualify as “traditional editorial functions” protected by the CDA — or, rather, whether such recommendations are not the actions of a “publisher” and thus fall outside the scope of CDA immunity.