In a previous post, we highlighted three key items to look out for when assessing the terms and conditions of generative artificial intelligence (“GAI”) tools: training rights, use restrictions and responsibility for outputs. With respect to responsibility for outputs specifically, we detailed Microsoft’s shift, through its Copilot Copyright Commitment (discussed in greater detail below), away from the blanket disclaimer of all responsibility for GAI tools’ outputs that we initially saw from most GAI providers.

In the latest expansion of intellectual property protection offered by a major GAI provider, OpenAI’s CEO Sam Altman announced to OpenAI “DevDay” conference attendees that “we can defend our customers and pay the costs incurred if you face legal claims around copyright infringement, and this applies both to ChatGPT Enterprise and the API.”

In the first half of 2023, a deluge of new generative artificial intelligence (“GAI”) tools hit the market, with companies ranging from startups to tech giants rolling out new products. In the large language model space alone, we have seen OpenAI’s GPT-4, Meta’s LLaMA, Anthropic’s Claude 2, Microsoft’s Bing AI, and others.

A proliferation of tools has meant a proliferation of terms and conditions. Many popular tools have both a free version and a paid version, each subject to different terms, and several providers also offer ‘enterprise’-grade tools for their largest customers. For businesses looking to trial GAI, the number of options can be daunting.

This article sets out three key items to check when evaluating a GAI tool’s terms and conditions. Although determining which tool is right for a particular business is a complex question that requires an analysis of terms and conditions in their entirety – not to mention nonlegal considerations like pricing and technical capabilities – the below items can provide prospective customers with a starting place, as well as a bellwether to help spot terms and conditions that are more or less aggressive than the market standard.

UPDATE: On February 5, 2024, the California district court granted the defendant Aspen Technology Labs, Inc.’s motion to dismiss Jobiak LLC’s web scraping complaint for lack of personal jurisdiction, with leave to amend. The court found that Jobiak had not adequately alleged that its copyright and tort-related claims arose out of the defendant’s forum-related activities and that there were no allegations that Jobiak’s database or website was hosted on servers in the California forum.  On March 8, 2024, the court dismissed the action with prejudice, as Jobiak did not submit an amended complaint within the time allowed by the court.  

In recent years there has been a great demand for information about job listings, company reviews and employment data.   Recruiters, consultants, analysts and employment-related service providers, amongst others, are aggressively scraping job-posting sites to extract that type of information. Recall, for example, the long-running, landmark hiQ scraping litigation over the scraping of public LinkedIn data.

The two most recent disputes regarding scraping of employment and job-related data were brought by Jobiak LLC (“Jobiak”), an AI-based recruitment platform.  Jobiak filed two nearly-identical scraping suits in California district court alleging that competitors unlawfully scraped its database and copied its optimized job listings without authorization. (Jobiak LLC v. Botmakers LLC, No. 23-08604 (C.D. Cal. Filed Oct. 12, 2023); Jobiak LLC v. Aspen Technology Labs, Inc., No. 23-08728 (C.D. Cal. Filed Oct. 17, 2023)).

Within the rapidly evolving artificial intelligence (“AI”) legal landscape (as explored in Proskauer’s “The Age of AI” Webinar series), there is an expectation that Congress may come together to draft some form of AI-related legislation. The focus stems from the new legal, societal, and ethical questions that generative AI (“GenAI”) has already created in just the last six months or so.

Intellectual property (“IP”) protection – and, in particular, copyright – has been a forefront issue. Given the boom in GenAI, some content owners and creators have lately begun to feel that AI developers have been free riding by training GenAI models on a vast swath of web content (some of it copyrighted) without authorization, license or reasonable royalty. Regardless of whether certain GenAI tools’ use of web-based training data and the tools’ output to users could be deemed infringement (such legal questions do not have simple answers), it is evident that the rollout of GenAI has already begun to affect the vocations of creative professionals and the value of IP for content owners, as AI-created works (or hybrid works of human/AI creation) are already competing with human-created works in the marketplace. In fact, one of the issues in the Writers Guild of America strike currently affecting Hollywood concerns provisions that would govern the use of AI on projects.

On May 17, 2023, the House of Representatives Subcommittee on Courts, Intellectual Property, and the Internet held a hearing on the interoperability of AI and copyright law. There, most of the testifying witnesses agreed that Congress should consider enacting careful regulation in this area that balances innovation and creators’ rights in the context of copyright. The transformative potential of AI across industries was acknowledged by all, but the overall view was that AI should be used as a tool for human creativity rather than a replacement. In his opening remarks, Subcommittee Chair, Representative Darrell Issa, stated that one of the purposes of the hearing was to “address properly the concerns surrounding the unauthorized use of copyrighted material, while also recognizing that the potential for generative AI can only be achieved with massive amounts of data, far more than is available outside of copyright.” The Ranking Member of the Subcommittee, Representative Henry Johnson, expressed an openness for finding middle ground solutions to balance IP rights with innovation but stated one of the quandaries voiced by many copyright holders as to GenAI training methods: “I am hard-pressed to understand how a system that rests almost entirely on the works of others, and can be commercialized or used to develop commercial products, owes nothing, not even notice, to the owners of the works it uses to power its system.”

Competition between Amazon’s third-party merchants is notoriously fierce. The online retail giant often finds itself playing the role of referee, banning what it considers unfair business practices (such as offering free products in exchange for perfect reviews, or targeting competitors with so-called “review bombing”). Last month, in the latest round

ChatGPT has quickly become the talk of business, media and the Internet – reportedly, the application had over 100 million monthly active users in January alone.

While there are many stories of the creative, humorous, apologetic, and in some cases unsettling interactions with ChatGPT,[1] the potential business applications for ChatGPT and other emerging generative artificial intelligence applications (generally referred to in this post as “GAI”) are plentiful. Many businesses see GAI as a potential game-changer.  But, like other new foundational technology developments, new issues and possible areas of risk are presented.

ChatGPT is being used by employees and consultants in business today.  Thus, businesses are well advised to evaluate the issues and risks to determine what policies or technical guardrails, if any, should be imposed on GAI’s use in the workplace.

At the close of 2022, New York Governor Kathy Hochul signed the “Digital Fair Repair Act” (S4101A/A7006-B) (to be codified at N.Y. GBL §399-nn) (the “Act”). The law makes New York the first state in the country to pass a consumer electronics right-to-repair law.[1] Similar bills are pending in other states. The Act is a slimmed down version of the bill that was first passed by the legislature last July.

Generally speaking, the Act will require original equipment manufacturers (OEMs), or their authorized repair providers, to make the parts, tools, and diagnostic and repair information required for the maintenance and repair of “digital electronic equipment” available to independent repair providers and consumers, on “fair and reasonable terms” (subject to certain exceptions). The law only applies to products that are both manufactured for the first time and sold or used in the state for the first time on or after the law’s effective date of July 1, 2023 (thus exempting electronic products currently owned by consumers).

Can internet service providers necessarily be compelled to unmask anonymous copyright infringers? In an opinion touching on Digital Millennium Copyright Act (DMCA) subpoenas, First Amendment concerns, and fair use, the Northern District of California said, in this one particular instance, no, granting Twitter’s motion to quash a subpoena seeking to reveal information behind an anonymous poster. (In re DMCA § 512(h) Subpoena to Twitter, Inc., No. 20-80214 (N.D. Cal. June 21, 2022)). The anonymous figure at the center of the dispute is @CallMeMoneyBags, an anonymous Twitter user who posts criticisms of wealthy people—particularly those working in tech, finance, and politics. Some such criticism lies at the heart of this dispute.

On July 30, 2021, a New York district court declined to dismiss copyright infringement claims with respect to an online article that included an “embedded” video (i.e., shown via a link to a video hosted on another site).  The case involved a video hosted on a social media platform that made embedding available as a function of the platform.  The court ruled that the plaintiff-photographer plausibly alleged that the defendants’ “embed” may constitute copyright infringement and violate his display right in the copyrighted video, rejecting the defendants’ argument that embedding is not a “display” when the image at issue remains on a third party’s server (Nicklen v. Sinclair Broadcast Group, Inc., No. 20-10300 (S.D.N.Y. July 30, 2021)).  Notably, this is the second New York court to decline to adopt the Ninth Circuit’s “server test,” first articulated in the 2007 Perfect 10 decision, which held that the infringement of the public display right in a photographic image depends, in part, on where the image was hosted.  With this being the latest New York court to find the server test inapt for an online infringement case outside of the search engine context (even if other meritorious defenses may exist), website publishers have received another stark reminder to reexamine inline linking practices.

In a narrowly drawn, yet significant decision, the Supreme Court reversed the Federal Circuit and ruled that Google LLC’s (“Google”) copying of some of the Sun Java Application Programming Interface (API) declaring code was a fair use as a matter of law, ending Oracle America Inc.’s (“Oracle”) infringement claims over Google’s use of portions of the Java API code in the Android mobile platform. (Google LLC v. Oracle America, Inc., No. 18-956, 593 U.S. ___ (Apr. 5, 2021)).  In reversing the 2018 Federal Circuit decision that found Google’s use of the Java API packages was not fair use, the Supreme Court, in a 6-2 decision (Justice Barrett did not take part in the case), found that, where Google reimplemented the Java user interface, taking only what was needed to allow outside developers to work in a new and transformative mobile smartphone program, Google’s copying of the Sun Java API was a fair use as a matter of law. This decade-long dispute had been previously dubbed “The World Series of IP cases” by the trial court judge, and like many classic series, this one culminated in a winner-take-all Game 7 at the highest court.

Oracle is one of the most notable Supreme Court decisions affecting the software and technology industry in recent memory since, perhaps, the Court’s 2010 Bilski patent opinion, its 2012 Jones decision on GPS tracking, privacy and the Fourth Amendment, and its 2005 Grokster decision on copyright inducement in the peer-to-peer network context, and it is certainly the most notable decision implicating fair use since the well-cited 1994 Campbell decision that expounded on the nature of “transformative” use. It was no surprise that this case attracted a stack of amicus briefs from various technology companies, organizations, and academia. In the months following oral argument, it was difficult to discern how the Court would decide the case – would it rule on procedural grounds based on the Federal Circuit’s standard of review of the jury verdict on fair use, on the issue of the copyrightability of the Java API packages, directly on the fair use issue, or on some combination?  The majority decision is a huge victory for the idea that fair use in the software context is not only a legal defense but a beneficial method to foster innovation by developing something transformative in a new environment on top of the functional building blocks that came before. One has to think hard to recall an opinion involving software and technology that referenced and applied the big-picture principles of copyright – “to stimulate artistic creativity for the general public good,” as the Supreme Court once stated in a prior case – so indelibly into the fair use analysis.

The decision is also notable for the potential impact on copyright’s “transformative use test.” By considering Google’s intent for using the Java API code, the Court’s discussion of what constitutes a “transformative” use appears to diverge somewhat from recent Circuit Court holdings outside the software context.  The decision may redirect the transformative use analysis going forward, or future decisions may cabin the holding to the software context.