On January 23, 2024, a California district court released its opinion in a closely watched scraping dispute between the social media platform Meta and data provider Bright Data Ltd. (“Bright Data”) over Bright Data’s alleged scraping of publicly available data from Facebook and Instagram for use in data products sold to third parties.

Last week, OpenAI rolled out ChatGPT Team, a flexible subscription structure for small- to medium-sized businesses (with two or more users) that are not large enough to warrant the expense of a ChatGPT Enterprise subscription (which requires a minimum of 150 licensed users). Despite being less expensive than its Enterprise counterpart, ChatGPT Team provides access to the latest OpenAI models with the robust privacy, security and confidentiality protections that previously applied only to the ChatGPT Enterprise subscription and that are far more protective than the terms governing ordinary personal accounts. This development could be the proverbial “game changer” for smaller businesses: for the first time, they can access tools previously available only to OpenAI Enterprise customers, under OpenAI’s more favorable Business Terms and the privacy policies listed on the Enterprise Privacy page, without making the financial or technical commitment required under an Enterprise relationship.

Thus, for example, ChatGPT Team customers would be covered by the Business Terms’ non-training commitment (OpenAI’s Team announcement states: “We never train on your business data or conversations”), by other data security controls, and by OpenAI’s “Copyright Shield,” which offers indemnity for customers in the event that a generated output infringes third-party IP.[1] Moreover, under the enterprise-level privacy protections, customers can also create custom GPT models for in-house use that are not shared with anyone else.

As noted above, until now, the protections under the OpenAI Business Terms were likely beyond reach for many small and medium-sized businesses, either because of the financial commitment required by OpenAI’s Enterprise agreement or because they lacked the technical infrastructure necessary to implement the OpenAI API Service. In the past, such smaller entities might resort to having employees use free or paid OpenAI products under individual accounts, with internal precautions (like restrictive AI policies) in place to avoid confidentiality and privacy concerns.[2]

As we’ve seen over the last year, one generative AI provider’s rollout of a new product, tool or contractual protection often results in other providers following suit. Indeed, earlier this week Microsoft announced that it is “expanding Copilot for Microsoft 365 availability to small and medium-sized businesses.” With businesses of all sizes using, testing or developing custom GAI products to stay abreast of the competition, we will watch for future announcements from other providers about more flexible licensing plans for small- and medium-sized businesses.

On December 19, 2023, AI research company Anthropic announced that it had updated and made publicly available its Commercial Terms of Service (effective Jan. 1, 2024) to, among other things, indemnify its enterprise Claude API customers from copyright infringement claims made against them for “their authorized use of our services.”

  • Flight and travel data has always been valuable for data aggregators and online travel services and has prompted litigation over the years.
  • The latest suit, filed by Air Canada against a rewards travel search site, raises some interesting liability issues under the CFAA.
  • The implications of this case, if the plaintiff is successful, could impact the legal analysis of web scraping in a variety of circumstances, including for the training of generative AI models.

In a recent post, we recounted the myriad issues raised by recently filed data scraping suits involving job listings, company reviews and employment data. Soon after, another interesting scraping suit was filed, this time by a major airline against an award travel search site that aggregates fare and award travel data. Air Canada alleges that Defendant Localhost LLC (“Localhost” or “Defendant”), operator of the Seats.aero website, unlawfully bypassed technical measures and violated Air Canada’s website terms when it scraped “vast amounts” of flight data without permission and purportedly caused slowdowns to Air Canada’s site and other problems. (Air Canada v. Localhost LLC, No. 23-01177 (D. Del. filed Oct. 19, 2023)).[1]

The complaint alleges that Localhost harvested data from Air Canada’s site and systems to populate the seats.aero site, which claims to be “the fastest search engine for award travel.” 

The complaint also alleges that, in addition to scraping the Air Canada website, Localhost engaged in “API scraping” by impersonating authorized requests to Air Canada’s application programming interface.

In a previous post, we highlighted three key items to look out for when assessing the terms and conditions of generative artificial intelligence (“GAI”) tools: training rights, use restrictions and responsibility for outputs. With respect to responsibility for outputs specifically, we detailed Microsoft’s shift away, through its Copilot Copyright Commitment (discussed in greater detail below), from the blanket disclaimer of all responsibility for GAI tools’ outputs that we initially saw from most GAI providers.

In the latest expansion of intellectual property protection offered by a major GAI provider, OpenAI’s CEO Sam Altman announced to OpenAI “DevDay” conference attendees that “we can defend our customers and pay the costs incurred if you face legal claims around copyright infringement, and this applies both to ChatGPT Enterprise and the API.”

In the first half of 2023, a deluge of new generative artificial intelligence (“GAI”) tools hit the market, with companies ranging from startups to tech giants rolling out new products. In the large language model space alone, we have seen OpenAI’s GPT-4, Meta’s LLaMA, Anthropic’s Claude 2, Microsoft’s Bing AI, and others.

A proliferation of tools has meant a proliferation of terms and conditions. Many popular tools have both a free version and a paid version, each subject to different terms, and several providers also offer ‘enterprise’ grade tools to their largest customers. For businesses looking to trial GAI, the number of options can be daunting.

This article sets out three key items to check when evaluating a GAI tool’s terms and conditions. Although determining which tool is right for a particular business is a complex question that requires an analysis of terms and conditions in their entirety – not to mention nonlegal considerations like pricing and technical capabilities – the items below can provide prospective customers with a starting place, as well as a bellwether to help spot terms and conditions that are more or less aggressive than the market standard.

In recent years, there has been great demand for information about job listings, company reviews and employment data. Recruiters, consultants, analysts and employment-related service providers, among others, are aggressively scraping job-posting sites to extract that type of information. Recall, for example, the long-running, landmark hiQ scraping litigation over the scraping of public LinkedIn data.

The two most recent disputes regarding scraping of employment and job-related data were brought by Jobiak LLC (“Jobiak”), an AI-based recruitment platform. Jobiak filed two nearly identical scraping suits in California district court alleging that competitors unlawfully scraped its database and copied its optimized job listings without authorization. (Jobiak LLC v. Botmakers LLC, No. 23-08604 (C.D. Cal. filed Oct. 12, 2023); Jobiak LLC v. Aspen Technology Labs, Inc., No. 23-08728 (C.D. Cal. filed Oct. 17, 2023)).

ChatGPT has quickly become the talk of business, media and the Internet – reportedly, the application had over 100 million monthly active users in January alone.

While there are many stories of the creative, humorous, apologetic, and in some cases unsettling interactions with ChatGPT,[1] the potential business applications for ChatGPT and other emerging generative artificial intelligence applications (generally referred to in this post as “GAI”) are plentiful. Many businesses see GAI as a potential game changer. But, as with other foundational technology developments, it presents new issues and possible areas of risk.

ChatGPT is already being used by employees and consultants in business today. Thus, businesses are well advised to evaluate the issues and risks to determine what policies or technical guardrails, if any, should be imposed on GAI’s use in the workplace.

At the close of 2022, New York Governor Kathy Hochul signed the “Digital Fair Repair Act” (S4101A/A7006-B) (to be codified at N.Y. GBL §399-nn) (the “Act”). The law makes New York the first state in the country to pass a consumer electronics right-to-repair law.[1] Similar bills are pending in other states. The Act is a slimmed down version of the bill that was first passed by the legislature last July.

Generally speaking, the Act will require original equipment manufacturers (OEMs), or their authorized repair providers, to make the parts, tools, and diagnostic and repair information required for the maintenance and repair of “digital electronic equipment” available to independent repair providers and consumers on “fair and reasonable terms” (subject to certain exceptions). The law applies only to products that are both manufactured for the first time and sold or used in the state for the first time on or after the law’s effective date of July 1, 2023 (thus exempting electronic products currently owned by consumers).