Late last year, Chegg Inc. (“Chegg”), an online learning platform, obtained a preliminary injunction based on allegations that the various operators of the Homeworkify website (“Defendants”) – which allows users to view Chegg’s paywalled solutions without creating an account – violated the Computer Fraud and Abuse Act (CFAA). (Chegg

On January 23, 2024, a California district court released its opinion in a closely watched scraping dispute between the social media platform Meta and data provider Bright Data Ltd. (“Bright Data”) over Bright Data’s alleged scraping of publicly available data from Facebook and Instagram for use in data products sold to third parties.

  • Flight and travel data has always been valuable for data aggregators and online travel services and has prompted litigation over the years.
  • The latest suit, filed by Air Canada against a rewards travel search site, raises some interesting liability issues under the CFAA.
  • If Air Canada is successful, the case could impact the legal analysis of web scraping in a variety of circumstances, including the scraping of data for the training of generative AI models.

In a recent post, we recounted the myriad issues raised by recently filed data scraping suits involving job listings, company reviews and employment data.  Soon after, another interesting scraping suit was filed, this time by a major airline against an award travel search site that aggregates fare and award travel data.  Air Canada alleges that Defendant Localhost LLC (“Localhost” or “Defendant”), operator of the Seats.aero website, unlawfully bypassed technical measures and violated Air Canada’s website terms when it scraped “vast amounts” of flight data without permission, purportedly causing slowdowns on Air Canada’s site and other problems. (Air Canada v. Localhost LLC, No. 23-01177 (D. Del. filed Oct. 19, 2023)).[1]

The complaint alleges that Localhost harvested data from Air Canada’s site and systems to populate the seats.aero site, which claims to be “the fastest search engine for award travel.” 

The complaint also alleges that, in addition to scraping the Air Canada website, Localhost engaged in “API scraping” by impersonating authorized requests to Air Canada’s application programming interface.

In recent years, there has been great demand for information about job listings, company reviews and employment data.  Recruiters, consultants, analysts and employment-related service providers, among others, have been aggressively scraping job-posting sites to extract that type of information. Recall, for example, the long-running, landmark hiQ litigation over the scraping of public LinkedIn data.

The two most recent disputes regarding scraping of employment and job-related data were brought by Jobiak LLC (“Jobiak”), an AI-based recruitment platform.  Jobiak filed two nearly identical scraping suits in California district court alleging that competitors unlawfully scraped its database and copied its optimized job listings without authorization. (Jobiak LLC v. Botmakers LLC, No. 23-08604 (C.D. Cal. filed Oct. 12, 2023); Jobiak LLC v. Aspen Technology Labs, Inc., No. 23-08728 (C.D. Cal. filed Oct. 17, 2023)).

One of the many legal questions swirling around in the world of generative AI (“GenAI”) is to what extent Section 230 of the Communications Decency Act (CDA) applies to the provision of GenAI.  Can CDA immunity apply to GenAI-generated output and protect GenAI providers from potential third-party liability?

On June 14, 2023, Senators Richard Blumenthal and Josh Hawley introduced the “No Section 230 Immunity for AI Act,” bipartisan legislation that would expressly remove most immunity under the CDA for a provider of an interactive computer service if the conduct underlying the claim or charge “involves the use or provision of generative artificial intelligence by the interactive computer service.” While the bill would eliminate “publisher” immunity under § 230(c)(1) for claims involving the use or provision of generative artificial intelligence by an interactive computer service, it would not affect immunity for so-called “Good Samaritan” blocking under § 230(c)(2)(A), which protects service providers and users from liability for claims arising out of good faith actions to screen or restrict access to “objectionable” material on their services.

Back in October 2022, the Supreme Court granted certiorari in Gonzalez v. Google, an appeal that challenged whether YouTube’s targeted algorithmic recommendations qualify as “traditional editorial functions” protected by the CDA — or, rather, whether such recommendations are not the actions of a “publisher” and thus fall outside of CDA immunity.

Since the passage of Section 230 of the Communications Decency Act (“CDA”), the majority of federal circuits have interpreted the CDA to establish broad federal immunity from causes of action that would treat service providers as publishers of content provided by third parties.  The CDA was passed in the early days of e-commerce and was written broadly enough to cover not only the online bulletin boards and not-so-very interactive websites that were common then, but also more modern online services, web 2.0 offerings and today’s platforms that might use algorithms to organize, repackage or recommend user-generated content.

Over 25 years ago, in the landmark Zeran case, the first major circuit court decision interpreting Section 230, the Fourth Circuit held that Section 230 bars lawsuits that, at their core, seek to hold a service provider liable for its exercise of a publisher’s “traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content.” Courts have generally followed this reasoning ever since to determine whether an online provider is being treated as a “publisher” of third-party content and is thus entitled to immunity under the CDA.  The scope of “traditional editorial functions” is at the heart of a case currently on the Supreme Court’s docket. On October 3, 2022, the Supreme Court granted certiorari in an appeal challenging whether a social media platform’s targeted algorithmic recommendations fall under the umbrella of “traditional editorial functions” protected by the CDA or whether such recommendations are not the actions of a “publisher” and thus fall outside of CDA immunity. (Gonzalez v. Google LLC, No. 21-1333 (U.S. cert. granted Oct. 3, 2022)).

Can internet service providers always be compelled to unmask anonymous copyright infringers? In an opinion touching on Digital Millennium Copyright Act (DMCA) subpoenas, First Amendment concerns and fair use, the Northern District of California said, in this particular instance, no, granting Twitter’s motion to quash a subpoena seeking to reveal information behind an anonymous poster. (In re DMCA § 512(h) Subpoena to Twitter, Inc., No. 20-80214 (N.D. Cal. June 21, 2022)). The figure at the center of the dispute is @CallMeMoneyBags, an anonymous Twitter user who posts criticisms of wealthy people—particularly those working in tech, finance, and politics. Some of that criticism lies at the heart of this dispute.

Web 3.0 and the promise of the metaverse have generated excitement about new markets for businesses large and small. But as with any technological frontier, legal uncertainties create new risks alongside the opportunities. One area currently full of legal questions is trademark law. We will examine what we