Jeffrey Neuburger is a partner, co-head of the Technology, Media & Telecommunications Group, a member of the Privacy & Cybersecurity Group and editor of the firm’s New Media and Technology Law blog.

Jeff’s practice focuses on technology, media and advertising-related business transactions and counseling, including the utilization of emerging technology and distribution methods in business. For example, Jeff represents clients in online strategies associated with advertising, products, services and content commercialized on the Internet through broadband channels, mobile platforms, broadcast and cable television distribution and print publishing. He also represents many organizations in large infrastructure-related projects, such as outsourcing, technology acquisitions, cloud computing initiatives and related services agreements.

Serving as a collaborative business partner through our clients’ biggest challenges, Jeff is part of the Firm’s cross-disciplinary, cross-jurisdictional Coronavirus Response Team helping to shape the guidance and next steps for clients impacted by the pandemic.

In the rapidly evolving AI space, this week brought significant developments at an even faster pace than usual. For example, seven AI companies agreed to voluntary guidelines covering AI safety and security, ChatGPT rolled out a custom preferences tool to streamline usage, and Microsoft issued a transparency note for the Azure OpenAI service. The week also saw announcements of a number of generative AI commercial ventures, which are beyond the scope of this post.

One of the many legal questions swirling around in the world of generative AI (“GenAI”) is to what extent Section 230 of the Communications Decency Act (CDA) applies to the provision of GenAI. Can CDA immunity apply to GenAI-generated output and protect GenAI providers from potential third-party liability?

On June 14, 2023, Senators Richard Blumenthal and Josh Hawley introduced the “No Section 230 Immunity for AI Act,” bipartisan legislation that would expressly remove most immunity under the CDA for a provider of an interactive computer service if the conduct underlying the claim or charge “involves the use or provision of generative artificial intelligence by the interactive computer service.” While the bill would eliminate “publisher” immunity under § 230(c)(1) for such claims, it would not affect so-called “Good Samaritan” blocking immunity under § 230(c)(2)(A), which protects service providers and users from liability for claims arising out of good faith actions to screen or restrict access to “objectionable” material on their services.

Back in October 2022, the Supreme Court granted certiorari in Gonzalez v. Google, an appeal that challenged whether YouTube’s targeted algorithmic recommendations qualify as “traditional editorial functions” protected by the CDA, or whether such recommendations are not the actions of a “publisher” and thus fall outside the scope of CDA immunity.

ChatGPT has quickly become the talk of business, media and the Internet, reportedly reaching over 100 million monthly active users in January alone.

While there are many stories of creative, humorous, apologetic, and in some cases unsettling interactions with ChatGPT,[1] the potential business applications for ChatGPT and other emerging generative artificial intelligence applications (generally referred to in this post as “GAI”) are plentiful. Many businesses see GAI as a potential game-changer. But, as with other foundational technology developments, GAI presents new issues and possible areas of risk.

ChatGPT is being used by employees and consultants in business today.  Thus, businesses are well advised to evaluate the issues and risks to determine what policies or technical guardrails, if any, should be imposed on GAI’s use in the workplace.

At the close of 2022, New York Governor Kathy Hochul signed the “Digital Fair Repair Act” (S4101A/A7006-B) (to be codified at N.Y. GBL §399-nn) (the “Act”). The law makes New York the first state in the country to pass a consumer electronics right-to-repair law.[1] Similar bills are pending in other states. The Act is a slimmed-down version of the bill first passed by the legislature last July.

Generally speaking, the Act will require original equipment manufacturers (OEMs), or their authorized repair providers, to make the parts, tools, and diagnostic and repair information required for the maintenance and repair of “digital electronic equipment” available to independent repair providers and consumers on “fair and reasonable terms” (subject to certain exceptions). The law applies only to products that are both manufactured for the first time and sold or used in the state for the first time on or after the law’s effective date of July 1, 2023 (thus exempting electronic products currently owned by consumers).

On October 24, 2022, a Delaware district court held that certain claims under the Computer Fraud and Abuse Act (CFAA) relating to the controversial practice of web scraping were sufficient to survive the defendant’s motion to dismiss. (Ryanair DAC v. Booking Holdings Inc., No. 20-01191 (D. Del. Oct. 24, 2022)). The opinion potentially breathes life into the use of the CFAA to combat unwanted scraping.

In the case, Ryanair DAC (“Ryanair”), a European low-fare airline, brought various claims against Booking Holdings Inc. (and its well-known suite of online travel and hotel booking websites) (collectively, “Defendants”) for allegedly scraping the ticketing portion of the Ryanair site. Ryanair asserted that the ticketing portion of the site is only accessible to logged-in users and therefore the data on the site is not public data.

The decision is important as it offers answers (at least from one district court) to several unsettled legal issues about the scope of CFAA liability related to screen scraping. In particular, the decision addresses:

  • the potential for vicarious liability under the CFAA (which is important as many entities retain third party service providers to perform scraping)
  • how a data scraper’s use of evasive measures (e.g., spoofed email addresses, rotating IP addresses) may be considered under a CFAA claim centered on an “intent to defraud”
  • clarification as to the potential role of technical website-access limitations in analyzing CFAA “unauthorized access” liability

To answer these questions, the court’s opinion distills the holdings of two important recent CFAA rulings: the Supreme Court’s decision in Van Buren, which adopted a narrow interpretation of “exceeds authorized access” under the CFAA, and the Ninth Circuit’s decision in the hiQ screen scraping case, which held that the concept of “without authorization” under the CFAA does not apply to “public” websites.