Back in October 2022, the Supreme Court granted certiorari in Gonzalez v. Google, an appeal that challenged whether YouTube's targeted algorithmic recommendations qualify as "traditional editorial functions" protected by the CDA — or, rather, whether such recommendations are not the actions of a "publisher" and thus fall outside of CDA immunity.
Jeffrey Neuburger
Jeffrey Neuburger is co-head of Proskauer’s Technology, Media & Telecommunications Group, head of the Firm’s Blockchain Group and a member of the Firm’s Privacy & Cybersecurity Group.
Jeff’s practice focuses on technology, media and intellectual property-related transactions, counseling and dispute resolution. That expertise, combined with his professional experience at General Electric and academic experience in computer science, makes him a leader in the field.
As one of the architects of the technology law discipline, Jeff continues to lead on a range of business-critical transactions involving the use of emerging technology and distribution methods. For example, Jeff has become one of the foremost private practice lawyers in the country for the implementation of blockchain-based technology solutions, helping clients in a wide variety of industries capture the business opportunities presented by the rapid evolution of blockchain. He is a member of the New York State Bar Association’s Task Force on Emerging Digital Finance and Currency.
Jeff counsels on a variety of e-commerce, social media and advertising matters; represents many organizations in large infrastructure-related projects, such as outsourcing, technology acquisitions, cloud computing initiatives and related services agreements; advises on the implementation of biometric technology; and represents clients on a wide range of data aggregation, privacy and data security matters. In addition, Jeff assists clients on a wide range of issues related to intellectual property and publishing matters in the context of both technology-based applications and traditional media.
OpenAI Eases Procedure to Opt-Out of Inputs Being Used for Training Purposes
A quick update on a new development with OpenAI's ChatGPT. One of the concerns raised by users of ChatGPT is OpenAI's ability to use queries to train the GPT model, potentially exposing confidential information to third parties. In our prior post on ChatGPT risks…
ChatGPT Risks and the Need for Corporate Policies
ChatGPT has quickly become the talk of business, media and the Internet – reportedly, the application had over 100 million monthly active users in January alone.
While there are many stories of the creative, humorous, apologetic, and in some cases unsettling interactions with ChatGPT,[1] the potential business applications for ChatGPT and other emerging generative artificial intelligence applications (generally referred to in this post as “GAI”) are plentiful. Many businesses see GAI as a potential game-changer. But, like other foundational technology developments, it presents new issues and potential areas of risk.
ChatGPT is being used by employees and consultants in business today. Thus, businesses are well advised to evaluate the issues and risks to determine what policies or technical guardrails, if any, should be imposed on GAI’s use in the workplace.
New York Enacts First State “Right-to-Repair” Law
At the close of 2022, New York Governor Kathy Hochul signed the “Digital Fair Repair Act” (S4101A/A7006-B) (to be codified at N.Y. GBL §399-nn) (the “Act”). The law makes New York the first state in the country to pass a consumer electronics right-to-repair law.[1] Similar bills are pending in other states. The Act is a slimmed down version of the bill that was first passed by the legislature last July.
Generally speaking, the Act will require original equipment manufacturers (OEMs), or their authorized repair providers, to make the parts, tools, and diagnostic and repair information required for the maintenance and repair of “digital electronic equipment” available to independent repair providers and consumers, on “fair and reasonable terms” (subject to certain exceptions). The law applies only to products that are both manufactured for the first time, and sold or used in the state for the first time, on or after the law’s effective date of July 1, 2023 (thus exempting electronic products currently owned by consumers).
hiQ and LinkedIn Reach Settlement in Landmark Scraping Case
UPDATE: On December 8, 2022, the court issued an order granting the Consent Judgment and Permanent Injunction.
On December 6, 2022, the parties in the long-running litigation between now-defunct data analytics company hiQ Labs, Inc. (“hiQ”) and LinkedIn Corp. (“LinkedIn”) filed a Stipulation and Proposed Consent Judgment (the “Stipulation”) with the California district court, indicating that they have reached a confidential settlement agreement resolving all outstanding claims in the case.
This case has been a litigation odyssey of sorts, to the Supreme Court and back: the original district court injunction in 2017, the Ninth Circuit’s affirmance in 2019, the Supreme Court’s vacatur of that order in 2021, a new Ninth Circuit order in April 2022 affirming the original injunction, and then back where we started, with the lower court dissolving the preliminary injunction in August 2022 and issuing the most recent mixed ruling on November 4, 2022. It has certainly been one of the most heavily litigated scraping cases in recent memory and has been closely followed on our blog. Practically speaking, though, the dispute had essentially reached its logical end with the last court ruling in November – hiQ had prevailed on the Computer Fraud and Abuse Act (CFAA) “unauthorized access” issue related to public website data but was facing a ruling that it had breached LinkedIn’s User Agreement through its scraping and creation of fake accounts (subject to its equitable defenses).
Data Scraper’s Declaratory Action Seeking Green Light to Scrape LinkedIn Survives Motion to Dismiss
On November 15, 2022, a California district court declined to dismiss a declaratory judgment action brought by a data scraper, 3taps, Inc. (“3taps”), against LinkedIn Corp. (“LinkedIn”). (3taps, Inc. v. LinkedIn Corp., No. 18-00855 (N.D. Cal. Nov. 15, 2022)). 3taps is seeking an order to clarify whether the federal Computer Fraud and Abuse Act (CFAA) (or its California state law counterpart) prevents it from accessing and using publicly-available data on LinkedIn, and whether scraping such data would also subject it to an action brought by LinkedIn for breach of contract or trespass.
This is not 3taps’s first experience with scraping litigation (see prior post). But if this dispute sounds strangely familiar and reminiscent of the long-running dispute between hiQ Labs and LinkedIn (which we’ve followed closely), it is. The 3taps action traces its origin, in part, to the original hiQ ruling in August 2017, where this same judge first granted a preliminary injunction in favor of hiQ, enjoining LinkedIn from blocking hiQ’s access to LinkedIn members’ public profiles. Following that ruling, 3taps sent a letter to LinkedIn stating that it also intended to scrape publicly-available data from LinkedIn. LinkedIn responded that while it was not considering legal action against 3taps, it cautioned that “any further access by 3taps to the LinkedIn website and LinkedIn’s servers is without LinkedIn’s or its members’ authorization.” Thus, the hiQ ruling, 3taps’s letter to LinkedIn, and LinkedIn’s reply were the genesis of the current declaratory judgment action filed by 3taps against LinkedIn.[1]
Court Finds hiQ Breached LinkedIn’s Terms Prohibiting Scraping, but in Mixed Ruling, Declines to Grant Summary Judgment to Either Party as to Certain Key Issues
On November 4, 2022, a California district court took up the parties’ cross-motions for summary judgment in the long-running scraping litigation involving social media site LinkedIn Corp.’s (“LinkedIn”) challenge to data analytics firm hiQ Labs, Inc.’s (“hiQ”) scraping of LinkedIn public profile data. (hiQ Labs, Inc. v. LinkedIn Corp., No. 17-3301 (N.D. Cal. Nov. 4, 2022)). The court mostly denied both parties’ motions for summary judgment on the principal scraping-related issues related to breach of contract and CFAA liability. While the court found that hiQ breached LinkedIn’s User Agreement both through its own scraping of LinkedIn’s site and use of scraped data, and through its use of independent contractors (so-called “turkers”), who logged into LinkedIn to run quality assurance for hiQ’s “people analytics” product, there were factual issues surrounding hiQ’s waiver and estoppel defenses to its own scraping activities that foreclosed a judgment in favor of either party on that claim. Similarly, the court found material issues of fact that prevented a ruling on hiQ’s statute of limitations defense to LinkedIn’s claims under the Computer Fraud and Abuse Act (CFAA). That defense turns on emails exchanged among LinkedIn staff back in 2014 about hiQ’s activities, which may or may not have given LinkedIn constructive knowledge of hiQ’s scraping and started the statute of limitations clock.
District Court Decision Brings New Life to CFAA to Combat Unwanted Scraping
On October 24, 2022, a Delaware district court held that certain claims under the Computer Fraud and Abuse Act (CFAA) relating to the controversial practice of web scraping were sufficient to survive the defendant’s motion to dismiss. (Ryanair DAC v. Booking Holdings Inc., No. 20-01191 (D. Del. Oct. 24, 2022)). The opinion potentially breathes life into the use of the CFAA to combat unwanted scraping.
In the case, Ryanair DAC (“Ryanair”), a European low-fare airline, brought various claims against Booking Holdings Inc. (and its well-known suite of online travel and hotel booking websites) (collectively, “Defendants”) for allegedly scraping the ticketing portion of the Ryanair site. Ryanair asserted that the ticketing portion of the site is only accessible to logged-in users and therefore the data on the site is not public data.
The decision is important as it offers answers (at least from one district court) to several unsettled legal issues about the scope of CFAA liability related to screen scraping. In particular, the decision addresses:
- the potential for vicarious liability under the CFAA (which is important as many entities retain third party service providers to perform scraping)
- how a data scraper’s use of evasive measures (e.g., spoofed email addresses, rotating IP addresses) may be considered under a CFAA claim centered on an “intent to defraud”
- clarification as to the potential role of technical website-access limitations in analyzing CFAA “unauthorized access” liability
To find answers to these questions, the court’s opinion distills the holdings of two important CFAA rulings – the Supreme Court’s 2021 holding in Van Buren, which adopted a narrow interpretation of “exceeds authorized access” under the CFAA, and the Ninth Circuit’s April 2022 holding in the hiQ screen scraping case, where that court found that the concept of “without authorization” under the CFAA does not apply to “public” websites.
Important CDA Section 230 Case Lands in Supreme Court: Level of Protection Afforded Modern Online Platforms at Stake
Since the passage of Section 230 of the Communications Decency Act (“CDA”), the majority of federal circuits have interpreted the CDA to establish broad federal immunity to causes of action that would treat service providers as publishers of content provided by third parties. The CDA was passed in the early days of e-commerce and was written broadly enough to cover not only the online bulletin boards and not-so-very interactive websites that were common then, but also more modern online services, web 2.0 offerings and today’s platforms that might use algorithms to organize, repackage or recommend user-generated content.
Over 25 years ago, the Fourth Circuit, in the landmark Zeran case, the first major circuit court-level decision interpreting Section 230, held that Section 230 bars lawsuits that, at their core, seek to hold a service provider liable for its exercise of a publisher’s “traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content.” Courts have generally followed this reasoning ever since to determine whether an online provider is being treated as a “publisher” of third party content and thus entitled to immunity under the CDA. The scope of “traditional editorial functions” is at the heart of a case currently on the docket at the Supreme Court. On October 3, 2022, the Supreme Court granted certiorari in an appeal challenging whether a social media platform’s targeted algorithmic recommendations fall under the umbrella of “traditional editorial functions” protected by the CDA or whether such recommendations are not the actions of a “publisher” and thus fall outside of CDA immunity. (Gonzalez v. Google LLC, No. 21-1333 (U.S. cert. granted Oct. 3, 2022)).
App Store Protected by CDA Immunity (and Limitation of Liability) for Losses from Fraudulent Crypto Wallet App
In a recent ruling, a California district court held that Apple, as operator of the App Store, was protected from liability for losses resulting from a fraudulent crypto wallet app offered on its platform. (Diep v. Apple Inc., No. 21-10063 (N.D. Cal. Sept. 2, 2022)). This case is important in that, in…