New Media and Technology Law Blog

That Was Close! The Supreme Court Declines Opportunity to Address CDA Immunity in Social Media

Back in October 2022, the Supreme Court granted certiorari in Gonzalez v. Google, an appeal that challenged whether YouTube’s targeted algorithmic recommendations qualify as “traditional editorial functions” protected by the CDA — or, rather, whether such recommendations are not the actions of a “publisher” and thus fall outside of CDA immunity. At the time, some commentators cautioned that an adverse ruling for YouTube would “break the internet” and subject a host of modern platforms to crippling liability for automated moderation or recommendation of third party content. At the very least, as we proffered in our prior post on the Gonzalez case, an adverse ruling would have created a major carve-out of the CDA, with online providers potentially losing immunity in instances where automated tools were used to organize, repackage, or recommend third party content.  On May 18, 2023, in what turned out to be a close shave for online platforms and Section 230, the Court declined to take up the CDA issues in the Gonzalez appeal.

The Supreme Court issued two decisions on May 18th: one in the Gonzalez case, and a second in Twitter v. Taamneh, a related case against Twitter and other platforms. The Twitter case analyzed aiding and abetting liability under the federal Anti-Terrorism Act (ATA) (18 U.S.C. § 2333) and whether certain social media platforms could be held liable under the ATA for, among other things, the algorithmic recommendation of third party terrorist content to other users. (See Gonzalez v. Google LLC, No. 21-1333, 598 U. S. ____ (May 18, 2023) (per curiam); Twitter, Inc. v. Taamneh, No. 21-1496, 598 U. S. ____ (May 18, 2023)).

Both cases involved allegations under the ATA that the social media defendants provided “material support” to ISIS terrorists through their use of the platforms. The Twitter case focused on whether the social media defendants could be considered to have aided and abetted ISIS in a 2017 terrorist attack at a Turkish nightclub, while the Gonzalez case centered on the availability of CDA 230 immunity for similar claims against YouTube. Although the principal issues differed, the cases were linked, and a ruling in Twitter would inevitably affect the outcome of Gonzalez. Because the underlying ATA claims involved relatively similar conduct by the social media defendants in both cases, an adverse ruling on the merits in Twitter would doom the plaintiffs’ chances in Gonzalez and would also give the Court an off-ramp to avoid taking up the CDA 230 issues in the Gonzalez appeal altogether. And that is exactly what happened.

A full overview of the Court’s analysis of aiding and abetting liability under the ATA is beyond the scope of this post. In brief, however, the Court in Twitter rejected the ATA claims against the social media defendants on the merits and seemed wary of validating a theory that could subject online platforms to vast liability merely for providing what amounts to a communication service. The Court stated:

“The fact that some bad actors took advantage of these platforms is insufficient to state a claim [under the ATA] that defendants knowingly gave substantial assistance and thereby aided and abetted those wrongdoers’ acts. And that is particularly true because a contrary holding would effectively hold any sort of communication provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them. That conclusion would run roughshod over the typical limits on tort liability and take aiding and abetting far beyond its essential culpability moorings.”

Armed with its decision in Twitter, the Court, in a per curiam opinion, concluded that the same reasoning applied to the claims against YouTube in Gonzalez and remanded the case, declining to reach the social media defendants’ CDA defenses and observing that the Gonzalez claims were likely insufficient on the merits as well:

“[W]e think it sufficient to acknowledge that much (if not all) of plaintiffs’ complaint seems to fail under…our decision in Twitter…. We therefore decline to address the application of §230 to a complaint that appears to state little, if any, plausible claim for relief. Instead, we vacate the judgment below and remand the case for the Ninth Circuit to consider plaintiffs’ complaint in light of our decision in Twitter.”

With the Supreme Court declining the opportunity to effect CDA reform on its own, the breadth of Section 230 remains unchanged and providers can continue to operate without the uncertainty that had hung over this appeal. Moving forward, the industry will be closely watching the petition for certiorari now before the Supreme Court that challenges a Texas social media law (HB20) on First Amendment grounds. According to the Petitioners, HB20 presents “burdensome operational and disclosure requirements” and would chill editorial choices.

Back in Washington, there is still a lot of political chatter about “reining in Big Tech” from both sides of the aisle, but no consensus on how to achieve that through CDA reform without harming the vibrant internet. The urgency to enact CDA reform also appears to be tempered by the focus on the technology issue du jour, artificial intelligence (AI). Indeed, following OpenAI CEO Sam Altman’s appearance before the Senate Judiciary Privacy, Technology, & the Law Subcommittee this past week, there appears to be some bipartisan appetite to regulate this emerging area. Will the CDA get swept into a broader AI-focused legislative effort? Or will it be set aside, an orphaned legal issue, overshadowed by this new technology? We will have to wait and see how this all plays out. But in the meantime, the CDA still survives!

OpenAI Eases Procedure to Opt-Out of Inputs Being Used for Training Purposes

A quick update on a new development with OpenAI’s ChatGPT. One concern raised by users of ChatGPT is that OpenAI may use queries to train the GPT model, potentially exposing confidential information to third parties. In our prior post on ChatGPT risks and the need for corporate policies, we advised that an organization concerned with this confidentiality issue could use ChatGPT’s opt-out form to exclude its inputs from the training process. On April 25, 2023, OpenAI made the opt-out process easier, announcing a new settings option that stops ChatGPT from showing chats in a user’s history sidebar and from using chats to improve ChatGPT via model training. The announcement noted, however, that even if this option is selected, OpenAI will still retain conversations for thirty days and “review them only when needed to monitor for abuse, before permanently deleting.” Users can find the toggle switch in the Settings menu, under “Data Controls.” In addition, a new “Export” option in Settings allows users to export their ChatGPT data and receive a copy of it via email.

Prior to this development, users could already elect to exclude their inputs from model training, but some found the process of submitting the opt-out form cumbersome. The new switch simplifies the process considerably and should allow more users to take advantage of this confidentiality feature.
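As an aside for organizations weighing this confidentiality concern: the settings described above govern the consumer ChatGPT interface. For programmatic access, OpenAI separately stated in March 2023 that data submitted through its API is not used for model training by default (though it may still be retained for a limited period for abuse monitoring). Below is a minimal sketch of such an API call, assuming the openai Python package of that era and an API key supplied via the OPENAI_API_KEY environment variable; the prompt text is purely illustrative.

```python
# A minimal sketch, assuming the pre-1.0 openai Python package (ca. spring 2023).
# Per OpenAI's stated policy at the time, API inputs are not used for model
# training by default, unlike chats in the consumer ChatGPT interface.
import os

import openai

# Assumes the API key is supplied via the OPENAI_API_KEY environment variable.
openai.api_key = os.environ["OPENAI_API_KEY"]

# Send a single chat-style request; the prompt below is purely illustrative.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {
            "role": "user",
            "content": "Summarize the confidentiality risks of using "
                       "generative AI tools in the workplace.",
        }
    ],
)

print(response.choices[0].message.content)
```

Even under these defaults, submitted content may be retained temporarily for abuse monitoring, so organizations should continue to treat anything sent to the service as potentially exposed and avoid submitting confidential information.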

Amazon Acts Against DMCA Abuse

Competition between Amazon’s third-party merchants is notoriously fierce. The online retail giant often finds itself playing the role of referee, banning what it considers unfair business practices (such as offering free products in exchange for perfect reviews, or targeting competitors with so-called “review bombing”). Last month, in the latest round of this push and pull, Amazon blew the whistle on several merchants who, it claims, crossed a red line and may now have to face litigation in federal court.

Read the full post on our Minding Your Business blog.

ChatGPT Risks and the Need for Corporate Policies

ChatGPT has quickly become the talk of business, media and the Internet – reportedly, the application had over 100 million monthly active users in January alone.

While there are many stories of creative, humorous, apologetic, and in some cases unsettling interactions with ChatGPT,[1] the potential business applications for ChatGPT and other emerging generative artificial intelligence applications (generally referred to in this post as “GAI”) are plentiful. Many businesses see GAI as a potential game-changer. But, as with other foundational technology developments, GAI presents new issues and potential areas of risk.

Employees and consultants are already using ChatGPT in business today. Businesses are therefore well advised to evaluate the issues and risks and determine what policies or technical guardrails, if any, should be imposed on GAI’s use in the workplace. Continue Reading

New York Enacts First State “Right-to-Repair” Law

At the close of 2022, New York Governor Kathy Hochul signed the “Digital Fair Repair Act” (S4101A/A7006-B) (to be codified at N.Y. GBL §399-nn) (the “Act”). The law makes New York the first state in the country to pass a consumer electronics right-to-repair law.[1] Similar bills are pending in other states. The Act is a slimmed-down version of the bill first passed by the legislature last July.

Generally speaking, the Act will require original equipment manufacturers (OEMs), or their authorized repair providers, to make the parts, tools, and diagnostic and repair information required for the maintenance and repair of “digital electronic equipment” available to independent repair providers and consumers on “fair and reasonable terms” (subject to certain exceptions). The law applies only to products that are both manufactured for the first time and sold or used in the state for the first time on or after the law’s effective date of July 1, 2023 (thus exempting electronic products currently owned by consumers). Continue Reading

District Court Decision Brings New Life to CFAA to Combat Unwanted Scraping

On October 24, 2022, a Delaware district court held that certain claims under the Computer Fraud and Abuse Act (CFAA) relating to the controversial practice of web scraping were sufficient to survive the defendant’s motion to dismiss. (Ryanair DAC v. Booking Holdings Inc., No. 20-01191 (D. Del. Oct. 24, 2022)). The opinion potentially breathes life into the use of the CFAA to combat unwanted scraping.

In the case, Ryanair DAC (“Ryanair”), a European low-fare airline, brought various claims against Booking Holdings Inc. (and its well-known suite of online travel and hotel booking websites) (collectively, “Defendants”) for allegedly scraping the ticketing portion of the Ryanair site. Ryanair asserted that the ticketing portion of the site is accessible only to logged-in users, and that the data on it is therefore not public.

The decision is important as it offers answers (at least from one district court) to several unsettled legal issues about the scope of CFAA liability related to screen scraping. In particular, the decision addresses:

  • the potential for vicarious liability under the CFAA (which is important as many entities retain third party service providers to perform scraping)
  • how a data scraper’s use of evasive measures (e.g., spoofed email addresses, rotating IP addresses) may be considered under a CFAA claim centered on an “intent to defraud”
  • clarification as to the potential role of technical website-access limitations in analyzing CFAA “unauthorized access” liability

To answer these questions, the court’s opinion distills the holdings of two important recent CFAA rulings – the Supreme Court’s 2021 decision in Van Buren, which adopted a narrow interpretation of “exceeds authorized access” under the CFAA, and the Ninth Circuit’s 2022 decision in the hiQ screen scraping case, which held that the concept of “without authorization” under the CFAA does not apply to “public” websites. Continue Reading

Amazon’s Recent Acquisitions Highlight the Value of Consumer Data (and the Evolving Privacy Issues)

Roughly two weeks apart, on July 21, 2022 and August 5, 2022, respectively, Amazon made headlines for agreeing to acquire One Medical, “a human-centered and technology-powered primary care organization,” for approximately $3.9 billion, and iRobot, a global consumer robot company known for creating the Roomba vacuum, for approximately $1.7 billion. These proposed acquisitions have drawn the scrutiny of the Federal Trade Commission (FTC), which, following President Biden’s 2021 Executive Order on antitrust and competition, has taken a more aggressive stance toward acquisitions by large technology companies in an effort to, in FTC Chair Lina Khan’s words, “prevent incumbents from unlawfully capturing control over emerging markets.”

Beyond antitrust issues, Amazon’s recent acquisitions bring the discussion of the collection of consumer information and its secondary uses, specifically location and health data (which we have previously written about), to the forefront yet again.

Read the full post on our Privacy Law Blog.
