Since the passage of Section 230 of the Communications Decency Act (“CDA”), the majority of federal circuits have interpreted the CDA to establish broad federal immunity to causes of action that would treat service providers as publishers of content provided by third parties.  The CDA was passed in the early days of e-commerce and was written broadly enough to cover not only the online bulletin boards and not-so-very interactive websites that were common then, but also more modern online services, web 2.0 offerings and today’s platforms that might use algorithms to organize, repackage or recommend user-generated content.

Over 25 years ago, in the landmark Zeran case, the first major circuit court-level decision interpreting Section 230, the Fourth Circuit held that Section 230 bars lawsuits that, at their core, seek to hold a service provider liable for its exercise of a publisher’s “traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content.” Courts have generally followed this reasoning ever since to determine whether an online provider is being treated as a “publisher” of third-party content and thus entitled to immunity under the CDA.  The scope of “traditional editorial functions” is at the heart of a case currently on the docket at the Supreme Court. On October 3, 2022, the Supreme Court granted certiorari in an appeal challenging whether a social media platform’s targeted algorithmic recommendations fall under the umbrella of “traditional editorial functions” protected by the CDA or whether such recommendations are not the actions of a “publisher” and thus fall outside of CDA immunity. (Gonzalez v. Google LLC, No. 21-1333 (U.S. cert. granted Oct. 3, 2022)).

In Gonzalez, the Ninth Circuit affirmed the district court’s dismissal of claims against Google under the Anti-Terrorism Act (ATA), 18 U.S.C. § 2333, for allegedly providing “material support” to ISIS by allowing terrorists to use YouTube (temporarily, before known accounts were terminated or content was blocked) as a tool to facilitate recruitment and commit terrorism.  The court found that Google was entitled to CDA immunity for most claims, concluding that “a website’s use of content-neutral algorithms, without more, does not expose it to liability for content posted by a third-party” and that, since the early days of the internet, websites have always decided how to display third-party content and to whom it should be shown, but “no case law denies § 230 immunity because of the ‘matchmaking’ results of such editorial decisions.” This holding followed the reasoning of a trio of noteworthy circuit court-level opinions rejecting plaintiffs’ attempts to make an end run around the CDA based on the assertion that online service providers lose immunity if they algorithmically recommend content, or recast it in another form, to other users of the site.

The question presented in the Gonzalez petition is: “Does section 230(c)(1) immunize interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limit the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information?”

Before the advent of the web, a particular edition of a print newspaper was static – everyone who received that edition saw the same paper, with the same front page and headlines. Today, social media platforms, using algorithmic tools, can recommend, highlight or share dynamic content based on the user’s profile and prior inputs. And despite the sophistication of automated filters and the retention of teams of screeners and moderators, the presence of hate speech, revenge porn, defamation, disinformation, terrorism materials and other repugnant content remains a reality on online platforms with millions (or billions) of users. The Gonzalez case centers on instances when a platform’s neutral automated tools unwittingly augment or recommend such content to users. The petitioner in Gonzalez argued that we have strayed from the path on which Congress set us when it passed the CDA, and that automated recommendations of harmful content fall outside the protections of the CDA and are not “traditional editorial functions” of a publisher.  However, multiple circuit courts have issued opinions (sometimes over dissenting judges) ruling that such algorithmic recommendations are protected editorial decisions under the CDA, no different than a newspaper publisher’s decision about what to run on the front page.

A holding by the Supreme Court narrowing the scope of what constitutes “publishing” activity under the CDA will undoubtedly have a tremendous impact on social media platforms and other providers that host third-party content. Many types of online services (e.g., dating apps, search engines, online communities, e-commerce businesses, virtual worlds) use algorithms that are designed to match third-party information or profiles with a consumer’s implicit interests, or else arrange and distribute third-party information to form connections or foster engagement.  If the Supreme Court accepts Gonzalez’s “matchmaking” argument, such a ruling would create a major carve-out from the CDA, and a modern provider might lose immunity in instances where automated tools are used to organize and recommend content provided by third parties. Indeed, internet services have long relied on CDA immunity to use automated editorial tools to repackage or highlight third-party content displayed to users based on, among other things, users’ geolocation, language of choice and profile information.  And practically speaking, one of the reasons CDA Section 230 is so effective is that providers can often dispose of suits over third-party content at the motion to dismiss stage. If the Court rules that algorithmic tools fall outside the CDA, one can envision future claims against providers being crafted with this carve-out in mind, and such litigation may become protracted, held up in discovery over what automated tools might have been used and how content may have been repackaged.

This is a case we will be watching closely. As Petitioner rightly states, whether CDA immunity covers algorithm-generated recommendations “is of enormous practical importance.” With no consensus in Congress about how to reform the CDA (a delicate task, as it is difficult to regulate objectionable content online without affecting the vibrant internet), there is a possibility that the Supreme Court may effect reform on its own and alter how online platforms handle content going forward. If so, the law of unintended consequences is sure to play a part in the ongoing operations of most online businesses.