The appetite for acquisitions and investment in online businesses has never been stronger, and many of the most attractive online opportunities are businesses that host, manage and leverage user-generated content.  These businesses often rely on the immunities offered by Section 230 of the Communications Decency Act, 47 U.S.C. §230 (“Section 230” or the “CDA”), to protect them from liability associated with receiving, editing, posting or removing such content.  Investors, in turn, have relied on the existence of robust immunity under the CDA in evaluating the risk associated with investments in such businesses.  This seemed reasonable: following the legacy of the landmark 1997 Zeran decision, for more than two decades courts have fairly consistently interpreted Section 230 to provide broad immunity for online providers of all types.

However, in the last five years or so, critics of the CDA on both sides of the aisle have become more full-throated in decrying the presence of hate speech, revenge porn, defamation, disinformation and other objectionable content on online platforms. The issue has been building throughout the past year and reached a fever pitch in the weeks leading up to the election. The government’s zeal for “reining in social media” and pursuing reforms to Section 230 – again, on a bipartisan basis – has come through loud and clear (even if the justifications for such reform differ along party lines). While we cannot predict exactly what structure these reforms will take, the political winds suggest that, regardless of the administration in charge, change is afoot for the CDA in late 2020 or 2021. Operating businesses must take note, and investors should keep this in mind when conducting diligence reviews of potential investments.

Section 230 of the Communications Decency Act, 47 U.S.C. §230 (“Section 230” or the “CDA”), enacted in 1996, is generally viewed as the most important statute supporting the growth of Internet commerce.  The key provision of the CDA, Section 230(c)(1), only 26 words long, simply states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This one sentence has been the source of bedrock service provider immunity for third party content made available through a provider’s infrastructure, thus enabling the growth of a vigorous online ecosystem. Without such immunity, providers would have to face what the Ninth Circuit once termed “death by ten thousand duck-bites” in fending off claims that they promoted or tacitly assented to objectionable third party content.

The brevity of this provision is deceptive, however. The CDA – and the immunity it conveys – is controversial, and those 26 words have been the subject of hundreds, if not thousands, of lawsuits.  Critics of the CDA point to the proliferation of hate speech, revenge porn, defamatory material, disinformation and other objectionable content – in many cases, the sites hosting such third party content (knowingly or unknowingly) are protected by the broad scope of the CDA. Other objections amount to unhappiness about the content of the speech itself – such as comments critical of individuals, their businesses or their interests – even where that speech is true. Litigants upset about such content have sought various CDA workarounds over the past two decades in a mostly unsuccessful attempt to bypass the immunity and reach the underlying service providers.

The back-and-forth debate around the scope and effects of the CDA and the broad discretion afforded online providers regarding content hosting and moderation decisions is not new.  However, it was brought into new focus when the President, vexed at the way some of his more controversial posts were being treated by certain social media platforms, issued a May 28, 2020 Executive Order for the purpose of curtailing legal protections for online providers. The goal was to remedy what the White House believed was the online platforms’ “deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”

The Executive Order – which is currently being challenged in court as unconstitutional – directed several federal agencies to undertake certain legislative and regulatory efforts toward CDA reform. Consequently, in June 2020 the DOJ stated “that the time is ripe to realign the scope of Section 230 with the realities of the modern internet” and released a 28-page document with its preliminary recommendations for reform of Section 230.  A month later, the Commerce Department submitted a petition requesting that the FCC write rules to limit the scope of CDA immunity and potentially place additional compliance requirements on many providers that host third party content.  Then, on September 23, 2020, the DOJ announced that it had sent its legislative proposal for amending the CDA to Congress. The DOJ, in its cover letter to Congress, summed up the need for reform: “The proposed legislation accordingly seeks to align the scope of Section 230 immunities with the realities of the modern internet while ensuring that the internet remains a place for free and vibrant discussion.”

The currents around the Communications Decency Act just got a little more turbulent as the White House and executive branch try to reel in the big fish of CDA reform.

On July 27, 2020, the Commerce Department submitted a petition requesting that the FCC initiate a rulemaking to clarify the provisions of Section 230 of the Communications Decency Act (CDA). Unrelated, but part of the same fervor in Washington to “rein in social media,” the leaders of the major technology companies appeared before the House Judiciary Antitrust Subcommittee at a hearing yesterday, July 29, 2020, to discuss the Subcommittee’s ongoing investigation of competition in the digital marketplace, where some members inquired about content moderation practices. Moreover, last month, a pair of Senators introduced the PACT Act, a targeted (but still substantial) update to the CDA (and other CDA reform bills are also being debated, including a bill to carve out sexually exploitative material involving children from the CDA’s reach).

Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C. §230, enacted in 1996, is often cited as the most important law supporting the Internet, e-commerce and the online economy. Yet, it continues to be subject to intense criticism, including from politicians on both sides of the aisle. Many argue that the CDA has been applied in situations far beyond the original intent of Congress when the statute was enacted. Critics point to the role the CDA has played in protecting purveyors of hate speech, revenge porn, defamation, disinformation and other objectionable content.

Critics of the CDA raise valid concerns.  But what is the right way to address them? One must remember that for organizations that operate websites, mobile apps, social media networks, corporate networks and other online services, the CDA’s protections are extremely important.  Many of those businesses could be impaired if they were subject to liability (or the threat of liability) for objectionable third party content residing on their systems.

The criticism surrounding the CDA hit a fever pitch on May 28, 2020 when the President weighed in on the issue by signing an Executive Order attempting to curtail legal protections under Section 230. While the Executive Order was roundly labelled as political theater – and is currently being challenged in court as unconstitutional – it notably directed the Justice Department to submit draft proposed legislation (i.e., a CDA reform bill) to accomplish the policy objectives of the Order. This week, on June 17, 2020, the DOJ announced “that the time is ripe to realign the scope of Section 230 with the realities of the modern internet” and released a document with its recommendations for legislative reform of Section 230.  This is on the heels of a recent initiative by several GOP lawmakers to introduce their own version of a reform bill.

President Trump signed an Executive Order today attempting to curtail legal protections under Section 230 of the Communications Decency Act (“Section 230” or the “CDA”). The Executive Order strives to clarify that Section 230 immunity “should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”

Section 230 protects online providers in many respects concerning the hosting of user-generated content and bars the imposition of distributor or publisher liability against a provider for the exercise of its editorial and self-regulatory functions with respect to such user content.  In response to certain moderation efforts toward the President’s own social media posts this week, the Executive Order seeks to remedy what the President claims is the social media platforms’ “selective censorship” of user content and the “flagging” of content that does not violate a provider’s terms of service.

The Executive Order does a number of things. It first directs:

  • The Commerce Department to file a petition for rulemaking with the FCC to clarify certain aspects of CDA immunity for online providers, namely Good Samaritan immunity under 47 U.S.C. §230(c)(2). Good Samaritan immunity provides that an interactive computer service provider may not be held liable “on account of” its decision in “good faith” to restrict access to content that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.”
    • The provision essentially gives providers leeway to screen out or remove objectionable content in good faith from their platforms without fear of liability. Courts have generally held that the section does not require that the material actually be objectionable; rather, the CDA affords protection for blocking material that the provider or user considers to be “objectionable.” However, Section 230(c)(2)(A) requires that providers act in “good faith” in screening objectionable content.
    • The Executive Order states that this provision should not be “distorted” to protect providers that engage in “deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree.” The order asks the FCC to clarify “the conditions under which an action restricting access to or availability of material is not ‘taken in good faith’” within the meaning of the CDA, specifically when such decisions are inconsistent with a provider’s terms of service or taken without adequate notice to the user.
    • Interestingly, the Order also directs the FCC to clarify if there are any circumstances where a provider that screens out content under the CDA’s Good Samaritan protection but fails to meet the statutory requirements should still be able to claim protection under CDA Section 230(c)(1) for “publisher” immunity (as, according to the Order, such decisions would be the provider’s own “editorial” decisions).
    • To be sure, courts in recent years have dismissed claims against services for terminating user accounts or screening out content that violates content policies using the more familiar Section 230(c)(1) publisher immunity, stating repeatedly that decisions to terminate an account (or not publish user content) are publisher decisions, and are protected under the CDA. It appears that the Executive Order is suggesting that years of federal court precedent – from the landmark 1997 Zeran case until today – that have espoused broad immunity under the CDA for providers’ “traditional editorial functions” regarding third party content (which include the decision whether to publish, withdraw, postpone or alter content provided by another) were perhaps decided in error.

In the swirl of scrutiny surrounding the big Silicon Valley tech companies and with some in Congress declaiming that Section 230 of the Communications Decency Act (CDA) should be curtailed, 2019 has quietly been an important year for CDA jurisprudence with a number of opinions enunciating robust immunity under CDA Section 230. In particular, there has been a trio of noteworthy circuit court-level opinions rejecting plaintiffs’ attempt to make an “end run” around the CDA based on the assertion that online service providers lose immunity if they algorithmically recommend or recast content in another form to other users of the site.

This week, in a case with an unsettling fact pattern, the Ninth Circuit made it a quartet – ruling that a now-shuttered social networking site was immune from liability under the CDA for connecting a user with a dealer who sold him narcotics that culminated in an overdose. The court found such immunity because the site’s functions, which included content recommendations and notifications to members of discussion groups, were “content-neutral tools” used to facilitate communications. (Dyroff v. The Ultimate Software Group, Inc., No. 18-15175 (9th Cir. Aug. 20, 2019)). 

In the past few months, there have been a number of notable decisions affirming broad immunity under the Communications Decency Act (CDA), 47 U.S.C. §230(c), for online providers that host third party content. The beat goes on, as in late May, a Utah district court ruled that the Tor Browser, which allows for anonymous communications and transactions on the internet, was protected by CDA Section 230 against claims arising out of a website’s sale of illegal substances to a minor on the dark web via the Tor Browser.

More recently, the D.C. Circuit affirmed the dismissal of claims brought by multiple locksmith companies (the “plaintiffs”) against the operators of the major search engines (the “defendants” or “providers”) for allegedly publishing the content of fraudulent locksmiths’ websites and translating street-address and area-code information on those websites into map pinpoints that were displayed in response to user search requests. (Marshall’s Locksmith Service v. Google LLC, No. 18-7018 (D.C. Cir. June 7, 2019)). According to the plaintiffs, by burying legitimate locksmiths’ listings (with actual, local physical locations) beneath so-called “scam” listings from locksmith call centers that act as lead generators for subcontractors, who may or may not be fully trained, the defendants caused plaintiffs’ legitimate businesses to suffer market harm and forced them to pay for additional advertising. (Beyond this case, the issue of false local business listings appearing in Google Maps remains an ongoing concern, according to a report from the Wall Street Journal yesterday).

UPDATE: On December 31, 2019, the Ninth Circuit released an amended opinion in Enigma Software Group USA, LLC v. Malwarebytes, Inc., No. 17-17351 (9th Cir. Dec. 31, 2019). The case also involves competing providers of filtering software and issues concerning the scope of CDA §230(c)(2). In reversing the lower court’s dismissal of claims under the CDA, the Ninth Circuit held that “the phrase ‘otherwise objectionable’ does not include software that the provider finds objectionable for anticompetitive reasons.”

Three recent court decisions affirmed the robust immunity under the Communications Decency Act (CDA), 47 U.S.C. §230(c), for online providers that host third-party content: the Second Circuit’s decision in Herrick v. Grindr LLC, No. 18-396 (2d Cir. Mar. 27, 2019) (summary order), the Wisconsin Supreme Court’s opinion in Daniel v. Armslist, LLC, No. 2017AP344, 2019 WI 47 (Wis. Apr. 30, 2019),  and the Northern District of California’s decision in P.C. Drivers Headquarters, LP v. Malwarebytes Inc., No. 18-05409 (N.D. Cal. Mar. 6, 2019).