One of the many legal questions swirling around in the world of generative AI (“GenAI”) is to what extent Section 230 of the Communications Decency Act (CDA) applies to the provision of GenAI.  Can CDA immunity apply to GenAI-generated output and protect GenAI providers from potential third party liability?

On June 14, 2023, Senators Richard Blumenthal and Josh Hawley introduced the “No Section 230 Immunity for AI Act,” bipartisan legislation that would expressly remove most immunity under the CDA for a provider of an interactive computer service if the conduct underlying the claim or charge “involves the use or provision of generative artificial intelligence by the interactive computer service.” While the bill would eliminate “publisher” immunity under § 230(c)(1) for such claims, it would not affect immunity for so-called “Good Samaritan” blocking under § 230(c)(2)(A), which protects service providers and users from liability for claims arising out of good faith actions to screen or restrict access to “objectionable” material on their services.

In a recent ruling, a California district court held that Apple, as operator of the App Store, was protected from liability for losses resulting from fraudulent third-party activity. (Diep v. Apple Inc., No. 21-10063 (N.D. Cal. Sept. 2, 2022)).

Happy Silver Anniversary to Section 230 of the Communications Decency Act (“CDA” or “Section 230”), which was signed into law by President Bill Clinton in February 1996. At that time, Congress enacted CDA Section 230 in response to case law that raised the specter of liability for any online service provider that attempted to moderate its platform, thus discouraging the screening out and blocking of offensive material. As has been extensively reported on this blog, the world of social media and user-generated content is supported by the protections afforded by Section 230. Now, 25 years later, the CDA is at a crossroads of sorts and its protections have stoked some controversy. Yet, as it stands, Section 230 continues to provide robust immunity for online providers.

In a recent case, Google LLC (“Google”) successfully argued for the application of Section 230, resulting in a California district court dismissing, with leave to amend, a putative class action alleging consumer protection law claims against the Google Play App Store. The claims concerned the offering for download of third party mobile video games that allow users to buy Loot Boxes, which are in-app purchases that contain a randomized assortment of items that can improve a player’s chances at advancing in a video game. The plaintiffs claimed these offerings constituted illegal “slot machines or devices” under California law. (Coffee v. Google LLC, No. 20-03901 (N.D. Cal. Feb. 10, 2021)).

With the change in administrations in Washington, there has been a drive to enact or amend legislation in a variety of areas. However, few initiatives have matched the zeal of the bipartisan interest in “reining in social media” and pursuing reforms to Section 230 of the Communications Decency Act (CDA). As we have documented, the parade of bills and approaches to curtail the scope of the immunities given to “interactive computer services” under CDA Section 230 has come from both sides of the aisle (even if the justifications for such reform differ along party lines). The latest came on February 5, 2021, when Senators Warner, Hirono and Klobuchar announced the SAFE TECH Act. The SAFE TECH Act would limit CDA immunity by enacting “targeted exceptions” to the law’s broad grant of immunity.

Section 230 of the Communications Decency Act, 47 U.S.C. §230 (“Section 230” or the “CDA”), enacted in 1996, is generally viewed as the most important statute supporting the growth of Internet commerce. The key provision of the CDA, Section 230(c)(1), only 26 words long, simply states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This one sentence has been the source of bedrock service provider immunity for third party content made available through a provider’s infrastructure, thus enabling the growth of a vigorous online ecosystem. Without such immunity, providers would have to face what the Ninth Circuit once termed “death by ten thousand duck-bites,” in having to fend off claims that they promoted or tacitly assented to objectionable third party content.

The brevity of this provision of the CDA is deceptive, however. The CDA – and the immunity it conveys – is controversial, and those 26 words have been the subject of hundreds, if not thousands, of lawsuits. Critics of the CDA point to the proliferation of hate speech, revenge porn, defamatory material, disinformation and other objectionable content – in many cases, the sites hosting such third party content (knowingly or unknowingly) are protected by the broad scope of the CDA. Other objections stem merely from unhappiness about the content of the speech – which, in many cases, is true – such as comments that are critical of individuals, their businesses or their interests. Litigants upset about such content have sought various CDA workarounds over the past two decades in a mostly unsuccessful attempt to bypass the immunity and reach the underlying service providers.

The back-and-forth debate around the scope and effects of the CDA and the broad discretion afforded online providers regarding content hosting and moderation decisions is not new. However, it was brought into new focus when the President, vexed at the way some of his more controversial posts were being treated by certain social media platforms, issued a May 28, 2020 Executive Order for the purpose of curtailing legal protections for online providers. The goal was to remedy what the White House believed was the online platforms’ “deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”

The Executive Order – which is currently being challenged in court as unconstitutional – directed several federal agencies to undertake certain legislative and regulatory efforts toward CDA reform. Consequently, in June 2020 the DOJ stated “that the time is ripe to realign the scope of Section 230 with the realities of the modern internet” and released a 28-page document with its preliminary recommendations for reform of Section 230. A month later, the Commerce Department submitted a petition requesting that the FCC write rules to limit the scope of CDA immunity and potentially place additional compliance requirements on many providers that host third party content. Then, on September 23, 2020, the DOJ announced that it had sent its legislative proposal for amending the CDA to Congress. The DOJ, in its cover letter to Congress, summed up the need for reform: “The proposed legislation accordingly seeks to align the scope of Section 230 immunities with the realities of the modern internet while ensuring that the internet remains a place for free and vibrant discussion.”

Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C. §230, enacted in 1996, is often cited as the most important law supporting the Internet, e-commerce and the online economy. Yet, it continues to be subject to intense criticism, including from politicians on both sides of the aisle. Many argue that the CDA has been applied in situations far beyond the original intent of Congress when the statute was enacted. Critics point to the role the CDA has played in protecting purveyors of hate speech, revenge porn, defamation, disinformation and other objectionable content.

Critics of the CDA raise valid concerns.  But what is the right way to address them? One must remember that for organizations that operate websites, mobile apps, social media networks, corporate networks and other online services, the CDA’s protections are extremely important.  Many of those businesses could be impaired if they were subject to liability (or the threat of liability) for objectionable third party content residing on their systems.

The criticism surrounding the CDA hit a fever pitch on May 28, 2020 when the President weighed in on the issue by signing an Executive Order attempting to curtail legal protections under Section 230. While the Executive Order was roundly labeled as political theater – and is currently being challenged in court as unconstitutional – it notably directed the Justice Department to submit draft proposed legislation (i.e., a CDA reform bill) to accomplish the policy objectives of the Order. This week, on June 17, 2020, the DOJ announced “that the time is ripe to realign the scope of Section 230 with the realities of the modern internet” and released a document with its recommendations for legislative reform of Section 230. This is on the heels of a recent initiative by several GOP lawmakers to introduce their own version of a reform bill.

UPDATE: On the afternoon of May 28, 2020, the President signed the executive order concerning CDA Section 230. A copy/link to the order has not yet been posted on the White House’s website.

According to news reports, the Trump Administration (the “Administration”) is drafting and the President is set to sign an executive order to attempt to curtail legal protections under Section 230 of the Communications Decency Act (“Section 230” or the “CDA”). Section 230 protects online providers in many respects concerning the hosting of user-generated content and bars the imposition of distributor or publisher liability against a provider for the exercise of its editorial and self-regulatory functions with respect to such user content. In response to certain moderation efforts toward the President’s own social media posts this week, the executive order will purportedly seek to remedy what the President claims is the social media platforms’ “selective censorship” of user content and the “flagging” of content that is inappropriate, “even though it does not violate any stated terms of service.”

A purported draft of the executive order was leaked online. If issued, the executive order would, among other things, direct federal agencies to limit monies spent on social media advertising on platforms that violate free speech principles, and direct the White House Office of Digital Strategy to reestablish its online bias reporting tool and forward any complaints to the FTC. The draft executive order suggests that the FTC use its power to regulate deceptive practices against those platforms that fall under Section 230 to the extent they restrict speech in ways that do not match their posted terms or policies. The order also would direct the DOJ to establish a working group with state attorneys general to study how state consumer protection laws could be applied to social media platforms’ moderation practices. Interestingly, the draft executive order would also direct the Commerce Department to file a petition for rulemaking with the FCC to clarify the conditions under which an online provider removes “objectionable content” in good faith under the CDA’s Good Samaritan provision (a lesser-known, yet important companion to the better-known “publisher” immunity provision).

UPDATE: On January 22, 2019, the Supreme Court denied review of the California Supreme Court decision.

In a closely followed dispute, the California Supreme Court vacated a lower court order, based upon a default judgment in a defamation action, which had directed Yelp, Inc. (“Yelp”), a non-party to the original suit, to take down certain consumer reviews posted on its site. (Hassell v. Bird, No. S235968, 2018 WL 3213933 (Cal. July 2, 2018)). If the plaintiffs had included Yelp as a defendant in the original suit, such a suit would likely have been barred by Section 230 of the Communications Decency Act (“CDA” or “CDA Section 230”); instead, the plaintiffs adopted a litigation strategy to bypass such legal immunities. In refusing to allow the plaintiffs’ “creative pleading” to avoid the CDA, the outcome was a win for online companies and platforms that host user-generated content (“A Case for the Internet,” declared Yelp).

Facebook recently announced that it would make changes to its news feed to prioritize content that users share and discuss and material from “reputable publishers.”  These changes are part of what Mark Zuckerberg says is a refocusing of Facebook from “helping [users] find relevant content to helping [users] have