One of the many legal questions swirling around in the world of generative AI (“GenAI”) is to what extent Section 230 of the Communications Decency Act (CDA) applies to the provision of GenAI. Can CDA immunity apply to GenAI-generated output and protect GenAI providers from potential third-party liability?

On June 14, 2023, Senators Richard Blumenthal and Josh Hawley introduced the “No Section 230 Immunity for AI Act,” bipartisan legislation that would expressly remove most immunity under the CDA for a provider of an interactive computer service if the conduct underlying the claim or charge “involves the use or provision of generative artificial intelligence by the interactive computer service.” Specifically, the bill would eliminate “publisher” immunity under § 230(c)(1) for claims involving the use or provision of generative artificial intelligence by an interactive computer service. It would not affect immunity for so-called “Good Samaritan” blocking under § 230(c)(2)(A), which protects service providers and users from liability for good faith actions to screen or restrict access to “objectionable” material on their services.

In a recent ruling, a California district court held that Apple, as operator of the App Store, was protected from liability for losses resulting from a fraudulent third-party app offered on its platform. (Diep v. Apple Inc., No. 21-10063 (N.D. Cal. Sept. 2, 2022)). This case is important in that it illustrates how CDA immunity can extend to an app store operator facing claims arising from third-party apps it distributes.

In the past month, there have been several notable developments surrounding Section 230 of the Communications Decency Act (“CDA” or “Section 230”) beyond the ongoing debate in Congress over potential legislative reform. These include a novel application of the CDA in an FCRA online privacy case (Henderson v. The Source for Public Data, No. 20-294 (E.D. Va. May 19, 2021)), the denial of CDA immunity in a case involving an alleged design defect in a social media app (Lemmon v. Snap Inc., No. 20-55295 (9th Cir. May 4, 2021)), and the uncertainties surrounding a new Florida law that attempts to regulate the content moderation decisions and user policies of large online platforms.