One of the many legal questions swirling around in the world of generative AI (“GenAI”) is to what extent Section 230 of the Communications Decency Act (CDA) applies to the provision of GenAI.  Can CDA immunity apply to GenAI-generated output and protect GenAI providers from potential third party liability?

On June 14, 2023, Senators Richard Blumenthal and Josh Hawley introduced the “No Section 230 Immunity for AI Act,” bipartisan legislation that would expressly remove most immunity under the CDA for a provider of an interactive computer service if the conduct underlying the claim or charge “involves the use or provision of generative artificial intelligence by the interactive computer service.” The bill would eliminate “publisher” immunity under § 230(c)(1) for such claims, but it would not affect immunity for so-called “Good Samaritan” blocking under § 230(c)(2)(A), which protects service providers and users from liability for claims arising out of good faith actions to screen or restrict access to “objectionable” material on their services.

The bill defines generative AI very broadly as “an artificial intelligence system that is capable of generating novel text, video, images, audio, and other media based on prompts or other forms of data provided by a person.”

Because the bill’s CDA carveout extends to any interactive computer service where the underlying conduct relates to the use of generative AI, it seems it could reach computer-based services that have generative AI built into them. This may be beyond the intent of the bill’s drafters. In fact, the press release accompanying the bill states that it was written to help build a framework for “AI platform accountability” and that it would strip immunity from “AI companies” in “civil claims or criminal prosecutions involving the use or provision of generative AI.” Thus, it seems the drafters may have intended the proposed law to remove CDA immunity for “AI companies” (whatever that means), not for every service that builds some generative AI functionality into its offerings.

The bill raises a key question: would a court deem a generative AI platform (or a computer service with GenAI functionality) to be an information content provider, responsible in whole or in part for the creation or development of its output (no CDA immunity), or merely the publisher of third party content based on third party training data (potential immunity under CDA Section 230)?

One might contend that the very name of generative AI tools suggests that they “generate” content, and thus that such a service would not enjoy CDA immunity for claims arising from its output (as opposed to a social media platform that merely hosts user-generated content). On the other hand, one might argue that a generative AI tool is not a person or entity creating independent content but rather an algorithm that arranges third party training data into some useful form in response to a user prompt, and thus should be protected by CDA immunity for its output. The Supreme Court recently declined the opportunity to comment on CDA immunity as it pertains to a social media platform’s algorithmic organization or presentation of content. With the introduction of a bill that would carve generative AI providers out of CDA publisher immunity, the senators may, knowingly or not, have waded into this thorny legal issue.

Will this bill progress? Who knows. Congress does not have a successful track record of passing CDA-related legislation (see the travails of CDA reform), but GenAI appears to be a matter of bipartisan concern. We will watch the bill’s progress carefully to see whether it has a chance of passage through a divided Congress.