In recent years there has been great demand for information about job listings, company reviews and employment data. Recruiters, consultants, analysts and employment-related service providers, among others, are aggressively scraping job-posting sites to extract that type of information. Recall, for example, the long-running, landmark hiQ litigation over the scraping of public LinkedIn data.

The two most recent disputes over the scraping of employment and job-related data were brought by Jobiak LLC (“Jobiak”), an AI-based recruitment platform.  Jobiak filed two nearly identical scraping suits in California district court alleging that competitors unlawfully scraped its database and copied its optimized job listings without authorization. (Jobiak LLC v. Botmakers LLC, No. 23-08604 (C.D. Cal. filed Oct. 12, 2023); Jobiak LLC v. Aspen Technology Labs, Inc., No. 23-08728 (C.D. Cal. filed Oct. 17, 2023)).

One of the many legal questions swirling around the world of generative AI (“GenAI”) is to what extent Section 230 of the Communications Decency Act (CDA) applies to the provision of GenAI.  Can CDA immunity apply to GenAI-generated output and protect GenAI providers from potential third-party liability?

On June 14, 2023, Senators Richard Blumenthal and Josh Hawley introduced the “No Section 230 Immunity for AI Act,” bipartisan legislation that would expressly remove most immunity under the CDA for a provider of an interactive computer service if the conduct underlying the claim or charge “involves the use or provision of generative artificial intelligence by the interactive computer service.” While the bill would eliminate “publisher” immunity under § 230(c)(1) for such claims, it would not affect immunity for so-called “Good Samaritan” blocking under § 230(c)(2)(A), which protects service providers and users from liability for claims arising out of good faith actions to screen or restrict access to “objectionable” material on their services.

Back in October 2022, the Supreme Court granted certiorari in Gonzalez v. Google, an appeal that challenged whether YouTube’s targeted algorithmic recommendations qualify as “traditional editorial functions” protected by the CDA — or, rather, whether such recommendations are not the actions of a “publisher” and thus fall outside of CDA immunity.

Since the passage of Section 230 of the Communications Decency Act (“CDA”), the majority of federal circuits have interpreted the CDA to establish broad federal immunity from causes of action that would treat service providers as publishers of content provided by third parties.  The CDA was passed in the early days of e-commerce and was written broadly enough to cover not only the online bulletin boards and not-so-very-interactive websites that were common then, but also more modern online services, Web 2.0 offerings and today’s platforms that may use algorithms to organize, repackage or recommend user-generated content.

Over 25 years ago, in the landmark Zeran case, the first major circuit court decision interpreting Section 230, the Fourth Circuit held that Section 230 bars lawsuits that, at their core, seek to hold a service provider liable for its exercise of a publisher’s “traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content.” Courts have generally followed this reasoning ever since in determining whether an online provider is being treated as a “publisher” of third-party content and is thus entitled to immunity under the CDA.  The scope of “traditional editorial functions” is at the heart of a case currently on the Supreme Court’s docket. On October 3, 2022, the Court granted certiorari in an appeal challenging whether a social media platform’s targeted algorithmic recommendations fall under the umbrella of “traditional editorial functions” protected by the CDA or whether such recommendations are not the actions of a “publisher” and thus fall outside of CDA immunity. (Gonzalez v. Google LLC, No. 21-1333 (U.S. cert. granted Oct. 3, 2022)).

On July 30, 2021, a New York district court declined to dismiss copyright infringement claims over an online article that included an “embedded” video (i.e., a video displayed via a link to a copy hosted on another site).  The case involved a video hosted on a social media platform that offered embedding as a function of the platform.  The court ruled that the plaintiff-photographer plausibly alleged that the defendants’ “embed” may constitute copyright infringement and violate his display right in the copyrighted video, rejecting the defendants’ argument that embedding is not a “display” when the image at issue remains on a third party’s server (Nicklen v. Sinclair Broadcast Group, Inc., No. 20-10300 (S.D.N.Y. July 30, 2021)).  Notably, this is the second New York court to decline to adopt the Ninth Circuit’s “server test,” first articulated in the 2007 Perfect 10 decision, which held that infringement of the public display right in a photographic image depends, in part, on where the image is hosted.  With this latest New York court finding the server test inapt for an online infringement case outside of the search engine context (even if other meritorious defenses may exist), website publishers have received another stark reminder to reexamine their inline linking practices.

In the past month, there have been some notable developments surrounding Section 230 of the Communications Decency Act (“CDA” or “Section 230”) beyond the ongoing debate in Congress over the potential for legislative reform. These include a novel application of the CDA in an FCRA online privacy case (Henderson v. The Source for Public Data, No. 20-294 (E.D. Va. May 19, 2021)), the denial of CDA immunity in a case involving an alleged design defect in a social media app (Lemmon v. Snap Inc., No. 20-55295 (9th Cir. May 4, 2021)), and the uncertainties surrounding a new Florida law that attempts to regulate the content moderation decisions and user policies of large online platforms.

On May 14, 2021, President Biden issued an executive order revoking, among other things, his predecessor’s action (Executive Order 13925 of May 28, 2020) that directed the executive branch to clarify certain provisions under Section 230 of the Communications Decency Act (“Section 230” or the “CDA”) and remedy what former President Trump had claimed was the social media platforms’ “selective censorship” of user content and their “flagging” of content that does not violate a provider’s terms of service. The now-revoked executive order had, among other things, directed the Commerce Department to petition the FCC for rulemaking to clarify certain aspects of CDA immunity for online providers (the FCC invited public input on the topic, but ultimately did not move forward with a proposed rulemaking) and requested that the DOJ draft proposed legislation curtailing the protections under the CDA (the DOJ submitted a reform proposal to Congress last October).

With the change in administrations in Washington, there has been a drive to enact or amend legislation in a variety of areas. However, few initiatives can match the bipartisan zeal for “reining in social media” and pursuing reforms to Section 230 of the Communications Decency Act (CDA).  As we have documented, the parade of bills and approaches to curtail the scope of the immunities given to “interactive computer services” under CDA Section 230 has come from both sides of the aisle (even if the justifications for such reform differ along party lines). The latest came on February 5, 2021, when Senators Warner, Hirono and Klobuchar announced the SAFE TECH Act, which would limit CDA immunity by enacting “targeted exceptions” to the law’s broad grant of immunity.

The appetite for acquisitions of and investment in online businesses has never been stronger, and many of the most attractive opportunities are businesses that host, manage and leverage user-generated content.  These businesses often rely on the immunities offered by Section 230 of the Communications Decency Act, 47 U.S.C. § 230 (“Section 230” or the “CDA”), to protect them from liability associated with receiving, editing, posting or removing such content.  Investors, in turn, have relied on the existence of robust immunity under the CDA in evaluating the risk associated with investments in such businesses.  This reliance has seemed reasonable: following the landmark 1997 Zeran decision, courts have for more than two decades fairly consistently interpreted Section 230 to provide broad immunity for online providers of all types.

In the last five years or so, however, bipartisan critics of the CDA have grown more full-throated in decrying the presence of hate speech, revenge porn, defamation, disinformation and other objectionable content on online platforms. The issue has been building throughout the past year and reached a fever pitch in the weeks leading up to the election. The government’s zeal for “reining in social media” and pursuing reforms to Section 230, again on a bipartisan basis, has come through loud and clear (even if the justifications for such reform differ along party lines). While we cannot predict exactly what form these reforms will take, the political winds suggest that, regardless of the administration in charge, change is afoot for the CDA in late 2020 or 2021. Operating businesses must take note, and investors should keep this in mind when conducting diligence reviews of potential investments.