In the closing days of August, two federal appeals courts issued noteworthy decisions at the intersection of workplace conduct, computer law, and online platforms. Both opinions were released this past summer amid the continuing flurry of AI-related case developments and perhaps did not receive wide media attention, but they might prove to be important cases in the future.

  • Second Circuit – CDA Section 230. The court ruled that a software platform was not entitled to CDA Section 230 immunity – at least at this early stage of the case – based on allegations that it actively contributed to the unlawful software content at issue by manufacturing and distributing emissions-control “defeat devices.” (U.S. v. EZ Lynk, SEZC, No. 24-2386 (2d Cir. Aug. 20, 2025)). The opinion’s discussion of what it means to be a “developer” of content has implications for future litigation involving generative AI, app stores, marketplaces, and IoT ecosystems, where certain fact patterns could blur the line between passive hosting and active co-development.
  • Third Circuit – CFAA and Trade Secrets. Days later, the Third Circuit issued an important decision (subsequently amended, with minor changes that did not alter the holding) that further develops CFAA case law post-Van Buren. The court held that the CFAA, an anti-hacking statute, does not extend liability to workplace computer-use violations. (NRA Group, LLC v. Durenleau, No. 24-1123 (3d Cir. Aug. 26, 2025) (vacated by Oct. 7, 2025 amended opinion), reh’g en banc denied (Oct. 7, 2025)). The court also addressed and rejected a novel claim of trade secret misappropriation based on access to account passwords.

Together, the cases show how courts continue to interpret the reach of technology-related statutes in contexts never contemplated when those laws were first enacted.

  • Law establishes national prohibition against nonconsensual online publication of intimate images of individuals, both authentic and computer-generated.
  • First federal law regulating AI-generated content.
  • Creates requirement that covered platforms promptly remove depictions upon receiving notice of their existence and a valid takedown request.
  • For many online service providers, complying with the Take It Down Act’s notice-and-takedown requirement may warrant revising their existing DMCA takedown notice provisions and processes.
  • Another carve-out from CDA immunity? More like a dichotomy of sorts…

On May 19, 2025, President Trump signed the bipartisan Take It Down Act into law. The law prohibits any person from using an “interactive computer service” to publish, or threaten to publish, nonconsensual intimate imagery (NCII), including AI-generated NCII (colloquially known as revenge pornography or deepfake revenge pornography). Additionally, the law requires that, within one year of enactment, social media companies and other covered platforms implement a notice-and-takedown mechanism that allows victims to report NCII. Platforms must then remove properly reported imagery (and any known identical copies) within 48 hours of receiving a compliant request.
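For providers adapting existing DMCA-style workflows to the new requirement, the operational core is familiar: intake a victim’s notice, confirm it is facially valid, remove the reported imagery and any known identical copies, and do so within 48 hours of receipt. The sketch below is purely illustrative of that workflow; the class names, the validity fields, and the hash-based matching of “identical copies” are assumptions made for the example, not elements prescribed by the Act.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
import hashlib

# Illustrative removal window drawn from the Act's 48-hour requirement.
REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownRequest:
    """Simplified stand-in for a victim's takedown notice (fields are assumed)."""
    reporter_contact: str
    content_url: str
    received_at: datetime
    identity_statement: bool       # reporter attests the imagery depicts them
    nonconsent_statement: bool     # reporter attests publication was nonconsensual

    def is_facially_valid(self) -> bool:
        # Placeholder check; the statute's actual notice elements would need
        # to be mapped by counsel, not inferred from this sketch.
        return self.identity_statement and self.nonconsent_statement

@dataclass
class HostedItem:
    item_id: str
    content_hash: str              # e.g., SHA-256 of the stored file
    removed: bool = False

class TakedownQueue:
    """Tracks reported items and the removal deadline for each valid request."""
    def __init__(self, catalog: list[HostedItem]):
        self.catalog = catalog

    def process(self, request: TakedownRequest, reported_hash: str) -> datetime | None:
        if not request.is_facially_valid():
            return None  # invalid request; platform may seek clarification
        deadline = request.received_at + REMOVAL_WINDOW
        # Remove the reported item and any known identical copies,
        # identified here (hypothetically) by matching content hashes.
        for item in self.catalog:
            if item.content_hash == reported_hash:
                item.removed = True
        return deadline

if __name__ == "__main__":
    h = hashlib.sha256(b"example-bytes").hexdigest()
    catalog = [HostedItem("post-1", h), HostedItem("post-2", h), HostedItem("post-3", "other")]
    req = TakedownRequest(
        reporter_contact="reporter@example.com",
        content_url="https://example.com/post-1",
        received_at=datetime.now(timezone.utc),
        identity_statement=True,
        nonconsent_statement=True,
    )
    deadline = TakedownQueue(catalog).process(req, reported_hash=h)
    print("Remove by:", deadline, "| removed:", [i.item_id for i in catalog if i.removed])
```

In practice, how a platform identifies “known identical copies” (exact hashing, perceptual matching, or manual review) and what makes a request compliant are questions for counsel; the point of the sketch is simply that the 48-hour clock runs from receipt of a valid request and should be tracked accordingly.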

Section 230 of the Communications Decency Act (the “CDA” or “Section 230”), famously described as “the 26 words that created the internet,” remains the subject of ongoing controversy. As extensively reported on this blog, the world of social media, user-generated content, and e-commerce has been consistently supported by the protections afforded by Section 230.

One of the many legal questions swirling around in the world of generative AI (“GenAI”) is to what extent Section 230 of the Communications Decency Act (CDA) applies to the provision of GenAI.  Can CDA immunity apply to GenAI-generated output and protect GenAI providers from potential third party liability?

On June 14, 2023, Senators Richard Blumenthal and Josh Hawley introduced the “No Section 230 Immunity for AI Act,” bipartisan legislation that would expressly remove most immunity under the CDA for a provider of an interactive computer service if the conduct underlying the claim or charge “involves the use or provision of generative artificial intelligence by the interactive computer service.” The bill would eliminate “publisher” immunity under § 230(c)(1) for such claims, but it would not affect immunity for so-called “Good Samaritan” blocking under § 230(c)(2)(A), which protects service providers and users from liability for claims arising out of good faith actions to screen or restrict access to “objectionable” material on their services.

Back in October 2022, the Supreme Court granted certiorari in Gonzalez v. Google, an appeal that challenged whether YouTube’s targeted algorithmic recommendations qualify as “traditional editorial functions” protected by the CDA or, rather, whether such recommendations are not the actions of a “publisher” and thus fall outside of CDA immunity.

Since the passage of Section 230 of the Communications Decency Act (“CDA”), the majority of federal circuits have interpreted the CDA to establish broad federal immunity to causes of action that would treat service providers as publishers of content provided by third parties. The CDA was passed in the early days of e-commerce and was written broadly enough to cover not only the online bulletin boards and not-so-very interactive websites that were common then, but also more modern online services, web 2.0 offerings and today’s platforms that might use algorithms to organize, repackage or recommend user-generated content.

Over 25 years ago, in the landmark Zeran case, the first major circuit court-level decision interpreting Section 230, the Fourth Circuit held that Section 230 bars lawsuits that, at their core, seek to hold a service provider liable for its exercise of a publisher’s “traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content.” Courts have generally followed this reasoning ever since to determine whether an online provider is being treated as a “publisher” of third party content and thus entitled to immunity under the CDA. The scope of “traditional editorial functions” is at the heart of a case currently on the docket at the Supreme Court. On October 3, 2022, the Supreme Court granted certiorari in an appeal challenging whether a social media platform’s targeted algorithmic recommendations fall under the umbrella of “traditional editorial functions” protected by the CDA or whether such recommendations are not the actions of a “publisher” and thus fall outside of CDA immunity. (Gonzalez v. Google LLC, No. 21-1333 (U.S. cert. granted Oct. 3, 2022)).

In a recent ruling, a California district court held that Apple, as operator of the App Store, was protected from liability for losses resulting from that type of fraudulent activity. (Diep v. Apple Inc., No. 21-10063 (N.D. Cal. Sept. 2, 2022)). This case is important in that, in

On May 14, 2021, President Biden issued an executive order revoking, among other things, his predecessor’s action (Executive Order 13925 of May 28, 2020) that directed the executive branch to clarify certain provisions under Section 230 of the Communications Decency Act (“Section 230” or the “CDA”) and remedy what former President Trump had claimed was the social media platforms’ “selective censorship” of user content and the “flagging” of content that does not violate a provider’s terms of service. The now-revoked executive order had, among other things, directed the Commerce Department to petition for rulemaking with the FCC to clarify certain aspects of CDA immunity for online providers (the FCC invited public input on the topic, but did not ultimately move forward with a proposed rulemaking) and requested that the DOJ draft proposed legislation curtailing the protections under the CDA (the DOJ submitted a reform proposal to Congress last October).

Happy Silver Anniversary to Section 230 of the Communications Decency Act (“CDA” or “Section 230”), which was signed into law by President Bill Clinton in February 1996. At that time, Congress enacted CDA Section 230 in response to case law that raised the specter of liability for any online service provider that attempted to moderate its platform, thus discouraging the screening out and blocking of offensive material. As has been extensively reported on this blog, the world of social media and user-generated content is supported by protections afforded by Section 230. Now, 25 years later, the CDA is at a crossroads of sorts and its protections have stoked some controversy. Yet, as it stands, Section 230 continues to provide robust immunity for online providers.

In a recent case, Google LLC (“Google”) successfully argued for the application of Section 230, resulting in a California district court dismissing, with leave to amend, a putative class action alleging consumer protection law claims against the Google Play App Store. The claims concerned the offering for download of third party mobile video games that allow users to buy Loot Boxes, which are in-app purchases that contain a randomized assortment of items that can improve a player’s chances of advancing in a video game. The plaintiffs claimed these offerings constituted illegal “slot machines or devices” under California law. (Coffee v. Google LLC, No. 20-03901 (N.D. Cal. Feb. 10, 2021)).