On March 21, 2024, in a bold regulatory move, Tennessee Governor Bill Lee signed the Ensuring Likeness Voice and Image Security (“ELVIS”) Act (Tenn. Code Ann. §47-25-1101 et seq.) – a law which, as Gov. Lee stated, covers “new, personalized generative AI cloning models and services that enable human …”

One of the many legal questions swirling around in the world of generative AI (“GenAI”) is to what extent Section 230 of the Communications Decency Act (CDA) applies to the provision of GenAI.  Can CDA immunity apply to GenAI-generated output and protect GenAI providers from potential third party liability?

On June 14, 2023, Senators Richard Blumenthal and Josh Hawley introduced the “No Section 230 Immunity for AI Act,” bipartisan legislation that would expressly remove most immunity under the CDA for a provider of an interactive computer service if the conduct underlying the claim or charge “involves the use or provision of generative artificial intelligence by the interactive computer service.” The bill would eliminate “publisher” immunity under §230(c)(1) for such claims, but would leave unaffected the so-called “Good Samaritan” immunity under §230(c)(2)(A), which protects service providers and users from liability for claims arising out of good faith actions to screen or restrict access to “objectionable” material on their services.

At the close of 2022, New York Governor Kathy Hochul signed the “Digital Fair Repair Act” (S4101A/A7006-B) (to be codified at N.Y. GBL §399-nn) (the “Act”). The law makes New York the first state in the country to pass a consumer electronics right-to-repair law.[1] Similar bills are pending in other states. The Act is a slimmed-down version of the bill that was first passed by the legislature last July.

Generally speaking, the Act will require original equipment manufacturers (OEMs), or their authorized repair providers, to make the parts, tools, and diagnostic and repair information required for the maintenance and repair of “digital electronic equipment” available to independent repair providers and consumers on “fair and reasonable terms” (subject to certain exceptions). The law only applies to products that are both manufactured, and sold or used in the state, for the first time on or after the law’s effective date of July 1, 2023 (thus exempting electronic products currently owned by consumers).

On June 15, 2022, Senator Elizabeth Warren introduced a bill, cosponsored by a host of other Democratic and independent Senators, the “Health and Location Data Protection Act of 2022,” which, subject to a few exceptions, would, among other things, prohibit the selling, sharing or transferring of location data and health data. The bill gives the Federal Trade Commission (FTC) rulemaking and enforcement authority for violations of the law and also grants state attorneys general the right to bring actions; notably, the law would also give a private right of action to persons adversely affected by a violation of the proposed law.

With the change in administrations in Washington, there has been a drive to enact or amend legislation in a variety of areas. However, few initiatives have matched the zeal behind the bipartisan interest in “reining in social media” and pursuing reforms to Section 230 of the Communications Decency Act (CDA). As we have documented, the parade of bills and approaches to curtail the scope of the immunities given to “interactive computer services” under CDA Section 230 has come from both sides of the aisle (even if the justifications for such reform differ along party lines). The latest came on February 5, 2021, when Senators Warner, Hirono and Klobuchar announced the SAFE TECH Act, which would limit CDA immunity by enacting “targeted exceptions” to the law’s broad grant of immunity.

New York has enacted a new law, effective February 9, 2021, regulating automatic renewal and some “free trial” type agreements. While some organizations may have already taken steps to be in compliance with industry requirements, the federal Restore Online Shoppers’ Confidence Act (ROSCA), and similar auto-renewal laws in place

Section 230 of the Communications Decency Act, 47 U.S.C. §230 (“Section 230” or the “CDA”), enacted in 1996, is generally viewed as the most important statute supporting the growth of Internet commerce. The key provision of the CDA, Section 230(c)(1), only 26 words long, simply states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This one sentence has been the source of bedrock service provider immunity for third party content made available through a provider’s infrastructure, thus enabling the growth of a vigorous online ecosystem. Without such immunity, providers would have to face what the Ninth Circuit once termed “death by ten thousand duck-bites” in having to fend off claims that they promoted or tacitly assented to objectionable third party content.

The brevity of this provision of the CDA is deceptive, however. The CDA – and the immunity it conveys – is controversial, and those 26 words have been the subject of hundreds, if not thousands, of litigations. Critics of the CDA point to the proliferation of hate speech, revenge porn, defamatory material, disinformation and other objectionable content; in many cases, the sites hosting such third party content (knowingly or unknowingly) are protected by the broad scope of the CDA. Other objections stem simply from unhappiness with the content of the speech, even though it is in many cases true, such as comments critical of individuals, their businesses or their interests. Litigants upset about such content have sought various CDA workarounds over the past two decades in mostly unsuccessful attempts to bypass the immunity and reach the underlying service providers.

The back-and-forth debate around the scope and effects of the CDA and the broad discretion afforded online providers regarding content hosting and moderation decisions is not new. However, it was brought into new focus when the President, vexed at the way some of his more controversial posts were being treated by certain social media platforms, issued a May 20, 2020 Executive Order for the purpose of curtailing legal protections for online providers. The goal was to remedy what the White House believed was the online platforms’ “deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”

The Executive Order – which is currently being challenged in court as unconstitutional – directed several federal agencies to undertake certain legislative and regulatory efforts toward CDA reform. Consequently, in June 2020 the DOJ stated “that the time is ripe to realign the scope of Section 230 with the realities of the modern internet” and released a 28-page document with its preliminary recommendations for reform of Section 230. A month later, the Commerce Department submitted a petition requesting that the FCC write rules to limit the scope of CDA immunity and potentially place additional compliance requirements on many providers that host third party content. Then, on September 23, 2020, the DOJ announced that it had sent its legislative proposal for amending the CDA to Congress. The DOJ, in its cover letter to Congress, summed up the need for reform: “The proposed legislation accordingly seeks to align the scope of Section 230 immunities with the realities of the modern internet while ensuring that the internet remains a place for free and vibrant discussion.”

While Washington’s comprehensive data privacy bill (SB 6182) — inspired by California’s CCPA — died when legislators could not hammer out a compromise over enforcement mechanisms, the state legislature did reach agreement and Gov. Jay Inslee signed into law a facial recognition bill (SB 6280) that provides some important privacy and antidiscrimination provisions regarding state and local governmental use of the technology.

During the 2016 election, certain Russian operatives used fake social media profiles to influence voters and created bot accounts to add likes to, and share, posts across the internet. More recently, in January 2019, the New York Attorney General and the Office of the Florida Attorney General announced settlements with certain entities that sold fake social media engagement, such as followers, likes and views. Moreover, many of the social media platforms have recently purged millions of fake accounts. Thus, it is clear that bots and automated activity on social media platforms have been on everyone’s radar – state legislators’ included.

Indeed, last September California passed a chatbot disclosure law (SB-1001) that makes it unlawful for persons to mislead users about their artificial bot identity in certain circumstances, and it is only now coming into effect on July 1st. In essence, the purpose of the law is to inform users when they are interacting with a virtual assistant, chatbot or automated social media account so that they can adjust their behavior or expectations accordingly. Entities that interact with customers online or via mobile applications regarding commercial transactions, whether through a chatbot on their own website or an automated account on another platform, should certainly take note of the new California law’s disclosure requirements.