
Jonathan P. Mollod is an attorney and content editor in the firm’s Technology, Media and Telecommunications (TMT) Group. Jonathan earned his J.D. from Vanderbilt Law School. He focuses on issues involving technology, media, intellectual property and licensing, and general online/tech law issues of the day.

In the closing days of August, two federal appeals courts issued noteworthy decisions at the intersection of workplace conduct, computer law and online platforms. Released this past summer amid the continuing flurry of AI-related case developments, the two opinions did not receive wide media attention but may prove to be important cases in the future.

  • Second Circuit – CDA Section 230. The court ruled that a software platform was not entitled to CDA Section 230 immunity – at least at this early stage of the case – based on allegations that it actively contributed to the unlawful software content at issue by manufacturing and distributing emissions-control “defeat devices.” (U.S. v. EZ Lynk, SEZC, No. 24-2386 (2d Cir. Aug. 20, 2025)). The opinion’s discussion of what it means to be a “developer” of content has implications for future litigation involving generative AI, app stores, marketplaces and IoT ecosystems, where certain fact patterns could blur the line between passive hosting and active co-development.
  • Third Circuit – CFAA and Trade Secrets. Days later, the Third Circuit issued an important decision (subsequently amended, with minor changes that did not alter the holding) that further develops CFAA case law post-Van Buren. The court held that the CFAA, an anti-hacking statute, does not extend liability to workplace computer-use violations. (NRA Group, LLC v. Durenleau, No. 24-1123 (3d Cir. Aug. 26, 2025) (vacated by Oct. 7, 2025 amended opinion), reh’g en banc denied (Oct. 7, 2025)). The court also addressed and rejected a novel claim of trade secret misappropriation based on access to account passwords.

Together, the cases show how courts continue to interpret the reach of technology-related statutes in contexts never contemplated when those laws were first enacted.

In Cody v. Jill Acquisition LLC, No. 25-937 (S.D. Cal. June 30, 2025), the Southern District of California declined to enforce a retail site’s terms of use and compel arbitration, holding that the plaintiff, who used guest checkout to place an online order at the retail clothing site, did not have adequate notice of the terms and the arbitration clause. This case should serve as a wake-up call for online entities to reexamine their electronic contracting processes. It exemplifies how, even if a website’s visual design and its placement of the hyperlinked Terms of Use during checkout are comparable to presentations that other courts have deemed enforceable, a court could still decline to enforce online terms if the transaction is not a typical e-commerce transaction between a registered customer and a retail site. Here, the court found that by checking out as a guest without creating an account, the user was less likely to expect a continuing relationship and, therefore, the site’s notice and presentation of the terms below the “Place Order” button were not conspicuous enough to bind the plaintiff.

On June 3, 2025, Oregon Governor Tina Kotek signed HB 2008 into law to amend the Oregon Consumer Privacy Act,[1] the state’s comprehensive data privacy law. Among other changes, effective January 1, 2026, the “sale” of two categories of personal data will be prohibited:

  • Precise geolocation information that can pinpoint an individual or device within a 1,750-foot radius, absent certain communications or utility-related exceptions (see the illustrative sketch below)
  • Personal data of anyone under sixteen years of age, provided that the data controller “has actual knowledge that, or willfully disregards whether, the consumer is under 16 years of age”[2]
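For illustration only – this is a minimal, hypothetical sketch in Python, not a compliance tool, and the field names and accuracy-based test are our own assumptions – a data handler might use the statute’s 1,750-foot figure to flag location records that warrant review before any “sale”:

```python
# Hypothetical sketch: flag location records that may be "precise geolocation"
# under Oregon HB 2008 (pinpointing an individual or device within a 1,750-foot
# radius). Field names and the accuracy-based test are illustrative assumptions.

FEET_PER_METER = 3.28084
PRECISION_THRESHOLD_FEET = 1_750  # threshold taken from the statute's definition


def is_precise_geolocation(accuracy_radius_meters: float) -> bool:
    """Return True if the record's accuracy radius is tighter than 1,750 feet."""
    return accuracy_radius_meters * FEET_PER_METER < PRECISION_THRESHOLD_FEET


def flag_records_for_review(records: list[dict]) -> list[dict]:
    """Flag records whose claimed accuracy could make them off-limits for
    "sale" under the amended law (effective Jan. 1, 2026), absent an exception."""
    return [r for r in records if is_precise_geolocation(r["accuracy_m"])]


if __name__ == "__main__":
    sample = [
        {"device_id": "a1", "accuracy_m": 50.0},    # ~164 ft  -> precise
        {"device_id": "b2", "accuracy_m": 1000.0},  # ~3,281 ft -> not precise
    ]
    print(flag_records_for_review(sample))
```

Whether a given dataset actually “pinpoints” a person or device will, of course, turn on more than an accuracy field, so a check like this would only be a starting point for review.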

The location data provision echoes a similar prohibition that was passed in Maryland last year.[3] 

Location data is considered “sensitive” because it can be readily collected from mobile devices or web browsing activities and can reveal a great deal about an individual’s habits, interests and movements. Beyond targeted advertising, anonymized location data can be a valuable source of alternative data for businesses gathering insights on competitors, consumer foot traffic, migration patterns and population growth.

As a result, the Oregon law – and the possibility of similar state enactments restricting the sale of precise location data – represents an important development for data brokers and entities that use such data for location-based advertising and profiling or to create other data products and insights. HB 2008’s definition of “sale” may affect not just direct sales of precise location data but also bundling and other licensing arrangements, subject to certain exceptions and uses. The new law will also add to customers’ due diligence processes for examining their data vendors’ collection practices.

  • Law establishes national prohibition against nonconsensual online publication of intimate images of individuals, both authentic and computer-generated.
  • First federal law regulating AI-generated content.
  • Creates requirement that covered platforms promptly remove depictions upon receiving notice of their existence and a valid takedown request.
  • For many online service providers, complying with the Take It Down Act’s notice-and-takedown requirement may warrant revising their existing DMCA takedown notice provisions and processes.
  • Another carve-out to CDA immunity? More like a dichotomy of sorts…. 

On May 19, 2025, President Trump signed the bipartisan-supported Take It Down Act into law. The law prohibits any person from using an “interactive computer service” to publish, or threaten to publish, nonconsensual intimate imagery (NCII), including AI-generated NCII (colloquially known as revenge pornography or deepfake revenge pornography). Additionally, the law requires that, within one year of enactment, social media companies and other covered platforms implement a notice-and-takedown mechanism that allows victims to report NCII. Platforms must then remove properly reported imagery (and any known identical copies) within 48 hours of receiving a compliant request.
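Purely as a hypothetical illustration – the class, field names and validity logic below are our own and are not prescribed by the statute – a covered platform’s intake system might track the 48-hour removal window from receipt of a compliant request along these lines:

```python
# Hypothetical sketch of a takedown-request tracker under a 48-hour removal
# window. Illustrative only; names, fields and validity logic are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # removal window after a compliant request


@dataclass
class TakedownRequest:
    request_id: str
    reported_url: str
    received_at: datetime             # when the notice/request was received
    valid: bool = False               # set True once the platform deems it compliant
    removed_at: datetime | None = None

    @property
    def removal_deadline(self) -> datetime:
        """The 48-hour clock runs from receipt of the request."""
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        """True if a compliant request has not been actioned within the window."""
        now = now or datetime.now(timezone.utc)
        return self.valid and self.removed_at is None and now > self.removal_deadline


# Usage sketch
req = TakedownRequest(
    request_id="TKDN-001",
    reported_url="https://example.com/post/123",
    received_at=datetime.now(timezone.utc),
    valid=True,
)
print(req.removal_deadline, req.is_overdue())
```

In practice, a platform’s obligations (for example, identifying “known identical copies”) would require considerably more than deadline tracking, but the sketch shows how the statutory window could be operationalized.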

In May 2024, we released Part I of this series, in which we discussed agentic AI as an emerging technology enabling a new generation of AI-based hardware devices and software tools that can take actions on behalf of users. It turned out we were early – very early – to the discussion, with several months elapsing before agentic AI became as widely known and discussed as it is today. In this Part II, we return to the topic to explore legal issues concerning user liability for agentic AI-assisted transactions and open questions about existing legal frameworks’ applicability to the new generation of AI-assisted transactions.

Background: Snapshot of the Current State of “Agents”[1]

“Intelligent” electronic assistants are not new—the original generation, such as Amazon’s Alexa, has been offering narrow capabilities for specific tasks for more than a decade. However, as OpenAI’s CEO Sam Altman commented in May 2024, an advanced AI assistant or “super-competent colleague” could be the killer app of the future. Later, during a Reddit AMA session, Altman noted: “We will have better and better models. But I think the thing that will feel like the next giant breakthrough will be agents.” A McKinsey report on AI agents echoes this sentiment: “The technology is moving from thought to action.” Agentic AI represents not only a technological evolution, but also a potential means to further spread (and monetize) AI technology beyond its current uses by consumers and businesses. Major AI developers and others have already embraced this shift, announcing initiatives in the agentic AI space. For example:

  • Anthropic announced an updated frontier AI model in public beta capable of interacting with and using computers like human users;
  • Google unveiled Gemini 2.0, its new AI model for the agentic era, alongside Project Mariner, a prototype leveraging Gemini 2.0 to perform tasks via an experimental Chrome browser extension (while keeping a “human in the loop”);
  • OpenAI launched a “research preview” of Operator, an AI tool that can interface with computers on users’ behalf, and launched the beta feature “Tasks” in ChatGPT to facilitate ongoing or future task management beyond merely responding to real-time prompts;
  • LexisNexis announced the availability of “Protégé,” a personalized AI assistant with agentic AI capabilities;
  • Perplexity recently rolled out “Shop Like a Pro,” an AI-powered shopping recommendation and buying feature that allows Perplexity Pro users to research products and, for those merchants whose sites are integrated with the tool, purchase items directly on Perplexity; and
  • Amazon announced Alexa+, a new generation of Alexa that has agentic capabilities, including enabling Alexa to navigate the internet and execute tasks, as well as Amazon Nova Act, an AI model designed to perform actions within a web browser.

Beyond these examples, other startups and established tech companies are also developing AI “agents” in this country and overseas (including the invite-only release of Manus AI by Butterfly Effect, an AI developer in China). As a recent Microsoft piece speculates, the generative AI future may involve a “new ecosystem or marketplace of agents,” akin to the current smartphone app ecosystem. Although early agentic AI device releases have received mixed reviews and still seem to have much unrealized potential, they demonstrate the capability of such devices to execute multistep actions in response to natural language instructions.

Like prior technological revolutions—personal computers in the 1980s, e-commerce in the 1990s and smartphones in the 2000s—the emergence of agentic AI technology challenges existing legal frameworks. Let’s take a look at some of those issues – starting with basic questions about contract law.

Section 230 of the Communications Decency Act (the “CDA” or “Section 230”), famously described as “the 26 words that created the internet,” remains the subject of ongoing controversy. As extensively reported on this blog, the world of social media, user-generated content, and e-commerce has been consistently…

Generative AI has been virtually synonymous with “AI” in the public mind since the commercial breakout of ChatGPT in November 2022. Consumers and businesses have seen the fruits of impressive innovation in various generative models’ ability to create audio, video, images and text, analyze and transform data, perform Q&A chatbot…

Last week, OpenAI rolled out ChatGPT Team, a flexible subscription structure for small- to medium-sized businesses (with two or more users) that are not large enough to warrant the expense of a ChatGPT Enterprise subscription (which requires a minimum of 150 licensed users). Despite being less expensive than its Enterprise counterpart, ChatGPT Team provides for the use of the latest OpenAI models with the robust privacy, security and confidentiality protections that previously applied only to the ChatGPT Enterprise subscription and that are far more protective than the terms governing ordinary personal accounts. This development could be the proverbial “game changer” for smaller businesses, as, for the first time, they can have access to tools previously available only to OpenAI Enterprise customers, under OpenAI’s more favorable Business Terms and the privacy policies listed on the Enterprise Privacy page, without making the financial or technical commitment required under an Enterprise relationship.

Thus, for example, ChatGPT Team customers would be covered by the Business Terms’ non-training commitment (OpenAI’s Team announcement states: “We never train on your business data or conversations”), by other data security controls, and by OpenAI’s “Copyright Shield,” which offers indemnity for customers in the event that a generated output infringes third-party IP.[1] Moreover, under the enterprise-level privacy protections, customers can also create custom GPT models that are for in-house use and not shared with anyone else.

As noted above, until now, the protections under the OpenAI Business Terms were likely beyond reach for many small and medium-sized businesses, either because of the financial commitment required by OpenAI’s Enterprise agreement or because of the unavailability of the technical infrastructure necessary to implement the OpenAI API Service. In the past, such smaller entities might have resorted to having employees use free or paid OpenAI products under individual accounts, with internal precautions (like restrictive AI policies) in place to avoid confidentiality and privacy concerns.[2]
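For context, and purely as an illustration (the model name and prompt below are placeholders, not recommendations), a minimal call to the OpenAI API in Python looks roughly like the following; for most organizations, the real commitment lies in the surrounding infrastructure – key management, logging, access controls and usage policies – rather than the call itself:

```python
# Illustrative only: a minimal OpenAI API call using the official Python SDK.
# Assumes an API key is available (e.g., via the OPENAI_API_KEY environment
# variable); the model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "user", "content": "Summarize our new vendor policy in two sentences."}
    ],
)
print(response.choices[0].message.content)
```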

As we’ve seen over the last year, one generative AI provider’s rollout of a new product, tool or contractual protection often results in other providers following suit. Indeed, earlier this week Microsoft announced that it is “expanding Copilot for Microsoft 365 availability to small and medium-sized businesses.” With businesses of all sizes using, testing or developing custom GAI products to stay abreast of the competition, we will watch for future announcements from other providers about more flexible licensing plans for small- to medium-sized businesses.