On June 3, 2025, Oregon Governor Tina Kotek signed HB 2008 into law to amend the Oregon Consumer Privacy Act,[1] the state’s comprehensive data privacy law. Among other changes, effective January 1, 2026, the “sale” of two categories of personal data will be prohibited:

  • Precise geolocation information that can pinpoint an individual or device within a 1,750-foot radius, absent certain communications- or utility-related exceptions (see the sketch after this list)
  • Personal data of anyone under sixteen years of age, provided that the data controller “has actual knowledge that, or willfully disregards whether, the consumer is under 16 years of age”[2]
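
For teams operationalizing the geolocation threshold, the 1,750-foot figure can be translated into coordinate precision. The Python sketch below is a minimal illustration under our own assumptions (the function names, the latitude-only distance approximation, and the use of decimal rounding are not drawn from the statute); it is not legal guidance.

```python
import math

FEET_PER_DEGREE_LAT = 364_000   # approximate feet per degree of latitude
THRESHOLD_FEET = 1_750          # HB 2008's precise-geolocation radius

def is_precise_geolocation(accuracy_radius_feet: float) -> bool:
    """Treat a reading as 'precise geolocation' if it can pinpoint a
    person or device within the statute's 1,750-foot radius."""
    return accuracy_radius_feet <= THRESHOLD_FEET

def coarsen(lat: float, lon: float, min_radius_feet: float = THRESHOLD_FEET) -> tuple[float, float]:
    """Round coordinates so the implied positional uncertainty exceeds
    min_radius_feet (a rough, latitude-only approximation)."""
    # Each decimal place of latitude covers ~FEET_PER_DEGREE_LAT / 10**d feet,
    # so keep only enough decimals that one rounding step exceeds the radius.
    decimals = math.floor(math.log10(FEET_PER_DEGREE_LAT / min_radius_feet))
    return round(lat, decimals), round(lon, decimals)

# Example: coordinates coarsened so the data no longer pinpoints a
# device within 1,750 feet (latitude-wise).
print(coarsen(45.5152, -122.6784))   # -> (45.52, -122.68)
```

With a 1,750-foot target, the sketch keeps two decimal places, a granularity of roughly 3,640 feet of latitude per step; whether any particular rounding scheme takes data outside the statutory definition is, of course, a legal question.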

The location data provision echoes a similar prohibition enacted in Maryland last year.[3]

Location data is considered “sensitive” because it can be readily collected from mobile devices or web browsing activity and can reveal a great deal about an individual’s habits, interests, and movements. Beyond targeted advertising, anonymized location data can be a valuable source of alternative data for businesses gathering insights on competitors, consumer foot traffic, migration patterns, and population growth.

As a result, the Oregon law – and the possibility of similar enactments in other states restricting the sale of precise location data – represents an important development for data brokers and for entities that use such data for location-based advertising and profiling or to create other data products and insights. HB 2008’s definition of “sale” may affect not just direct sales of precise location data but also bundling and other licensing arrangements, subject to certain exceptions and permitted uses. The new law will also add to customers’ due diligence processes for examining their data vendors’ collection practices.

  • Law establishes national prohibition against nonconsensual online publication of intimate images of individuals, both authentic and computer-generated.
  • First federal law regulating AI-generated content.
  • Creates requirement that covered platforms promptly remove depictions upon receiving notice of their existence and a valid takedown request.
  • For many online service providers, complying with the Take It Down Act’s notice-and-takedown requirement may warrant revising their existing DMCA takedown notice provisions and processes.
  • Another carve-out to CDA immunity? More like a dichotomy of sorts…

On May 19, 2025, President Trump signed the bipartisan-supported Take It Down Act into law. The law prohibits any person from using an “interactive computer service” to publish, or threaten to publish, nonconsensual intimate imagery (NCII), including AI-generated NCII (colloquially known as revenge pornography or deepfake revenge pornography). Additionally, the law requires that, within one year of enactment, social media companies and other covered platforms implement a notice-and-takedown mechanism that allows victims to report NCII. Platforms must then remove properly reported imagery (and any known identical copies) within 48 hours of receiving a compliant request.
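
To make the mechanics concrete, here is a minimal Python sketch of how a platform might track compliant reports against the 48-hour clock and sweep known identical copies. The statute does not prescribe any implementation; the class names, the compliance flag, and the use of a SHA-256 digest to identify “known identical copies” are illustrative assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
import hashlib

REMOVAL_WINDOW = timedelta(hours=48)  # statutory removal deadline

@dataclass
class TakedownRequest:
    reporter: str
    content_hash: str      # SHA-256 digest of the reported image
    received_at: datetime
    is_compliant: bool     # e.g., includes identification and a good-faith statement

    @property
    def removal_deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

class TakedownQueue:
    def __init__(self) -> None:
        self.pending: list[TakedownRequest] = []

    def submit(self, req: TakedownRequest) -> bool:
        """Queue only compliant requests; others need cure or rejection."""
        if req.is_compliant:
            self.pending.append(req)
        return req.is_compliant

    def matches(self, image_bytes: bytes) -> list[TakedownRequest]:
        """Return pending requests whose digest matches this image, so
        byte-identical copies are removed along with the reported original."""
        digest = hashlib.sha256(image_bytes).hexdigest()
        return [r for r in self.pending if r.content_hash == digest]

# Example: a compliant request received now must be acted on within 48 hours.
queue = TakedownQueue()
req = TakedownRequest("reporter@example.com", "ab12...", datetime.now(timezone.utc), is_compliant=True)
queue.submit(req)
print(req.removal_deadline)  # received_at + 48 hours
```

Note that exact-hash matching captures only byte-identical copies; perceptual hashing or similar techniques could cast a wider net, but what counts as a “known identical copy” under the Act is ultimately a legal question.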

UPDATE (April 17, 2025): The below reflects a development occurring after our publication of the original post.

On April 11, 2025, the National Security Division (the “NSD”) released several documents setting out initial guidance on how to comply with the Rule, which the NSD refers to as the Data Security Program (the “DSP”).

  • Uses of Information Limited to “What is Reasonably Necessary”
  • Use of Deidentified Data Not Within Scope
  • Screen Scraping Survives

After a yearslong lead-up, the Consumer Financial Protection Bureau (CFPB) published its final “open banking” rule in October. The rule effectuates Section 1033 of the Consumer Financial Protection Act, which charged the CFPB with prescribing rules to give consumers access to their personal financial data.

The Federal Trade Commission (FTC) has long expressed concern about the potential misuse of location data. For example, in a 2022 blog post, “Location, health, and other sensitive information: FTC committed to fully enforcing the law against illegal use and sharing of highly sensitive data,” the agency termed the entire location data ecosystem “opaque,” and it has investigated the practices of, and brought enforcement actions against, mobile app operators and data brokers with respect to sensitive data.

One such enforcement action began with an August 2022 complaint against Kochava, Inc. (“Kochava”), a digital marketing and analytics firm, seeking an order halting Kochava’s alleged acquisition and downstream sale of “massive amounts” of precise geolocation data collected from consumers’ mobile devices. In that complaint, the FTC alleged that Kochava, in its role as a data broker, collects a wealth of information about consumers and their mobile devices by, among other means, purchasing data from outside entities to sell to its own customers. The FTC further alleged that the location data Kochava provided to its customers was not anonymized and that it was possible, using third-party services, to combine the geolocation data with other information to identify a mobile device’s user or owner.

In May 2023, an Idaho district court granted Kochava’s motion to dismiss the FTC’s complaint, with leave to amend. The FTC subsequently filed an amended complaint, and Kochava asked the court to keep it under seal, which the court did pending a ruling on the merits of the parties’ arguments.

On November 3, 2023, the court granted the FTC’s motion to unseal the amended complaint, finding no “compelling reason” to keep the amended complaint under seal and rejecting Kochava’s arguments that the amended complaint’s allegations were “knowingly false” or “misleading.” (FTC v. Kochava Inc., No. 22-00377 (D. Idaho Nov. 3, 2023)). As a result, the FTC’s amended complaint has been unsealed to the public.

On October 30, 2023, President Biden issued an “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” (Fact Sheet, here) designed to spur new AI safety and security standards, encourage the development of privacy-preserving technologies in conjunction with AI training, address certain instances of algorithmic discrimination, advance the responsible use of AI in healthcare, study the impacts of AI on the labor market, support AI research and a competitive environment in the industry, and issue guidance on the use of AI by federal agencies. This latest move builds on the White House’s previously released “Blueprint for an AI Bill of Rights” and its announcement this past summer that it had secured voluntary commitments from major AI companies focusing on what the White House termed “three principles that must be fundamental to the future of AI – safety, security, and trust.”

Roughly two weeks apart, on July 21, 2022 and August 5, 2022, respectively, Amazon made headlines for agreeing to acquire One Medical, “a human-centered and technology-powered primary care organization,” for approximately $3.9 billion, and iRobot, a global consumer robot company known for its creation of the Roomba vacuum, for approximately $1.7 billion.

On August 29, 2022, the Federal Trade Commission (FTC) announced that it had filed a complaint against Kochava, Inc. (“Kochava”), a digital marketing and analytics firm, seeking an order halting Kochava’s alleged acquisition and downstream sale of “massive amounts” of precise geolocation data collected from consumers’ mobile devices.

The complaint alleges that the data is collected in a format that would allow third parties to track consumers’ movements to and from sensitive locations, including those related to reproductive health, places of worship, and their private residences, among others.  The FTC alleged that “consumers have no insight into how this data is used” and that they do not typically know that inferences about them and their behaviors will be drawn from this information.  The FTC claimed that the sale or license of this sensitive data, which could present an “unwarranted intrusion” into personal privacy, was an unfair business practice under Section 5 of the FTC Act.