The Federal Trade Commission (FTC) has long expressed concern about the potential misuse of location data. For example, in a 2022 blog post, “Location, health, and other sensitive information: FTC committed to fully enforcing the law against illegal use and sharing of highly sensitive data,” the agency termed the entire location data ecosystem “opaque,” and it has since investigated practices and brought enforcement actions against mobile app operators and data brokers with respect to sensitive data.

One such FTC enforcement action began with an August 2022 complaint against Kochava, Inc. (“Kochava”), a digital marketing and analytics firm, seeking an order halting Kochava’s alleged acquisition and downstream sale of “massive amounts” of precise geolocation data collected from consumers’ mobile devices. In that complaint, the FTC alleged that Kochava, in its role as a data broker, collects a wealth of information about consumers and their mobile devices by, among other means, purchasing data from outside entities to sell to its own customers. Among other things, the FTC alleged that the location data Kochava provided to its customers was not anonymized and that it was possible, using third-party services, to combine the geolocation data with other information to identify a mobile device user or owner.
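
To illustrate the kind of re-identification risk the FTC describes, consider a minimal, hypothetical Python sketch. It assumes a dataset of timestamped latitude/longitude pings keyed to a mobile advertising ID (all identifiers and coordinates below are invented for illustration; this is not Kochava’s actual data or methodology): a device’s overnight pings tend to cluster around a single point, a likely home address, which could then be cross-referenced against public property records to name the device’s owner.

```python
# Hypothetical sketch: estimating a likely home location from
# "anonymized" ad-ID-keyed location pings. Invented data; for
# illustration only.
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Each ping: (mobile_ad_id, ISO timestamp, latitude, longitude)
pings = [
    ("ad-id-123", "2022-08-01T01:12:00", 43.4917, -112.0330),
    ("ad-id-123", "2022-08-02T02:47:00", 43.4918, -112.0331),
    ("ad-id-123", "2022-08-02T14:05:00", 43.5263, -112.0630),  # daytime, elsewhere
    ("ad-id-123", "2022-08-03T03:30:00", 43.4916, -112.0329),
]

def likely_home(pings, night_hours=range(0, 6)):
    """Average each ad ID's overnight coordinates; devices are
    usually at their owner's residence in the early morning."""
    by_device = defaultdict(list)
    for device, ts, lat, lon in pings:
        if datetime.fromisoformat(ts).hour in night_hours:
            by_device[device].append((lat, lon))
    return {
        device: (mean(lat for lat, _ in pts), mean(lon for _, lon in pts))
        for device, pts in by_device.items()
    }

print(likely_home(pings))
# {'ad-id-123': (43.4917, -112.033)} -- cross-referencing this point
# against property records could resolve the "anonymous" ID to a person.
```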

In May 2023, an Idaho district court granted Kochava’s motion to dismiss the FTC’s complaint, with leave to amend. The FTC then filed an amended complaint, and Kochava asked the court to keep it under seal; the court did so pending a ruling on the merits of the parties’ arguments.

On November 3, 2023, the court granted the FTC’s motion to unseal the amended complaint, finding no “compelling reason” to keep the amended complaint under seal and rejecting Kochava’s arguments that the amended complaint’s allegations were “knowingly false” or “misleading.” (FTC v. Kochava Inc., No. 22-00377 (D. Idaho Nov. 3, 2023)). As a result, the FTC’s amended complaint has been unsealed to the public.

By way of background, the FTC announced the filing of its original complaint against Kochava on August 29, 2022.

That complaint alleged that the data is collected in a format that allows third parties to track consumers’ movements to and from sensitive locations, including those related to reproductive health, places of worship, and consumers’ private residences. The FTC alleged that “consumers have no insight into how this data is used” and that they typically do not know that inferences about them and their behaviors will be drawn from this information. The FTC claimed that the sale or license of this sensitive data, which could present an “unwarranted intrusion” into personal privacy, was an unfair business practice under Section 5 of the FTC Act.

The Business Blog post referenced above, “Location, health, and other sensitive information: FTC committed to fully enforcing the law against illegal use and sharing of highly sensitive data,” was published on July 11, 2022. The post is likely related to an Executive Order (the “EO”) signed by President Biden in the wake of the Supreme Court’s Dobbs decision. Among other things, the EO directed the FTC to consider taking steps to protect consumers’ privacy when they seek information about and related to the provision of reproductive health care services.

While the latest drumbeat on this issue came from the FTC, we expect other regulators to turn their attention to it as well, including, perhaps, the Department of Justice and state attorneys general.

Although the FTC post centers on location data and reproductive health services, the collection and use of location data in general will likely face more scrutiny. This renewed focus will potentially subject a wide group of digital ecosystem participants to increased attention: the spotlight will likely fall on interactive platforms, app publishers, software development kit (SDK) developers, data brokers, and data analytics firms over practices concerning the collection, sharing, and perceived misuse of data generally.

On May 28, 2020, President Trump signed an Executive Order attempting to curtail legal protections under Section 230 of the Communications Decency Act (“Section 230” or the “CDA”). The Executive Order strives to clarify that Section 230 immunity “should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”

Section 230 protects online providers in many respects concerning the hosting of user-generated content and bars the imposition of distributor or publisher liability against a provider for the exercise of its editorial and self-regulatory functions with respect to such user content. In response to moderation efforts directed at the President’s own social media posts earlier that week, the Executive Order seeks to remedy what the President claims is the social media platforms’ “selective censorship” of user content and the “flagging” of content that does not violate a provider’s terms of service.

The Executive Order does a number of things. It first directs:

  • The Commerce Department to file a petition for rulemaking with the FCC to clarify certain aspects of CDA immunity for online providers, namely Good Samaritan immunity under 47 U.S.C. §230(c)(2). Good Samaritan immunity provides that an interactive computer service provider may not be held liable “on account of” its decision in “good faith” to restrict access to content that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”
    • The provision essentially gives providers leeway to screen out or remove objectionable content in good faith from their platforms without fear of liability. Courts have generally held that the section does not require that the material actually be objectionable; rather, the CDA affords protection for blocking material that the provider or user considers to be “objectionable.” However, Section 230(c)(2)(A) requires that providers act in “good faith” in screening objectionable content.
    • The Executive Order states that this provision should not be “distorted” to protect providers that engage in “deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree.” The Order asks the FCC to clarify “the conditions under which an action restricting access to or availability of material is not ‘taken in good faith’” within the meaning of the CDA, specifically when such decisions are inconsistent with a provider’s terms of service or taken without adequate notice to the user.
    • Interestingly, the Order also directs the FCC to clarify whether there are any circumstances in which a provider that screens out content under the CDA’s Good Samaritan protection, but fails to meet the statutory requirements, should still be able to claim “publisher” immunity under CDA Section 230(c)(1) (as, according to the Order, such decisions would be the provider’s own “editorial” decisions).
    • To be sure, courts in recent years have invoked the more familiar Section 230(c)(1) publisher immunity to dismiss claims against services for terminating user accounts or screening out content that violates content policies, stating repeatedly that decisions to terminate an account (or not to publish user content) are publisher decisions protected under the CDA. The Executive Order thus appears to suggest that years of federal court decisions, from the landmark 1997 Zeran case until today, that have espoused broad CDA immunity for providers’ “traditional editorial functions” regarding third-party content (which include the decision whether to publish, withdraw, postpone or alter content provided by another), were perhaps wrongly decided.

Teami, LLC (“Teami”), a marketer of teas and skincare products, agreed to settle FTC charges alleging that the social media influencers it retained did not sufficiently disclose that they were being paid to promote Teami’s products. The FTC’s complaint also included allegations that Teami made unsupported weight-loss and health claims about its products, an issue that is beyond the scope of this blog post. The Stipulated Order for Permanent Injunction and Monetary Judgment was approved by a Florida district court on March 17, 2020.

This settlement is significant in that it identifies clear steps an advertiser can take to avoid similar FTC allegations of deception with respect to paid endorsers. Compliance in this area remains an ongoing concern, as the FTC reiterated in a statement accompanying the settlement: “[T]he Commission is committed to seeking strong remedies against advertisers that deceive consumers because deceptive or inaccurate information online prevents consumers from making informed purchasing decisions….”

In August 2019, the FTC entered into a proposed settlement with Unrollme Inc. (“Unrollme”), a free personal email management service that offers to help consumers manage the flood of subscription emails in their inboxes. The FTC alleged that Unrollme made certain deceptive statements to consumers, who may have had privacy concerns, to persuade them to grant the company access to their email accounts. (In re Unrollme Inc., File No. 172 3139 (FTC proposed settlement announced Aug. 8, 2019).)

This settlement touches on many relevant issues, including the delicate nature of online providers’ privacy practices relating to consumer data collection, the importance of consumers understanding the extent of data collection when signing up for and consenting to a new online service or app, and the need for downstream recipients of anonymized market data to understand how such data is collected and processed. (See also our prior post covering an enforcement action involving user geolocation data collected from a mobile weather app.)

Senators Brian Schatz (D) and Roy Blunt (R) recently introduced S.847, the “Commercial Facial Recognition Privacy Act of 2019,” a bill that would, subject to certain important exceptions, generally prohibit the commercial use of facial recognition technology to identify and track consumers without consent. The bill, as drafted, would place limitations on the third-party sharing of collected faceprint data and would require covered entities to meet certain minimum data security standards. As the bill wends its way through Congress (it has been referred to the Committee on Commerce, Science, and Transportation), it is worth watching: it is a bipartisan bill with a narrow scope that has garnered early conceptual support from Microsoft and other technology companies.

On August 29, 2016, a Ninth Circuit panel unanimously held that the FTC has no power to challenge the “throttling” of unlimited data plan customers by mobile broadband providers as an “unfair or deceptive act.” The panel found that a core source of FTC authority (Section 5 of the FTC Act) does not apply to “common carriers” that are subject to regulation under the Communications Act of 1934. (FTC v. AT&T Mobility LLC, No. 14-04785 (9th Cir. Aug. 29, 2016)).