The Federal Trade Commission (FTC) has long expressed concern about the potential misuse of location data. For example, in a 2022 blog post, “Location, health, and other sensitive information: FTC committed to fully enforcing the law against illegal use and sharing of highly sensitive data,” the agency termed the entire location data ecosystem “opaque,” and it has investigated the practices of, and brought enforcement actions against, mobile app operators and data brokers with respect to sensitive data.

One such FTC enforcement action began with an August 2022 complaint against Kochava, Inc. (“Kochava”), a digital marketing and analytics firm, seeking an order halting Kochava’s alleged acquisition and downstream sale of “massive amounts” of precise geolocation data collected from consumers’ mobile devices. In that complaint, the FTC alleged that Kochava, in its role as a data broker, collects a wealth of information about consumers and their mobile devices by, among other means, purchasing data from outside entities to sell to its own customers. The FTC further alleged that the location data Kochava provided to its customers was not anonymized and that it was possible, using third-party services, to combine the geolocation data with other information to identify a mobile device user or owner.
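For context on how that kind of re-identification can work, the following is a minimal, purely illustrative Python sketch (the data, device identifiers, and field names are hypothetical; this is not Kochava’s actual data feed or the FTC’s methodology). It shows why a persistent device ID paired with precise latitude/longitude can be identifying: a device’s most common nighttime location is usually its owner’s home, which an ordinary address record can then tie to a named person.

```python
# Purely illustrative sketch with hypothetical data and field names; this is
# not Kochava's actual feed or the FTC's methodology. It demonstrates the
# general re-identification risk: a persistent device ID plus precise
# lat/long pings can be joined against an address record to name a person.
from collections import Counter
from datetime import datetime

# Hypothetical location feed rows: (device_ad_id, lat, lon, ISO timestamp)
pings = [
    ("ad-id-123", 43.4917, -112.0340, "2022-06-01T01:14:00"),
    ("ad-id-123", 43.4917, -112.0341, "2022-06-02T02:03:00"),
    ("ad-id-123", 43.4660, -112.0451, "2022-06-02T12:30:00"),
]

# Hypothetical "other information": an address record keyed by coarse grid cell.
address_record = {
    (43.492, -112.034): "J. Doe, 123 Elm St, Idaho Falls, ID",
}

def grid_cell(lat, lon, places=3):
    """Round coordinates to roughly 100 m cells so nearby pings group together."""
    return (round(lat, places), round(lon, places))

def infer_home_cell(pings, device_id):
    """Return the device's most frequent overnight (before 6 a.m.) cell."""
    overnight = Counter(
        grid_cell(lat, lon)
        for dev, lat, lon, ts in pings
        if dev == device_id and datetime.fromisoformat(ts).hour < 6
    )
    cell, _count = overnight.most_common(1)[0]
    return cell

home = infer_home_cell(pings, "ad-id-123")
print(address_record.get(home, "no match"))  # -> J. Doe, 123 Elm St, ...
```

The point of the sketch is the join, not the arithmetic: once a single stream of pings is tied to a home address, every other location attached to that device identifier is effectively tied to a person.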

In May 2023, an Idaho district court granted Kochava’s motion to dismiss the FTC’s complaint, with leave to amend. The FTC subsequently filed an amended complaint, and at Kochava’s request the court kept the amended complaint under seal until it could rule on the merits of the parties’ arguments.

On November 3, 2023, the court granted the FTC’s motion to unseal the amended complaint, finding no “compelling reason” to keep the amended complaint under seal and rejecting Kochava’s arguments that the amended complaint’s allegations were “knowingly false” or “misleading.” (FTC v. Kochava Inc., No. 22-00377 (D. Idaho Nov. 3, 2023)). As a result, the FTC’s amended complaint has been unsealed to the public.

On October 30, 2023, President Biden issued an “Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence” (Fact Sheet, here) designed to spur new AI safety and security standards, encourage the development of privacy-preserving technologies in conjunction with AI training, address certain instances of algorithmic discrimination, advance the responsible use of AI in healthcare, study the impacts of AI on the labor market, support AI research and a competitive environment in the industry, and issue guidance on the use of AI by federal agencies. This latest move builds on the White House’s previously released “Blueprint for an AI Bill of Rights” and its announcement this past summer that it had secured voluntary commitments from major AI companies focusing on what the White House termed “three principles that must be fundamental to the future of AI – safety, security, and trust.”

On August 29, 2022, the Federal Trade Commission (FTC) announced that it had filed a complaint against Kochava, Inc. (“Kochava”), a digital marketing and analytics firm, seeking an order halting Kochava’s alleged acquisition and downstream sale of “massive amounts” of precise geolocation data collected from consumers’ mobile devices.

The complaint alleges that the data is collected in a format that would allow third parties to track consumers’ movements to and from sensitive locations, including those associated with reproductive health, places of worship, and private residences, among others. The FTC alleged that “consumers have no insight into how this data is used” and that they typically do not know that inferences about them and their behaviors will be drawn from this information. The FTC claimed that the sale or license of this sensitive data, which could present an “unwarranted intrusion” into personal privacy, constituted an unfair business practice under Section 5 of the FTC Act.

On August 11, 2022, the Federal Trade Commission (FTC) issued an Advance Notice of Proposed Rulemaking (ANPR) and announced it was exploring a rulemaking process to “crack down on harmful commercial surveillance” and lax data security.  The agency defines commercial surveillance as “the collection, aggregation, analysis, retention, transfer, or monetization of consumer data and the direct derivatives of that information.”

The FTC View

The FTC has not released any proposed rules but seeks public comment on the harms stemming from commercial surveillance and on whether new rules are needed to protect consumer data privacy. As part of the ANPR, and before setting out a host of questions for public comment, the FTC offers its take on the opaque ecosystem surrounding the collection of mobile data and personal information (which, the FTC asserts, often occurs without consumers’ full understanding). The FTC discusses the subsequent sharing and sale of information to data aggregators and brokers, which then sell data access or data analysis products to marketers, researchers, or other businesses interested in gaining insights from alternative data sources.

The agency argues that, based on news reporting, published research, and its own enforcement actions, the benefits of the current consumer data marketplace may be outweighed by “harmful commercial surveillance and lax data security practices,” thus potentially requiring rules to protect consumers and to offer companies more regulatory clarity beyond the FTC’s case-by-case enforcement. As FTC Chair Lina Khan said in her statement accompanying the ANPR: “[T]he growing digitization of our economy—coupled with business models that can incentivize endless hoovering up of sensitive user data and a vast expansion of how this data is used—means that potentially unlawful practices may be prevalent, with case-by-case enforcement failing to adequately deter lawbreaking or remedy the resulting harms.”

FTC Invitation for Comment

After describing the FTC view on the issues, the Commission invites public comment on whether it should implement new trade regulation rules or other regulatory alternatives concerning the ways companies (1) collect, aggregate, protect, use, analyze, and retain consumer data, as well as (2) transfer, share, sell, or otherwise monetize that data in ways that are unfair or deceptive. The ANPR contains myriad questions (too numerous to list here; a fact sheet is available here, and the press release also offers a breakdown). Perhaps, though, the multimillion-dollar questions asked by the agency are: Which kinds of data should be subject to a potential privacy rule? To what extent, if at all, should a new regulation impose limitations on companies’ collection, use, and retention of consumer data?

On July 11, 2022, the Federal Trade Commission (FTC) published “Location, health, and other sensitive information: FTC committed to fully enforcing the law against illegal use and sharing of highly sensitive data,” on its Business Blog.  The blog post is likely related to an Executive Order (the “EO”) signed by President Biden in the wake of the Supreme Court’s Dobbs decision. Among other things, the EO directed the FTC to consider taking steps to protect consumers’ privacy when seeking information about and related to the provision of reproductive health care services.

While this latest drumbeat came from the FTC, we expect other regulators to turn their attention to the issue as well, including, perhaps, the Department of Justice and state attorneys general.

Although the FTC post centers on location data and reproductive health services, it is likely that the collection and use of location data in general will face more scrutiny. This renewed focus will potentially subject a wide group of digital ecosystem participants to increased attention: the spotlight will likely fall on interactive platforms, app publishers, software development kit (SDK) developers, data brokers, and data analytics firms over practices concerning the collection, sharing, and perceived misuse of data generally.

On March 9, 2022, the President issued an Executive Order (the “E.O.”) that articulates a high-level, wide-ranging national strategy for regulating and fostering innovation in the burgeoning digital assets space. The strategy is intended to encourage innovation yet still provide adequate oversight to control systemic risks and the attendant investor…

On September 14, 2021, the Securities and Exchange Commission (“SEC”) filed a settled securities fraud action against App Annie Inc., one of the largest sellers of market data on how apps on mobile devices are performing, and its co-founder and former CEO and Chairman Bertrand Schmitt. The settlement is the…

The currents around the Communications Decency Act just got a little more turbulent as the White House and executive branch try to reel in the big fish of CDA reform.

On July 27, 2020, the Commerce Department submitted a petition requesting that the FCC initiate a rulemaking to clarify the provisions of Section 230 of the Communications Decency Act (CDA). Unrelated, but part of the same fervor in Washington to “rein in social media,” the leaders of the major technology companies appeared before the House Judiciary Antitrust Subcommittee at a hearing yesterday, July 29, 2020, to discuss the Committee’s ongoing investigation of competition in the digital marketplace, where some members inquired about content moderation practices. Moreover, last month, a pair of Senators introduced the PACT Act, a targeted (but still substantial) update to the CDA (and other CDA reform bills are also being debated, including a bill to carve out sexually exploitative material involving children from the CDA’s reach).

President Trump signed an Executive Order today attempting to curtail legal protections under Section 230 of the Communications Decency Act (“Section 230” or the “CDA”). The Executive Order strives to clarify that Section 230 immunity “should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”

Section 230 protects online providers in many respects concerning the hosting of user-generated content and bars the imposition of distributor or publisher liability against a provider for the exercise of its editorial and self-regulatory functions with respect to such user content.  In response to certain moderation efforts toward the President’s own social media posts this week, the Executive Order seeks to remedy what the President claims is the social media platforms’ “selective censorship” of user content and the “flagging” of content that does not violate a provider’s terms of service.

The Executive Order does a number of things. It first directs:

  • The Commerce Department to file a petition for rulemaking with the FCC to clarify certain aspects of CDA immunity for online providers, namely Good Samaritan immunity under 47 U.S.C. §230(c)(2). Good Samaritan immunity provides that an interactive computer service provider may not be held liable “on account of” its decision in “good faith” to restrict access to content that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.”
    • The provision essentially gives providers leeway to screen out or remove objectionable content in good faith from their platforms without fear of liability. Courts have generally held that the section does not require that the material actually be objectionable; rather, the CDA affords protection for blocking material that the provider or user considers to be “objectionable.” However, Section 230(c)(2)(A) requires that providers act in “good faith” in screening objectionable content.
    • The Executive Order states that this provision should not be “distorted” to protect providers that engage in “deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree.” The order asks the FCC to clarify “the conditions under which an action restricting access to or availability of material is not ‘taken in good faith’” within the meaning of the CDA, specifically when such decisions are inconsistent with a provider’s terms of service or taken without adequate notice to the user.
    • Interestingly, the Order also directs the FCC to clarify whether there are any circumstances in which a provider that screens out content under the CDA’s Good Samaritan protection but fails to meet the statutory requirements should still be able to claim “publisher” immunity under CDA Section 230(c)(1) (as, according to the Order, such decisions would be the provider’s own “editorial” decisions).
    • To be sure, courts in recent years have dismissed claims against services for terminating user accounts or screening out content that violates content policies using the more familiar Section 230(c)(1) publisher immunity, stating repeatedly that decisions to terminate an account (or not publish user content) are publisher decisions protected under the CDA. It appears that the Executive Order is suggesting that years of federal court precedent – from the landmark 1997 Zeran case until today – that have espoused broad immunity under the CDA for providers’ “traditional editorial functions” regarding third-party content (which include the decision whether to publish, withdraw, postpone or alter content provided by another) were perhaps decided in error.

UPDATE: On the afternoon of May 28, 2020, the President signed the executive order concerning CDA Section 230. A copy/link to the order has not yet been posted on the White House’s website.

According to news reports, the Trump Administration (the “Administration”) is drafting and the President is set to sign an executive order to attempt to curtail legal protections under Section 230 of the Communications Decency Act (“Section 230” or the “CDA”). Section 230 protects online providers in many respects concerning the hosting of user-generated content and bars the imposition of distributor or publisher liability against a provider for the exercise of its editorial and self-regulatory functions with respect to such user content. In response to certain moderation efforts toward the President’s own social media posts this week, the executive order will purportedly seek to remedy what the President claims is the social media platforms’ “selective censorship” of user content and the “flagging” of content that is inappropriate, “even though it does not violate any stated terms of service.”

A purported draft of the executive order was leaked online. If issued, the executive order would, among other things, direct federal agencies to limit monies spent on social media advertising on platforms that violate free speech principles, and direct the White House Office of Digital Strategy to reestablish its online bias reporting tool and forward any complaints to the FTC. The draft executive order suggests that the FTC use its power to regulate deceptive practices against those platforms that fall under Section 230 to the extent they restrict speech in ways that do not match their posted terms or policies. The order also would direct the DOJ to establish a working group with state attorneys general to study how state consumer protection laws could be applied to social media platforms’ moderation practices. Interestingly, the draft executive order would also direct the Commerce Department to file a petition for rulemaking with the FCC to clarify the conditions under which an online provider removes “objectionable content” in good faith under the CDA’s Good Samaritan provision (a lesser-known, yet important, companion to the better-known “publisher” immunity provision).