President Trump signed an Executive Order today attempting to curtail legal protections under Section 230 of the Communications Decency Act (“Section 230” or the “CDA”). The Executive Order strives to clarify that Section 230 immunity “should not extend beyond its text and purpose to provide protection for those who purport to provide users a forum for free and open speech, but in reality use their power over a vital means of communication to engage in deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”

Section 230 protects online providers in many respects concerning the hosting of user-generated content and bars the imposition of distributor or publisher liability on a provider for the exercise of its editorial and self-regulatory functions with respect to such user content. In response to certain moderation actions taken against the President’s own social media posts this week, the Executive Order seeks to remedy what the President claims is the social media platforms’ “selective censorship” of user content and the “flagging” of content that does not violate a provider’s terms of service.

The Executive Order does a number of things. It first directs:

  • The Commerce Department to file a petition for rulemaking with the FCC to clarify certain aspects of CDA immunity for online providers, namely Good Samaritan immunity under 47 U.S.C. §230(c)(2). Good Samaritan immunity provides that an interactive computer service provider may not be held liable “on account of” its decision in “good faith” to restrict access to content that it considers to be “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable.”
    • The provision essentially gives providers leeway to screen out or remove objectionable content in good faith from their platforms without fear of liability. Courts have generally held that the section does not require that the material actually be objectionable; rather, the CDA affords protection for blocking material that the provider or user considers to be “objectionable.” However, Section 230(c)(2)(A) requires that providers act in “good faith” in screening objectionable content.
    • The Executive Order states that this provision should not be “distorted” to protect providers that engage in “deceptive or pretextual actions (often contrary to their stated terms of service) to stifle viewpoints with which they disagree.” The order asks the FCC to clarify “the conditions under which an action restricting access to or availability of material is not ‘taken in good faith’” within the meaning of the CDA, specifically when such decisions are inconsistent with a provider’s terms of service or taken without adequate notice to the user.
    • Interestingly, the Order also directs the FCC to clarify whether there are any circumstances in which a provider that screens out content under the CDA’s Good Samaritan protection but fails to meet the statutory requirements should still be able to claim “publisher” immunity under CDA Section 230(c)(1) (as, according to the Order, such decisions would be the provider’s own “editorial” decisions).
    • To be sure, courts in recent years have relied on the more familiar Section 230(c)(1) publisher immunity to dismiss claims against services for terminating user accounts or screening out content that violates content policies, stating repeatedly that decisions to terminate an account (or not to publish user content) are publisher decisions protected under the CDA. It appears that the Executive Order is suggesting that years of federal court decisions, from the landmark 1997 Zeran case until today, that have espoused broad immunity under the CDA for providers’ “traditional editorial functions” regarding third-party content (which include the decision whether to publish, withdraw, postpone or alter content provided by another) were perhaps decided in error.

UPDATE: On the afternoon of May 28, 2020, the President signed the executive order concerning CDA Section 230. A copy of the order has not yet been posted on the White House’s website.

According to news reports, the Trump Administration (the “Administration”) is drafting and the President is set to sign an executive order to attempt to curtail legal protections under Section 230 of the Communications Decency Act (“Section 230” or the “CDA”). Section 230 protects online providers in many respects concerning the hosting of user-generated content and bars the imposition of distributor or publisher liability on a provider for the exercise of its editorial and self-regulatory functions with respect to such user content. In response to certain moderation actions taken against the President’s own social media posts this week, the executive order will purportedly seek to remedy what the President claims is the social media platforms’ “selective censorship” of user content and the “flagging” of content that is inappropriate, “even though it does not violate any stated terms of service.”

A purported draft of the executive order was leaked online. If issued, the executive order would, among other things, direct federal agencies to limit monies spent on social media advertising on platforms that violate free speech principles, and direct the White House Office of Digital Strategy to reestablish its online bias reporting tool and forward any complaints to the FTC. The draft executive order suggests that the FTC use its power to regulate deceptive practices against those platforms that fall under Section 230 to the extent they restrict speech in ways that do not match their posted terms or policies. The order also would direct the DOJ to establish a working group with state attorneys general to study how state consumer protection laws could be applied to social media platforms’ moderation practices. Interestingly, the draft executive order would also direct the Commerce Department to file a petition for rulemaking with the FCC to clarify the conditions under which an online provider removes “objectionable content” in good faith under the CDA’s Good Samaritan provision (a lesser-known yet important companion to the better-known “publisher” immunity provision).

In 2018, Congress passed the Foreign Investment Risk Review Modernization Act (FIRRMA) to modernize the Committee on Foreign Investment in the United States (CFIUS). CFIUS is chaired by the Secretary of the Treasury and is empowered to review certain transactions involving foreign investment in the U.S.

On January 7, 2020, the Securities and Exchange Commission’s Office of Compliance Inspections and Examinations (OCIE) announced its 2020 examination priorities. In doing so, OCIE identified certain areas of technology-related concern, in particular the issues of alternative data and cybersecurity.

On January 7, 2020, the federal Office of Management and Budget (OMB) released a draft memorandum setting forth guidance to assist federal agencies in developing regulatory and non-regulatory approaches to artificial intelligence (AI). The draft guidance will be available for public comment for sixty days, after which it will be finalized and issued to federal agencies.

According to the draft, the guidance was developed with the intent to reduce barriers to innovation while also balancing privacy and security concerns and respect for IP. The proposed guidance features ten principles to guide regulatory approaches to AI applications.  In addition, in what may be a boon for those in the private sector developing AI infrastructure, the OMB reinforces the objective of making federal data and models generally available to the private sector for non-federal use in developing AI systems.

Initial responses to the proposed guidance have been mixed, and it remains to be seen how the principles in the guidance (when finalized) will be put into practice. Notably, however, those who intend to invest significant resources in AI-based infrastructure should be aware of what may prove to be the emerging blueprint for AI regulation in the near future.

With the online shopping season in full swing, the FTC decided that online retailers might benefit from a reminder as to the dos and don’ts for social media influencers. Thus, the FTC released a new guide, “Disclosures 101 for Social Media Influencers,” that reiterates its position about the responsibility of “influencers” to disclose “material” connections with brands when endorsing products in online posts. Beyond this new guide, which is written in an easy-to-read brochure format (with headings such as “How to Disclose” and “When to Disclose”), the FTC released related videos to convey the message that influencers should “stay on the right side of the law” and disclose, when appropriate, the relationship with a brand he or she is endorsing. This latest reminder to influencers comes on the heels of the FTC sending 90 letters to influencers in April 2017 notifying them of their responsibilities under the FTC’s Endorsement Guides, and the prior publishing of an Endorsement Guides FAQ. With the release of fresh guidance, now is a good time for brands with relationships with influencers to ensure endorsements are not deceptive and remain on the right side of the law. Indeed, advertisers should have reasonable programs in place to train and monitor members of their influencer networks, and influencers themselves should remain aware of the requirements under the Endorsement Guides.

This week, the FTC entered into a proposed settlement with Unrollme Inc. (“Unrollme”), a free personal email management service that offers to assist consumers in managing the flood of subscription emails in their inboxes. The FTC alleged that Unrollme made certain deceptive statements to consumers, who may have had privacy concerns, to persuade them to grant the company access to their email accounts. (In re Unrollme Inc., File No. 172 3139 (FTC proposed settlement announced Aug. 8, 2019)).

This settlement touches on many relevant issues, including the delicate nature of online providers’ privacy practices relating to consumer data collection, the importance of consumers understanding the extent of data collection when signing up for and consenting to a new online service or app, and the need for downstream recipients of anonymized market data to understand how such data is collected and processed. (See also our prior post covering an enforcement action involving user geolocation data collected from a mobile weather app.)

In an effort to modernize communications, the Federal Communications Commission (“FCC”) decided to allow cable operators to deliver general subscriber notices required under the so-called Subpart T rules (47 CFR §§ 76.1601 et seq.) to verified customer email addresses. This decision was announced in a Report and Order on November 15, 2018. The update is part of a greater trend toward using electronic communications and electronic contracting in place of paper, as supported by the federal Electronic Signatures in Global and National Commerce Act (“E-Sign Act”) and related state laws. The E-Sign Act allows electronic records to satisfy legal requirements that certain information be provided in writing if the consumer has affirmatively consented to such use. However, the E-Sign Act also allows federal agencies like the FCC to exempt a specified category or type of record from the normally required consent requirements if doing so makes the agency’s requirements less burdensome and does not harm consumers. In this case, based on an understanding that it would be impractical for cable operators to attempt to receive permission from each individual customer prior to initiating electronic delivery of these general notices, the FCC waived the consent requirement pursuant to its discretion under the E-Sign Act.

Last week the WSJ published an article detailing how companies are monetizing smartphone location data by selling it to hedge fund clients. The data vendor featured in the WSJ article obtains geolocation data from about 1,000 apps; fund managers use that data to predict trends involving public companies.