UPDATE: On the afternoon of May 28, 2020, the President signed the executive order concerning CDA Section 230. A copy/link to the order has not yet been posted on the White House’s website.

According to news reports, the Trump Administration (the “Administration”) is drafting, and the President is set to sign, an executive order attempting to curtail legal protections under Section 230 of the Communications Decency Act (“Section 230” or the “CDA”). Section 230 protects online providers in many respects concerning the hosting of user-generated content and bars the imposition of distributor or publisher liability against a provider for the exercise of its editorial and self-regulatory functions with respect to such user content. In response to certain moderation efforts directed at the President’s own social media posts this week, the executive order will purportedly seek to remedy what the President claims is the social media platforms’ “selective censorship” of user content and the “flagging” of content as inappropriate, “even though it does not violate any stated terms of service.”

A purported draft of the executive order was leaked online. If issued, the executive order would, among other things, direct federal agencies to limit monies spent on social media advertising on platforms that violate free speech principles, and direct the White House Office of Digital Strategy to reestablish its online bias reporting tool and forward any complaints to the FTC. The draft executive order suggests that the FTC use its power to regulate deceptive practices against platforms covered by Section 230 to the extent they restrict speech in ways that do not match posted terms or policies. The order also would direct the DOJ to establish a working group with state attorneys general to study how state consumer protection laws could be applied to social media platforms’ moderation practices. Interestingly, the draft would also direct the Commerce Department to file a petition for rulemaking with the FCC to clarify the conditions under which an online provider removes “objectionable content” in good faith under the CDA’s Good Samaritan provision (a lesser-known, yet important companion to the better-known “publisher” immunity provision).

Late last month, the French data protection authority, the CNIL, published guidance surrounding what it calls “commercial prospecting,” meaning scraping publicly available website data to obtain individuals’ contact information for purposes of selling such data to third parties for direct marketing. The guidance is noteworthy in two respects.

Despite continued scrutiny over the legal immunity online providers enjoy under Section 230 of the Communications Decency Act (CDA), online platforms continue to successfully invoke its protections. This is illustrated by three recent decisions in which courts dismissed claims that sought to impose liability on providers for hosting or restricting access to user content and for providing a much-discussed social media app filter.

In one case, a California district court dismissed a negligence claim against online real estate database Zillow over a fraudulent posting, holding that any alleged duty to monitor new users and prevent false listing information inherently derives from Zillow’s status as a publisher and is therefore barred by the CDA. (924 Bel Air Road LLC v. Zillow Group Inc., No. 19-01368 (C.D. Cal. Feb. 18, 2020)). In the second, the Ninth Circuit, in an important ruling, affirmed the dismissal of claims against YouTube for violations of the First Amendment and the Lanham Act over its decision to restrict access to the plaintiff’s uploaded videos. The Ninth Circuit found that despite YouTube’s ubiquity and its role as a public-facing platform, it is a private forum not subject to judicial scrutiny under the First Amendment. It also found that YouTube’s statements concerning its content moderation policies could not form a basis for false advertising liability. (Prager Univ. v. Google LLC, No. 18-15712 (9th Cir. Feb. 26, 2020)). And in a third case, the operator of the messaging app Snapchat was granted CDA immunity in a wrongful death suit arising from a high-speed automobile crash; one of the boys in the car had sent a snap using the app’s Speed Filter, which captured the speed of the car at 123 MPH, minutes before the fatal accident. (Lemmon v. Snap, Inc., No. 19-4504 (C.D. Cal. Feb. 25, 2020)).

With the online shopping season in full swing, the FTC decided that online retailers might benefit from a reminder as to the dos and don’ts for social media influencers. Thus, the FTC released a new guide, “Disclosures 101 for Social Media Influencers,” that reiterates its position on the responsibility of “influencers” to disclose “material” connections with brands when endorsing products in online posts. Beyond this new guide, which is written in an easy-to-read brochure format (with headings such as “How to Disclose” and “When to Disclose”), the FTC released related videos to convey the message that influencers should “stay on the right side of the law” and disclose, when appropriate, their relationship with the brand they are endorsing. This latest reminder comes on the heels of the FTC sending 90 letters to influencers in April 2017 notifying them of their responsibilities under the FTC’s Endorsement Guides, and the prior publication of an Endorsement Guides FAQ. With the release of fresh guidance, now is a good time for brands that work with influencers to ensure endorsements are not deceptive and remain on the right side of the law. Indeed, advertisers should have reasonable programs in place to train and monitor members of their influencer networks, and influencers themselves should remain aware of the requirements of the Endorsement Guides.

On October 11, 2019, LinkedIn Corp. (“LinkedIn”) filed a petition for rehearing en banc of the Ninth Circuit’s blockbuster decision in hiQ Labs, Inc. v. LinkedIn Corp., No. 17-16783 (9th Cir. Sept. 9, 2019). The crucial question before the original panel concerned the scope of the Computer Fraud and Abuse Act (CFAA) and whether it can be used to prevent the scraping of publicly available website data.

UPDATE: On October 14, 2019, the parties entered into a Joint Stipulation dismissing the case, with prejudice.  It appears from some reports that Stackla’s access to Facebook has been reinstated as part of the settlement.
UPDATE: On September 27, 2019, the California district court issued its written order denying Stackla’s request for a TRO.  In short, the court found that, at this early stage, Stackla only demonstrated “speculative harm” and its “vague statements” did not sufficiently show that restoration of access to Facebook’s API would cure the alleged impending reality of Stackla losing customers and being driven out of business (“The extraordinary relief of a pre-adjudicatory injunction demands more precision with respect to when irreparable harm will occur than ‘soon.’”).  As for weighing whether a TRO would be in the public interest, the court, while understanding Stackla’s predicament, found that issuing a TRO could hamper Facebook’s ability to “decisively police its social-media platforms” and that there was a public interest in allowing a company to police the integrity of its platforms (“Facebook’s enforcement activities would be compromised if judicial review were expected to precede rather than follow its enforcement actions”). [emphasis in original]. This ruling leaves the issue for another day, perhaps during a preliminary injunction hearing, after some additional briefing of the issues.

The ink is barely dry on the landmark Ninth Circuit hiQ Labs decision, yet a new dispute has already cropped up testing the bounds of the CFAA and the ability of a platform to enforce terms restricting unauthorized scraping of social media content. (See Stackla, Inc. v. Facebook, Inc., No. 19-5849 (N.D. Cal. filed Sept. 19, 2019)). This dispute involves Facebook and a social media sentiment tracking company, Stackla, Inc., which, as part of its business, accesses Facebook and Instagram content. This past Wednesday, September 25th, the judge in the case denied Stackla’s request for emergency relief restoring its access to Facebook’s platform. While the judge has yet to issue a written ruling, the initial pleadings and memoranda filed in the case are noteworthy and raise important issues surrounding the hot-button topic of scraping.

The Stackla dispute has echoes of hiQ v. LinkedIn. Both involve the open nature of “public” websites (although the “public” nature of the content at issue here appears to be in dispute). Both address whether the Computer Fraud and Abuse Act (the “CFAA”) can be used as a tool to prevent the scraping of such sites. Both address how a platform may use its terms of use to prohibit automated scraping or data collection beyond the scope of those terms, although the discussion in hiQ was extremely brief. And like hiQ, Stackla asserts that without the ability to use Facebook and Instagram data, it would be out of business; thus, both disputes address whether a court’s equitable powers should come into play when a platform’s termination of access would result in a particular company’s insolvency. Given the Ninth Circuit’s opinion in favor of hiQ, Stackla’s lawyers likely believed that decision was their golden ticket in this case. The judge’s ruling on the request for emergency relief suggests they may be disappointed.