The appetite for acquisitions and investment in online businesses has never been stronger, and many of the most attractive online opportunities are businesses that host, manage and leverage user-generated content.  These businesses often rely on the immunities offered by Section 230 of the Communications Decency Act, 47 U.S.C. §230 (“Section 230” or the “CDA”), to protect them from liability associated with receiving, editing, posting or removing such content.  Investors, in turn, have relied on the existence of robust immunity under the CDA in evaluating the risk associated with investments in such businesses.  This reliance seemed reasonable: following the landmark 1997 Zeran decision, courts have for more than two decades fairly consistently interpreted Section 230 to provide broad immunity for online providers of all types.

However, in the last five years or so, bipartisan critics of the CDA have grown more full-throated in decrying the presence of hate speech, revenge porn, defamation, disinformation and other objectionable content on online platforms. This issue has been building throughout the past year, and has reached a fever pitch in the weeks leading up to the election. The government’s zeal for “reining in social media” and pursuing reforms to Section 230 (again, on a bipartisan basis) has come through loud and clear, even if the justifications for such reform differ along party lines. While we cannot predict exactly what structure these reforms will take, the political winds suggest that regardless of the administration in charge, change is afoot for the CDA in late 2020 or 2021. Operating businesses must take note, and investors should keep this in mind when conducting diligence reviews of potential investments.

In continuing its efforts to enforce its terms and policies against developers that engage in unauthorized scraping of user data, Facebook this week brought suit against two marketing analytics firms, BrandTotal Ltd (“BrandTotal”) and Unimania, Inc. (“Unimania”) (collectively, the “Defendants”). (Facebook, Inc. v. BrandTotal Ltd., No. 20Civ04246).

Section 230 of the Communications Decency Act, 47 U.S.C. §230 (“Section 230” or the “CDA”), enacted in 1996, is generally viewed as the most important statute supporting the growth of Internet commerce.  The key provision of the CDA, Section 230(c)(1), only 26 words long, simply states: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This one sentence has been the source of bedrock service provider immunity for third party content made available through a provider’s infrastructure, enabling the growth of a vigorous online ecosystem. Without such immunity, providers would face what the Ninth Circuit once termed “death by ten thousand duck-bites”: having to fend off claims that they promoted or tacitly assented to objectionable third party content.

The brevity of this provision is deceptive, however. The CDA – and the immunity it conveys – is controversial, and those 26 words have been the subject of hundreds, if not thousands, of lawsuits.  Critics of the CDA point to the proliferation of hate speech, revenge porn, defamatory material, disinformation and other objectionable content; in many cases, the sites hosting such third party content (knowingly or unknowingly) are protected by the broad scope of the CDA. Other objections stem simply from unhappiness with the content of the speech, even where it is true, such as comments critical of individuals, their businesses or their interests. Litigants upset about such content have sought various CDA workarounds over the past two decades in mostly unsuccessful attempts to bypass the immunity and reach the underlying service providers.

The back-and-forth debate around the scope and effects of the CDA and the broad discretion afforded online providers regarding content hosting and moderation decisions is not new.  However, it was brought into new focus when the President, vexed at the way some of his more controversial posts were being treated by certain social media platforms, issued a May 28, 2020 Executive Order for the purpose of curtailing legal protections for online providers. The goal was to remedy what the White House believed was the online platforms’ “deceptive or pretextual actions stifling free and open debate by censoring certain viewpoints.”

The Executive Order – which is currently being challenged in court as unconstitutional – directed several federal agencies to undertake certain legislative and regulatory efforts toward CDA reform. Consequently, in June 2020 the DOJ stated “that the time is ripe to realign the scope of Section 230 with the realities of the modern internet” and released a 28-page document with its preliminary recommendations for reform of Section 230.  A month later, the Commerce Department submitted a petition requesting that the FCC write rules to limit the scope of CDA immunity and potentially place additional compliance requirements on many providers that host third party content.  Then, on September 23, 2020, the DOJ announced that it had sent its legislative proposal for amending the CDA to Congress. The DOJ, in its cover letter to Congress, summed up the need for reform: “The proposed legislation accordingly seeks to align the scope of Section 230 immunities with the realities of the modern internet while ensuring that the internet remains a place for free and vibrant discussion.”

Many online services feature comprehensive terms of use intended to protect their business from various types of risks.  While it is often the case that a great deal of thought goes into the creation of those terms, frequently less attention is paid to how those terms are actually presented to users of the service. As case law continues to demonstrate, certain mobile and website presentations will be held enforceable, while others will not.  Courts continue to indicate that the enforceability of terms accessible by hyperlink depends on the totality of the circumstances, namely the clarity and conspicuousness of the relevant interface (both web and mobile) presenting the terms to the user. In a prior post about electronic contracting this year, we outlined, among other things, the danger of a cluttered registration screen.  In this post, we spotlight five recent rulings from the past few months in which courts blessed the mobile contracting processes of e-commerce companies, as well as one case that demonstrates the danger of using a pre-checked box to indicate assent to online terms.

The moral of these stories is clear – the presentation of online terms is essential to enhancing the likelihood that they will be enforced, if need be. Thus, the design of the registration or sign-up page is not just an issue for the marketing, design and technical teams – the legal team must focus on how a court would likely view a registration interface, including pointing out the little things that can make a big difference in enforceability. A failure to present the terms properly could result in the most carefully drafted terms of service ultimately having no impact on the business at all.

The currents around the Communications Decency Act just got a little more turbulent as the White House and executive branch try to reel in the big fish of CDA reform.

On July 27, 2020, the Commerce Department submitted a petition requesting that the FCC initiate a rulemaking to clarify the provisions of Section 230 of the Communications Decency Act (CDA). Unrelated, but part of the same fervor in Washington to “rein in social media,” the leaders of the major technology companies appeared before the House Judiciary Antitrust Subcommittee at a hearing yesterday, July 29, 2020, to discuss the Subcommittee’s ongoing investigation of competition in the digital marketplace, where some members inquired about content moderation practices. Moreover, last month, a pair of Senators introduced the PACT Act, a targeted (but still substantial) update to the CDA (and other CDA reform bills are also being debated, including a bill to carve out sexually exploitative material involving children from the CDA’s reach).

Section 230 of the Communications Decency Act (“CDA”), 47 U.S.C. §230, enacted in 1996, is often cited as the most important law supporting the Internet, e-commerce and the online economy. Yet, it continues to be subject to intense criticism, including from politicians on both sides of the aisle. Many argue that the CDA has been applied in situations far beyond the original intent of Congress when the statute was enacted. Critics point to the role the CDA has played in protecting purveyors of hate speech, revenge porn, defamation, disinformation and other objectionable content.

Critics of the CDA raise valid concerns.  But what is the right way to address them? One must remember that for organizations that operate websites, mobile apps, social media networks, corporate networks and other online services, the CDA’s protections are extremely important.  Many of those businesses could be impaired if they were subject to liability (or the threat of liability) for objectionable third party content residing on their systems.

The criticism surrounding the CDA hit a fever pitch on May 28, 2020 when the President weighed in on the issue by signing an Executive Order attempting to curtail legal protections under Section 230. While the Executive Order was roundly labeled as political theater – and is currently being challenged in court as unconstitutional – it notably directed the Justice Department to submit draft proposed legislation (i.e., a CDA reform bill) to accomplish the policy objectives of the Order. This week, on June 17, 2020, the DOJ announced “that the time is ripe to realign the scope of Section 230 with the realities of the modern internet” and released a document with its recommendations for legislative reform of Section 230.  This is on the heels of a recent initiative by several GOP lawmakers to introduce their own version of a reform bill.

In recent years, courts have issued a host of rulings as to whether online or mobile users received adequate notice of and consented to user agreements or website terms when completing an online purchase or registering for a service. Some online agreements have been enforced, while others have not. In each case, judges have examined the circumstances behind the transaction closely, scrutinizing the user interface and how the terms are presented before a user completes a transaction. In general, most courts seek to determine whether the terms are reasonably conspicuous to the prudent internet user and whether the user manifested sufficient assent by signing up for a service or completing a transaction.

From the perspective of making a sign-up process as smooth as possible, there is often an interest in moving the reference to terms and conditions out of the main flow of user sign-up.  However, as an Illinois court recently reminded us in examining the interfaces of DVD rental company Redbox, one does so at the risk of those terms being found unenforceable.

The Illinois court noted numerous shortcomings with Redbox’s electronic contracting process.  It found that because links to the relevant terms were not clearly and conspicuously displayed, customers did not have constructive notice that they were assenting to those terms when hitting the “Pay Now” button to rent a DVD at a kiosk or signing into a Redbox account online. (Wilson v. Redbox Automated Retail, LLC, No. 19-01993 (N.D. Ill. Mar. 25, 2020)). As such, the court denied Redbox’s motion to compel arbitration of plaintiff’s claims.
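These rulings translate into a concrete design principle for sign-up flows: assent must be affirmative, and submission should not be possible until the user has acted on a clearly presented link to the terms. As a purely illustrative sketch (the names and gating logic below are hypothetical and not drawn from any interface discussed in these cases, and this is not legal advice), the assent-gating logic might look like:

```typescript
// Hypothetical sketch of assent gating for a sign-up screen, consistent
// with the themes of the rulings discussed above: the assent checkbox
// defaults to unchecked, and submission is blocked until the user
// affirmatively checks it next to a conspicuous hyperlink to the terms.

interface SignUpState {
  email: string;
  agreedToTerms: boolean; // defaults to false: no pre-checked boxes
}

function newSignUpState(email: string): SignUpState {
  // The user must take an affirmative act; the box starts unchecked.
  return { email, agreedToTerms: false };
}

function canSubmit(state: SignUpState): boolean {
  // The "Sign Up" button stays disabled until the user has checked the
  // box presented alongside the terms-of-service hyperlink.
  return state.email.length > 0 && state.agreedToTerms;
}
```

Note that the checkbox defaults to unchecked: a pre-checked box, as the cases above warn, would undercut any argument that the user affirmatively manifested assent.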

Teami, LLC (“Teami”), a marketer of teas and skincare products, agreed to settle FTC charges alleging that the social media influencers it retained did not sufficiently disclose that they were being paid to promote Teami’s products. The FTC’s Complaint also included allegations that Teami made unsupported weight-loss and health claims about its products, an issue that is beyond the scope of this blog post. The Stipulated Order for Permanent Injunction and Monetary Judgment was approved by a Florida district court on March 17, 2020.

This settlement is significant in that it identifies clear steps that an advertiser can follow in the interest of avoiding similar FTC allegations of deception with respect to paid endorsers. Compliance in this area remains an ongoing concern as the FTC reiterated in a statement accompanying the settlement that: “[T]he Commission is committed to seeking strong remedies against advertisers that deceive consumers because deceptive or inaccurate information online prevents consumers from making informed purchasing decisions….”

Despite continued scrutiny over the legal immunity online providers enjoy under Section 230 of the Communications Decency Act (CDA), online platforms continue to successfully invoke its protections. This is illustrated by three recent decisions in which courts dismissed claims that sought to impose liability on providers for hosting or restricting access to user content and for providing a much-discussed social media app filter.

In one case, a California district court dismissed a negligence claim against online real estate database Zillow over a fraudulent posting, holding that any allegation of a duty to monitor new users and prevent false listing information inherently derives from Zillow’s status as a publisher and is therefore barred by the CDA. (924 Bel Air Road LLC v. Zillow Group Inc., No. 19-01368 (C.D. Cal. Feb. 18, 2020)). In the second, the Ninth Circuit, in an important ruling, affirmed the dismissal of claims against YouTube for violations of the First Amendment and the Lanham Act over its decision to restrict access to the plaintiff’s uploaded videos. The Ninth Circuit found that despite YouTube’s ubiquity and its role as a public-facing platform, it is a private forum not subject to judicial scrutiny under the First Amendment. It also found that YouTube’s statements concerning its content moderation policies could not form a basis for false advertising liability. (Prager Univ. v. Google LLC, No. 18-15712 (9th Cir. Feb. 26, 2020)). And in a third case, the operator of the messaging app Snapchat was granted CDA immunity in a wrongful death suit brought by the parents of boys killed in a high-speed automobile crash, where one of the boys in the car had sent a snap using the app’s Speed Filter, which captured the speed of the car at 123 MPH, minutes before the fatal accident. (Lemmon v. Snap, Inc., No. 19-4504 (C.D. Cal. Feb. 25, 2020)).