New Media and Technology Law Blog

California Legislature Nearing Final Debate of Biometric and Geolocation Data Security Bill

With the session ending on August 31st, the California legislature is debating a bill (AB 83) that would expand data security requirements for businesses that maintain personal information of California residents to include, among other things, protection for geolocation and biometric data. Under existing law (Cal. Civ. Code §1798.81.5(b)), a person or business that owns, licenses, or maintains a California resident’s “personal information” must implement and maintain “reasonable security procedures and practices appropriate to the nature of the information.” The current law also lists multiple types of covered “personal information.”

Switching Consumer Device to Ad-Supported Environment Is Not Deceptive under New York Law

If your company sells a smart device to a consumer, can it later turn the device into a paid advertising platform? Can it do so without advance disclosure? A recent court ruling suggests the answer is “yes,” at least in New York.

Cable Network May Proceed with Claims Against Distributor on Theories Beyond Written Contract

2015 and 2016 saw a wave of transactions among cable, satellite, and other linear programming distributors: AT&T and DirecTV, Altice and Suddenlink, and others. That transactional wave is beginning to spawn a litigation wave, principally over interpretation and application of the pre-existing licenses and contracts between networks and distributors. A recent ruling in one California case is noteworthy to the extent that it allowed a network to proceed against a distributor on multiple theories beyond the parties’ written contract.

Read the full post on our Minding Your Business Blog.

Browsewrap Agreement Held Unenforceable – Website Designers Take Note!

In Nghiem v. Dick’s Sporting Goods, Inc., No. 16-00097 (C.D. Cal. July 5, 2016), the Central District of California held browsewrap terms to be unenforceable because the hyperlink to the terms was “sandwiched” between two links near the bottom of the third column of links in a website footer. Website developers – and their lawyers – should take note of this case, part of an emerging trend of judicial scrutiny over how browsewrap terms are presented. Courts have, in many instances, refused to enforce browsewrap agreements upon finding a lack of user notice and assent. In this case, the most recent example of a court’s close analysis of website design, the court suggests that what has become a fairly standard approach to presenting browsewrap terms fails to achieve its intended purpose.

CFAA Double Feature: Ninth Circuit Issues Two Important Decisions on the Scope of Liability Related to Data Scraping and Unauthorized Access to Employer Databases

  • Unauthorized Access: A former employee whose access has been revoked, and who uses a current employee’s login credentials to access his former company’s network, violates the CFAA. [U.S. v. Nosal, 2016 WL 3608752 (9th Cir. July 5, 2016)]
  • Data Scraping: A commercial entity that accesses a public website after permission has been explicitly revoked can be civilly liable under the CFAA. However, a violation of a website’s terms of use, without more, cannot be the basis for liability under the CFAA, a ruling that runs contrary to language from at least one circuit-level decision regarding potential CFAA liability for screen scraping activities (see, e.g., EF Cultural Travel BV v. Zefer Corp., 318 F.3d 58 (1st Cir. 2003)). [Facebook, Inc. v. Power Ventures, Inc., No. 13-17102 (9th Cir. July 12, 2016)]

This past week, the Ninth Circuit released two important decisions that clarify the scope of liability under the federal Computer Fraud and Abuse Act (CFAA), 18 U.S.C. § 1030. The Act was originally designed to target hackers, but it has lately been brought to bear in many contexts involving wrongful access to company networks by current and former employees and in cases involving the unauthorized scraping of data from publicly available websites.

No VPPA Liability for Disclosure of Certain Anonymous Digital Identifiers

Another court has contributed to the ongoing debate over the scope of the term “personally identifiable information” under the Video Privacy Protection Act – a statute enacted in 1988 to protect the privacy of consumers’ videotape rental and purchase history but lately applied to the modern age of video streaming services and online video viewing. Generally speaking, the term “personally identifiable information” (or PII) is not limited only to the disclosure of a consumer’s name, but courts and litigants have wrestled over how to define its scope, particularly with respect to the disclosure of digital identifiers such as Android or Roku device IDs or other tracking information stored by website cookies. This past week, the Third Circuit ruled that certain digital identifiers collected from web users did not qualify as PII under the statute.

In In re Nickelodeon Consumer Privacy Litig., No. 15-1441 (3d Cir. June 27, 2016), the plaintiffs alleged, among other things, that Viacom and Google unlawfully used cookies to track children’s web browsing and video-watching habits on Nickelodeon websites for the purpose of selling targeted advertising. More specifically, the plaintiffs asserted that Viacom disclosed to Google URL information that effectively revealed what videos minor users watched on Nickelodeon’s websites, and static digital identifiers (i.e., IP addresses, browser fingerprints, and unique device identifiers) that purportedly enabled Google to link the watching of those videos to the users’ real-world identities. In short, the PII at issue in this case consisted of a user’s IP address, browser fingerprint (i.e., the user’s browser and operating system settings, which together can present a relatively distinctive digital “fingerprint” for tracking purposes), and a unique device identifier (i.e., an anonymous number linked to certain mobile devices). Using these points of data, the plaintiffs claimed that Google and Viacom could link online and offline activity and identify specific users.
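
To make the fingerprinting concept concrete, the following is a minimal sketch (our own illustration, not anything from the record): a handful of ordinary browser and operating system attributes are concatenated and hashed into a stable token that can recognize a returning browser even without a name or account. The attribute set and hashing scheme here are assumptions chosen purely for demonstration.

```python
# Minimal sketch of browser fingerprinting (illustrative assumptions only):
# combine ordinary browser/OS attributes into one stable identifier.
import hashlib

def browser_fingerprint(user_agent: str, screen: str, timezone: str, language: str) -> str:
    """Hash a set of browser attributes into a stable tracking token."""
    raw = "|".join([user_agent, screen, timezone, language])
    return hashlib.sha256(raw.encode("utf-8")).hexdigest()

# The same settings produce the same token on every visit, so a tracker
# can recognize a returning browser without knowing the user's name.
token = browser_fingerprint(
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "1920x1080",
    "America/New_York",
    "en-US",
)
print(token[:16])  # abbreviated 64-character hex digest
```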

The Third Circuit affirmed the dismissal of the VPPA claims against Google and Viacom (but allowed a state privacy claim to continue against Viacom, ruling that such a claim was not preempted by the Children’s Online Privacy Protection Act (COPPA)).

The appeals court made two important holdings regarding VPPA liability:

  1. Plaintiffs may sue only a person who discloses PII, not an entity that receives such information; and
  2. The VPPA’s prohibition on the disclosure of personally identifiable information applies only to the kind of information that would readily permit an ordinary person to identify a specific individual’s video-watching behavior. As such, the kinds of disclosures at issue in the case, including digital identifiers like IP addresses and browser fingerprints, fall outside the Act’s protections.

Subject to certain exceptions, the VPPA prohibits “video tape service providers” from knowingly disclosing, to a third party, “personally identifiable information concerning any consumer.” 18 U.S.C. § 2710(b). A “video tape service provider” is “any person, engaged in the business… of rental, sale, or delivery of prerecorded video cassette tapes or similar audio visual materials.” 18 U.S.C. § 2710(a)(4). The term “personally identifiable information” includes “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” 18 U.S.C. § 2710(a)(3).

As to the claim against Google, the plaintiffs abandoned their earlier argument that Google was a “video tape service provider” and instead contended that the VPPA extends liability both to providers who disclose personally identifiable information and to persons who receive that information. Despite some statutory ambiguity, the Third Circuit agreed with its sister courts that liability is limited to the disclosure of PII and does not extend to its receipt. Thus, because Google was not alleged to have disclosed any information, the court affirmed dismissal of the VPPA claim against it.

The core of the opinion discusses the claim against Viacom and whether the definition of PII extends to the kind of static digital identifiers allegedly disclosed by Viacom to Google. Numerous district courts have grappled with the question of whether the VPPA applies to certain anonymous digital identifiers. The plaintiffs urged a broad interpretation akin to the First Circuit’s recent reading of the VPPA (Yershov v. Gannett Satellite Information Network), which held that the disclosure of a user’s Android ID, combined with GPS data and video-viewing information, qualified as PII under the VPPA. Viacom, by contrast, argued that static digital identifiers, such as IP addresses or browser fingerprints, are not PII because such data, by itself, does not identify a particular person. The parties’ contrary positions reflect a fundamental disagreement heard in courts across the country over what kinds of information are sufficiently “personally identifying” for their disclosure to trigger liability under the VPPA.

While admitting that the phrase “personally identifiable information” in the statute is not straightforward, the court agreed with Viacom’s narrower understanding and followed the majority view in holding that the Act protects personally identifiable information that identifies a specific person and ties that person to particular videos that the person watched:

“The allegation that Google will assemble otherwise anonymous pieces of data to unmask the identity of individual children is, at least with respect to the kind of identifiers at issue here, simply too hypothetical to support liability under the Video Privacy Protection Act.”

In the court’s view, PII means the kind of information that would readily permit an ordinary person to identify a specific individual’s video-watching behavior, but it should not be construed so broadly as to cover the kinds of static digital identifiers at issue here. The court recognized that other, more concrete disclosures involving newer technology, such as geolocation data or social media customer ID numbers, can suffice, in certain circumstances, to qualify under the statute, but it found that the digital identifiers in the instant case were simply “too far afield” to trigger liability.

The court recognized that its holding failed to provide an easy test for other courts to apply in subsequent cases:

“We recognize that our interpretation of the phrase ‘personally identifiable information’ has not resulted in a single-sentence holding capable of mechanistically deciding future cases. We have not endeavored to craft such a rule, nor do we think, given the rapid pace of technological change in our digital era, such a rule would even be advisable.”

Despite the appeals court’s narrow reading of the scope of liability under the VPPA, the decision leaves some unanswered questions about what kinds of disclosures violate the statute. The question of what combination of digital identifiers crosses the line into PII under the statute remains an emerging issue. Companies that deliver video to users via online or mobile platforms should continue to be vigilant about knowing what kinds of personal information and tracking data they collect and what is shared with third parties, including ad networks or data analytics companies. Indeed, the court cautioned companies in the business of streaming digital video “to think carefully about customer notice and consent.”

NTIA Multistakeholder Process Finalizes General Privacy Guidelines for Commercial Facial Recognition Use

We’ve previously blogged about the National Telecommunications and Information Administration (NTIA) privacy multistakeholder process to address concerns associated with the emerging commercial use of facial recognition technology. Notably, last year the self-regulatory initiative hit a stumbling block when nine consumer advocacy groups withdrew from the process due to a lack of consensus on a minimum standard of consent. Regardless, the remaining participants continued on, and last week the stakeholders concluded the process, reaching consensus on final privacy guidelines, “Privacy Best Practice Recommendations For Commercial Facial Recognition Use.”

The guidelines generally apply to “covered entities,” that is, any person, including corporate affiliates, that collects, stores, or processes facial template data. The guidelines do not apply to the use of facial recognition for aggregate or non-identifying analysis (e.g., counting unique visitors to a particular location), nor do they apply to certain governmental uses of the technology, such as law enforcement or national security. Moreover, under the guidelines, data that has been “reasonably de-identified” is not facial template data and therefore is not covered by the best practices.
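
To illustrate the carve-out for aggregate, non-identifying analysis, here is a minimal sketch on our own assumptions of counting unique visitors without retaining identifiable faceprints. It assumes the recognition system emits a stable, canonical template per enrolled face (real templates vary between captures and require fuzzy matching); the salted one-way digest scheme and names are hypothetical, not part of the NTIA guidelines.

```python
# Hypothetical sketch of aggregate, non-identifying analysis: count unique
# visitors from face templates while storing only salted one-way digests.
# Assumes a stable canonical template per face; names are illustrative.
import hashlib
import secrets

class UniqueVisitorCounter:
    def __init__(self) -> None:
        # Random salt, discarded when the counting window closes, so stored
        # digests cannot later be matched back to any individual.
        self._salt = secrets.token_bytes(16)
        self._digests = set()

    def observe(self, face_template: bytes) -> None:
        # Keep only a one-way digest, never the raw template itself.
        self._digests.add(hashlib.sha256(self._salt + face_template).hexdigest())

    def unique_visitors(self) -> int:
        return len(self._digests)

counter = UniqueVisitorCounter()
for template in (b"template-a", b"template-b", b"template-a"):
    counter.observe(template)
print(counter.unique_visitors())  # 2
```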

The guidelines are generally broken down into several categories:

  • Transparency: Covered entities are encouraged to reasonably disclose to consumers their practices regarding the collection, storage, and use of faceprints and to update such policies in the event of material changes. Policies should generally describe the foreseeable purposes of data collection, the entity’s data retention and de-identification practices, and whether the entity offers the consumer the ability to review or delete any facial template data. Where facial recognition technology is used at a physical location, the entity is encouraged to provide “concise notice” to consumers of such use.
  • Recommended Practices: Before implementing facial recognition technology, the guidelines suggest that entities consider certain important issues, including:
    • Voluntary or involuntary enrollment
    • Types of other sensitive data being captured and any other risks to the consumer
    • Whether faceprints will be used to determine eligibility for, or access to, activities covered under law (e.g., employment, healthcare)
    • Reasonable consumer expectations regarding the use of the data
  • Data Sharing: Covered entities that use facial recognition to determine an individual’s identity are encouraged to offer the individual the opportunity to control the sharing of such data with unaffiliated third parties (note: an unaffiliated third party does not include a covered entity’s vendor or supplier that provides a product or service related to the facial template data).
  • Data Security: Reasonable security measures should be used to safeguard collected data, consistent with the operator’s size, the nature and scope of the activities, and the sensitive nature of the data.
  • Redress: Covered entities are encouraged to offer consumers a process to submit concerns over the entity’s use of faceprints.

In the end, the recommendations are merely best practices for the emerging use of facial recognition technology, and they will certainly spark more debate on the issue. Following the release, privacy advocates generally criticized the guidelines, having hoped that stronger notice and consent principles and additional guidance on how to handle certain privacy risks would be part of the final document. It remains to be seen how many of the suggested guidelines will be implemented in practice, and whether consumers themselves will nudge the industry to erect additional privacy controls.

In the meantime, entities must still consider compliance issues surrounding the noteworthy Illinois biometric privacy law (the Biometric Information Privacy Act, or BIPA), which was enacted in 2008 and is now the subject of much recent litigation.

We will continue to monitor the latest legal and important industry developments relating to biometric privacy.

FTC Prevails in Action against Amazon for Unlawfully Billing Parents for Children’s Unauthorized In-App Purchases

In the wake of thousands of parental complaints about unauthorized in-app purchases made by their children, resulting in millions of dollars in disputed charges, the Federal Trade Commission (“FTC”) brought suit against Amazon.com, Inc. (“Amazon”) in July 2014. The FTC sought a court order requiring refunds to consumers for unauthorized charges and permanently banning the company from billing parents and other account holders for in-app charges without their consent. This past April, a Washington district court granted the FTC’s motion for summary judgment on liability, ruling that the billing of account holders for in-app purchases made by children without the account holders’ express informed consent constituted an unfair practice under Section 5 of the FTC Act. (FTC v. Amazon.com, Inc., 2016 WL 1643973 (W.D. Wash. Apr. 26, 2016)). Despite rejecting Amazon’s challenge to the FTC’s claims, the court denied the FTC’s request for injunctive relief to prevent future violations. The ruling underscores the FTC’s shift of enforcement energies toward the mobile industry, and the importance of building proper consent mechanisms when consumers, especially children, are charged for purchases.

Amazon’s in-app purchasing functionality launched in November 2011, and despite regular parental complaints, no updates were made to the in-app charge framework until March 2012. While Amazon instituted some protections for in-app charges over $20 in 2012, and in 2013 offered users additional disclosures about in-app charges and options for prior consent in some circumstances, these updates did not prevent children from making unauthorized in-app purchases. Amazon did not make sufficient changes to its in-app purchasing methods until June 2014, when in-app purchasing on its newer devices began to require account holders’ express informed consent prior to completing purchases. For example, before a user’s first in-app purchase is completed, users are prompted to choose whether to require a password for each future purchase or to permit purchases without a password going forward; if users choose to require a password, they are also prompted to set parental controls to prevent app purchases by children or to limit the amount of time children spend in these apps.
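
As a rough sketch of the kind of express-consent gate this revised flow describes (our illustration only; the class, field, and function names are hypothetical and not Amazon’s implementation), a charge is blocked until the account holder has made an explicit choice about password-protected purchasing:

```python
# Hypothetical sketch of an express-informed-consent gate for in-app
# purchases; names and logic are illustrative, not Amazon's actual code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Account:
    password: str
    require_password: Optional[bool] = None  # None until the holder chooses

def authorize_purchase(account: Account, entered_password: Optional[str]) -> bool:
    """Permit a charge only with the account holder's express informed consent."""
    if account.require_password is None:
        # First purchase: block the charge and surface the consent prompt
        # (require a password for future purchases, or expressly opt out).
        return False
    if account.require_password:
        return entered_password == account.password
    return True  # holder expressly chose password-free purchasing

acct = Account(password="hunter2")
assert not authorize_purchase(acct, None)   # no choice recorded yet: blocked
acct.require_password = True                # holder opts in to passwords
assert authorize_purchase(acct, "hunter2")  # correct password: authorized
```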

According to the FTC, children’s games often encouraged children to acquire virtual items in ways that blurred the lines between what cost virtual currency and what cost real money, and some children were even able to incur “real money” charges by clicking buttons at random during play.  Moreover, it appeared that many parents simply did not understand that several apps, particularly ones labeled “FREE,” allowed users to make in-app charges because such a notification was not conspicuously presented alongside the app’s price prior to download, but instead was buried within a longer description that users had to scroll down to read.

In granting summary judgment on liability under the FTC Act, the court rejected Amazon’s argument that its liberal refund practices sufficiently mitigated harm, reasoning that many customers may not have been aware of any unauthorized purchases and that the time spent pursuing refunds constituted an additional injury. The court also rejected Amazon’s argument that the injuries were reasonably avoidable, indicating that it is unreasonable to expect a consumer to be familiar with in-app purchases within apps labeled “FREE.” Lastly, because Amazon’s policy stated that it did not provide refunds for in-app purchases, it was entirely reasonable for a consumer to be unaware of the refund procedures for unauthorized in-app purchases.

In denying the FTC’s request for permanent injunctive relief, the court noted that Amazon had already implemented measures to protect consumers, and it found no “cognizable danger of recurring violation.” Because Amazon’s in-app charge framework requiring account holders’ express informed consent has been in place since June 2014, the court found the likelihood of future unlawful conduct minimal, even though unauthorized in-app purchases under $1 remain possible on older, first-generation Kindle devices (which have not been sold since 2012).

With Amazon’s liability under Section 5 of the FTC Act established, further briefing will be required to determine monetary damages for the period running, in general, from the launch of in-app charges in November 2011 until June 2014, when Amazon’s revised in-app purchase prompt was instituted. It remains to be seen whether the amount of damages or any settlement will be in line with previous settlements the FTC has reached with other mobile platforms over similar in-app charge issues.

Following the Amazon ruling, it appears that best practices for a mobile platform or an owner of an app featuring in-app purchasing should include clear notice to the consumer and express informed consent from the account holder, for apps directed to both children and adults. Clear and conspicuous notice that in-app purchasing exists should be provided prior to download, and express informed consent, in some form, should be obtained before an in-app purchase is made. Note further that while refund policies for unauthorized in-app purchases may reduce the number of consumer complaints, the Amazon court stressed that such policies may not be sufficient to prevent liability under Section 5 of the FTC Act, as the process of obtaining those refunds has itself been deemed an additional injury to consumers.

Craigslist Files Another Suit against Data Scraper

For years, craigslist has aggressively used technological and legal methods to prevent unauthorized parties from scraping, linking to, or accessing user postings for their own commercial purposes. In a prior post, we briefly discussed craigslist’s action against an aggregator that was scraping content from the craigslist site (despite having received a cease and desist letter informing it that it was no longer permitted to access the site) and offering the data to outside developers through an API. (See generally Craigslist, Inc. v. 3Taps, Inc., 2013 WL 1819999 (N.D. Cal. Apr. 30, 2013)). In 2015, craigslist settled the 3Taps lawsuit, obtaining relief against various defendants that included monetary payments and a permanent injunction barring the defendants from accessing any craigslist content, circumventing any technological measures that prohibit spidering activity, or representing that they were affiliated with craigslist.

This past April, the 3Taps saga was, in a way, resurrected. Craigslist filed a complaint against the real estate listing site RadPad, an entity that had allegedly received, for a limited time period, scraped craigslist data from 3Taps and used it on its own website. In its complaint, craigslist claims that after the 3Taps litigation was settled in June 2015, RadPad or its agents began their own independent efforts to scrape the craigslist site. Craigslist alleges that RadPad used sophisticated techniques to evade detection, scraped thousands of user postings, and thereafter harvested users’ contact information to send spam over the site’s messaging system in an effort to entice users to switch to RadPad’s services. (See Craigslist, Inc. v. RadPad, Inc., No. 16-1856 (N.D. Cal. filed Apr. 8, 2016)). In its complaint seeking compensatory damages and injunctive relief, craigslist brought several causes of action, including:

  • Breach of Contract: The complaint alleges that, as a user of the site, RadPad was presented with and agreed to the site’s Terms of Use, which prohibit scraping and spidering activity, the collection of user contact information, and the sending of unsolicited spam.
  • CAN-SPAM (and California state spam law): RadPad allegedly initiated the transmission of commercial email messages with misleading subject headings and a non-functioning opt-out mechanism, among other violations, and also had allegedly collected email addresses using email harvesting software.  Craigslist asserts that it was adversely affected and incurred expenses to combat the spam messages and is entitled to statutory damages.
  • Computer Fraud and Abuse Act (CFAA) (and California state law equivalent): The complaint alleges that RadPad accessed craigslist’s site in contravention of the Terms of Use and thereby gained unauthorized access to craigslist’s servers and obtained valuable user data.  Websites seeking to deter unauthorized screen scraping frequently advance this federal cause of action, with mixed results.
  • Copyright Infringement: Craigslist claims that RadPad is liable for secondary copyright infringement for inducing 3Taps’ prior copyright infringement, by allegedly assisting 3Taps in shaping the “data feed” and advising on how to circumvent the site’s technological blocks.

In response, RadPad filed its answer late last month, arguing that craigslist is attempting to exclude RadPad from accessing publicly available information that would allow it to compete in the classified-ad market for real estate rentals. In its counterclaim, RadPad claims that, in its efforts to block RadPad, craigslist has prevented email messages containing the word “RadPad” from being delivered to landlords in response to craigslist listings, an act that, it alleges, constitutes unfair competition. RadPad also seeks a declaration that craigslist is wrongfully asserting copyright claims over rental listings that are not copyrightable subject matter.

Beyond this particular dispute, the broader debate continues. Digital rights advocates have argued that content on publicly available websites is implicitly free to disseminate across the web, while web services hosting valuable user-generated content or other data typically wish to exercise control over which parties can access and use it for commercial purposes. While the law surrounding scraping remains unsettled, craigslist has notched some notable litigation successes in recent years, including in the prior 3Taps case. In that case, a California district court ruled, among other things, that the owner of a publicly accessible website may, through a cease-and-desist letter and the use of IP address blocking technology, revoke a specific user’s authorization to access that website; such lack of “authorization” could then form the basis of a viable claim under the federal Computer Fraud and Abuse Act and its state law counterpart. See Craigslist, Inc. v. 3Taps, Inc., 2013 WL 4447520 (N.D. Cal. Aug. 16, 2013).
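
By way of illustration, here is a minimal sketch of the technological measure the court described: after authorization is revoked (e.g., by cease-and-desist letter), requests from the revoked party’s IP ranges are simply refused. The blocked range below is a documentation-only example address block, and the function name is our own assumption:

```python
# Hypothetical sketch of IP address blocking after revocation of access;
# the blocklist contents and function name are illustrative only.
import ipaddress

REVOKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]  # example range

def is_access_authorized(client_ip: str) -> bool:
    """Requests from revoked networks would be refused (e.g., with HTTP 403)."""
    addr = ipaddress.ip_address(client_ip)
    return not any(addr in network for network in REVOKED_NETWORKS)

assert is_access_authorized("198.51.100.7")       # ordinary visitor
assert not is_access_authorized("203.0.113.42")   # revoked scraper's range
```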

It remains to be seen whether the court will consider any of the issues in the RadPad dispute on the merits or whether the parties will resolve the matter with some agreed-upon restrictions limiting or barring RadPad’s access to craigslist’s site.  We will continue to watch this case and similar data scraping disputes carefully.
