New Media and Technology Law Blog

CFAA Double Feature: Ninth Circuit Issues Two Important Decisions on the Scope of Liability Related to Data Scraping and Unauthorized Access to Employer Databases

  • Unauthorized Access: A former employee, whose access has been revoked, and who uses a current employee’s login credentials to gain network access to his former company’s network, violates the CFAA. [U.S. v. Nosal, 2016 WL 3608752 (9th Cir. July 5, 2016)]
  • Data Scraping: A commercial entity that accesses a public website after permission has been explicitly revoked can be civilly liable under the CFAA. However, a violation of the terms of use of a website, without more, cannot be the basis for liability under the CFAA, a ruling that runs contrary to language from one circuit-level decision regarding potential CFAA liability for screen scraping activities (See, e.g., EF Cultural Travel BV v. Zefer Corp., 318 F.3d 58 (1st Cir. 2003)). [Facebook, Inc. v. Power Ventures, Inc., No. 13-17102 (9th Cir. July 12, 2016)]

This past week, the Ninth Circuit released two important decisions that clarify the scope of liability under the federal Computer Fraud and Abuse Act (CFAA), 18 U.S.C. § 1030.  The Act was originally designed to target hackers, but has lately been brought to bear in many contexts involving wrongful access of company networks by current and former employees and in cases involving the unauthorized scraping of data from publicly available websites.

No VPPA Liability for Disclosure of Certain Anonymous Digital Identifiers

Another court has contributed to the ongoing debate over the scope of the term “personally identifiable information” under the Video Privacy Protection Act – a statute enacted in 1988 to protect the privacy of consumers’ videotape rental and purchase history but lately applied to the modern age of video streaming services and online video viewing. Generally speaking, the term “personally identifiable information” (or PII) is not limited to only disclosure of a consumer’s name, but courts and litigants have wrestled over how to define the scope, particularly with respect to the disclosure of digital identifiers such as Android or Roku device IDs or other tracking information stored by website cookies. This past week, the Third Circuit ruled that certain digital identifiers collected from web users did not qualify as PII under the statute.

In In re: Nickelodeon Consumer Privacy Litig., No. 15-1441 (3d Cir. June 27, 2016), the plaintiffs alleged, among other things, that Viacom and Google unlawfully used cookies to track children’s web browsing and video-watching habits on Nickelodeon websites for the purpose of selling targeted advertising. More specifically, the plaintiffs asserted that Viacom disclosed to Google URL information that effectively revealed what videos minor users watched on Nickelodeon’s websites, and static digital identifiers (i.e., IP addresses, browser fingerprints, and unique device identifiers) that purportedly enabled Google to link the watching of those videos to the users’ real-world identities.  In short, the PII at issue in this case was a user’s IP address, browser fingerprint (i.e., a user’s browser and operating system settings that can present a relatively distinctive digital “fingerprint” for tracking purposes), and a unique device identifier (i.e., an anonymous number linked to certain mobile devices).  Using these points of data, the plaintiffs claim that Google and Viacom could link online and offline activity and identify specific users.
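For readers unfamiliar with the mechanics, a browser “fingerprint” of the kind described above is typically derived by combining many individually unremarkable settings into one distinctive value. The sketch below is purely illustrative (the attribute names and the hashing choice are our own assumptions, not how Google or Viacom actually operated):

```python
import hashlib

def browser_fingerprint(attrs: dict) -> str:
    """Combine browser/OS attributes into a single stable identifier.

    Each attribute (user agent, screen size, time zone, fonts, ...) is
    individually common, but the combination is often nearly unique.
    """
    # Canonicalize so attribute ordering doesn't change the result.
    canonical = "|".join(f"{key}={attrs[key]}" for key in sorted(attrs))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical visitor: no name or email, yet the combination is distinctive.
fp = browser_fingerprint({
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen": "1920x1080",
    "timezone": "America/Chicago",
    "fonts": "Arial,Calibri,Helvetica",
})
```

Two visits with identical settings produce the same value, which is what makes such a fingerprint usable as a static identifier for tracking even though no single input names the user.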

The Third Circuit affirmed the dismissal of the VPPA claims against Google and Viacom (but allowed a state privacy claim to continue against Viacom, ruling that such a claim was not preempted by the Children’s Online Privacy Protection Act (COPPA)).

The appeals court made two important holdings regarding VPPA liability:

  1. Plaintiffs may sue only a person who discloses PII, not an entity that receives such information; and
  2. The VPPA’s prohibition on the disclosure of personally identifiable information applies only to the kind of information that would readily permit an ordinary person to identify a specific individual’s video-watching behavior. As such, the kinds of disclosures at issue in the case, including digital identifiers like IP addresses and browser fingerprints, fall outside the Act’s protections.

Subject to certain exceptions, the VPPA prohibits “video tape service providers” from knowingly disclosing, to a third party, “personally identifiable information concerning any consumer.” 18 U.S.C. § 2710(b). Video tape service provider means “any person, engaged in the business… of rental, sale, or delivery of prerecorded video cassette tapes or similar audio visual materials.” 18 U.S.C. § 2710(a)(4). The term “personally identifiable information” includes “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” 18 U.S.C. § 2710(a)(3).

As to the claim against Google, the plaintiffs abandoned their earlier argument that Google was a “video tape service provider” and instead contended that the VPPA extends liability to both providers who disclose personally identifiable information and the person who receives that information.  Despite some statutory ambiguity, the Third Circuit agreed with sister courts in stating that liability is limited to the prohibition against disclosure of PII, not to the receipt of PII. Thus, since Google is not alleged to have disclosed any information, the court affirmed dismissal of the VPPA claim against it.

The core of the opinion discusses the claim against Viacom and whether the definition of PII extends to the kind of static digital identifiers allegedly disclosed by Viacom to Google. Numerous district courts have grappled with the question of whether the VPPA applies to certain anonymous digital identifiers. The plaintiffs urged a broad interpretation akin to the recent interpretation of the VPPA by the First Circuit, which held that the disclosure of a user’s Android ID, GPS data, and video-viewing information together qualified as PII under the VPPA. Viacom, by contrast, argued that static digital identifiers, such as IP addresses or browser fingerprints, are not PII because such data, by itself, does not identify a particular person. The parties’ contrary positions reflect a fundamental disagreement heard in courts across the country over what kinds of information are sufficiently “personally identifying” for their disclosure to trigger liability under the VPPA.

While admitting that the phrase “personally identifiable information” in the statute is not straightforward, the court agreed with Viacom’s narrower understanding and followed the majority view in holding that the Act protects personally identifiable information that identifies a specific person and ties that person to particular videos that the person watched:

“The allegation that Google will assemble otherwise anonymous pieces of data to unmask the identity of individual children is, at least with respect to the kind of identifiers at issue here, simply too hypothetical to support liability under the Video Privacy Protection Act.”

In the court’s view, PII means the kind of information that would readily permit an ordinary person to identify a specific individual’s video-watching behavior, but should not be construed so broadly as to cover the kinds of static digital identifiers at issue. The court recognized that other, more concrete disclosures based on new technology, such as geolocation data or social media customer ID numbers, can suffice, in certain circumstances, to qualify under the statute, but that the digital identifiers in the instant case were simply “too far afield” to trigger liability.

The court recognized that its holding failed to provide an easy test for other courts to apply in subsequent cases:

“We recognize that our interpretation of the phrase ‘personally identifiable information’ has not resulted in a single-sentence holding capable of mechanistically deciding future cases. We have not endeavored to craft such a rule, nor do we think, given the rapid pace of technological change in our digital era, such a rule would even be advisable.”

Despite the appeals court’s narrow reading of the scope of liability under the VPPA, the decision leaves some unanswered questions about what kinds of disclosures violate the statute. The question over what combination of digital identifiers crosses the line into PII under the statute remains an emerging issue. Companies that deliver video to users via online or mobile platforms should continue to be vigilant about knowing what kinds of personal information and tracking data they collect and what is shared with third parties, including ad networks or data analytics companies.  Indeed, the court cautioned companies in the business of streaming digital video “to think carefully about customer notice and consent.”

NTIA Multistakeholder Process Finalizes General Privacy Guidelines for Commercial Facial Recognition Use

We’ve previously blogged about the National Telecommunications and Information Administration (NTIA) privacy multistakeholder process to address concerns associated with the emerging commercial use of facial recognition technology. Notably, last year, the self-regulatory initiative hit a stumbling block when nine consumer advocacy groups withdrew from the process due to a lack of consensus on a minimum standard of consent.  Regardless, the remaining participants continued on, and last week the stakeholders concluded the process, reaching consensus on final privacy guidelines, “Privacy Best Practice Recommendations For Commercial Facial Recognition Use.”

The guidelines generally apply to “covered entities,” or any person, including corporate affiliates, that collects, stores, or processes facial template data. The guidelines do not apply to the use of facial recognition for the purpose of aggregate or non-identifying analysis (e.g., the counting of unique visitors to a particular location), and are not applicable to certain governmental uses of the technology, such as for law enforcement or national security. Moreover, under the guidelines, data that has been “reasonably de-identified” is not facial template data and therefore not covered by the best practices.

The guidelines are generally broken down into several categories:

  • Transparency: Covered entities are encouraged to reasonably disclose to consumers their practices regarding the collection, storage and use of faceprints and update such policies in the event of material changes. Policies should generally describe the foreseeable purposes of data collection, the entity’s data retention and de-identification practices, and whether the entity offers the consumer the ability to review or delete any facial template data.  Where facial recognition technology is used at a physical location, the entity is encouraged to provide “concise notice” to consumers of such use.
  • Recommended Practices: Before implementing facial recognition technology, the guidelines suggest that entities consider certain important issues, including:
    • Voluntary or involuntary enrollment
    • Types of other sensitive data being captured and any other risks to the consumer
    • Whether faceprints will be used to determine eligibility for, or access to, certain activities covered under law (e.g., employment, healthcare)
    • Reasonable consumer expectations regarding the use of the data
  • Data Sharing: Covered entities that use facial recognition to determine an individual’s identity are encouraged to offer the individual the opportunity to control the sharing of such data with unaffiliated third parties (note: an unaffiliated third party does not include a covered entity’s vendor or supplier that provides a product or service related to the facial template data).
  • Data Security: Reasonable security measures should be used to safeguard collected data, consistent with the operator’s size, the nature and scope of the activities, and the sensitive nature of the data.
  • Redress: Covered entities are encouraged to offer consumers a process to submit concerns over the entity’s use of faceprints.

In the end, the recommendations are merely best practices for the emerging use of facial recognition technology, and they will certainly spark more debate on the issue.  Following the release, privacy advocates generally criticized the guidelines, expressing disappointment that stronger notice and consent principles and additional guidance on how to handle certain privacy risks were not part of the final document.  It remains to be seen how many of the suggested guidelines will be implemented in practice, and whether consumers themselves will nudge the industry to erect additional privacy controls.

In the meantime, entities must still consider compliance issues surrounding the noteworthy Illinois biometric privacy law (the Biometric Information Privacy Act, or BIPA), enacted in 2008 and now the subject of much recent litigation.

We will continue to monitor the latest legal and important industry developments relating to biometric privacy.

FTC Prevails in Action against Amazon for Unlawfully Billing Parents for Children’s Unauthorized In-App Purchases

In the wake of thousands of parental complaints about unauthorized in-app purchases made by their children, resulting in millions of dollars in disputed charges, the Federal Trade Commission (“FTC”) brought suit against, Inc. (“Amazon”) in July 2014. The FTC sought a court order requiring refunds to consumers for unauthorized charges and permanently banning the company from billing parents and other account holders for in-app charges without their consent.  This past April, a Washington district court granted the FTC’s motion for summary judgment on liability, ruling that the billing of account holders for in-app purchases made by children without the account holders’ express informed consent constituted an unfair practice under Section 5 of the FTC Act.  (FTC v., Inc., 2016 WL 1643973 (W.D. Wash. Apr. 26, 2016)).  Despite rejecting Amazon’s challenge to the FTC’s claims, the court denied the FTC’s request for injunctive relief to prevent future violations.  The ruling underscores the FTC’s trend of shifting its enforcement energies toward the mobile industry, and the importance of building proper consent mechanisms when consumers, especially children, are charged for purchases.

Amazon’s in-app purchasing functionality launched in November 2011, and despite regular parental complaints, no updates were made to the in-app charge framework until March 2012. While Amazon instituted some protections regarding in-app charges over $20 in 2012, and in 2013 offered users additional disclosures about in-app charges and options for prior consent in some circumstances, these updates did not prevent children from making unauthorized in-app purchases. Amazon did not make sufficient changes to its in-app purchasing methods until June 2014, when in-app purchasing began to require account holders’ express informed consent prior to completing purchases on its newer devices. For example, before a user’s first in-app purchase is completed, the user is prompted to answer whether to require a password for each future purchase or to permit purchases without a password going forward; users who choose to require a password for future purchases are also prompted to set parental controls to prevent app purchases by children or limit the amount of time children spend on these apps.
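The consent flow described above can be abstracted into a simple gate: no charge completes until the account holder has made an explicit choice, and password-protected accounts must authorize each charge. The following is a hypothetical sketch of such a mechanism (the names and logic are our own illustration, not Amazon’s actual implementation):

```python
class Account:
    """Hypothetical account; the consent choice is unset until the holder decides."""
    def __init__(self, password: str):
        self.password = password
        self.require_password = None  # None = consent choice not yet made

def complete_purchase(account, price, entered_password=None, first_purchase_choice=None):
    # Before the first in-app purchase, the account holder must expressly
    # choose whether future purchases will require a password.
    if account.require_password is None:
        if first_purchase_choice is None:
            raise PermissionError("consent choice required before first purchase")
        account.require_password = first_purchase_choice
    # If the holder opted in, every charge must be authorized by password.
    if account.require_password and entered_password != account.password:
        raise PermissionError("password required to authorize this charge")
    return f"charged ${price:.2f}"
```

The key design point mirrored from the ruling is that the default path never bills silently: an unconfigured account cannot be charged at all, and the choice itself supplies the “express informed consent” the court required.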

According to the FTC, children’s games often encouraged children to acquire virtual items in ways that blurred the lines between what cost virtual currency and what cost real money, and some children were even able to incur “real money” charges by clicking buttons at random during play.  Moreover, it appeared that many parents simply did not understand that several apps, particularly ones labeled “FREE,” allowed users to make in-app charges because such a notification was not conspicuously presented alongside the app’s price prior to download, but instead was buried within a longer description that users had to scroll down to read.

In granting summary judgment on liability under the FTC Act, the court rejected Amazon’s argument that its liberal refund practices sufficiently mitigated harm. The court reasoned that many customers may not have been aware of any unauthorized purchases and that the time spent pursuing refunds constitutes an additional injury.  The court also rejected Amazon’s argument that the injuries were reasonably avoidable, indicating that it is unreasonable to expect a consumer to be familiar with in-app purchases within apps labeled as “FREE.”  Lastly, because Amazon’s stated policy was that it did not provide refunds for in-app purchases, it was entirely reasonable for a consumer to be unaware of the refund procedures for unauthorized in-app purchases.

In denying the FTC’s request for permanent injunctive relief, the court noted that Amazon had already implemented measures to protect consumers, and it found no “cognizable danger of recurring violation.” Since the implementation of an in-app charge framework requiring account holders’ express informed consent began in June 2014, the likelihood of future unlawful conduct is minimal, despite the fact that there still exists the possibility of in-app purchases under $1 without authorization on older, first-generation Kindle devices (a device not sold since 2012).

With Amazon’s liability under Section 5 of the FTC Act having been established, further briefing will be required to determine monetary damages that ran, in general, from when in-app charges began in November 2011 up until June 2014 when Amazon’s revised in-app purchase prompt was instituted. It remains to be seen whether the amount of damages or any settlement will be in line with previous settlements the FTC has reached with other mobile platforms regarding issues involving similar in-app charges.

It appears, following the Amazon ruling, that best practices for a mobile platform or owner of an app that features in-app purchasing opportunities should include clear notice to the consumer and express informed consent from the account holder, for apps directed to both children and adults. Clear and conspicuous notice that in-app purchasing exists should be provided prior to downloading the app, and express informed consent, in some form, should be obtained before the in-app purchase is made. Note further that, while refund policies for unauthorized in-app purchases may reduce the number of consumer complaints, the Amazon court stressed that such policies may not be sufficient to prevent liability under Section 5 of the FTC Act, as the process of acquiring these refunds has been deemed an additional injury to the consumers.

Craigslist Files Another Suit against Data Scraper

For years, craigslist has aggressively used technological and legal methods to prevent unauthorized parties from scraping, linking to, or accessing user postings for their own commercial purposes.  In a prior post, we briefly discussed craigslist’s action against a certain aggregator that was scraping content from the craigslist site (despite having received a cease and desist letter informing it that it was no longer permitted to access the site) and offering the data to outside developers through an API. (See generally Craigslist, Inc. v. 3Taps, Inc., 2013 WL 1819999 (N.D. Cal. Apr. 30, 2013)).  In 2015, craigslist subsequently settled the 3Taps lawsuit, with relief against various defendants that included monetary payments and a permanent injunction barring the defendants from accessing any craigslist content, circumventing any technological measures that prohibit spidering activity or otherwise representing that they were affiliated with craigslist.

This past April, the 3Taps saga, in a way, was resurrected.  Craigslist filed a complaint against the real estate listing site RadPad, an entity that had allegedly received, for a limited time period, scraped craigslist data from 3Taps that it used on its own website.  In its complaint, craigslist claims that after the 3Taps litigation was settled in June 2015, RadPad or its agents began their own independent efforts to scrape the craigslist site.  Craigslist alleges that RadPad used sophisticated techniques to evade detection and scrape thousands of user postings and thereafter harvested users’ contact information to send spam over the site’s messaging system in an effort to entice users to switch to RadPad’s services.  (See Craigslist, Inc. v. RadPad, Inc., No. 16-1856 (N.D. Cal. filed Apr. 8, 2016)).  In its complaint seeking compensatory damages and injunctive relief, craigslist brought several causes of action, including:

  • Breach of Contract: The complaint alleges that as a user of the site, RadPad was presented with and agreed to the site’s Terms of Use, which prohibit scraping and spidering activity, the collection of user contact information, and the sending of unsolicited spam.
  • CAN-SPAM (and California state spam law): RadPad allegedly initiated the transmission of commercial email messages with misleading subject headings and a non-functioning opt-out mechanism, among other violations, and also had allegedly collected email addresses using email harvesting software.  Craigslist asserts that it was adversely affected and incurred expenses to combat the spam messages and is entitled to statutory damages.
  • Computer Fraud and Abuse Act (CFAA) (and California state law equivalent): The complaint alleges that RadPad accessed craigslist’s site in contravention of the Terms of Use and thereby gained unauthorized access to craigslist’s servers and obtained valuable user data.  Websites seeking to deter unauthorized screen scraping frequently advance this federal cause of action, with mixed results.
  • Copyright Infringement: Craigslist claims that RadPad is liable for secondary copyright infringement for inducing 3Taps’ prior copyright infringement, by allegedly assisting 3Taps in shaping the “data feed” and advising on how to circumvent the site’s technological blocks.

In response, RadPad filed its answer late last month, arguing that craigslist is attempting to exclude RadPad from accessing publicly-available information that would allow it to compete in the classified-ad market for real estate rentals.  In its counterclaim, RadPad claims that, in its efforts to block RadPad, craigslist has prevented email messages containing the word “RadPad” from being delivered to landlords in response to craigslist listings, an act that, it alleges, constitutes unfair competition.  RadPad is also seeking a declaration that craigslist is wrongfully asserting copyright claims over rental listings that are not copyrightable subject matter.

Outside of the dispute, the debate continues.  Digital rights advocates have argued that content on publicly-available websites is implicitly free to disseminate across the web, while web services hosting valuable user-generated content or other data typically wish to exercise control over which parties can access and use it for commercial purposes.  While the law surrounding scraping remains unsettled, craigslist has notched some notable litigation successes in recent years, including in the prior 3Taps case.  In that case, a California district court ruled, among other things, that an owner of a publicly-accessible website may, through a cease-and-desist letter and use of IP address blocking technology, revoke a specific user’s authorization to access that website. Such lack of “authorization” could form the basis of a viable claim under the federal Computer Fraud and Abuse Act and state law counterpart. See Craigslist, Inc. v. 3Taps, Inc., 2013 WL 4447520 (N.D. Cal. Aug. 16, 2013).
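Mechanically, the “IP address blocking technology” credited in the 3Taps case is simple to picture: once authorization is revoked by a cease-and-desist letter, the site operator denies requests arriving from the scraper’s known addresses. A minimal sketch (the address ranges here are hypothetical documentation ranges, not addresses from the actual litigation):

```python
import ipaddress

# Hypothetical ranges associated with a scraper whose access has been
# revoked (e.g., following a cease-and-desist letter).
REVOKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

def is_authorized(client_ip: str) -> bool:
    """Return False for requests from addresses whose access was revoked."""
    addr = ipaddress.ip_address(client_ip)
    return not any(addr in network for network in REVOKED_NETWORKS)
```

Under the 3Taps court’s reasoning, a request that gets through despite such a block (say, via a rotating proxy) is made “without authorization,” which is what opens the door to a CFAA claim.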

It remains to be seen whether the court will consider any of the issues in the RadPad dispute on the merits or whether the parties will resolve the matter with some agreed-upon restrictions limiting or barring RadPad’s access to craigslist’s site.  We will continue to watch this case and similar data scraping disputes carefully.

Proposed Amendment to Illinois Law Would Have Changed Shape of Biometric Privacy Litigation

Late last week, the Illinois state senate considered an amendment tacked onto an unrelated bill that would have revised Illinois’ Biometric Information Privacy Act, a law that has been the subject of much debate and litigation in the past year.  This amendment had the potential to drastically affect the current litany of lawsuits lodged against technology companies over their photo-tagging services.  In the wake of heavy lobbying against the amendment by opponents to the change, and as the legislative session neared a close, the senator who proposed the amendment announced that it would, for now, be put on hold.

The use of facial recognition technology by certain web and mobile services with social aspects has become an emerging concern in the past year.  Multiple social media companies have been ensnared in litigation over their use of facial recognition technology to provide certain photo-tagging and other related services, with plaintiffs seeking statutory damages under Illinois’ Biometric Information Privacy Act (BIPA).

Plaintiffs alleging violations of BIPA generally assert that certain web and mobile services have amassed users’ faceprints without the requisite notice and consent by using advanced facial recognition technology to extract biometric identifiers from uploaded user photographs.  Defendants, in turn, have argued that BIPA expressly excludes from its coverage “photographs” and “any information derived from photographs” and that the statute’s use of the term “scan of hand or face geometry” was only meant to cover in-person scans of a person’s actual hand or face (not the scan of an uploaded photograph).  Thus far, such defense arguments have been rejected at the early motion to dismiss phase of several ongoing disputes.

The amendment to BIPA (attached to a pending bill regarding unclaimed property) would shorten the reach of the law, echoing the interpretations of BIPA advanced by the various defendants in ongoing litigation.  The BIPA amendment expressly excludes both physical and digital photographs from the definition of “biometric identifier” and most notably, limits the definition of “scan of hand or face geometry” to in-person scans (“data resulting from an in-person process whereby a part of the body is traversed by a detector or an electronic beam”).  Such an amendment would effectively abrogate BIPA claims related to the collection of user faceprints by online services.

It is unclear whether this revision to the Illinois biometric privacy law will be taken up and debated when the Illinois legislature reconvenes.

Tenth Circuit Affirms Lower Court Ruling on Meaning of “User” in DMCA §512(c) Safe Harbor

Title II of the Digital Millennium Copyright Act (DMCA) offers safe harbors for qualifying service providers to limit their liability for claims of copyright infringement. To benefit from the Section 512(c) safe harbor, a storage provider must establish that the infringing content was stored “at the direction of the user.”  17 U.S.C. § 512(c)(1).  The statute does not define “user” and until recently, no court had interpreted the term.

Last May, we wrote about a Colorado district court decision that interpreted what “storage at the direction of a user” means in the context of online media — specifically, the business model of, a “content farm” style site which posts articles written by independent contractors on popular topics of the day.  The dispute before the lower court centered on whether was entitled to protection under the § 512(c) safe harbor.  More specifically, the question became whether the contributors to the Examiner (who had to sign an “Examiners Independent Contractor Agreement and License” before receiving permission to post to the site) were “users” under § 512(c) — that is, whether the plaintiffs’ photographs were stored on the defendant’s system at the direction of the site’s contributors or at the direction of the defendant.

In BWP Media USA, Inc. v. Clarity Digital Group, LLC, 2016 WL 1622399 (10th Cir. Apr. 25, 2016), the appeals court affirmed the lower court’s holding that the infringing photographs were not uploaded at the direction of the defendant and that the defendant was protected under the DMCA safe harbor.  The Tenth Circuit found that, in the absence of evidence that the defendant directed the contributors to upload the plaintiffs’ photographs to the site, the defendant’s policies (e.g., prohibiting use of infringing content in the user agreement, having a repeat infringer policy and offering contributors free access to a licensed photo library) showed that the photographs were stored at the direction of the “user.”

According to the court, the word “user” in the DMCA should be interpreted according to its plain meaning, to describe “a person or entity who avails itself of the service provider’s system or network to store material.”  Notably, the court flatly rejected the plaintiff’s argument that the term “user” should exclude an ISP’s or provider’s employees and agents, or any individual who enters into a contract and receives compensation from a provider.  Refusing to place its own limitations on the meaning of “user,” the Tenth Circuit stated that a “user” is simply “anyone who uses a website — no class of individuals is inherently excluded,” even commenting that “simply because someone is an employee does not automatically disqualify him as a ‘user’ under § 512.”

To quell any fears that such a natural reading would create a “lawless no-man’s-land,” the court noted that the term “user” must be read in conjunction with the remainder of the safe harbor provision.  As such, a storage provider will only qualify for safe harbor protection when it can show, among other things, that the content was stored at the direction of a “user,” that the provider had no actual knowledge of the infringement, that there were no surrounding facts or circumstances making the infringement apparent, or that upon learning of the infringement, the provider acted expeditiously to take down the infringing material. See 17 U.S.C. § 512(c)(1)(A).  Thus, the relevant question isn’t who is the “user,” but rather, who directed the storage of the infringing content – as the court stressed, there is no protection under § 512 when the infringing material is on the system or network as a result of the provider’s “own acts or decisions”:

“When an ISP ‘actively encourag[es] infringement, by urging [its] users to both upload and download particular copyrighted works,’ it will not reap the benefits of § 512’s safe harbor. However, if the infringing content has merely gone through a screening or automated process, the ISP will generally benefit from the safe harbor’s protection.”

The opinion maintains the relatively robust protections of the DMCA safe harbor for storage providers that follow proper procedures.  In the court’s interpretation, the term “user” is not limited by any relationship with the provider, essentially removing the concept of the user from the safe harbor analysis and placing the emphasis on the remaining requirements of the statute (which, regardless, are frequently the subject of contention in litigation involving the DMCA safe harbor).

California Court Refuses to Dismiss Biometric Privacy Suit against Facebook

The District Court for the Northern District of California recently issued what could be a very significant decision on a number of important digital law issues.  These include: the enforceability of “clickwrap” as compared to “web wrap” website terms of use, the enforceability of a choice-of-law provision in such terms of use, and a preliminary interpretation of the Illinois Biometric Information Privacy Act (BIPA).  In its opinion, the court found Facebook’s terms of use to be enforceable, but declined to enforce the California choice of law provision and held that the plaintiffs stated a claim under BIPA.  (See In re Facebook Biometric Information Privacy Litig., No. 15-03747 (N.D. Cal. May 5, 2016)).

As a result, the ruling could affect cases involving the enforceability of terms of use generally, and certainly choice of law provisions commonly found in such terms.  The court’s interpretation of BIPA is likely to be a consideration in similar pending biometric privacy suits.  The decision should also prompt services to review their user agreements or otherwise reexamine their legal compliance regarding facial recognition data collection and retention.

As we noted in a prior post, Facebook has been named as a defendant in a number of lawsuits claiming that its facial recognition-based system of photo tagging violates BIPA.  Plaintiffs generally allege that Facebook’s Tag Suggestions program amassed users’ biometric data without notice and consent by using advanced facial recognition technology to extract biometric identifiers from user photographs uploaded to the service.  The various Illinois-based suits were eventually transferred to the Northern District of California and consolidated.

In its motion to dismiss the consolidated action, Facebook argued that the plaintiffs failed to state a claim under BIPA and that the California choice-of-law provision in its user agreement precluded the application of the Illinois statute.

As an initial matter, the court ruled that Facebook’s user agreement was enforceable because the plaintiffs assented to the terms when they initially signed up for Facebook, and also agreed to the current user agreement by continuing to use Facebook after receiving notice of the current terms.  Before reaching its conclusion, however, the court took some potshots at Facebook’s online contracting process.

While the exact methods of electronic contracting for the multiple plaintiffs differed slightly, the court examined most closely the method in use for the plaintiff Licata: “By clicking Sign Up, you are indicating that you have read and agree to the Terms of Use and Privacy Policy,” with the terms of use presented via a conspicuous hyperlink. Expressing skepticism of this relatively common method of online contracting, the court found that the use of a single “Sign Up” button to activate an account and accept the terms (as opposed to a separate clickbox, distinct from the registration button, used solely to manifest the user’s assent to the terms) “raises concerns about contract formation.”   In the end, the court conceded that Ninth Circuit precedent “indicated a tolerance for the single-click ‘Sign Up’ and assent practice,” and that the Ninth Circuit itself had cited with approval a decision from the Southern District of New York that found Facebook’s contracting process enforceable.  The court also commented that the dual-purpose box the plaintiff Licata had to click, located alongside hyperlinked terms, was “enough to create an enforceable agreement” – different enough from certain “web wrap” or “browsewrap” scenarios in which a website owner attempts to impose terms upon users based upon mere passive viewing of a website.

However, despite upholding Facebook’s electronic contracting process, the court declined to enforce the California choice-of-law provision in the user agreement and applied Illinois law because it found that Illinois had a greater interest in the outcome of this BIPA-related dispute.

As to the substantive arguments, the court found unpersuasive Facebook’s contention that BIPA excludes from its scope all information involving photographs.  In essence, BIPA regulates the collection, retention, and disclosure of personal biometric identifiers and biometric information.  While the statute defines “biometric identifier” as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry,” it also specifically excludes photographs from that definition.  Facebook (as Shutterfly did in seeking dismissal of a similar suit over its photo tagging practices) sought to use this tension, or apparent ambiguity, within the statute to escape its reach.  However, viewing the statute as a whole, the court concluded that the plaintiffs stated a claim under the plain language of BIPA:

“Read together, these provisions indicate that the Illinois legislature enacted BIPA to address emerging biometric technology, such as Facebook’s face recognition software as alleged by plaintiffs…. ‘Photographs’ is better understood to mean paper prints of photographs, not digitized images stored as a computer file and uploaded to the Internet. Consequently, the Court will not read the statute to categorically exclude from its scope all data collection processes that use images.”

The court also rejected Facebook’s argument that the statute’s reference to a “scan of hand or face geometry” only applied to in-person scans of a person’s actual face (such as during a security screening) and that creating faceprints from uploaded photographs does not constitute a “scan of face geometry” under the statute.  The court found this “cramped interpretation” to be against the statute’s focus and “antithetical to its broad purpose of protecting privacy in the face of emerging biometric technology.”

However, in allowing the suit to go forward, the court cautioned that discovery might elicit facts that could change the outcome:

“As the facts develop, it may be that ‘scan’ and ‘photograph’ with respect to Facebook’s practices take on technological dimensions that might affect the BIPA claims. Other fact issues may also inform the application of BIPA. But those are questions for another day.”

This is the second court to refuse to shelve a BIPA-related case at the motion to dismiss stage (the first being the Illinois court in Norberg v. Shutterfly, a dispute that settled this past April).  The Facebook decision is notable in that the court refused to categorically rule that photo tagging, a function offered by multiple tech companies, falls outside the ambit of BIPA.  Companies that offer online or mobile services involving the collection of covered biometric information will ultimately have to decide how to react to this latest ruling, perhaps by revising their notice and consent practices, declining to collect or store biometric data at all, or taking a wait-and-see approach as the Facebook litigation proceeds.

We will continue to closely watch the ongoing litigation, developments and best practices surrounding biometric privacy.

User of Free App May Be “Consumer” under the Video Privacy Protection Act

This past week, the First Circuit issued a notable opinion concerning the contours of liability under the Video Privacy Protection Act (VPPA) – a decision that stirs up further uncertainty as to where to draw the line regarding VPPA liability when it comes to mobile apps.  (See Yershov v. Gannett Satellite Information Network Inc., No. 15-1719 (1st Cir. Apr. 29, 2016)).  The opinion, which reversed the dismissal of the case by the district court, took a more generous view than the lower court as to who is a “consumer” under the statute.  The court’s reasoning also ran contrary to a decision from the Northern District of Georgia from last month. There, the district court ruled that a user of a free app was not a “consumer” under the VPPA and that the collection of the user’s anonymous mobile phone MAC address and associated video viewing history did not qualify as “personally identifiable information” that links an actual person to actual video materials. (See Perry v. Cable News Network, Inc., No. 14-02926 (N.D. Ga. Apr. 20, 2016)).

Subject to certain exceptions, the VPPA prohibits “video tape service providers” from knowingly disclosing, to a third party, “personally identifiable information concerning any consumer.” 18 U.S.C. § 2710(b).  Under the VPPA, the term “consumer” means any “renter, purchaser, or subscriber of goods or services from a video tape service provider.” 18 U.S.C. § 2710(a)(1).  The term “personally identifiable information” includes “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” 18 U.S.C. § 2710(a)(3).

In Yershov, a user of the USA Today app alleged that each time he viewed a video clip, the app transmitted his mobile Android ID, GPS coordinates, and the identity of the video watched to a third-party analytics company to create user profiles for the purposes of targeted advertising, all in violation of the VPPA.  In dismissing the complaint, the lower court had found that while the information the app disclosed was “personally identifiable information” (PII) under the VPPA, the plaintiff, as the user of a free app, was not a consumer (i.e., a “renter, purchaser, or subscriber” of or to Gannett’s video content) protected by the VPPA.

Personally Identifiable Information

The First Circuit agreed with the district court that the individual’s information at issue was, in fact, PII.  As the appeals court noted, the statutory term “personally identifiable information” is “awkward and unclear.” As a result, courts are still grappling with whether unique device IDs and GPS data are PII under the statute.  The analysis has not been consistent. For example, last year a New York court ruled that an anonymized Roku device serial number was not PII because it did not necessarily identify a particular person as having accessed specific video materials.  In Yershov, however, the district court found, at the motion to dismiss stage, that the plaintiff plausibly alleged that the disclosed information (i.e., Android ID + GPS data + video viewing information) was PII under the VPPA.  The appeals court agreed, concluding that the transmittal of GPS information with a device identifier plausibly presented a “linkage” of information to identity (i.e., the plaintiff adequately alleged that “Gannett disclosed information reasonably and foreseeably likely to reveal which USA Today videos Yershov has obtained”).  While the court’s explanation was relatively scant, its reasoning seemed to hinge on the collection of the user’s GPS data, which the court suggested could easily be processed to locate the user on a street map.

“Consumer” under the VPPA

The court of appeals next tackled whether the plaintiff was a “consumer” within the meaning of the statute.  The court had to determine whether to follow a sister court’s holding that a user of a free app was generally not a “consumer” under the Act (particularly if the user was not required to sign up for an account, make any payments, or receive periodic services, or was otherwise granted access to restricted content), or an older ruling that reached the opposite conclusion. In taking a broad reading of “consumer,” the First Circuit held that while the plaintiff neither paid money nor opened an account, he was a “consumer” under the Act because “access was not free of a commitment to provide consideration in the form of that information, which was of value to Gannett.”   Asking the rhetorical question, “Why, after all, did Gannett develop and seek to induce downloading of the App?”, the court saw in the relationship between app owner and user a form of value exchange that rose to the level of a subscription under the VPPA:

“And by installing the App on his phone, thereby establishing seamless access to an electronic version of USA Today, Yershov established a relationship with Gannett that is materially different from what would have been the case had USA Today simply remained one of millions of sites on the web that Yershov might have accessed through a web browser.”

Ultimately, the court summarized its holding this way:

“We need simply hold, and do hold, only that the transaction described in the complaint–whereby Yershov used the mobile device application that Gannett provided to him, which gave Gannett the GPS location of Yershov’s mobile device at the time he viewed a video, his device identifier, and the titles of the videos he viewed in return for access to Gannett’s video content–plausibly pleads a case that the VPPA’s prohibition on disclosure applies.”

Final Considerations

While the proceedings in this case are still preliminary and the case may yet falter based on other issues, video-based app providers should take notice, particularly with respect to the following questions:

  • When does the disclosure of a unique device number cross the line into PII under the VPPA? While there is certainly a point where such information is too remote or too dependent on what the court called “unforeseeable detective work,” mobile app owners should, in light of Yershov, reexamine practices that involve the disclosure of mobile geolocation data without express, informed consent.
  • When is the user of a free app a “consumer” under the VPPA? While the court reversed the lower court’s ruling on this issue, further discovery regarding the relationship between the app and its users, and how it differs from the relationship between the USA Today website and its users, may alter the court’s reasoning.  Also, in a future dispute in another circuit, a court might take the narrower position that a “consumer” or “subscriber” under the VPPA must exhibit at least some indicia of a subscription, such as payment, registration, user commitment, regular delivery, or access to restricted content.