
New Media and Technology Law Blog

NTIA Multistakeholder Process Finalizes General Privacy Guidelines for Commercial Facial Recognition Use

Posted in Biometrics, Privacy

We’ve previously blogged about the National Telecommunications and Information Administration (NTIA) privacy multistakeholder process to address concerns associated with the emerging commercial use of facial recognition technology. Notably, last year, the self-regulatory initiative hit a stumbling block when nine consumer advocacy groups withdrew from the process due to a lack of consensus on a minimum standard of consent.  Nevertheless, the remaining participants pressed on, and last week the stakeholders concluded the process, reaching consensus on final privacy guidelines, “Privacy Best Practice Recommendations For Commercial Facial Recognition Use.”

The guidelines generally apply to “covered entities,” or any person, including corporate affiliates, that collects, stores, or processes facial template data. The guidelines do not apply to the use of facial recognition for the purpose of aggregate or non-identifying analysis (e.g., the counting of unique visitors to a particular location), and are not applicable to certain governmental uses of the technology, such as for law enforcement or national security. Moreover, under the guidelines, data that has been “reasonably de-identified” is not facial template data and therefore not covered by the best practices.

The guidelines are generally broken down into several categories:

  • Transparency: Covered entities are encouraged to reasonably disclose to consumers their practices regarding the collection, storage and use of faceprints and update such policies in the event of material changes. Policies should generally describe the foreseeable purposes of data collection, the entity’s data retention and de-identification practices, and whether the entity offers the consumer the ability to review or delete any facial template data.  Where facial recognition technology is used at a physical location, the entity is encouraged to provide “concise notice” to consumers of such use.
  • Recommended Practices: Before implementing facial recognition technology, the guidelines suggest that entities consider certain important issues, including:
    • Voluntary or involuntary enrollment
    • Types of other sensitive data being captured and any other risks to the consumer
    • Whether faceprints will be used to determine eligibility for, or access to, certain activities covered under law (e.g., employment, healthcare)
    • Reasonable consumer expectations regarding the use of the data
  • Data Sharing: Covered entities that use facial recognition to determine an individual’s identity are encouraged to offer the individual the opportunity to control the sharing of such data with unaffiliated third parties (note: an unaffiliated third party does not include a covered entity’s vendor or supplier that provides a product or service related to the facial template data).
  • Data Security: Reasonable security measures should be used to safeguard collected data, consistent with the operator’s size, the nature and scope of the activities, and the sensitive nature of the data.
  • Redress: Covered entities are encouraged to offer consumers a process to submit concerns over the entity’s use of faceprints.

In the end, the recommendations are merely best practices for the emerging use of facial recognition technology, and they will certainly spark more debate on the issue.  Following the release, privacy advocates generally criticized the guidelines, having hoped that stronger notice and consent principles, along with additional guidance on how to handle certain privacy risks, would be part of the final document.  It remains to be seen how many of the suggested guidelines will be implemented in practice, and whether consumers themselves will nudge the industry to erect additional privacy controls.

In the meantime, entities must still consider compliance issues surrounding the noteworthy Illinois biometric privacy law (the Biometric Information Privacy Act, or BIPA), which was enacted in 2008 and is now the subject of much recent litigation.

We will continue to monitor the latest legal and industry developments relating to biometric privacy.

FTC Prevails in Action against Amazon for Unlawfully Billing Parents for Children’s Unauthorized In-App Purchases

Posted in Internet, Mobile, Online Commerce

In the wake of thousands of parental complaints about unauthorized in-app purchases made by their children, resulting in millions of dollars in disputed charges, the Federal Trade Commission (“FTC”) brought suit against Amazon.com, Inc. (“Amazon”) in July 2014. The FTC sought a court order requiring refunds to consumers for unauthorized charges and permanently banning the company from billing parents and other account holders for in-app charges without their consent.  This past April, a Washington district court granted the FTC’s motion for summary judgment on liability, ruling that the billing of account holders for in-app purchases made by children without the account holders’ express informed consent constituted an unfair practice under Section 5 of the FTC Act.  (FTC v. Amazon.com, Inc., 2016 WL 1643973 (W.D. Wash. Apr. 26, 2016)).  Despite rejecting Amazon’s challenge to the FTC’s claims, the court denied the FTC’s request for injunctive relief to prevent future violations.  The ruling underscores the FTC’s shift of enforcement energies toward the mobile industry, and the importance of building proper consent mechanisms when consumers, especially children, are charged for purchases.

Amazon’s in-app purchasing functionality launched in November 2011, and despite regular parental complaints, no updates were made to the in-app charge framework until March 2012. While Amazon instituted some protections regarding in-app charges over $20 in 2012, and in 2013 offered users additional disclosures about in-app charges and options for prior consent in some circumstances, these updates did not prevent children from making unauthorized in-app purchases. Amazon did not make sufficient changes to its in-app purchasing methods until June 2014, when in-app purchasing began to require account holders’ express informed consent prior to completing purchases on its newer devices. For example, before a user’s first in-app purchase is completed, users are prompted to answer whether they want to require a password for each future purchase or permit purchases without a password going forward; if users choose to require a password for future purchases, they are also prompted to set the parental controls to prevent app purchases by children or limit the amount of time children spend on these apps.

According to the FTC, children’s games often encouraged children to acquire virtual items in ways that blurred the lines between what cost virtual currency and what cost real money, and some children were even able to incur “real money” charges by clicking buttons at random during play.  Moreover, it appeared that many parents simply did not understand that several apps, particularly ones labeled “FREE,” allowed users to make in-app charges because such a notification was not conspicuously presented alongside the app’s price prior to download, but instead was buried within a longer description that users had to scroll down to read.

In granting summary judgment on liability under the FTC Act, the court rejected Amazon’s argument that its liberal refund practices sufficiently mitigated harm. The court reasoned that many customers may not have been aware of any unauthorized purchases and that the time spent pursuing refunds constitutes an additional injury.  The court also rejected Amazon’s argument that the injuries were reasonably avoidable, indicating that it is unreasonable to expect a consumer to be familiar with in-app purchases within apps labeled as “FREE.”  Lastly, because Amazon’s policy stated that it did not provide refunds for in-app purchases, it was entirely reasonable for a consumer to be unaware of the refund procedures for unauthorized in-app purchases.

In denying the FTC’s request for permanent injunctive relief, the court noted that Amazon had already implemented measures to protect consumers, and it found no “cognizable danger of recurring violation.” Because Amazon’s in-app charge framework requiring account holders’ express informed consent has been in place since June 2014, the likelihood of future unlawful conduct is minimal, even though in-app purchases under $1 without authorization remain possible on older, first-generation Kindle devices (a device not sold since 2012).

With Amazon’s liability under Section 5 of the FTC Act established, further briefing will be required to determine monetary damages for the period running, in general, from when in-app charges began in November 2011 until June 2014, when Amazon’s revised in-app purchase prompt was instituted. It remains to be seen whether the amount of damages or any settlement will be in line with previous settlements the FTC has reached with other mobile platforms over similar in-app charge issues.

It appears, following the Amazon ruling, that best practices for a mobile platform or owner of an app that features in-app purchasing opportunities should include clear notice to the consumer and express informed consent from the account holder, for apps directed to both children and adults. Clear and conspicuous notice that in-app purchasing exists should be provided prior to downloading the app, and express informed consent, in some form, should be obtained before the in-app purchase is made. Note further that while refund policies for unauthorized in-app purchases may reduce the number of consumer complaints, the Amazon court stressed that such policies may not be sufficient to prevent liability under Section 5 of the FTC Act, as the process of acquiring these refunds has been deemed an additional injury to the consumers.

Craigslist Files Another Suit against Data Scraper

Posted in Contracts, Copyright, E-mail, Online Content, Screen Scraping

For years, craigslist has aggressively used technological and legal methods to prevent unauthorized parties from scraping, linking to or accessing user postings for their own commercial purposes.  In a prior post, we briefly discussed craigslist’s action against a certain aggregator that was scraping content from the craigslist site (despite having received a cease and desist letter informing it that it was no longer permitted to access the site) and offering the data to outside developers through an API. (See generally Craigslist, Inc. v. 3Taps, Inc., 2013 WL 1819999 (N.D. Cal. Apr. 30, 2013)).  In 2015, craigslist subsequently settled the 3Taps lawsuit, with relief against various defendants that included monetary payments and a permanent injunction barring the defendants from accessing any craigslist content, circumventing any technological measures that prohibit spidering activity or otherwise representing that they were affiliated with craigslist.

This past April, the 3Taps saga, in a way, was resurrected.  Craigslist filed a complaint against the real estate listing site RadPad, an entity that had allegedly received, for a limited time period, scraped craigslist data from 3Taps that it used on its own website.  In its complaint, craigslist claims that after the 3Taps litigation was settled in June 2015, RadPad or its agents began their own independent efforts to scrape the craigslist site.  Craigslist alleges that RadPad used sophisticated techniques to evade detection and scrape thousands of user postings and thereafter harvested users’ contact information to send spam over the site’s messaging system in an effort to entice users to switch to RadPad’s services.  (See Craigslist, Inc. v. RadPad, Inc., No. 16-1856 (N.D. Cal. filed Apr. 8, 2016)).  In its complaint seeking compensatory damages and injunctive relief, craigslist brought several causes of action, including:

  • Breach of Contract: The complaint alleges that as a user of the site, RadPad was presented with and agreed to the site’s Terms of Use, which prohibit scraping and spidering activity, the collection of user contact information, and the sending of unsolicited spam.
  • CAN-SPAM (and California state spam law): RadPad allegedly initiated the transmission of commercial email messages with misleading subject headings and a non-functioning opt-out mechanism, among other violations, and also had allegedly collected email addresses using email harvesting software.  Craigslist asserts that it was adversely affected and incurred expenses to combat the spam messages and is entitled to statutory damages.
  • Computer Fraud and Abuse Act (CFAA) (and California state law equivalent): The complaint alleges that RadPad accessed craigslist’s site in contravention of the Terms of Use and thereby gained unauthorized access to craigslist’s servers and obtained valuable user data.  Websites seeking to deter unauthorized screen scraping frequently advance this federal cause of action, with mixed results.
  • Copyright Infringement: Craigslist claims that RadPad is liable for secondary copyright infringement for inducing 3Taps’ prior copyright infringement, by allegedly assisting 3Taps in shaping the “data feed” and advising on how to circumvent the site’s technological blocks.

In response, RadPad filed its answer late last month, arguing that craigslist is attempting to exclude RadPad from accessing publicly-available information that would allow it to compete in the classified-ad market for real estate rentals.  In its counterclaim, RadPad claims that, in its efforts to block RadPad, craigslist has prevented email messages containing the word “RadPad” from being delivered to landlords in response to craigslist listings, an act that, it alleges, constitutes unfair competition.  RadPad is also seeking a declaration that craigslist is wrongfully asserting copyright claims over rental listings that are not copyrightable subject matter.

Outside of the dispute, the debate continues.  Digital rights advocates have argued that content on publicly-available websites is implicitly free to disseminate across the web, while web services hosting valuable user-generated content or other data typically wish to exercise control over which parties can access and use it for commercial purposes.  While the law surrounding scraping remains unsettled, craigslist has notched some notable litigation successes in recent years, including in the prior 3Taps case.  In that case, a California district court ruled, among other things, that an owner of a publicly-accessible website may, through a cease-and-desist letter and use of IP address blocking technology, revoke a specific user’s authorization to access that website. Such lack of “authorization” could form the basis of a viable claim under the federal Computer Fraud and Abuse Act and its state-law counterpart. See Craigslist, Inc. v. 3Taps, Inc., 2013 WL 4447520 (N.D. Cal. Aug. 16, 2013).

It remains to be seen whether the court will consider any of the issues in the RadPad dispute on the merits or whether the parties will resolve the matter with some agreed-upon restrictions limiting or barring RadPad’s access to craigslist’s site.  We will continue to watch this case and similar data scraping disputes carefully.

Proposed Amendment to Illinois Law Would Have Changed Shape of Biometric Privacy Litigation

Posted in Biometrics, Internet, Mobile, Privacy

Late last week, the Illinois state senate considered an amendment tacked onto an unrelated bill that would have revised the Illinois Biometric Information Privacy Act, a law that has been the subject of much debate and litigation in the past year.  This amendment had the potential to drastically affect the current litany of lawsuits lodged against technology companies over their photo-tagging services.  In the wake of heavy lobbying against the amendment by opponents of the change, and as the legislative session neared a close, the senator who proposed the amendment announced that it would, for now, be put on hold.

The use of facial recognition technology by certain web and mobile services with social aspects has become an emerging concern in the past year.  Multiple social media companies have been ensnared in litigation over their use of facial recognition technology to provide certain photo-tagging and other related services, with plaintiffs seeking statutory damages under Illinois’ Biometric Information Privacy Act (BIPA).

Plaintiffs alleging violations of BIPA generally assert that certain web and mobile services have amassed users’ faceprints without the requisite notice and consent by using advanced facial recognition technology to extract biometric identifiers from uploaded user photographs.  Defendants, in turn, have argued that BIPA expressly excludes from its coverage “photographs” and “any information derived from photographs” and that the statute’s use of the term “scan of hand or face geometry” was only meant to cover in-person scans of a person’s actual hand or face (not the scan of an uploaded photograph).  Thus far, such defense arguments have been rejected at the early motion to dismiss phase of several ongoing disputes.

The amendment to BIPA (attached to a pending bill regarding unclaimed property) would shorten the reach of the law, echoing the interpretations of BIPA advanced by the various defendants in ongoing litigation.  The BIPA amendment expressly excludes both physical and digital photographs from the definition of “biometric identifier” and most notably, limits the definition of “scan of hand or face geometry” to in-person scans (“data resulting from an in-person process whereby a part of the body is traversed by a detector or an electronic beam”).  Such an amendment would effectively abrogate BIPA claims related to the collection of user faceprints by online services.

It is unclear whether this revision to the Illinois biometric privacy law will be taken up and debated when the Illinois legislature reconvenes.

Tenth Circuit Affirms Lower Court Ruling on Meaning of “User” in DMCA §512(c) Safe Harbor

Posted in Copyright, Online Content

Title II of the Digital Millennium Copyright Act (DMCA) offers safe harbors for qualifying service providers to limit their liability for claims of copyright infringement. To benefit from the Section 512(c) safe harbor, a storage provider must establish that the infringing content was stored “at the direction of the user.”  17 U.S.C. § 512(c)(1).  The statute does not define “user” and until recently, no court had interpreted the term.

Last May, we wrote about a Colorado district court decision that interpreted what “storage at the direction of a user” means in the context of online media — specifically, the business model of Examiner.com, a “content farm” style site which posts articles written by independent contractors on popular topics of the day.  The dispute before the lower court centered on whether Examiner.com was entitled to protection under the § 512(c) safe harbor.  More specifically, the question became whether the contributors to the Examiner (who had to sign an “Examiners Independent Contractor Agreement and License” before receiving permission to post to the site) were “users” under § 512(c), that is, were the plaintiffs’ photographs stored on defendant’s system at the direction of the site’s contributors or stored at the direction of the defendant.

In BWP Media USA, Inc. v. Clarity Digital Group, LLC, 2016 WL 1622399 (10th Cir. Apr. 25, 2016), the appeals court affirmed the lower court’s holding that the infringing photographs were not uploaded at the direction of the defendant and Examiner.com was protected under the DMCA safe harbor.  The Tenth Circuit found that, in the absence of evidence that the defendant directed the contributors to upload the plaintiffs’ photographs to the site, the defendant’s policies (e.g., prohibiting use of infringing content in the user agreement, having a repeat infringer policy and offering contributors free access to a licensed photo library) showed that the photographs were stored at the direction of the “user.”

According to the court, the word “user” in the DMCA should be interpreted according to its plain meaning, to describe “a person or entity who avails itself of the service provider’s system or network to store material.”  Notably, the court flatly rejected the plaintiff’s argument that the term “user” should exclude an ISP’s or provider’s employees and agents, or any individual who enters into a contract and receives compensation from a provider.  Refusing to place its own limitations on the meaning of “user,” the Tenth Circuit stated that a “user” is simply “anyone who uses a website — no class of individuals is inherently excluded,” even commenting that “simply because someone is an employee does not automatically disqualify him as a ‘user’ under § 512.”

To quell any fears that such a natural reading would create a “lawless no-man’s-land,” the court noted that the term “user” must be read in conjunction with the remainder of the safe harbor provision.  As such, a storage provider will only qualify for safe harbor protection when it can show, among other things, that the content was stored at the direction of a “user,” that the provider had no actual knowledge of the infringement, that there were no surrounding facts or circumstances making the infringement apparent, or that upon learning of the infringement, the provider acted expeditiously to take down the infringing material. See 17 U.S.C. § 512(c)(1)(A).  Thus, the relevant question isn’t who is the “user,” but rather, who directed the storage of the infringing content – as the court stressed, there is no protection under § 512 when the infringing material is on the system or network as a result of the provider’s “own acts or decisions”:

“When an ISP ‘actively encourag[es] infringement, by urging [its] users to both upload and download particular copyrighted works,’ it will not reap the benefits of § 512’s safe harbor. However, if the infringing content has merely gone through a screening or automated process, the ISP will generally benefit from the safe harbor’s protection.”

The opinion maintains the relatively robust protections of the DMCA safe harbor for storage providers that follow proper procedures.  In the court’s interpretation, the term “user” is not limited by any relationship with the provider, essentially removing the concept of the user from the safe harbor analysis and placing the emphasis on the remaining requirements of the statute (which, regardless, are frequently the subject of contention in litigation involving the DMCA safe harbor).

California Court Refuses to Dismiss Biometric Privacy Suit against Facebook

Posted in Biometrics, Contracts, Internet, Privacy

The District Court for the Northern District of California recently issued what could be a very significant decision on a number of important digital law issues.  These include: the enforceability of “clickwrap” as compared to “web wrap” website terms of use, the enforceability of a choice-of-law provision in such terms of use, and a preliminary interpretation of the Illinois Biometric Information Privacy Act (BIPA).  In its opinion, the court found Facebook’s terms of use to be enforceable, but declined to enforce the California choice of law provision and held that the plaintiffs stated a claim under BIPA.  (See In re Facebook Biometric Information Privacy Litig., No. 15-03747 (N.D. Cal. May 5, 2016)).

As a result, the ruling could affect cases involving the enforceability of terms of use generally, and certainly choice of law provisions commonly found in such terms.  The court’s interpretation of BIPA is likely to be a consideration in similar pending biometric privacy suits.  The decision should also prompt services to review their user agreements or otherwise reexamine their legal compliance regarding facial recognition data collection and retention.

As we noted in a prior post, Facebook has been named as a defendant in a number of lawsuits claiming that its facial recognition-based system of photo tagging violates BIPA.  Plaintiffs generally allege that Facebook’s Tag Suggestions program amassed users’ biometric data without notice and consent by using advanced facial recognition technology to extract biometric identifiers from user photographs uploaded to the service.  The various Illinois-based suits were eventually transferred to the Northern District of California and consolidated.

In its motion to dismiss the consolidated action, Facebook argued that the plaintiffs failed to state a claim under BIPA and that the California choice-of-law provision in its user agreement precluded the application of the Illinois statute.

As an initial matter, the court ruled that Facebook’s user agreement was enforceable because the plaintiffs assented to the terms when they initially signed up for Facebook, and also agreed to the current user agreement after having continued to use Facebook after receiving notice of the current terms.  Before reaching its conclusion, however, the court took some potshots at Facebook’s online contracting process. While the exact methods of electronic contracting for each of the multiple plaintiffs were slightly different, the court examined most closely the method in use for the plaintiff Licata: “By clicking Sign Up, you are indicating that you have read and agree to the Terms of Use and Privacy Policy,” with the terms of use presented by a conspicuous hyperlink. Expressing its skepticism of this relatively common method of online contracting, the court found that the use of a single “Sign Up” button to activate an account and accept the terms (as opposed to a separate clickbox to manifest the user’s assent to the terms that is distinct from the “Register” button) “raises concerns about contract formation.”   In the end, the court conceded that Ninth Circuit precedent “indicated a tolerance for the single-click ‘Sign Up’ and assent practice,” and that the Ninth Circuit itself had cited with approval a decision from the Southern District of New York that had found enforceable Facebook’s contracting process.  The court also commented that the dual-purpose box the plaintiff Licata had to click, located alongside hyperlinked terms, was “enough to create an enforceable agreement” – different enough from certain “web wrap” or “browsewrap” scenarios where a website owner attempts to impose terms upon users based upon mere passive viewing of a website.

However, despite upholding Facebook’s electronic contracting process, the court declined to enforce the California choice-of-law provision in the user agreement and applied Illinois law because it found that Illinois had a greater interest in the outcome of this BIPA-related dispute.

As to the substantive arguments, the court found Facebook’s contention that BIPA excludes from its scope all information involving photographs to be unpersuasive.  In essence, BIPA regulates the collection, retention, and disclosure of personal biometric identifiers and biometric information.  While the statute defines “biometric identifier” as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry,” it also specifically excludes photographs from that definition.  Facebook (and even Shutterfly in its attempt to dismiss a similar suit regarding its photo tagging practices) attempted to use this tension or apparent ambiguity within the statute to escape its reach.  However, viewing the statute as a whole, the court stated that the plaintiffs stated a claim under the plain language of BIPA:

“Read together, these provisions indicate that the Illinois legislature enacted BIPA to address emerging biometric technology, such as Facebook’s face recognition software as alleged by plaintiffs…. ‘Photographs’ is better understood to mean paper prints of photographs, not digitized images stored as a computer file and uploaded to the Internet. Consequently, the Court will not read the statute to categorically exclude from its scope all data collection processes that use images.”

The court also rejected Facebook’s argument that the statute’s reference to a “scan of hand or face geometry” only applied to in-person scans of a person’s actual face (such as during a security screening) and that creating faceprints from uploaded photographs does not constitute a “scan of face geometry” under the statute.  The court found this “cramped interpretation” to be against the statute’s focus and “antithetical to its broad purpose of protecting privacy in the face of emerging biometric technology.”

However, in allowing the suit to go forward, the court cautioned that discovery might elicit facts that could change the outcome:

“As the facts develop, it may be that “scan” and “photograph” with respect to Facebook’s practices take on technological dimensions that might affect the BIPA claims. Other fact issues may also inform the application of BIPA. But those are questions for another day.”

This marks the second time a court has refused to shelve a BIPA-related case at the motion to dismiss stage (the first being the Illinois court in Norberg v. Shutterfly, a dispute that was settled this past April).  The Facebook decision is notable in that the court refused to categorically rule that photo tagging, a function offered by multiple tech companies, falls outside the ambit of BIPA.  Companies that offer online or mobile services involving the collection of covered biometric information will ultimately have to decide how to react to this latest ruling, perhaps considering changes to their notice and consent practices, deciding not to collect or store biometric data at all, or taking a wait-and-see approach as the Facebook litigation proceeds.

We will continue to closely watch the ongoing litigation, developments and best practices surrounding biometric privacy.

User of Free App May Be “Consumer” under the Video Privacy Protection Act

Posted in Mobile, Privacy, Video, Video Privacy Protection Act

This past week, the First Circuit issued a notable opinion concerning the contours of liability under the Video Privacy Protection Act (VPPA) – a decision that stirs up further uncertainty as to where to draw the line regarding VPPA liability when it comes to mobile apps.  (See Yershov v. Gannett Satellite Information Network Inc., No. 15-1719 (1st Cir. Apr. 29, 2016)).  The opinion, which reversed the dismissal of the case by the district court, took a more generous view than the lower court as to who is a “consumer” under the statute.  The court’s reasoning also ran contrary to a decision from the Northern District of Georgia from last month. There, the district court ruled that a user of a free app was not a “consumer” under the VPPA and that the collection of the user’s anonymous mobile phone MAC address and associated video viewing history did not qualify as “personally identifiable information” that links an actual person to actual video materials. (See Perry v. Cable News Network, Inc., No. 14-02926 (N.D. Ga. Apr. 20, 2016)).

Subject to certain exceptions, the VPPA prohibits “video tape service providers” from knowingly disclosing, to a third-party, “personally identifiable information concerning any consumer.” 18 U.S.C. §2710(b).  Under the VPPA, the term “consumer” means any “renter, purchaser, or subscriber of goods or services from a video tape service provider.” 18 U.S.C. §2710(a)(1).  The term “personally identifiable information” includes “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” 18 U.S.C. §2710(a)(3).

In Yershov, a user of the USA Today app alleged that each time he viewed a video clip, the app transmitted his mobile Android ID, GPS coordinates and identification of the watched video to a third-party analytics company to create user profiles for the purposes of targeted advertising, all in violation of the VPPA.  In dismissing the complaint, the lower court had found that while the information the app disclosed was “personally identifiable information” (PII) under the VPPA, the plaintiff, as the user of a free app, was not a consumer (i.e., a “renter, purchaser, or subscriber” of or to Gannett’s video content) protected by the VPPA.

Personally Identifiable Information

The First Circuit agreed with the district court that the information at issue was, in fact, PII.  As the appeals court noted, the statutory term “personally identifiable information” is “awkward and unclear.” As a result, courts are still grappling with whether unique device IDs and GPS data are PII under the statute, and the analysis has not been consistent. For example, last year a New York court ruled that an anonymized Roku device serial number was not PII because it did not necessarily identify a particular person as having accessed specific video materials.  In Yershov, however, the district court found, at the motion to dismiss stage, that the plaintiff plausibly alleged that the disclosed information (i.e., Android ID + GPS data + video viewing information) was PII under the VPPA.  The appeals court agreed, concluding that the transmittal of GPS information with a device identifier plausibly presented a “linkage” of information to identity (i.e., the plaintiff adequately alleged that “Gannett disclosed information reasonably and foreseeably likely to reveal which USA Today videos Yershov has obtained”).  While the court’s explanation was relatively scant, its reasoning seemed to hinge on the collection of the user’s GPS data, which the court suggested could simply be processed to locate a user on a street map.

“Consumer” under the VPPA

The court of appeals next tackled whether the plaintiff was a “consumer” within the meaning of the statute.  The court had to decide whether to follow a sister court’s holding that a user of a free app is generally not a “consumer” under the Act (particularly if the user was not required to sign up for an account, make any payments, or receive periodic services, and was not otherwise granted access to restricted content), or an older ruling that reached the opposite conclusion. Taking a broad reading of “consumer,” the First Circuit held that although the plaintiff neither paid money nor opened an account, he was a “consumer” under the Act because “access was not free of a commitment to provide consideration in the form of that information, which was of value to Gannett.”   In asking the rhetorical question, “Why, after all, did Gannett develop and seek to induce downloading of the App?”, the court saw a form of value exchange in the relationship between app owner and user that rose to the level of a subscription under the VPPA:

“And by installing the App on his phone, thereby establishing seamless access to an electronic version of USA Today, Yershov established a relationship with Gannett that is materially different from what would have been the case had USA Today simply remained one of millions of sites on the web that Yershov might have accessed through a web browser.”

Ultimately, the court summarized its holding this way:

“We need simply hold, and do hold, only that the transaction described in the complaint–whereby Yershov used the mobile device application that Gannett provided to him, which gave Gannett the GPS location of Yershov’s mobile device at the time he viewed a video, his device identifier, and the titles of the videos he viewed in return for access to Gannett’s video content–plausibly pleads a case that the VPPA’s prohibition on disclosure applies.”

Final Considerations

While the proceedings in this case are still preliminary and the case may yet falter based on other issues, video-based app providers should take notice, particularly with respect to the following questions:

  • When does the disclosure of a unique device number cross the line into PII under the VPPA? While there is certainly a point where such information is too remote or too dependent on what the court called “unforeseeable detective work,” mobile app owners should, in light of Yershov, reexamine practices that involve the disclosure of mobile geolocation data without express, informed consent.
  • When is the user of a free app a “consumer” under the VPPA? While the court reversed the lower court’s ruling on this issue, further discovery into the relationship between the app and the user, and how it differs from the relationship between the USA Today website and its users, may alter the court’s reasoning.  Also, in a future dispute in another circuit, a court might take the narrower position that a “consumer” or “subscriber” under the VPPA must show at least some indicia of a subscription, such as payment, registration, user commitment, regular delivery, or access to restricted content.

Self-Publishing Platforms Deemed Distributors, Not Publishers in Privacy Suit over Unauthorized Book Cover

Posted in Contracts, Internet, Online Commerce, Privacy

We live in a world that has rapidly redefined and blurred the roles of the “creator” of content, as compared to the roles of the “publisher” and “distributor” of such content.  A recent case touches on some of the important legal issues associated with such change.  Among other things, the case illustrates the importance of service providers maintaining clear and appropriate terms and conditions that relate directly to the role they serve in the expression of content over online media.

The case involves a number of online self-publishing services. For authors who have struggled to find a publisher or who would otherwise prefer to keep control of the IP rights in their books, there are many such businesses.  These services allow authors to upload works and pay to transform those manuscripts into paperbacks via a print-on-demand model, or to make them available in ebook form for sale on the sites of major e-booksellers. Unlike a traditional publisher, however, self-publishing services do not fact-check or edit materials (though users may take advantage of unaffiliated paid services that do just that) and do not use a vetting process that might catch potentially defamatory or infringing content prior to publishing.  Indeed, beyond automated reviews for things like pornography or plagiarism, these platforms do not review submissions for content and instead rely on user agreements that contain certain contractual representations about the propriety of the uploaded content.

But what happens when a self-published book offered for sale contains content that may violate a third-party’s right of publicity or privacy rights? Should the self-publishing platforms be treated like traditional “publishers” or more like distributors or booksellers?  This past month, an Ohio district court ruled that several online self-publishing services were not liable for right of publicity or privacy claims for distributing an erotic (and so-called “less than tasteful”) book whose cover contained an unauthorized copy of the plaintiffs’ engagement photo because such services are not publishers. (See Roe v. Amazon.com, 2016 WL 1028265 (S.D. Ohio Mar. 15, 2016)).

Background of the Dispute

The dispute began with the unauthorized publication of the plaintiffs’ engagement photograph on the cover of an erotic book authored by Greg McKenna (under a pseudonym).  The book was uploaded using several online self-publishing platforms and offered for sale on the major ebook sites (as well as being offered in paperback form via print-on-demand).  The alleged privacy violations were aggravated when the book was displayed in nationwide media, including in jokes on some late night TV talk shows.  Less than a month after publication, the author received a letter from plaintiffs’ counsel and contacted the ebook vendors to remove the offending book cover and replace it with a stock image.

The plaintiffs subsequently brought suit against the author McKenna and the self-publishing vendors used by the author (i.e., Amazon’s Kindle Digital Publishing, Barnes & Noble Nook Press and Smashwords), asserting right of publicity and invasion of privacy claims.  Liability against McKenna was premised on the allegation that he authored the work in question, and liability against the self-publishing vendors on the theory that they “published” the work.  The court readily ruled that the plaintiffs could proceed against the author because they sufficiently alleged that their likenesses were appropriated for commercial benefit and that they suffered “humiliation and ridicule.”

The self-publishing vendors sought summary judgment, asserting that they were not publishers of the book but merely allowed the author to use their systems to distribute it, and that they were protected from any liability for third-party content by CDA Section 230.  In opposing dismissal, the plaintiffs argued that the vendors worked in concert with the author to provide a platform for publishing books the same way a traditional publishing house does.

Examination of the Service Providers’ Terms and Conditions

Siding with the defendants, the court dismissed the claims against the self-publishing vendors, finding that their services are not “publishing,” as that word is known in the book industry. The court pointed to the terms of service that the author agreed to when registering for defendants’ services.  For example, the terms of the Kindle agreement contained representations that the uploader owned all rights to the material and that no rights were being violated.  In the Nook agreement, the author represented and warranted to Barnes & Noble that he held “the necessary rights, including all intellectual property rights, in and to the [book] and related content” and that the book could be “sold, marketed, displayed, distributed and promoted [by Barnes & Noble] without violating or infringing the rights of any other person or entity, including, without limitation, infringing any copyright, patent, trademark or right of privacy….”   Moreover, the Smashwords agreement stressed that: “Smashwords does not… undertake[] any editorial review of the books that authors and publishers publish using its service.”

Dismissal of Claims against Self-Publishing Services

Ultimately, the court concluded:

“For now, this Court will apply the old standards to the new technology, treating the [self-publishing vendors’] process as if it were the next logical step after the photocopier. Just as Xerox would not be considered a publisher and held responsible for an invasion of privacy tort carried out with a photocopier, [the Defendants] will not be liable as publishers for the tort allegedly committed using their technology.”

Because the court based its ruling on the publisher-distributor issue, it declined to take up the question of whether the defendants were shielded from liability by CDA Section 230.

Implications from the Ruling

The decision is notable because it is not often that a court has had the opportunity to interpret the potential liabilities of print-on-demand and online self-publishing platforms in the defamation or privacy context.  The outcome is certainly welcome for online vendors that assist in the distribution and commercial “publication” of user-generated content, at least as another backstop to the protections already afforded by CDA Section 230.  The ruling should also serve as a reminder for providers to reexamine user agreements and terms of service to ensure that author representations about the non-infringing nature of uploaded content are clearly worded, and that electronic contracting best practices are followed to ensure enforceability. Interestingly, the court’s language also touched on the free speech implications of an adverse ruling: if liability for failure to inspect content were imposed on print-on-demand publishers or self-publishing platforms, they might become censors, and their services would become more expensive, precluding the publication of low-budget works or controversial opinions from independent authors.

Google Is the Latest Online Provider to Face Class Action over Collection of Faceprints

Posted in Biometrics, Internet, Mobile, Privacy, Social Media

As we have previously written about, there are several ongoing biometric privacy-related lawsuits alleging that facial recognition-based systems of photo tagging violate the Illinois Biometric Information Privacy Act (BIPA).  Add one more to the list.  A Chicago resident brought a putative class action against Google for allegedly collecting, storing and using, without consent and in violation of BIPA, the faceprints of non-users of the Google Photos service, a cloud-based photo and video storage and organization app (Rivera v. Google, Inc., No. 16-02714 (N.D. Ill. filed Mar. 1, 2016)).

Under BIPA, an entity cannot collect, capture, purchase, or otherwise obtain a person’s “biometric identifier” or “biometric information,” unless it first:

(1) informs the subject in writing that a biometric identifier is being collected;

(2) informs the subject in writing of the specific purpose and length of term for which a biometric identifier or biometric information is being collected, stored, and used; and

(3) receives a written release executed by the subject.

The statute contains defined terms and limitations, and parties in other suits are currently litigating what “biometric identifiers” and “biometric information” mean under the statute and whether the collection of facial templates from uploaded photographs using sophisticated facial recognition technology fits within the ambit of the statute.

The statute also requires that entities in possession of certain collected biometric data post a written policy establishing a retention schedule and guidelines for deleting data when the initial purpose for collection has been satisfied.  Notably, BIPA provides for a private right of action, and potential awards of $1,000 in statutory damages for each negligent violation ($5,000 for each intentional or reckless violation), as well as injunctive relief and attorney’s fees.

In the suit against Google, the plaintiff alleges that the Google Photos service created, collected and stored millions of faceprints from Illinois users who uploaded photos (and, like the plaintiff, the faceprints of non-users whose faceprints were collected merely because their images appeared in users’ uploaded photos). The plaintiff claims that, in violation of BIPA, Google failed to inform “unwitting non-users who had their face templates collected” of the specific purpose and length of term of collection, failed to obtain written consent from individuals prior to collection, or otherwise post publicly available policies identifying their face template retention schedules.  Plaintiff seeks injunctive relief compelling Google to comply with BIPA, and an award of statutory damages.

Since the named plaintiff claims to be a non-user of the Google Photos service, Google may not be able to transfer the matter to California based upon the forum selection clause in its terms of service.  Yet, as with the prior suits against other providers, Google will likely invoke jurisdictional defenses along with multiple arguments about how the Illinois statute is inapplicable to its activities based upon certain statutory exceptions.

We will continue to follow this dispute, along with the other existing biometric privacy-related litigation.  Indeed, this past week, the photo storage service Shutterfly, which is facing a suit similar to the one against Google, sought to send its matter to arbitration based upon allegations that the unnamed Shutterfly user who uploaded a photo depicting the plaintiff was actually his fiancée (and current wife).

Website HTML Is Copyrightable, Even If Look and Feel Is Not

Posted in Copyright, Internet, Online Commerce

In a notable ruling last month, a California district court ruled that the HTML underlying a custom search results page of an online advertising creation platform is copyrightable.

In Media.net Advertising FZ-LLC v. Netseer Inc., No. 14-3883, 2016 U.S. Dist. LEXIS 3784 (N.D. Cal. Jan. 12, 2016), the plaintiff, an online contextual-advertising service provider, brought copyright infringement claims against a competitor for allegedly copying the HTML from a custom-created search results page, for the purpose of creating its own custom online advertising offering.  Plaintiff argued that its copyright claim is supported by the guidance published in the revised edition of the Compendium of U.S. Copyright Office Practices (Third Edition) (Dec. 2014) (“Compendium”).

The Compendium states that while a website’s layout or look and feel is not copyrightable subject matter, its HTML may be copyrightable.  [Note: As discussed in a prior post, the look and feel of a webpage might, in certain circumstances, be protectable trade dress under the Lanham Act.]

The defendant countered that plaintiff’s HTML consists solely of uncopyrightable Cascading Style Sheets (CSS), which renders plaintiff’s copyright registrations invalid.

Generally speaking, HTML is the standard markup language used in the design of websites; it establishes the format and layout of text, content and graphics when a user views a website by instructing his or her browser to present material in a specified manner.  Anyone who has used a browser’s menu to view a web page’s source has seen the array of instructions contained between the start tag <html> and closing tag </html>.  Web developers also use CSS, which, according to the court, is merely a method of formatting and laying out the organization of documents written in a markup language, such as HTML. There are different ways to build CSS into HTML, and although CSS is often used with HTML, CSS has its own specifications.
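To illustrate the division of labor the court described (this is a hypothetical fragment for explanation only, not code from the case record), a minimal page might look like this, with HTML supplying the structure and CSS supplying the formatting:

```html
<!-- Hypothetical example: HTML provides structure, CSS provides presentation -->
<html>
  <head>
    <style>
      /* CSS rules: formatting and layout, separate from the content markup */
      .result { font-family: sans-serif; color: #333333; }
    </style>
  </head>
  <body>
    <!-- HTML markup: the content and its structural organization -->
    <div class="result">
      <h2>Example search result</h2>
      <p>Snippet text describing the result.</p>
    </div>
  </body>
</html>
```

Viewing the source of nearly any web page reveals markup of exactly this kind, which is why HTML is “readily viewable” to anyone with a standard browser.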

The Copyright Office has stated that because procedures, processes, and methods of operation are not copyrightable, the Office generally will refuse to register claims based solely on CSS. See Compendium, §1007.4.   However, the Copyright Office will register HTML as a literary work (but not as a computer program because HTML is not source code), as long as the HTML was created by a human being and contains a sufficient amount of creative expression. See Compendium § 1006.1(A).  As the Media.net court explained, the fact that HTML code produces a web page (the look and feel of which is not subject to copyright protection) does not preclude its registration because “there are multiple ways of writing the HTML code to produce the same ultimate appearance of the webpage.”  The court held that portions of plaintiff’s HTML code minimally met the requisite level of creativity to be copyrightable.
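The court’s observation that “there are multiple ways of writing the HTML code to produce the same ultimate appearance of the webpage” is easy to see with a hypothetical example: the two fragments below render identically in a browser, yet the expression in the code differs.

```html
<!-- Version 1 (hypothetical): styling written inline on the element itself -->
<p style="font-weight: bold; color: red;">Sponsored result</p>

<!-- Version 2 (hypothetical): the same appearance achieved via a class
     and a separate stylesheet rule -->
<style>
  .sponsored { font-weight: bold; color: red; }
</style>
<p class="sponsored">Sponsored result</p>
```

Because the same visual output can be reached through different coding choices, the appearance does not dictate the expression, which is the premise underlying the court’s conclusion that sufficiently creative HTML can be registered even though the page’s look and feel cannot.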

Ultimately, however, the court granted the defendant’s motion to dismiss the copyright claims on procedural grounds, based upon the plaintiff’s failure to properly assert, beyond conclusory allegations in its complaint, how the defendant accessed plaintiff’s HTML code.  The court also found that the complaint failed to list every portion of the HTML code that the defendant allegedly infringed.

As noted above, a website’s HTML is readily viewable through standard browsers. Thus, it is not uncommon for a developer to “take a peek” at the HTML of other sites.  This case suggests that even though a website’s look and feel may not be copyrightable, in some cases the underlying HTML may be. Thus, web developers should be careful as they are building sites to avoid copying copyrightable subject matter.

As the court granted plaintiff leave to amend its claim, we will continue to watch this case as it presents important copyright issues for e-commerce providers.