
New Media and Technology Law Blog

Tenth Circuit Affirms Lower Court Ruling on Meaning of “User” in DMCA §512(c) Safe Harbor

Posted in Copyright, Online Content

Title II of the Digital Millennium Copyright Act (DMCA) offers safe harbors for qualifying service providers to limit their liability for claims of copyright infringement. To benefit from the Section 512(c) safe harbor, a storage provider must establish that the infringing content was stored “at the direction of the user.”  17 U.S.C. § 512(c)(1).  The statute does not define “user” and until recently, no court had interpreted the term.

Last May, we wrote about a Colorado district court decision that interpreted what “storage at the direction of a user” means in the context of online media — specifically, the business model of Examiner.com, a “content farm”-style site that posts articles written by independent contractors on popular topics of the day.  The dispute before the lower court centered on whether Examiner.com was entitled to protection under the § 512(c) safe harbor.  More specifically, the question became whether the contributors to the Examiner (who had to sign an “Examiners Independent Contractor Agreement and License” before receiving permission to post to the site) were “users” under § 512(c); that is, whether the plaintiffs’ photographs were stored on the defendant’s system at the direction of the site’s contributors or at the direction of the defendant itself.

In BWP Media USA, Inc. v. Clarity Digital Group, LLC, 2016 WL 1622399 (10th Cir. Apr. 25, 2016), the appeals court affirmed the lower court’s holding that the infringing photographs were not uploaded at the direction of the defendant and that Examiner.com was protected under the DMCA safe harbor.  The Tenth Circuit found that, in the absence of evidence that the defendant directed the contributors to upload the plaintiffs’ photographs to the site, the defendant’s policies (e.g., prohibiting use of infringing content in the user agreement, having a repeat infringer policy and offering contributors free access to a licensed photo library) showed that the photographs were stored at the direction of the “user.”

According to the court, the word “user” in the DMCA should be interpreted according to its plain meaning, to describe “a person or entity who avails itself of the service provider’s system or network to store material.”  Notably, the court flatly rejected the plaintiff’s argument that the term “user” should exclude an ISP’s or provider’s employees and agents, or any individual who enters into a contract and receives compensation from a provider.  Refusing to place its own limitations on the meaning of “user,” the Tenth Circuit stated that a “user” is simply “anyone who uses a website — no class of individuals is inherently excluded,” even commenting that “simply because someone is an employee does not automatically disqualify him as a ‘user’ under § 512.”

To quell any fears that such a natural reading would create a “lawless no-man’s-land,” the court noted that the term “user” must be read in conjunction with the remainder of the safe harbor provision.  As such, a storage provider will qualify for safe harbor protection only when it can show, among other things, that the content was stored at the direction of a “user,” that the provider had no actual knowledge of the infringement, that there were no surrounding facts or circumstances making the infringement apparent, or that upon learning of the infringement, the provider acted expeditiously to take down the infringing material. See 17 U.S.C. § 512(c)(1)(A).  Thus, the relevant question is not who the “user” is, but rather who directed the storage of the infringing content; as the court stressed, there is no protection under § 512 when the infringing material is on the system or network as a result of the provider’s “own acts or decisions”:

“When an ISP ‘actively encourag[es] infringement, by urging [its] users to both upload and download particular copyrighted works,’ it will not reap the benefits of § 512’s safe harbor. However, if the infringing content has merely gone through a screening or automated process, the ISP will generally benefit from the safe harbor’s protection.”

The opinion maintains the relatively robust protections of the DMCA safe harbor for storage providers that follow proper procedures.  In the court’s interpretation, the term “user” is not limited by any relationship with the provider, essentially removing the concept of the user from the safe harbor analysis and placing the emphasis on the remaining requirements of the statute (which, regardless, are frequently the subject of contention in litigation involving the DMCA safe harbor).

California Court Refuses to Dismiss Biometric Privacy Suit against Facebook

Posted in Biometrics, Contracts, Internet, Privacy, Video Privacy Protection Act

The District Court for the Northern District of California recently issued what could be a very significant decision on a number of important digital law issues.  These include: the enforceability of “clickwrap” as compared to “web wrap” website terms of use, the enforceability of a choice-of-law provision in such terms of use, and a preliminary interpretation of the Illinois Biometric Information Privacy Act (BIPA).  In its opinion, the court found Facebook’s terms of use to be enforceable, but declined to enforce the California choice of law provision and held that the plaintiffs stated a claim under BIPA.  (See In re Facebook Biometric Information Privacy Litig., No. 15-03747 (N.D. Cal. May 5, 2016)).

As a result, the ruling could affect cases involving the enforceability of terms of use generally, and certainly choice of law provisions commonly found in such terms.  The court’s interpretation of BIPA is likely to be a consideration in similar pending biometric privacy suits.  The decision should also prompt services to review their user agreements or otherwise reexamine their legal compliance regarding facial recognition data collection and retention.

As we noted in a prior post, Facebook has been named as a defendant in a number of lawsuits claiming that its facial recognition-based system of photo tagging violates BIPA.  Plaintiffs generally allege that Facebook’s Tag Suggestions program amassed users’ biometric data without notice and consent by using advanced facial recognition technology to extract biometric identifiers from user photographs uploaded to the service.  The various Illinois-based suits were eventually transferred to the Northern District of California and consolidated.

In its motion to dismiss the consolidated action, Facebook argued that the plaintiffs failed to state a claim under BIPA and that the California choice-of-law provision in its user agreement precluded the application of the Illinois statute.

As an initial matter, the court ruled that Facebook’s user agreement was enforceable because the plaintiffs assented to the terms when they initially signed up for Facebook, and also agreed to the current user agreement by continuing to use Facebook after receiving notice of the current terms.  Before reaching its conclusion, however, the court took some potshots at Facebook’s online contracting process. While the exact methods of electronic contracting for each of the multiple plaintiffs were slightly different, the court examined most closely the method in use for the plaintiff Licata: “By clicking Sign Up, you are indicating that you have read and agree to the Terms of Use and Privacy Policy,” with the terms of use presented by a conspicuous hyperlink. Expressing its skepticism of this relatively common method of online contracting, the court found that the use of a single “Sign Up” button to activate an account and accept the terms (as opposed to a separate clickbox, distinct from the sign-up button, used solely to manifest the user’s assent to the terms) “raises concerns about contract formation.”   In the end, the court conceded that Ninth Circuit precedent “indicated a tolerance for the single-click ‘Sign Up’ and assent practice,” and that the Ninth Circuit itself had cited with approval a decision from the Southern District of New York that had found Facebook’s contracting process enforceable.  The court also commented that the dual-purpose button the plaintiff Licata had to click, located alongside hyperlinked terms, was “enough to create an enforceable agreement,” distinguishing the process from certain “web wrap” or “browsewrap” scenarios in which a website owner attempts to impose terms upon users based upon mere passive viewing of a website.

However, despite upholding Facebook’s electronic contracting process, the court declined to enforce the California choice-of-law provision in the user agreement and applied Illinois law because it found that Illinois had a greater interest in the outcome of this BIPA-related dispute.

As to the substantive arguments, the court found Facebook’s contention that BIPA excludes from its scope all information involving photographs to be unpersuasive.  In essence, BIPA regulates the collection, retention, and disclosure of personal biometric identifiers and biometric information.  While the statute defines “biometric identifier” as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry,” it also specifically excludes photographs from that definition.  Facebook (and even Shutterfly in its attempt to dismiss a similar suit regarding its photo tagging practices) attempted to use this tension or apparent ambiguity within the statute to escape its reach.  However, viewing the statute as a whole, the court concluded that the plaintiffs stated a claim under the plain language of BIPA:

“Read together, these provisions indicate that the Illinois legislature enacted BIPA to address emerging biometric technology, such as Facebook’s face recognition software as alleged by plaintiffs…. ‘Photographs’ is better understood to mean paper prints of photographs, not digitized images stored as a computer file and uploaded to the Internet. Consequently, the Court will not read the statute to categorically exclude from its scope all data collection processes that use images.”

The court also rejected Facebook’s argument that the statute’s reference to a “scan of hand or face geometry” only applied to in-person scans of a person’s actual face (such as during a security screening) and that creating faceprints from uploaded photographs does not constitute a “scan of face geometry” under the statute.  The court found this “cramped interpretation” to be against the statute’s focus and “antithetical to its broad purpose of protecting privacy in the face of emerging biometric technology.”

However, in allowing the suit to go forward, the court cautioned that discovery might elicit facts that could change the outcome:

“As the facts develop, it may be that ‘scan’ and ‘photograph’ with respect to Facebook’s practices take on technological dimensions that might affect the BIPA claims. Other fact issues may also inform the application of BIPA. But those are questions for another day.”

This is the second court that has refused to shelve a BIPA-related case at the motion to dismiss stage (the first being the Illinois court in Norberg v. Shutterfly, a dispute that was settled this past April).  The Facebook decision is notable in that the court refused to categorically rule that photo tagging, a function offered by multiple tech companies, fell outside the ambit of BIPA.  Companies that offer online or mobile services involving the collection of covered biometric information will ultimately have to decide how to react to this latest ruling, perhaps by revising their notice and consent practices, declining to collect or store biometric data at all, or taking a wait-and-see approach as the Facebook litigation proceeds.

We will continue to closely watch the ongoing litigation, developments and best practices surrounding biometric privacy.

User of Free App May Be “Consumer” under the Video Privacy Protection Act

Posted in Mobile, Privacy, Video, Video Privacy Protection Act

This past week, the First Circuit issued a notable opinion concerning the contours of liability under the Video Privacy Protection Act (VPPA) – a decision that stirs up further uncertainty as to where to draw the line regarding VPPA liability when it comes to mobile apps.  (See Yershov v. Gannett Satellite Information Network Inc., No. 15-1719 (1st Cir. Apr. 29, 2016)).  The opinion, which reversed the dismissal of the case by the district court, took a more generous view than the lower court as to who is a “consumer” under the statute.  The court’s reasoning also ran contrary to a decision issued last month by the Northern District of Georgia. There, the district court ruled that a user of a free app was not a “consumer” under the VPPA and that the collection of the user’s anonymous mobile phone MAC address and associated video viewing history did not qualify as “personally identifiable information” that links an actual person to actual video materials. (See Perry v. Cable News Network, Inc., No. 14-02926 (N.D. Ga. Apr. 20, 2016)).

Subject to certain exceptions, the VPPA prohibits “video tape service providers” from knowingly disclosing, to a third party, “personally identifiable information concerning any consumer.” 18 U.S.C. §2710(b).  Under the VPPA, the term “consumer” means any “renter, purchaser, or subscriber of goods or services from a video tape service provider.” 18 U.S.C. §2710(a)(1).  The term “personally identifiable information” includes “information which identifies a person as having requested or obtained specific video materials or services from a video tape service provider.” 18 U.S.C. §2710(a)(3).

In Yershov, a user of the USA Today app alleged that each time he viewed a video clip, the app transmitted his mobile Android ID, GPS coordinates and identification of the watched video to a third-party analytics company to create user profiles for the purposes of targeted advertising, all in violation of the VPPA.  In dismissing the complaint, the lower court had found that while the information the app disclosed was “personally identifiable information” (PII) under the VPPA, the plaintiff, as the user of a free app, was not a consumer (i.e., a “renter, purchaser, or subscriber” of or to Gannett’s video content) protected by the VPPA.

Personally Identifiable Information

The First Circuit agreed with the district court that the individual’s information at issue was, in fact, PII.  As the appeals court noted, the statutory term “personally identifiable information” is “awkward and unclear.” As a result, courts are still grappling with whether unique device IDs and GPS data are PII under the statute.  The analysis has not been consistent. For example, last year a New York court ruled that an anonymized Roku device serial number was not PII because it did not necessarily identify a particular person as having accessed specific video materials.  In Yershov, however, the district court found, at the motion to dismiss stage, that the plaintiff plausibly alleged that the disclosed information (i.e., Android ID + GPS data + video viewing information) was PII under the VPPA.  The appeals court agreed, concluding that the transmittal of GPS information with a device identifier plausibly presented a “linkage” of information to identity (i.e., the plaintiff adequately alleged that “Gannett disclosed information reasonably and foreseeably likely to reveal which USA Today videos Yershov has obtained”).  While the court’s explanation was relatively scant, its reasoning seemed to hinge on the collection of the user’s GPS data, which the court suggested could easily be processed to locate the user on a street map.

“Consumer” under the VPPA

The court of appeals next tackled whether the plaintiff was a “consumer” within the meaning of the statute.  The court had to determine whether to follow a sister court’s holding that a user of a free app was generally not a “consumer” under the Act (particularly where the user was not required to sign up for an account or make any payments, did not receive periodic services, and was not otherwise granted access to restricted content), or another older ruling that reached the opposite conclusion. In taking a broad reading of “consumer,” the First Circuit held that while the plaintiff neither paid money nor opened an account, he was a “consumer” under the Act because “access was not free of a commitment to provide consideration in the form of that information, which was of value to Gannett.”   In asking the rhetorical question, “Why, after all, did Gannett develop and seek to induce downloading of the App?”, the court saw some form of value exchange in the relationship between app owner and user that rose to the level of a subscription under the VPPA:

“And by installing the App on his phone, thereby establishing seamless access to an electronic version of USA Today, Yershov established a relationship with Gannett that is materially different from what would have been the case had USA Today simply remained one of millions of sites on the web that Yershov might have accessed through a web browser.”

Ultimately, the court summarized its holding this way:

“We need simply hold, and do hold, only that the transaction described in the complaint–whereby Yershov used the mobile device application that Gannett provided to him, which gave Gannett the GPS location of Yershov’s mobile device at the time he viewed a video, his device identifier, and the titles of the videos he viewed in return for access to Gannett’s video content–plausibly pleads a case that the VPPA’s prohibition on disclosure applies.”

Final Considerations

While the proceedings in this case are still preliminary and the case may yet falter based on other issues, video-based app providers should take notice, particularly with respect to the following questions:

  • When does the disclosure of a unique device number cross the line into PII under the VPPA? While there is certainly a point where such information is too remote or too dependent on what the court called “unforeseeable detective work,” mobile app owners should, in light of Yershov, reexamine practices that involve the disclosure of mobile geolocation data without express, informed consent.
  • When is the user of a free app a “consumer” under the VPPA? While the court reversed the lower court’s ruling on this issue, further discovery of the relationship between the app and the user, and how it differs from the relationship between the USA Today website and its users, may alter the court’s reasoning.  Also, in a future dispute in another circuit, a court might take the narrower position that a “consumer” or “subscriber” under the VPPA must show at least some indicia of a subscription, such as payment, registration, user commitment, regular delivery, or access to restricted content.

Self-Publishing Platforms Deemed Distributors, Not Publishers in Privacy Suit over Unauthorized Book Cover

Posted in Contracts, Internet, Online Commerce, Privacy

We live in a world that has rapidly redefined and blurred the roles of the “creator” of content, as compared to the roles of the “publisher” and “distributor” of such content.  A recent case touches on some of the important legal issues associated with such change.  Among other things, the case illustrates the importance of service providers maintaining clear and appropriate terms and conditions that relate directly to the role they serve in the expression of content over online media.

The case involves a number of online self-publishing services. For those authors who have struggled to find a publisher or who would otherwise prefer to keep control of their IP rights in their books, there are many such businesses.  Such services allow authors to upload works and pay to transform those manuscripts into paperbacks via a print-on-demand model or make them available in ebook form for sale on the sites of major e-booksellers. Unlike a traditional publisher, however, self-publishing services do not fact-check or edit materials (though users may take advantage of unaffiliated paid services that do just that) and do not use a vetting process that might catch potentially defamatory or infringing content prior to publishing.  Indeed, beyond automated reviews for things like pornography or plagiarism, these platforms do not review submissions for content and rely on user agreements that contain certain contractual representations about the propriety of the uploaded content.

But what happens when a self-published book offered for sale contains content that may violate a third party’s right of publicity or privacy rights? Should the self-publishing platforms be treated like traditional “publishers” or more like distributors or booksellers?  This past month, an Ohio district court ruled that several online self-publishing services were not liable for right of publicity or privacy claims for distributing an erotic (and so-called “less than tasteful”) book whose cover contained an unauthorized copy of the plaintiffs’ engagement photo because such services are not publishers. (See Roe v. Amazon.com, 2016 WL 1028265 (S.D. Ohio Mar. 15, 2016)).

Background of the Dispute

The dispute began with the unauthorized publication of the plaintiffs’ engagement photograph on the cover of an erotic book authored by Greg McKenna (under a pseudonym).  The book was uploaded using several online self-publishing platforms and offered for sale on the major ebook sites (as well as being offered in paperback form via print-on-demand).  The alleged privacy violations were aggravated when the book was displayed in nationwide media, including in jokes on some late night TV talk shows.  Less than a month after publication, the author received a letter from plaintiffs’ counsel and contacted the ebook vendors to remove the offending book cover and replace it with a stock image.

The plaintiffs subsequently brought suit against the author McKenna and the self-publishing vendors used by the author (i.e., Amazon’s Kindle Direct Publishing, Barnes & Noble Nook Press and Smashwords), asserting right of publicity and invasion of privacy claims.  The plaintiffs sought to hold McKenna liable based upon the allegation that he authored the work in question, and the self-publishing vendors on the theory that they “published” the work.  The court easily ruled that the plaintiffs could proceed against the author because they sufficiently alleged that their likenesses were expropriated for commercial benefit and that they suffered “humiliation and ridicule.”

The self-publishing vendors sought summary judgment, asserting that they were not publishers of the book but merely allowed the author to use their systems to distribute it, and that they were protected from any liability for third-party content by CDA Section 230.  In opposing the motion, the plaintiffs argued that the vendors worked in concert with the author to provide a platform for publishing books the same way a traditional publishing house does.

Examination of the Service Providers’ Terms and Conditions

Siding with the defendants, the court dismissed the claims against the self-publishing vendors, finding that their services are not “publishing,” as that word is known in the book industry. The court pointed to the terms of service that the author agreed to when registering for defendants’ services.  For example, the terms of the Kindle agreement contained representations that the uploader owned all rights to the material and that no rights were being violated.  In the Nook agreement, the author represented and warranted to Barnes & Noble that he held “the necessary rights, including all intellectual property rights, in and to the [book] and related content” and that the book could be “sold, marketed, displayed, distributed and promoted [by Barnes & Noble] without violating or infringing the rights of any other person or entity, including, without limitation, infringing any copyright, patent, trademark or right of privacy….”   Moreover, the Smashwords agreement stressed that: “Smashwords does not… undertake[] any editorial review of the books that authors and publishers publish using its service.”

Dismissal of Claims against Self-Publishing Services

Ultimately, the court concluded:

“For now, this Court will apply the old standards to the new technology, treating the [self-publishing vendors’] process as if it were [the] next logical step after the photocopier. Just as Xerox would not be considered a publisher and held responsible for an invasion of privacy tort carried out with a photocopier, [the Defendants] will not be liable as publishers for the tort allegedly committed using their technology.”

Because the court based its ruling on the publisher-distributor issue, it declined to take up whether the defendants were shielded from liability by CDA Section 230.

Implications from the Ruling

The decision is notable because it is not often that a court has had the opportunity to interpret the potential liabilities of print-on-demand and online self-publishing platforms in the defamation or privacy context.  The outcome is certainly welcome for online vendors that assist in the distribution and commercial “publication” of user-generated content, at least as another backstop to the protections already afforded by CDA Section 230.  The ruling might also serve as a reminder for providers to reexamine user agreements and terms of service to confirm that author representations about the non-infringing nature of uploaded content are clearly worded and that electronic contracting best practices are followed so that those agreements are enforceable. Interestingly, the court’s language also touched on the free speech implications of an adverse ruling, suggesting that if liability for failure to inspect content were imposed on print-on-demand publishers or self-publishing platforms, they might become censors and their services would become more expensive, precluding the publication of low-budget works or controversial opinions from independent authors.

Google Is the Latest Online Provider to Face Class Action over Collection of Faceprints

Posted in Biometrics, Internet, Mobile, Privacy, Social Media

As we have previously written about, there are several ongoing biometric privacy-related lawsuits alleging that facial recognition-based systems of photo tagging violate the Illinois Biometric Information Privacy Act (BIPA).  Add one more to the list.  A Chicago resident brought a putative class action against Google for allegedly collecting, storing and using, without consent and in violation of BIPA, the faceprints of non-users of the Google Photos service, a cloud-based photo and video storage and organization app (Rivera v. Google, Inc., No. 16-02714 (N.D. Ill. filed Mar. 1, 2016)).

Under BIPA, an entity cannot collect, capture, purchase, or otherwise obtain a person’s “biometric identifier” or “biometric information,” unless it first:

(1) informs the subject in writing that a biometric identifier is being collected;

(2) informs the subject in writing of the specific purpose and length of term for which a biometric identifier or biometric information is being collected, stored, and used; and

(3) receives a written release executed by the subject.

The statute contains defined terms and limitations, and parties in other suits are currently litigating what “biometric identifiers” and “biometric information” mean under the statute and whether the collection of facial templates from uploaded photographs using sophisticated facial recognition technology fits within the ambit of the statute.

The statute also requires entities in possession of certain collected biometric data to post a written policy establishing a retention schedule and guidelines for deleting data when the initial purpose for collection has been satisfied.  Notably, BIPA provides for a private right of action, with potential awards of $1,000 in statutory damages for each negligent violation ($5,000 for each intentional or reckless violation), as well as injunctive relief and attorney’s fees.

In the suit against Google, the plaintiff alleges that the Google Photos service created, collected and stored millions of faceprints from Illinois users who uploaded photos (and, like the plaintiff, from non-users whose faceprints were collected merely because their images appeared in users’ uploaded photos). The plaintiff claims that, in violation of BIPA, Google failed to inform “unwitting non-users who had their face templates collected” of the specific purpose and length of term of collection, failed to obtain written consent from individuals prior to collection, and failed to post publicly available policies identifying its face template retention schedules.  The plaintiff seeks injunctive relief compelling Google to comply with BIPA, and an award of statutory damages.

Since the named plaintiff claims to be a non-user of the Google Photos service, Google may not be able to transfer the matter to California based upon the forum selection clause in its terms of service.  Yet, as with the prior suits against other providers, Google will likely invoke jurisdictional defenses along with multiple arguments about how the Illinois statute is inapplicable to its activities based upon certain statutory exceptions.

We will continue to follow this dispute, along with the other existing biometric privacy-related litigation.  Indeed, this past week, the photo storage service Shutterfly, which is facing a suit similar to the one against Google, sought to send its case to arbitration based upon allegations that the unnamed Shutterfly user who uploaded a photo depicting the plaintiff was actually his fiancée (and current wife).

Website HTML Is Copyrightable, Even If Look and Feel Is Not

Posted in Copyright, Internet, Online Commerce

In a notable ruling last month, a California district court ruled that the HTML underlying a custom search results page of an online advertising creation platform is copyrightable.

In Media.net Advertising FZ-LLC v. Netseer Inc., No. 14-3883, 2016 U.S. Dist. LEXIS 3784 (N.D. Cal. Jan. 12, 2016), the plaintiff, an online contextual-advertising service provider, brought copyright infringement claims against a competitor for allegedly copying the HTML from a custom-created search results page, for the purpose of creating its own custom online advertising offering.  Plaintiff argued that its copyright claim is supported by the guidance published in the revised edition of the Compendium of U.S. Copyright Office Practices (Third Edition) (Dec. 2014) (“Compendium”).

The Compendium states that while a website’s layout or look and feel is not copyrightable subject matter, its HTML may be copyrightable.  [Note: As discussed in a prior post, the look and feel of a webpage might, in certain circumstances, be protectable trade dress under the Lanham Act.]

The defendant countered that plaintiff’s HTML consists solely of uncopyrightable Cascading Style Sheets (CSS), which renders plaintiff’s copyright registrations invalid.

Generally speaking, HTML is the standard markup language used in the design of websites; it establishes the format and layout of text, content and graphics by instructing the viewer’s browser to present material in a specified manner.  Anyone who has used a browser’s menu to reveal the source of a web page has seen the array of instructions contained between the start tag <html> and closing tag </html>.  Web developers also use CSS, which, according to the court, are merely methods of formatting and laying out the organization of documents written in a markup language such as HTML. There are different ways to build CSS into HTML, and although CSS is often used with HTML, Cascading Style Sheets have their own specifications.

The Copyright Office has stated that because procedures, processes, and methods of operation are not copyrightable, the Office generally will refuse to register claims based solely on CSS. See Compendium, §1007.4.   However, the Copyright Office will register HTML as a literary work (but not as a computer program because HTML is not source code), as long as the HTML was created by a human being and contains a sufficient amount of creative expression. See Compendium § 1006.1(A).  As the Media.net court explained, the fact that HTML code produces a web page (the look and feel of which is not subject to copyright protection) does not preclude its registration because “there are multiple ways of writing the HTML code to produce the same ultimate appearance of the webpage.”  The court held that portions of plaintiff’s HTML code minimally met the requisite level of creativity to be copyrightable.
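By way of a rough, hypothetical illustration (this snippet is not drawn from the parties’ code), the distinction described in the Compendium and by the court looks something like this: the CSS rules control only formatting, while the surrounding HTML reflects the author’s chosen structure and text, which could be written in many different ways and still render an essentially identical page.

  <html>
    <head>
      <style>
        /* CSS: formatting and layout rules only (font, color, spacing) */
        .result { font-family: Arial, sans-serif; color: #333333; margin: 8px 0; }
      </style>
    </head>
    <body>
      <!-- HTML: the author's chosen structure and text for a search results entry -->
      <div class="result">
        <a href="https://www.example.com">Example result title</a>
        <p>A short description of the result appears here.</p>
      </div>
    </body>
  </html>

A differently written block of HTML (for example, one using a table-based layout) could produce a visually identical page, which is why the court focused on the creative choices embodied in the code itself rather than on the page’s look and feel.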

Ultimately, however, the court granted the defendant’s motion to dismiss the copyright claims on procedural grounds based upon the plaintiff’s failure to properly assert, beyond conclusory allegations in its complaint, how the defendant accessed plaintiff’s HTML code.  The court also found that the plaintiff’s complaint failed to identify every portion of the HTML code that the defendant allegedly infringed.

As noted above, a website’s HTML is readily viewable through standard browsers. Thus, it is not uncommon for a developer to “take a peek” at the HTML of other sites.  This case suggests that even though a website’s look and feel may not be copyrightable, in some cases the underlying HTML may be. Web developers should therefore be careful, as they build sites, to avoid copying copyrightable subject matter.

As the court granted plaintiff leave to amend its claim, we will continue to watch this case as it presents important copyright issues for e-commerce providers.

FTC Releases Big Data Report Outlining Risks, Benefits and Legal Hurdles

Posted in Internet, Online Commerce, Privacy, Regulatory

The big data revolution is quietly chugging along:  devices, sensors, websites and networks are collecting and producing significant amounts of data, the cost of data storage continues to plummet, public and private sector interest in data mining is growing, computational and statistical methods have advanced, and more and more data scientists are using new software and capabilities to make sense of it all.  The potential benefits of big data are now well-known, but what are some of the legal, ethical and compliance risks, and when do modern data analytics produce unintended discriminatory effects? To explore these issues, the FTC held a workshop in September 2014 and, earlier this month, released a report, “Big Data: A Tool for Inclusion or Exclusion?  Understanding the Issues.”

Companies that use big data are likely already familiar with the myriad of privacy-related legal issues — data collection and online behavioral tracking, notice and consumer choice, data security, anonymization and de-identification, intra-company data sharing, retail consumer tracking, and many others.  But beyond these concerns, the FTC’s report discusses another set of issues surrounding big data.  The report outlines the risks created by the use of big data analytics with respect to consumer protection and equal opportunity laws.  The report also directs companies to attempt to minimize the risk that data inaccuracies and inherent biases might harm or exclude certain consumers (particularly with respect to credit offers, and educational and employment opportunities). The Report outlines a number of potential harms, including:

  • Individuals mistakenly being denied opportunities. Participants in the FTC’s workshop raised concerns that companies using big data to better know their customers may, at times, base their assumptions about an individual consumer disproportionately on a comparison with a generalized data set of people who share similar attributes.
  • Ad targeting practices that reinforce existing disparities.
  • The exposure of consumers’ sensitive information.
  • The targeting of vulnerable consumers for fraud.
  • The creation of new justifications for exclusion of certain populations from particular opportunities.
  • Offering higher-priced goods and services to lower income communities.

Consumer Protection Laws Potentially Applicable to Big Data

The Report mentions several federal laws that might apply to certain big data practices, including the Fair Credit Reporting Act, equal opportunity laws, and the FTC Act.

Fair Credit Reporting Act

As the report notes, the Fair Credit Reporting Act (FCRA) applies to companies, known as consumer reporting agencies or CRAs, that compile and sell consumer reports containing consumer information that is used or expected to be used for decisions about consumer eligibility for credit, employment, insurance, housing, or other covered transactions.  Among other things, CRAs must reasonably ensure accuracy of consumer reports and provide consumers with access to their own information, and the ability to correct any errors.  Traditionally, CRAs included credit bureaus and background screening companies, but the scope of the FCRA may extend beyond traditional credit bureaus.  See e.g., United States v. Instant Checkmate, Inc., No. 14-00675 (S.D. Cal. filed Mar. 24, 2014) (website that allowed users to search public records for information about anyone and which was marketed to be used for background checks was subject to the FCRA; entity settled FTC charges, paid a $550,000 civil fine and agreed to future compliance).

Companies that use consumer reports also have FCRA obligations, such as providing consumers with “adverse action” notices if the companies use the consumer report information to deny credit or other certain benefits. The Report notes, however, that the FCRA does not apply when companies use data derived from their own relationship with customers for purposes of making decisions about them. Big data has created a new twist on compliance, though.  The Report mentions a growing trend where companies purchase predictive analytics products for eligibility determinations, but instead of comparing a traditional credit characteristic (e.g., payment history), these new products may use non-traditional characteristics (e.g., zip code or social media usage) to evaluate creditworthiness as compared to an anonymized data set of groups that share the same characteristics.  The FTC states that if an outside analytics firm regularly evaluates a company’s own data and provides evaluations to the company for eligibility determinations, the outside firm would likely be acting as a CRA, the company would likely be a user of consumer reports, and both entities would be subject to Commission enforcement under the FCRA.  This new stance apparently runs counter to prior FTC policy which had made an exception for anonymized data. In a footnote, the agency explains that its prior interpretation was inaccurate and that “if a report is crafted for eligibility purposes with reference to a particular consumer or group of consumers…the Commission will consider the report a consumer report even if the identifying information of the consumer has been stripped.”

Equal Opportunity Laws

Certain federal equal opportunity laws might also apply to certain big data analytics, such as the Equal Credit Opportunity Act (ECOA), Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act, the Age Discrimination in Employment Act, the Fair Housing Act, and the Genetic Information Nondiscrimination Act.  Generally speaking, these laws prohibit discrimination based on protected characteristics. To prove a violation of such laws, plaintiffs typically must show “disparate treatment” or “disparate impact.”   The Report offers an example: if a company makes credit decisions based on zip codes, it may be violating ECOA if the decisions have a disparate impact on a protected class and are not justified by a legitimate business necessity.  The specific requirements of each federal statute are beyond the scope of this post, but the question of whether a practice is unlawful under equal opportunity laws is a fact-specific inquiry.

The FTC Act

Section 5 of the Federal Trade Commission Act prohibits unfair or deceptive acts or practices in or affecting commerce.  The agency advises companies using big data to consider whether they are violating any material promises to consumers involving data sharing, consumer choice or data security, or whether companies have otherwise failed to disclose material information to consumers.  Such violations of privacy promises have formed the basis of multiple FTC privacy-related enforcement actions in recent years.  The Report states that companies that maintain big data on consumers should reasonably secure the data. The FTC also notes that companies may not sell their big data analytics products to customers if they know or have reason to know that those customers will use the products for fraudulent or discriminatory purposes.

Questions for Legal Compliance

In light of the above federal laws, the Report outlines several questions that companies already using or considering engaging in big data analytics should ask to remain in compliance:

  • If you compile big data for others who will use it for eligibility decisions, are you complying with the accuracy and privacy provisions of the FCRA?
  • If you receive big data products from another entity that you will use for eligibility decisions, are you complying with the provisions applicable to users of consumer reports?
  • If you are a creditor using big data analytics in a credit transaction, are you complying with the requirement to provide statements of specific reasons for adverse action under ECOA?
  • If you use big data analytics in a way that might adversely affect people in their ability to obtain credit, housing, or employment, are you treating people differently based on a prohibited basis, or do your practices have an adverse effect or impact on a member of a protected class?
  • Are you honoring promises you make to consumers and providing consumers material information about your data practices?
  • Are you maintaining reasonable security over consumer data?
  • Are you undertaking reasonable measures to know the purposes for which your customers are using your data (e.g., fraud, discriminatory purposes)?

The Big Data Report also points to research that has shown how big data could potentially be used in the future to disadvantage underserved communities and adversely affect consumers on the basis of legally protected characteristics. To be sure, the potential risks of data mining are not new, but inherent in any statistical analysis.  To maximize the benefits and limit the harms, the Report suggests companies should consider the following questions raised by researchers as big data use increases:

  • How representative is your data set? The agency advises that it is important to consider the digital divide and other issues of under-representation and over-representation in data inputs before launching a product or service to avoid skewed results.
  • Does your data model account for biases? Companies should consider whether biases are being incorporated at both the collection and analytics stages of big data’s life cycle, and develop strategies to overcome any unintended impact on certain populations.
  • How accurate are your predictions? The Report advises that human oversight of data and algorithms may be worthwhile when big data tools are used to make important decisions, such as those implicating health, credit, and employment.
  • Does your reliance on big data raise ethical or fairness concerns? The Report states that companies should assess the factors that go into an analytics model and balance the predictive value of the model with fairness considerations.

Conclusion

With the issuance of its Big Data Report (and last year’s Data Broker Report), the FTC has signaled it will actively monitor areas where data collection and big data analytics could violate existing laws and will push for public-private cooperation to ensure the benefits of big data are maximized and the risks minimized. The Big Data Report is an important document for companies that provide big data analytics services or purchase such services to analyze consumer behavior or aid in consumer eligibility decisions.  It remains to be seen how the FTC’s policy statement will be received by industry (or subsequently reviewed by the courts), particularly the FTC’s assertion that certain uses of anonymized consumer data might implicate the FCRA.  We have previously discussed the use of anonymized data for marketing and other purposes with respect to the Video Privacy Protection Act, and will continue to follow developments in this area closely to see how emerging practices mesh with privacy laws and regulations.

FTC Issues Enforcement Policy Statement on Native Advertising in New Media

Posted in Internet, Online Content, Regulatory

Digital media marketers are aggressively increasing the use of so-called sponsored content, or native advertising, to reach new customers.  Particularly with the growing use of ad blockers on web and mobile browsers, marketers have sought to present advertising in a new form that can circumvent automated blocking and somehow capture the attention of users who may face a barrage of digital display ads every day.

Generally speaking, natively formatted advertising attempts to match the design and style of the digital media in which it is embedded. The ads can appear in a variety of settings, including the stream or display of regular content on news or news aggregation sites, videos, social media feeds, search results, infographics, images, animations, in-game modules, and playlists on streaming services.  Such ads can be placed directly by the publisher or inserted via ad networks, and can be specifically targeted to the user.

However, the proliferation of native advertising in digital media has raised questions about whether such evolving formats deceive consumers by blurring the distinction between advertising and news or editorial content.

In 2013, the FTC hosted a workshop, “Blurred Lines: Advertising or Content? – An FTC Workshop on Native Advertising,” to examine the blending of advertisements with news, entertainment, and other editorial content in digital media.  Following up on its findings, in December 2015, the agency released its Enforcement Policy Statement on Deceptively Formatted Advertisements, which lays out the general principles the Commission considers in determining whether any particular ad format is deceptive and violates the FTC Act.

The Policy Statement notes that “deception occurs when an advertisement misleads reasonable consumers as to its true nature or source, including that a party other than the sponsoring advertiser is the source of an advertising or promotional message, and such misleading representation is material.” According to the Policy Statement, under FTC principles, advertisers cannot use “deceptive door openers” to induce consumers to view advertising content.   Advertisers are responsible for ensuring that native ads are identifiable as advertising before consumers arrive at the main advertising page.  If the source of the content is clear, consumers can make informed decisions about whether to interact with the ad and the weight to give the information conveyed in the ad.  However, the FTC will find an ad’s format deceptive if the ad materially misleads consumers about its commercial nature, including through an express or implied misrepresentation that it comes from a party other than the sponsoring advertiser.

How is the format of a native advertisement evaluated? In determining whether an ad is deceptive, the FTC considers the “net impression” the ad conveys to consumers, that is, the overall context of the interaction, including what the ad says and the format in which it is presented. According to the Policy Statement, the agency will examine such factors as the ad’s overall appearance, the similarity of its written, spoken, or visual style to non-advertising content offered on a publisher’s site, and the degree to which it is distinguishable from such other content.

Clarifying information that accompanies a native ad must be disclosed clearly and prominently to overcome any misleading impression.  Native ads may include disclosures such as text labels, audio disclosures, or visual cues distinguishing the ad from other non-commercial content.  The FTC declares that any disclosure must be “sufficiently prominent and unambiguous to change the apparent meaning of the claims and to leave an accurate impression,” and “made in simple, unequivocal language, so that consumers comprehend what it means.”

The Policy Statement advises that disclosures should be made in the same language as the predominant language in which ads are communicated.  In its accompanying guidance, Native Advertising: A Guide for Businesses, the FTC further notes that such disclosures should not be couched in technical or industry jargon, or displayed using unfamiliar icons or terminology that might have different meanings to consumers in other situations. The Guidance suggests that terms likely to be understood include “Ad,” “Advertisement,” “Paid Advertisement,” “Sponsored Advertising Content,” or some variation thereof, but that advertisers should not use terms such as “Promoted” or “Promoted Stories,” which the agency deems “at best ambiguous.” The Guidance also states that, depending on the context, “consumers reasonably may interpret other terms, such as ‘Presented by [X],’ ‘Brought to You by [X],’ ‘Promoted by [X],’ or ‘Sponsored by [X]’ to mean that a sponsoring advertiser funded or ‘underwrote’ but did not create or influence the content.”

This is only a very brief summary of the FTC’s position on native advertising. We advise that all companies engaged, directly or indirectly, in the creation, placement or publishing of native advertisements read closely the FTC’s Policy Statement and Guidance document to aid their determination about what types of native ads could be found misleading.  As native advertising continues to appear in digital media, we will continue to follow regulatory and industry developments.

Photo Storage Service’s Collection of Faceprints May Violate Illinois Biometric Privacy Statute

Posted in Biometrics, Internet, Privacy, Social Media

As we have previously noted, there are several ongoing privacy-related lawsuits alleging that facial recognition-based systems of photo tagging violate the Illinois Biometric Information Privacy Act (BIPA). The photo storage service Shutterfly and the social network Facebook are both defending putative class action suits that, among other things, allege that such services created and stored faceprints without permission and in violation of BIPA.  In the suit against Shutterfly, the plaintiff claims he is not a registered Shutterfly user, but that a friend had uploaded group photos depicting him and, upon prompting, tagged him in the photos, thereby adding his faceprint to the database (the plaintiff, as a non-member of the service, had never formally consented to this collection of biometric data). Shutterfly had filed a motion to dismiss, arguing that scans of face geometry derived from uploaded photographs are not “biometric identifiers” under BIPA because the statute excludes information derived from photographs.

Last week, an Illinois district court denied Shutterfly’s motion to dismiss (Norberg v. Shutterfly, Inc., No. 15-05351 (N.D. Ill. Dec. 29, 2015)).  In a terse order, the court first found that there were sufficient minimum contacts to establish specific personal jurisdiction over Shutterfly in the Illinois forum.  Next, the court considered the claim under the Illinois biometric privacy law. Without engaging in a thorough analysis of the statute and its exceptions at this early stage in the litigation, the court ruled that the plaintiff could proceed with his claim under BIPA:

“Here, Plaintiff alleges that Defendants are using his personal face pattern to recognize and identify Plaintiff in photographs posted to Websites. Plaintiff avers that he is not now nor has he ever been a user of Websites, and that he was not presented with a written biometrics policy nor has he consented to have his biometric identifiers used by Defendants. As a result, the Court finds that Plaintiff has plausibly stated a claim for relief under the BIPA.”

We will continue to closely watch the ongoing litigation surrounding biometric privacy – particularly since the specific Illinois biometric privacy statute has yet to be interpreted in great depth by a court with respect to facial recognition technology.

Facebook Seeks Dismissal in Illinois Facial Recognition Biometric Privacy Suit

Posted in Biometrics, Privacy, Social Media, Technology

As we have previously noted, Facebook has been named as a defendant in a number of lawsuits claiming that its facial recognition-based system of photo tagging violates the Illinois Biometric Information Privacy Act (BIPA).  In a separate putative class action filed in Illinois federal court that involves the tagging of an “unwilling” non-user without his permission, Facebook seeks dismissal on grounds similar to the arguments Facebook made in those cases. (See Gullen v. Facebook, Inc., No. 15-07681 (N.D. Ill. filed Aug. 31, 2015)).  In short, the plaintiff non-user claims that another Facebook member manually tagged him in a photo using Facebook’s Tag Suggestion feature and that, as a result, Facebook allegedly created and stored a faceprint of the plaintiff without his permission and in violation of BIPA. In its motion to dismiss, Facebook argues that the Illinois court has no jurisdiction over Facebook in this matter, particularly since the plaintiff was a non-user of its service.  In addition, Facebook contends that, regardless, the plaintiff’s claim under BIPA must fail for several reasons: (1) Facebook does not create a face template and perform “corresponding name identification” for non-users who are manually tagged using Tag Suggestions; and (2) BIPA expressly excludes from its coverage “photographs” and “any information derived from photographs,” and the statute’s use of the term “scan of hand or face geometry” was meant to cover only in-person scans of a person’s actual hand or face (not the scan of an uploaded photograph).

What has become clear from the pending claims under BIPA is that statutory interpretation will not be easy. We will continue to closely watch the ongoing litigation surrounding biometric privacy – particularly since the specific Illinois statute in question has yet to be interpreted by a court with respect to facial recognition technology.