
New Media and Technology Law Blog

Website HTML Is Copyrightable, Even If Look and Feel Is Not

Posted in Copyright, Internet, Online Commerce

In a notable decision last month, a California district court held that the HTML underlying a custom search results page of an online advertising creation platform is copyrightable.

In Media.net Advertising FZ-LLC v. Netseer Inc., No. 14-3883, 2016 U.S. Dist. LEXIS 3784 (N.D. Cal. Jan. 12, 2016), the plaintiff, an online contextual-advertising service provider, brought copyright infringement claims against a competitor for allegedly copying the HTML from a custom-created search results page for the purpose of creating its own custom online advertising offering.  Plaintiff argued that its copyright claim was supported by the guidance published in the revised edition of the Compendium of U.S. Copyright Office Practices (Third Edition) (Dec. 2014) (“Compendium”).

The Compendium states that while a website’s layout or look and feel is not copyrightable subject matter, its HTML may be copyrightable.  [Note: As discussed in a prior post, the look and feel of a webpage might, in certain circumstances, be protectable trade dress under the Lanham Act.]

The defendant countered that plaintiff’s HTML consisted solely of uncopyrightable Cascading Style Sheets (CSS), rendering plaintiff’s copyright registrations invalid.

Generally speaking, HTML is the standard markup language used to build websites; it establishes the format and layout of text, content and graphics by instructing the user’s browser to present material in a specified manner.  Anyone who has used a browser’s menu option to view a web page’s source has seen the array of instructions contained between the opening tag <html> and the closing tag </html>.  Web developers also use CSS, which, according to the court, is merely a method of formatting and laying out the organization of documents written in a markup language, such as HTML. There are different ways to build CSS into HTML, and although CSS is often used with HTML, it has its own specifications.

The Copyright Office has stated that because procedures, processes, and methods of operation are not copyrightable, the Office generally will refuse to register claims based solely on CSS. See Compendium, §1007.4.   However, the Copyright Office will register HTML as a literary work (but not as a computer program because HTML is not source code), as long as the HTML was created by a human being and contains a sufficient amount of creative expression. See Compendium § 1006.1(A).  As the Media.net court explained, the fact that HTML code produces a web page (the look and feel of which is not subject to copyright protection) does not preclude its registration because “there are multiple ways of writing the HTML code to produce the same ultimate appearance of the webpage.”  The court held that portions of plaintiff’s HTML code minimally met the requisite level of creativity to be copyrightable.
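To make the court’s point concrete, here is a simplified, hypothetical illustration (the markup below was invented for this post and is not drawn from the plaintiff’s registered code).  Both versions would render essentially the same page in a standard browser, yet the underlying HTML is written differently, with the formatting expressed inline in one version and moved into CSS rules in the other:

```html
<!-- Version 1: a hypothetical results page with formatting written inline in the HTML -->
<html>
  <body>
    <h1 style="color: #003366; font-family: Arial, sans-serif;">Search Results</h1>
    <p style="color: #333333;">Sponsored listings related to your search appear below.</p>
  </body>
</html>

<!-- Version 2: essentially the same rendered appearance, with the formatting moved into CSS -->
<html>
  <head>
    <style>
      h1 { color: #003366; font-family: Arial, sans-serif; }
      p  { color: #333333; }
    </style>
  </head>
  <body>
    <h1>Search Results</h1>
    <p>Sponsored listings related to your search appear below.</p>
  </body>
</html>
```

Under the Compendium’s approach, the human-authored HTML in either version may be registrable as a literary work if it contains sufficient creative expression, while a claim resting solely on the CSS rules in the second version would generally be refused registration.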

Ultimately, however, the court granted the defendant’s motion to dismiss the copyright claims on procedural grounds based upon the plaintiff’s failure to properly assert, beyond conclusory allegations in its complaint, how the defendant accessed plaintiff’s HTML code.  The court also found that the plaintiff’s complaint failed to list every portion of the HTML code that the defendant allegedly infringed.

As noted above, a website’s HTML is readily viewable through standard browsers. Thus, it is not uncommon for a developer to “take a peek” at the HTML of other sites.  This case suggests that even though a website’s look and feel may not be copyrightable, in some cases the underlying HTML may be. Accordingly, web developers should be careful as they build sites to avoid copying copyrightable subject matter.

As the court granted plaintiff leave to amend its claim, we will continue to watch this case as it presents important copyright issues for e-commerce providers.

FTC Releases Big Data Report Outlining Risks, Benefits and Legal Hurdles

Posted in Internet, Online Commerce, Privacy, Regulatory

The big data revolution is quietly chugging along:  devices, sensors, websites and networks are collecting and producing significant amounts of data, the cost of data storage continues to plummet, public and private sector interest in data mining is growing, data computational and statistical methods have advanced, and more and more data scientists are using new software and capabilities to make sense of it all.  The potential benefits of big data are now well-known, but what are some of the legal, ethical and compliance risks and when do modern data analytics produce unintended discriminatory effects? To explore these issues, the FTC held a workshop in September 2014, and earlier this month, released a report “Big Data: A Tool for Inclusion or Exclusion?  Understanding the Issues.”

Companies that use big data are likely already familiar with the myriad privacy-related legal issues — data collection and online behavioral tracking, notice and consumer choice, data security, anonymization and de-identification, intra-company data sharing, retail consumer tracking, and many others.  But beyond these concerns, the FTC’s Report discusses another set of issues surrounding big data.  The Report outlines the risks created by the use of big data analytics with respect to consumer protection and equal opportunity laws.  It also directs companies to attempt to minimize the risk that data inaccuracies and inherent biases might harm or exclude certain consumers (particularly with respect to credit offers, and educational and employment opportunities). The Report identifies a number of potential harms, including:

  • Individuals mistakenly being denied opportunities. Participants in the FTC’s workshop raised concerns that companies using big data to better know their customers may, at times, base their assumptions disproportionately on the comparison of a consumer with a generalized data set with which the consumer shares similar attributes.
  • Ad targeting practices that reinforce existing disparities.
  • The exposure of consumers’ sensitive information.
  • The targeting of vulnerable consumers for fraud.
  • The creation of new justifications for exclusion of certain populations from particular opportunities.
  • Offering higher-priced goods and services to lower income communities.

Consumer Protection Laws Potentially Applicable to Big Data

The Report mentions several federal laws that might apply to certain big data practices, including the Fair Credit Reporting Act, equal opportunity laws, and the FTC Act.

Fair Credit Reporting Act

As the report notes, the Fair Credit Reporting Act (FCRA) applies to companies, known as consumer reporting agencies or CRAs, that compile and sell consumer reports containing consumer information that is used or expected to be used for decisions about consumer eligibility for credit, employment, insurance, housing, or other covered transactions.  Among other things, CRAs must reasonably ensure the accuracy of consumer reports and provide consumers with access to their own information and the ability to correct any errors.  Traditionally, CRAs included credit bureaus and background screening companies, but the scope of the FCRA may extend beyond traditional credit bureaus.  See, e.g., United States v. Instant Checkmate, Inc., No. 14-00675 (S.D. Cal. filed Mar. 24, 2014) (website that allowed users to search public records for information about anyone and that was marketed for use in background checks was subject to the FCRA; the entity settled FTC charges, paid a $550,000 civil fine and agreed to future compliance).

Companies that use consumer reports also have FCRA obligations, such as providing consumers with “adverse action” notices if the companies use the consumer report information to deny credit or certain other benefits. The Report notes, however, that the FCRA does not apply when companies use data derived from their own relationship with customers for purposes of making decisions about them. Big data has created a new twist on compliance, though.  The Report mentions a growing trend in which companies purchase predictive analytics products for eligibility determinations, but instead of relying on a traditional credit characteristic (e.g., payment history), these new products may use non-traditional characteristics (e.g., zip code or social media usage) to evaluate creditworthiness as compared to an anonymized data set of groups that share the same characteristics.  The FTC states that if an outside analytics firm regularly evaluates a company’s own data and provides evaluations to the company for eligibility determinations, the outside firm would likely be acting as a CRA, the company would likely be a user of consumer reports, and both entities would be subject to Commission enforcement under the FCRA.  This new stance apparently runs counter to prior FTC policy, which had made an exception for anonymized data. In a footnote, the agency explains that its prior interpretation was inaccurate and that “if a report is crafted for eligibility purposes with reference to a particular consumer or group of consumers…the Commission will consider the report a consumer report even if the identifying information of the consumer has been stripped.”

Equal Opportunity Laws

Certain federal equal opportunity laws might also apply to big data analytics, such as the Equal Credit Opportunity Act (ECOA), Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act, the Age Discrimination in Employment Act, the Fair Housing Act, and the Genetic Information Nondiscrimination Act.  Generally speaking, these laws prohibit discrimination based on protected characteristics. To prove a violation of such laws, plaintiffs typically must show “disparate treatment” or “disparate impact.”   The Report offers an example: if a company makes credit decisions based on zip codes, it may be violating ECOA if the decisions have a disparate impact on a protected class and are not justified by a legitimate business necessity.  The specific requirements of each federal statute are beyond the scope of this post, but the question of whether a practice is unlawful under equal opportunity laws is a fact-specific inquiry.

The FTC Act

Section 5 of the Federal Trade Commission Act prohibits unfair or deceptive acts or practices in or affecting commerce.  The agency advises companies using big data to consider whether they are violating any material promises to consumers involving data sharing, consumer choice or data security, or whether they have otherwise failed to disclose material information to consumers.  Such violations of privacy promises have formed the basis of multiple FTC privacy-related enforcement actions in recent years.  The Report states that companies that maintain big data on consumers should reasonably secure the data. The FTC also notes that companies may not sell their big data analytics products to customers if they know or have reason to know that those customers will use the products for fraudulent or discriminatory purposes.

Questions for Legal Compliance

In light of the above federal laws, the Report outlines several questions that companies already using or considering engaging in big data analytics should ask to remain in compliance:

  • If you compile big data for others who will use it for eligibility decisions, are you complying with the accuracy and privacy provisions of the FCRA?
  • If you receive big data products from another entity that you will use for eligibility decisions, are you complying with the provisions applicable to users of consumer reports?
  • If you are a creditor using big data analytics in a credit transaction, are you complying with the requirement to provide statements of specific reasons for adverse action under ECOA?
  • If you use big data analytics in a way that might adversely affect people in their ability to obtain credit, housing, or employment, are you treating people differently based on a prohibited basis, or do your practices have an adverse effect or impact on a member of a protected class?
  • Are you honoring promises you make to consumers and providing consumers material information about your data practices?
  • Are you maintaining reasonable security over consumer data?
  • Are you undertaking reasonable measures to know the purposes for which your customers are using your data (e.g., fraud, discriminatory purposes)?

The Big Data Report also points to research that has shown how big data could potentially be used in the future to disadvantage underserved communities and adversely affect consumers on the basis of legally protected characteristics. To be sure, the potential risks of data mining are not new, but are inherent in any statistical analysis.  To maximize the benefits and limit the harms, the Report suggests companies should consider the following questions raised by researchers as big data use increases:

  • How representative is your data set? The agency advises that it is important to consider the digital divide and other issues of under-representation and over-representation in data inputs before launching a product or service to avoid skewed results.
  • Does your data model account for biases? Companies should consider whether biases are being incorporated at both the collection and analytics stages of big data’s life cycle, and develop strategies to overcome any unintended impact on certain populations.
  • How accurate are your predictions? The Report advises that human oversight of data and algorithms may be worthwhile when big data tools are used to make important decisions, such as those implicating health, credit, and employment.
  • Does your reliance on big data raise ethical or fairness concerns? The Report states that companies should assess the factors that go into an analytics model and balance the predictive value of the model with fairness considerations.

Conclusion

With the issuance of its Big Data Report (and last year’s Data Broker Report), the FTC has signaled it will actively monitor areas where data collection and big data analytics could violate existing laws and will push for public-private cooperation to ensure the benefits of big data are maximized and the risks minimized. The Big Data Report is an important document for companies that provide big data analytics services or purchase such services for use in analyzing consumer behavior or to aid in consumer eligibility decisions.  It remains to be seen how the FTC’s policy statement will be received by industry (or subsequently reviewed by the courts), particularly the FTC’s assertion that certain uses of anonymized consumer data might implicate the FCRA.  We have previously discussed the use of anonymized data for marketing and other purposes with respect to the Video Privacy Protection Act, and will continue to follow developments in this area closely to see how emerging practices mesh with privacy laws and regulations.

FTC Issues Enforcement Policy Statement on Native Advertising in New Media

Posted in Internet, Online Content, Regulatory

Digital media marketers are aggressively increasing the use of so-called sponsored content, or native advertising, to reach new customers.  Particularly with the growing use of ad blockers on web and mobile browsers, marketers have sought to present advertising in a new form that can circumvent automated blocking and somehow capture the attention of users who may face a barrage of digital display ads every day.

Generally speaking, natively formatted advertising attempts to match the design and style of the digital media in which it is embedded. The ads can appear in a variety of settings, including the stream or display of regular content on news or news aggregation sites, videos, social media feeds, search results, infographics, images, animations, in-game modules, and playlists on streaming services.  Such ads can be placed directly by the publisher or inserted via ad networks, and can be specifically targeted to the user.

However, the proliferation of native advertising in digital media has raised questions about whether such evolving formats deceive consumers by blurring the distinction between advertising and news or editorial content.

In 2013, the FTC hosted a workshop, “Blurred Lines: Advertising or Content? – An FTC Workshop on Native Advertising,” to examine the blending of advertisements with news, entertainment, and other editorial content in digital media.  Following up on its findings, in December 2015, the agency released its Enforcement Policy Statement on Deceptively Formatted Advertisements, which lays out the general principles the Commission considers in determining whether any particular ad format is deceptive and violates the FTC Act.

The Policy Statement notes that: “deception occurs when an advertisement misleads reasonable consumers as to its true nature or source, including that a party other than the sponsoring advertiser is the source of an advertising or promotional message, and such misleading representation is material.” According to the Policy Statement, under FTC principles, advertisers cannot use “deceptive door openers” to induce consumers to view advertising content.   Advertisers are responsible for ensuring that native ads are identifiable as advertising before consumers arrive at the main advertising page.  If the source of the content is clear, consumers can make informed decisions about whether to interact with the ad and the weight to give the information conveyed in the ad.  However, the FTC will find an ad’s format deceptive if the ad materially misleads consumers about its commercial nature, including through an express or implied misrepresentation that it comes from a party other than the sponsoring advertiser.

How is the format of a native advertisement evaluated? In determining whether an ad is deceptive, the FTC considers the “net impression” the ad conveys to consumers, that is, the overall context of the interaction, including what the ad says and the format in which it is presented. According to the Policy Statement, the agency will examine such factors as the ad’s overall appearance, the similarity of its written, spoken, or visual style to non-advertising content offered on a publisher’s site, and the degree to which it is distinguishable from such other content.

Clarifying information that accompanies a native ad must be disclosed clearly and prominently to overcome any misleading impression.  Native ads may include disclosures such as text labels, audio disclosures, or visual cues distinguishing the ad from other non-commercial content.  The FTC declares that any disclosure must be “sufficiently prominent and unambiguous to change the apparent meaning of the claims and to leave an accurate impression,” and “made in simple, unequivocal language, so that consumers comprehend what it means.”

The Policy Statement advises that disclosures should be made in the same language as the predominant language in which ads are communicated.  In its accompanying guidance, Native Advertising: A Guide for Businesses, the FTC further notes that such disclosures should not be couched in technical or industry jargon, or displayed using unfamiliar icons or terminology that might have different meanings to consumers in other situations. The Guidance suggests that terms likely to be understood include “Ad,” “Advertisement,” “Paid Advertisement,” “Sponsored Advertising Content,” or some variation thereof, but that advertisers should not use terms such as “Promoted” or “Promoted Stories,” which the agency deems “at best ambiguous.” The Guidance also states that, depending on the context, “consumers reasonably may interpret other terms, such as ‘Presented by [X],’ ‘Brought to You by [X],’ ‘Promoted by [X],’ or ‘Sponsored by [X]’ to mean that a sponsoring advertiser funded or ‘underwrote’ but did not create or influence the content.”
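As a purely hypothetical sketch (the class names, label text, sponsor name and URL below are invented for illustration and are not taken from the FTC materials), a publisher inserting a sponsored item into its content stream might build the disclosure into the ad unit’s markup itself, so that an unambiguous label is displayed before a consumer clicks through to the advertiser’s page:

```html
<!-- Hypothetical native ad unit placed within a publisher's article stream.
     The "Paid Advertisement" label and sponsor attribution are displayed
     with the unit wherever it is rendered, before any click-through. -->
<article class="stream-item stream-item--sponsored">
  <span class="disclosure-label">Paid Advertisement</span>
  <a href="https://advertiser.example.com/landing-page">
    <h2>Five Weekend Projects to Cut Your Heating Bill</h2>
    <p>Simple fixes that can make an older home more efficient.</p>
  </a>
  <span class="sponsor-attribution">From ExampleCo</span>
</article>
```

Embedding the label in the unit itself, rather than relying on surrounding page elements, is one way to help ensure the disclosure accompanies the ad if the unit is rendered in other contexts.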

This is only a very brief summary of the FTC’s position on native advertising. We advise that all companies engaged, directly or indirectly, in the creation, placement or publishing of native advertisements read closely the FTC’s Policy Statement and Guidance document to aid their determination about what types of native ads could be found misleading.  As native advertising continues to appear in digital media, we will continue to follow regulatory and industry developments.

Photo Storage Service’s Collection of Faceprints May Violate Illinois Biometric Privacy Statute

Posted in Biometrics, Internet, Privacy, Social Media

As we have previously noted, there are several ongoing privacy-related lawsuits alleging that facial recognition-based systems of photo tagging violate the Illinois Biometric Information Privacy Act (BIPA). The photo storage service Shutterfly and the social network Facebook are both defending putative class action suits that, among other things, allege that such services created and stored faceprints without permission and in violation of BIPA.  In the suit against Shutterfly, plaintiff claims he is not a registered Shutterfly user, but that a friend had uploaded group photos depicting him and, upon prompting, tagged him in the photo, thereby adding his faceprint to the database (plaintiff, as a non-member of the service, had never formally consented to this collection of biometric data). Shutterfly had filed a motion to dismiss, arguing that scans of face geometry derived from uploaded photographs are not “biometric identifiers” under BIPA because the statute excludes information derived from photographs.

Last week, an Illinois district court denied Shutterfly’s motion to dismiss (Norberg v. Shutterfly, Inc., No. 15-05351 (N.D. Ill. Dec. 29, 2015)).  In a terse order, the court first found that there were sufficient minimum contacts to establish specific personal jurisdiction over Shutterfly in the Illinois forum.  Next, the court considered the claim under the Illinois biometric privacy law. Without engaging in a thorough analysis of the statute and its exceptions at this early stage in the litigation, the court ruled that the plaintiff could proceed with his claim under BIPA:

“Here, Plaintiff alleges that Defendants are using his personal face pattern to recognize and identify Plaintiff in photographs posted to Websites. Plaintiff avers that he is not now nor has he ever been a user of Websites, and that he was not presented with a written biometrics policy nor has he consented to have his biometric identifiers used by Defendants. As a result, the Court finds that Plaintiff has plausibly stated a claim for relief under the BIPA.”

We will continue to closely watch the ongoing litigation surrounding biometric privacy – particularly since the specific Illinois biometric privacy statute has yet to be interpreted in great depth by a court with respect to facial recognition technology.

Facebook Seeks Dismissal in Illinois Facial Recognition Biometric Privacy Suit

Posted in Biometrics, Privacy, Social Media, Technology

As we have previously noted, Facebook has been named as a defendant in a number of lawsuits claiming that its facial recognition-based system of photo tagging violates the Illinois Biometric Information Privacy Act (BIPA).  In a separate putative class action filed in Illinois federal court that involves the tagging of an “unwilling” non-user without his permission, Facebook seeks dismissal on grounds similar to the arguments it made in those cases. (See Gullen v. Facebook, Inc., No. 15-07681 (N.D. Ill. filed Aug. 31, 2015)).  In short, the plaintiff non-user claims that another Facebook member manually tagged him in a photo using Facebook’s Tag Suggestion feature and that, as a result, Facebook allegedly created and stored a faceprint of the plaintiff without his permission and in violation of BIPA. In its motion to dismiss, Facebook argues that the Illinois court has no jurisdiction over Facebook in this matter, particularly since the plaintiff was a non-user of its service.  In addition, Facebook contends that, regardless, the plaintiff’s claim under BIPA must fail for several reasons: (1) Facebook does not create a face template and perform “corresponding name identification” for non-users who are manually tagged using Tag Suggestions; and (2) BIPA expressly excludes from its coverage “photographs” and “any information derived from photographs,” and the statute’s use of the term “scan of hand or face geometry” was meant to cover only in-person scans of a person’s actual hand or face (not the scan of an uploaded photograph).

What has become clear from the pending claims under BIPA is that statutory interpretation will not be easy. We will continue to closely watch the ongoing litigation surrounding biometric privacy – particularly since the specific Illinois statute in question has yet to be interpreted by a court with respect to facial recognition technology.

European Court Gives Bitcoin a Tax-Free Boost

Posted in Digital Currency, Online Commerce

In an important ruling for digital currency service providers, the EU’s top court, the Court of Justice of the European Union (CJEU), held that transactions to exchange a traditional currency for bitcoin virtual currency, or vice versa, were not subject to value added tax (VAT), effectively treating such transactions like an exchange of cash. (Skatteverket v David Hedqvist (C-264/14) (22 October 2015)). The CJEU declined to construe the exemption in question to apply only to transactions involving traditional currency under the facts presented.

Digital currency advocates hailed the decision, as it removed some regulatory uncertainty surrounding bitcoin exchanges, perhaps spurring further development in this nascent industry.  We will see how this ruling impacts bitcoin service providers in the EU.

Video Privacy Protection Act Narrowed – App’s Transmission of Roku ID Not Disclosure of Personal Information

Posted in Privacy, Video, Video Privacy Protection Act

A New York district court opinion is the latest addition to our watch of ongoing VPPA-related disputes, and a notable decision on the issue of what exactly constitutes a disclosure of “personally identifiable information” (PII) under the VPPA.  Does PII refer to information which must, without more, link an actual person to actual video materials?  Or are there circumstances where the disclosure of video viewing data and a unique device ID constitutes disclosure of PII?

In Robinson v. Disney Online, No. 14-04146 (S.D.N.Y. Oct. 20, 2015), the plaintiff claimed that the Disney Channel app transmitted video viewing data and his Roku device serial number to a third-party analytics company for data profiling purposes each time he viewed a video clip, constituting a violation of the VPPA.  In particular, the plaintiff did not argue that the information disclosed by Disney constituted PII by itself, but rather that the disclosed information was PII because the analytics company could potentially identify him by “linking” these disclosures with “existing personal information” obtained elsewhere.  In dismissing the action, the court held that PII is information which itself identifies a particular person as having accessed specific video materials, and whereas names and addresses, as a statutory matter, identify a specific person, an anonymized Roku serial number merely identifies a device.

“Indeed, the most natural reading of PII suggests that it is the information actually disclosed by a ‘video tape service provider,’ which must itself do the identifying that is relevant for purposes of the VPPA…not information disclosed by a provider, plus other pieces of information collected elsewhere by non-defendant third parties.”

“Disney’s liability turns only on whether the information it disclosed itself identified a specific person. It did not. Thus, [the analytics company’s] ability to identify Robinson by linking this disclosure with other information is of little significance.”

Rejecting the plaintiff’s expansive definition of PII under the statute, the court noted that if nearly any piece of information could, with enough effort, be combined with other information to identify a person, “then the scope of PII would be limitless.”  Ultimately, the court settled on the definition of PII as being “information which itself identifies a particular person as having accessed specific video materials.”  Yet, the court noted that in certain circumstances, “context may matter,” to the extent other information disclosed by the provider permits a “mutual understanding that there has been a disclosure of PII.”  For example, according to the court, a provider could not evade liability if it disclosed video viewing data and a device ID, along with a code that enabled a third party to identify the specific device’s user. However, as the court found, while Disney may have disclosed the plaintiff’s Roku serial number, it did not disclose a correlated decryption table or other identifying information that would enable a third-party analytics company to decrypt the hashed Roku serial number and other information necessary to identify the specific device’s user.

The Robinson case is an important ruling for companies that deliver video to customers via digital streaming devices (or even via mobile devices), as the court adopted a narrow reading of the scope of liability under the VPPA.  However, with multiple VPPA suits currently before federal appeals courts (many of which concern the disclosure of an anonymous device ID), the debate is far from over and we will continue to monitor the latest rulings in this emerging area.

Biometrics: Facebook Files Motion to Dismiss Privacy Suit over Facial Recognition Technology

Posted in Biometrics, Privacy, Social Media, Technology

As discussed in a previous post on facial recognition technology, a putative class action has been filed against Facebook over the collection of “faceprints” for its online photo tagging function, Tag Suggestions.  (See, e.g., Licata v. Facebook, Inc., No. 2015CH05427 (Ill. Cir. Ct. Cook Cty. filed Apr. 1, 2015) (the case has been transferred to a San Francisco district court, Licata v. Facebook, Inc., No. 15-03748 (N.D. Cal. Consolidated Class Action Complaint filed Aug. 28, 2015))).

The plaintiffs claim that Facebook’s use of facial recognition technology to scan user-uploaded photos for its Tag Suggestions feature violates Illinois’s Biometric Information Privacy Act (BIPA), 740 ILCS 14/1, and has been used to create what the plaintiffs allege is “the world’s largest privately held database of consumer biometrics data.”

Plaintiffs allege that Facebook extracts face geometry data (or faceprints) from user-uploaded photographs and retains such “biometric identifiers” within the meaning of the BIPA. The complaint alleges, among other things, that Facebook collected and stored biometric data without adequate consent.  The complaint seeks an injunction and statutory damages for each violation (note: BIPA provides for $1,000 in statutory damages for each negligent violation, and $5,000 for intentional violations, plus attorney’s fees).

Last week, Facebook filed its motion to dismiss, arguing, among other things, that based on the choice of law provision in its terms of service, California, not Illinois, law should apply (thereby precluding users from bringing a claim under BIPA), and that, regardless, Section 10 of BIPA expressly “excludes both ‘photographs’ and ‘information derived from photographs’ from its reach.”

Those wanting a preview of the plaintiffs’ response to Facebook’s motion should look to a similar privacy action against Shutterfly currently being litigated in Illinois federal court.  (See Norberg v. Shutterfly, Inc., No. 15-05351 (N.D. Ill. filed June 17, 2015)).  There, the plaintiff brought claims under BIPA against the photo storage service Shutterfly for allegedly collecting faceprints from user-uploaded photos for a tag suggestion feature without express written consent and “without consideration for whether a particular face belongs to a Shutterfly user or unwitting nonuser.”  In its motion to dismiss, Shutterfly, like Facebook, argued that scans of face geometry derived from uploaded photographs are not “biometric identifiers” under BIPA because the statute excludes information derived from photographs.

In his rebuttal, the plaintiff Norberg claimed that if the intermediation of a photograph before processing face geometry excluded such data from the definition of a biometric identifier, then the statute would be meaningless:

“Defendants’ interpretation of the BIPA as inapplicable to face scans of photographs is contrary to the very nature of biometric technology and thus would undermine the statute’s core purpose. A photograph of a face is exactly what is scanned to map out the unique geometric patterns that establish an individual’s identity. Taken to its logical conclusion, Defendants’ argument would exclude all the biometric identifiers from the definition of biometric identifiers, because they are all based on the initial capture of a photograph or recording.”

We will be watching both disputes closely – if the suits are not dismissed on procedural or contractual grounds, this will be the first time a court will have the opportunity to interpret the contours of the Illinois biometric privacy statute with respect to facial recognition technology.

Important Circuit Court Ruling Limits Scope of VPPA Liability

Posted in Geofencing, Mobile, Privacy, Video, Video Privacy Protection Act

The Eleventh Circuit issued a notable ruling this week limiting a mobile app’s liability under the Video Privacy Protection Act (VPPA), 18 U.S.C. § 2710, a law enacted in 1988 to preserve “consumer” personal privacy with respect to the rental or purchase of movies on VHS videotape, and which has been regularly applied to streaming video sites and apps.  However, in a significant decision which potentially limits the applicability of the VPPA, the Eleventh Circuit held in Ellis v. The Cartoon Network, Inc., 2015 WL 5904760 (11th Cir. Oct. 9, 2015), that a person who downloads and uses a free mobile app to view freely available content, without more, is not a “subscriber” (and therefore not a “consumer”) under the VPPA.

Subject to certain exceptions, the VPPA generally prohibits “video tape service providers” from knowingly disclosing, to a third party, “personally identifiable information concerning any consumer.” 18 U.S.C. §2710(b).  Under the VPPA, the term “consumer” means any “renter, purchaser, or subscriber of goods or services from a video tape service provider.” 18 U.S.C. §2710(a)(1).

In Ellis, a user who watched video clips on a free app claimed a violation of the VPPA when the app allegedly disclosed his personally identifiable information – his Android ID and video viewing records – to a third-party analytics company for digital tracking and advertising purposes.  The plaintiff claims that the analytics company identifies and tracks specific users across multiple devices and applications and can “automatically” link an Android ID to a particular person by using information previously collected from other sources.  The lower court had ruled that Ellis was a “subscriber” and therefore a “consumer” under the Act able to bring a cause of action, but that Ellis’s Android ID was not “personally identifiable information” under the VPPA.  The Eleventh Circuit affirmed the dismissal of the action, but under different reasoning.

While the Eleventh Circuit agreed with the district court that payment is not a necessary element of subscription, it took a narrower view of the definition of a “subscriber” under the Act.

“Payment, therefore, is only one factor a court should consider when determining whether an individual is a “subscriber” under the VPPA. […] But his merely downloading the CN app for free and watching videos at no cost does not make him a ‘subscriber’ either.”

As the court pointed out, the plaintiff Ellis did not sign up for or establish an account with Cartoon Network, did not provide any personal information to CN, did not make any payments to use the CN app and did not make any commitment or establish any relationship that would allow him to have access to exclusive or restricted content.  Indeed, CN app users can log in with their television provider information to view additional content, but doing so is not required – if a user simply wants to view freely available content, he or she does not have to create an account.  Thus, the court dismissed the action because it concluded that the plaintiff Ellis, as merely a user of a free app, was not a subscriber under the Act:

“[D]ownloading an app for free and using it to view content at no cost is not enough to make a user of the app a ‘subscriber’ under the VPPA. The downloading of an app, we think, is the equivalent of adding a particular website to one’s Internet browser as a favorite, allowing quicker access to the website’s content.”

In deciding the case on the “subscriber” issue, the appeals court offered no opinion on whether an Android ID was “personally identifiable information” under the VPPA, an issue that continues to be litigated.

Ellis, an appellate-level decision, could be an important ruling for companies that develop mobile apps that feature video and collect data for targeted advertising purposes.  The 15-page decision deserves a close reading by companies deciding on a business model for mobile apps, as it offers a level of clarity on what features might allow a free app or an app with a “freemium” pricing strategy to remain outside the scope of the VPPA.

There are still a number of pending VPPA cases focused on the intersection of online video and consumer privacy.  In fact, Ellis is merely one decision in a series of appeals court VPPA-related rulings that are expected in the coming year. Stay tuned!

Section 230 of the Communications Decency Act: More Lessons to Be Learned

Posted in Internet, Online Content

Courts continue to struggle with the application of CDA immunity to shield service provider defendants from liability in extreme cases. In this case, the Washington Supreme Court, in a 6-3 decision, affirmed the lower court’s decision to allow a suit to proceed against the online classifieds service Backpage.com surrounding the sexual assault of several minors by adult customers who responded to advertisements placed in the “Escorts” section of the website. (See J.S. v. Village Voice Media Holdings, L.L.C., 2015 WL 5164599 (Wash. Sept. 3, 2015)).  This opinion is notable in that courts have usually (although sometimes reluctantly) resolved these struggles by extending broad immunity, even when the facts presented are unsympathetic, or, as the dissent in J.S. noted, “repulsive.”   Indeed, in a case from earlier this year, Backpage was granted CDA immunity in a dispute resting on similar facts. (See Doe No. 1 v. Backpage.com, LLC, No. 14-13870 (D. Mass. May 15, 2015)).  Why was this case decided differently?

The issue in this case turns on whether Backpage merely hosted the advertisements that featured the minor plaintiffs, in which case Backpage is protected by CDA immunity, or whether Backpage also helped develop the content of those advertisements, in which case it is not.  Viewing the plaintiffs’ allegations in a favorable light at this early stage of the litigation, the majority of the court found that the plaintiffs alleged facts that, if proved true, would show that Backpage did more than simply maintain neutral policies prohibiting or limiting certain content and acted as an “information content provider” in surreptitiously guiding pimps on how to post illegal, exploitative ads.

The dissenting justices would have ruled that Backpage qualified for CDA immunity because a person or entity does not qualify as an information content provider merely by facilitating a user’s posting of content, if it is the user alone who selects the content.  In the dissent’s view, the plaintiffs are seeking to hold Backpage liable for its publication of third-party content and harms flowing from the dissemination of that content (i.e., Backpage’s alleged failure to prevent or remove certain posted advertisements), a situation that should fall under the CDA.  The dissent also pointed out that Backpage provides a neutral framework that could be used for proper or improper purposes and does not mandate that users include certain information as a condition of using the website.

What are the lessons learned?  CDA immunity is generally a robust affirmative defense against claims related to the publication of third-party content.  However, as this case illustrates, courts may look for ways to circumvent the CDA in certain unsavory cases, particularly in the early stages of the litigation.  Even if the interpretation of CDA immunity in this case may turn out to be an outlier and the CDA ultimately is deemed to protect Backpage.com, the opinion – issued from a state supreme court – should prompt service providers to take heed.  In light of this decision, website operators that provide forums for user-generated content might reexamine their policies related to the creation of user-generated content and the filtering out or management of illegal content to determine whether the site itself could be reasonably alleged to be “inducing” and perhaps even “developing” any questionable content posted by users.