This week’s iPhone X announcement was not more than a few hours old when the questions began to come in. Apple’s introduction of Face ID facial recognition on its new phone – although already available in some form on several Android phones – generated curiosity, concerns and creativity.  Unfortunately, the details about how the recognition feature will actually work remain unknown.  All the public knows right now is that the phone’s facial “capture” function, powered by an updated camera and sensor array, will project 30,000 infrared dots onto a user’s face and create a hashed value that will presumably be matched against a user’s face during the unlocking procedure.
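As described, the unlock flow boils down to capturing a depth map, deriving a compact template, and comparing it against the locally enrolled one. The Python sketch below is purely illustrative of that described flow – the function names, the four-number “face map,” and the tolerance threshold are all assumptions, not Apple’s actual implementation. One practical wrinkle worth noting: because two scans of the same face never match bit-for-bit, biometric systems generally compare noisy templates within a similarity threshold rather than checking exact hash equality.

```python
import math

THRESHOLD = 0.1  # illustrative similarity tolerance, not a real Face ID parameter


def capture_face_map():
    # Stand-in for the infrared dot projection: a short vector of depth
    # measurements. A real capture would vary slightly from scan to scan.
    return [0.42, 0.17, 0.88, 0.33]


def enroll(face_map):
    # Per Apple's stated Touch ID model, the derived template stays on the
    # device; only the template, never the raw capture, is retained.
    return {"template": face_map}


def matches(stored, candidate):
    # Fuzzy comparison: noisy biometric captures never match exactly,
    # so compare within a tolerance instead of testing hash equality.
    return math.dist(stored["template"], candidate) < THRESHOLD


enrolled = enroll(capture_face_map())
new_scan = [0.43, 0.16, 0.88, 0.34]  # slightly different re-capture
print(matches(enrolled, new_scan))   # True: within tolerance
```

The threshold-based comparison is also why the spoofing and storage questions below matter: the matcher must be loose enough to tolerate scan-to-scan noise but tight enough to reject a photograph or a similar-looking face.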

The questions and issues this raises are too numerous and varied to address in a single blog post. I will simply point out that the concerns over Face ID range from spoofing (e.g., Can the phone be unlocked by a picture? [Apple says no, explaining that the system will map the depth of faces]) to security (e.g., Is the “face map” or hashed value stored in a database which can be breached? [Apple says no; like fingerprints under Apple’s current Touch ID feature, the face map will be securely stored locally on the device]).

One issue that I thought was particularly interesting, however, relates to the ability of apps residing on a phone to interact with facial captures. Unless disabled, Face ID could potentially be “always on,” ready to capture facial images to authenticate the unlocking of the phone, and possibly capturing facial images as the user interacts with the unlocked phone.  So, clients have asked: Will the apps on the phone be able to access and use those facial captures?

We have been writing about the biometric privacy legal landscape, which has thus far been dominated by the Illinois Biometric Information Privacy Act (BIPA).  While there are a number of states that are considering bills modeled after BIPA, Washington has enacted a bill that takes a dramatically different approach.   On May 16, 2017, HB 1493 (the “Washington Statute,” or the “Statute”) was signed into law by Governor Jay Inslee and will become effective on July 23, 2017.

The stated purpose of the Statute is to require a business that collects and can attribute biometric data to a specific individual to disclose how it uses that biometric data and provide notice to and obtain consent from an individual before enrolling or changing the use of that individual’s biometric identifiers in a database. Unlike BIPA, the Statute does not provide a private cause of action; it may be enforced solely by the state attorney general under the Washington Consumer Protection Act.  It should be noted, however, that Washington has traditionally been one of the leading states with regard to the enforcement of consumer privacy.

Even though Washington passed its own biometric privacy law last month (HB 1493), and other states are currently debating their own bills, Illinois’s Biometric Information Privacy Act (BIPA) remains at the center of biometric and facial recognition privacy litigation.  Such suits have typically involved social media services, video game makers or businesses that collect biometric data to authenticate customers.  In a slight twist, on May 11, 2017, a putative class of employees filed suit against Roundy’s Supermarkets alleging violations of BIPA surrounding the collection and retention of employees’ fingerprints – as opposed to using last century’s analog time cards, Roundy’s requires employees to scan their fingers each time they clock “in” and “out” of their work shifts to verify their identities.  In the suit, plaintiffs claim that Roundy’s failed to offer notice and obtain written consent prior to capturing employees’ fingerprints, or post a retention policy about how long the company stores the biometric data. (See Baron v. Roundy’s Supermarkets, Inc., No. 17-03588 (N.D. Ill. filed May 11, 2017)).

Update: On March 9, 2017, Google filed a motion requesting the court certify an interlocutory appeal.  In particular, Google contends that the following question satisfies the statutory criteria: whether the term “biometric identifier,” as defined in the Illinois Biometric Information Privacy Act, includes information derived from photographs.

We’ve closely followed the numerous biometric privacy disputes and legislative developments surrounding the Illinois Biometric Information Privacy Act (BIPA), which precludes the unauthorized collection and storing of some types of biometric data.  In the latest ruling, an Illinois district court refused to dismiss a putative class action alleging that the cloud-based Google Photos service violated BIPA by automatically uploading plaintiffs’ mobile photos and allegedly scanning them to create unique face templates (or “faceprints”) for subsequent photo-tagging without consent.  (Rivera v. Google, Inc., No. 16-02714 (N.D. Ill. Feb. 27, 2017)).

This is the third instance where a district court refused, at an early stage of a litigation, to dismiss BIPA claims relating to the online collection of facial templates for photo-tagging purposes.  Unlike those prior courts’ relatively cursory interpretations, however, the Rivera court’s expansive 30-page opinion is the deepest dive yet into the statutory scheme (and purported vagaries) of the Illinois statute.  The decision is the latest must-read for mobile or online services that collect and store users’ biometric data and want to understand the extent to which their activities might fall under the Illinois biometric privacy statute.  The plaintiffs’ claims in Rivera (as well as those in the ongoing biometric privacy litigation in California) may ultimately prove unsuccessful on procedural or statutory grounds; yet these initial takes on the scope of BIPA underscore the importance of examining current practices and rollouts of new services that feature biometrics.

We’ve written extensively about the numerous lawsuits, dismissals and settlements surrounding the Illinois Biometric Information Privacy Act (BIPA). The statute, generally speaking, prohibits an entity from collecting, capturing, purchasing, or otherwise obtaining a person’s “biometric identifier” or “biometric information,” unless it satisfies certain notice and consent and data retention requirements. The statute contains defined terms and limitations, and parties in ongoing suits are currently litigating what “biometric identifiers” and “biometric information” mean under the statute and whether the collection of facial templates from uploaded photographs using sophisticated facial recognition technology fits within the ambit of the statute. Moreover, in two instances in the past six months, a district court has dismissed a lawsuit alleging procedural and technical violations of the Illinois biometric privacy statute for lack of Article III standing.

Thus, the epicenter of biometric privacy compliance and litigation has been the Illinois statute. A Texas biometric statute offers similar protections, but does not contain a private right of action.

The biometrics landscape may be about to get more complicated. An amendment has been proposed to the Illinois biometric privacy statute, and a number of biometric privacy bills mostly resembling BIPA have been introduced in other state legislatures. While most of the new proposed statutes are roughly consistent with the Illinois statute, as noted below, the Washington state proposal is, in many ways, very different. If any or all of these bills are enacted, they will further shape and define the legal landscape for biometrics.

For the second time in the past six months, a district court has dismissed a lawsuit alleging procedural and technical violations of the Illinois biometric privacy statute for lack of Article III standing.  In Vigil v. Take-Two Interactive Software, Inc., No. 15-8211 (S.D.N.Y. Jan. 27, 2017), the court dismissed Illinois biometric privacy claims against a videogame maker related to a feature in the NBA 2K videogame series that allows users to scan their faces and create a personalized virtual avatar for in-game play.  In a lengthy opinion, the New York court provided Take-Two with a resounding victory when it ruled that procedural violations of the notice and consent provisions of the Illinois biometric privacy statute are not in and of themselves sufficient to confer standing.

Biometric technology such as facial recognition, iris scans, or fingerprint authentication is being used and further developed to improve the security of financial and other sensitive transactions.  At the same time, social media sites, mobile apps, videogame developers and others are employing biometrics for other cutting-edge uses to improve services.  The current Vigil ruling is particularly important, however, as it may buoy companies that collect biometric data under reasonable notice and usage policies, as they hope that the approach applied in Vigil is affirmed, if appealed, and followed in other jurisdictions.

Earlier this month, an Illinois state court approved a $1.5 million settlement in a class action against L.A. Tan Enterprises, Inc., operator (directly and through franchisees) of L.A. Tan tanning salons.  The settlement resolved allegations that L.A. Tan violated the Illinois Biometric Information Privacy Act (BIPA) by collecting Illinois members’ fingerprints for verification during check-in without complying with BIPA’s notice and consent requirements. (See Sekura v. L.A. Tan Enterprises, Inc., No. 2015-CH-16694 (Ill. Cir. Ct. Cook Cty. First Amended Class Complaint filed Apr. 8, 2016)).   Under the settlement, approximately 37,000 class members who had their fingerprints scanned at an L.A. Tan location in Illinois during a specified three-year period (Nov. 13, 2013 to August 11, 2016) will receive a pro rata share of the settlement. Moreover, L.A. Tan agreed to comply with BIPA in the future and ensure the compliance of its franchisees.

In Yershov v. Gannett Satellite Information Network, Inc., a user of the free USA Today app alleged that each time he viewed a video clip, the app transmitted his mobile Android ID, GPS coordinates and identification of the watched video to a third-party analytics company to create user profiles for the purposes of targeted advertising, in violation of the Video Privacy Protection Act (VPPA). When we last wrote about this case in May, the First Circuit reversed the dismissal by the district court and allowed the case to proceed, taking a more generous view as to who is a “consumer” under the VPPA.

On remand, Gannett moved to dismiss the complaint again for lack of subject matter jurisdiction, contending that the complaint merely alleges a “bare procedural violation” of the VPPA, insufficient to establish Article III standing to bring suit under the standard enunciated in the Supreme Court’s Spokeo decision. In essence, Gannett contended that the complaint does not allege a concrete injury in fact, and that even if it did, the complaint depends on the “implausible” assumption that the third-party analytics company receiving the data maintains a “profile” on the plaintiff.

Another court has contributed to the ongoing debate over the scope of the term “personally identifiable information” under the Video Privacy Protection Act – a statute enacted in 1988 to protect the privacy of consumers’ videotape rental and purchase history but lately applied to the modern age of video streaming.