
New Media and Technology Law Blog

Facebook Seeks Dismissal in Illinois Facial Recognition Biometric Privacy Suit

Posted in Biometrics, Privacy, Social Media, Technology

As we have previously noted, Facebook has been named as a defendant in a number of lawsuits claiming that its facial recognition-based system of photo tagging violates the Illinois Biometric Information Privacy Act (BIPA). In a separate putative class action filed in Illinois federal court that involves the tagging of an "unwilling" non-user without his permission, Facebook seeks dismissal on grounds similar to those it advanced in the earlier cases. (See Gullen v. Facebook, Inc., No. 15-07681 (N.D. Ill. filed Aug. 31, 2015)). In short, the plaintiff non-user claims that another Facebook member manually tagged him in a photo using Facebook's Tag Suggestions feature and that, as a result, Facebook allegedly created and stored a faceprint of the plaintiff without his permission and in violation of BIPA. In its motion to dismiss, Facebook argues that the Illinois court has no jurisdiction over Facebook in this matter, particularly since the plaintiff was a non-user of its service. In addition, Facebook contends that, regardless, the plaintiff's claim under BIPA must fail for several reasons: (1) Facebook does not create a face template and perform "corresponding name identification" for non-users who are manually tagged using Tag Suggestions; (2) BIPA expressly excludes from its coverage "photographs" and "any information derived from photographs"; and (3) the statute's use of the term "scan of hand or face geometry" was only meant to cover in-person scans of a person's actual hand or face, not the scan of an uploaded photograph.

What has become clear from the pending claims under BIPA is that statutory interpretation will not be easy. We will continue to closely watch the ongoing litigation surrounding biometric privacy – particularly since the specific Illinois statute in question has yet to be interpreted by a court with respect to facial recognition technology.

European Court Gives Bitcoin a Tax-Free Boost

Posted in Digital Currency, Online Commerce

In an important ruling for digital currency service providers, the EU's top court, the Court of Justice of the European Union (CJEU), ruled that transactions to exchange a traditional currency for bitcoin virtual currency, or vice versa, were not subject to value added tax (VAT), effectively treating such transactions like an exchange of cash. (Skatteverket v David Hedqvist (C-264/14) (22 October 2015)). The CJEU declined to construe the exemption in question to apply only to transactions involving traditional currency under the facts presented.

Digital currency advocates hailed the decision, as it removed some regulatory uncertainty surrounding bitcoin exchanges, perhaps spurring further development in this nascent industry.  We will see how this ruling impacts bitcoin service providers in the EU.

Video Privacy Protection Act Narrowed – App’s Transmission of Roku ID Not Disclosure of Personal Information

Posted in Privacy, Video, Video Privacy Protection Act

A New York district court opinion is the latest addition to our ongoing watch of VPPA-related disputes, and a notable decision on the issue of what exactly constitutes a disclosure of "personally identifiable information" (PII) under the VPPA. Does PII refer only to information that, without more, links an actual person to actual video materials? Or are there circumstances where the disclosure of video viewing data and a unique device ID constitutes disclosure of PII?

In Robinson v. Disney Online, No. 14-04146 (S.D.N.Y. Oct. 20, 2015), the plaintiff claimed that the Disney Channel app transmitted video viewing data and his Roku device serial number to a third-party analytics company for data profiling purposes each time he viewed a video clip, constituting a violation of the VPPA.  In particular, the plaintiff did not argue that the information disclosed by Disney constituted PII by itself, but rather that the disclosed information was PII because the analytics company could potentially identify him by “linking” these disclosures with “existing personal information” obtained elsewhere.  In dismissing the action, the court held that PII is information which itself identifies a particular person as having accessed specific video materials, and whereas names and addresses, as a statutory matter, identify a specific person, an anonymized Roku serial number merely identifies a device.

“Indeed, the most natural reading of PII suggests that it is the information actually disclosed by a ‘video tape service provider,’ which must itself do the identifying that is relevant for purposes of the VPPA…not information disclosed by a provider, plus other pieces of information collected elsewhere by non-defendant third parties.”

“Disney’s liability turns only on whether the information it disclosed itself identified a specific person. It did not. Thus, [the analytics company’s] ability to identify Robinson by linking this disclosure with other information is of little significance.”

Rejecting the plaintiff’s expansive definition of PII under the statute, the court noted that if nearly any piece of information could, with enough effort, be combined with other information to identify a person, “then the scope of PII would be limitless.”  Ultimately, the court settled on the definition of PII as being “information which itself identifies a particular person as having accessed specific video materials.”  Yet, the court noted that in certain circumstances, “context may matter,” to the extent other information disclosed by the provider permits a “mutual understanding that there has been a disclosure of PII.”  For example, according to the court, a provider could not evade liability if it disclosed video viewing data and a device ID, along with a code that enabled a third party to identify the specific device’s user. However, as the court found, while Disney may have disclosed the plaintiff’s Roku serial number, it did not disclose a correlated decryption table or other identifying information that would enable a third-party analytics company to decrypt the hashed Roku serial number and other information necessary to identify the specific device’s user.
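The court's distinction between a hashed device ID and a "correlated decryption table" can be pictured in a short sketch. Everything here is a hypothetical illustration, not drawn from the record: the serial number, the choice of SHA-256, and the lookup table are assumptions made only to show why a one-way hash, standing alone, identifies a device rather than a person.

```python
import hashlib

def anonymize_device_id(serial: str) -> str:
    """One-way hash of a device serial number (illustrative only)."""
    return hashlib.sha256(serial.encode("utf-8")).hexdigest()

# A provider that shares only the hash discloses a device identifier --
# not, by itself, the identity of the device's user.
hashed = anonymize_device_id("R0KU-1234-5678")

# Re-identification becomes possible only if the recipient also holds a
# correlation table mapping hashed IDs back to accounts -- the kind of
# "decryption table" the Robinson court noted was not disclosed.
correlation_table = {anonymize_device_id("R0KU-1234-5678"): "user@example.com"}
print(correlation_table.get(hashed))  # without such a table, the hash reveals nothing
```

The design point mirrors the court's reasoning: liability turned on what the provider itself disclosed, and a bare hash discloses no name or address unless the recipient is also handed the key to reverse it.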

The Robinson case is an important ruling for companies that deliver video to customers via digital streaming devices (or even via mobile devices), as the court adopted a narrow reading of the scope of liability under the VPPA.  However, with multiple VPPA suits currently before federal appeals courts (many of which concern the disclosure of an anonymous device ID), the debate is far from over and we will continue to monitor the latest rulings in this emerging area.

Biometrics: Facebook Files Motion to Dismiss Privacy Suit over Facial Recognition Technology

Posted in Biometrics, Privacy, Social Media, Technology

As discussed in a previous post on facial recognition technology, a putative class action has been filed against Facebook over the collection of "faceprints" for its online photo tagging function, Tag Suggestions.  (See, e.g., Licata v. Facebook, Inc., No. 2015CH05427 (Ill. Cir. Ct. Cook Cty. filed Apr. 1, 2015); the case has since been transferred to a San Francisco district court, Licata v. Facebook, Inc., No. 15-03748 (N.D. Cal. consolidated class action complaint filed Aug. 28, 2015)).

The plaintiffs claim that Facebook's use of facial recognition technology to scan user-uploaded photos for its Tag Suggestions feature violates Illinois's Biometric Information Privacy Act (BIPA), 740 ILCS 14/1, and has been used to create what the plaintiffs allege is "the world's largest privately held database of consumer biometrics data."

Plaintiffs allege that Facebook extracts face geometry data (or faceprints) from user-uploaded photographs and retains such "biometric identifiers" within the meaning of BIPA. The complaint alleges, among other things, that Facebook collected and stored biometric data without adequate consent.  The complaint seeks an injunction and statutory damages for each violation (note: BIPA provides for $1,000 in statutory damages for each negligent violation, and $5,000 for intentional violations, plus attorney's fees).
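To make the stakes of the statutory damages provision concrete, the per-violation amounts translate into a simple exposure calculation. The class size below is entirely hypothetical; the dollar figures are the ones BIPA specifies (740 ILCS 14/20).

```python
# Back-of-the-envelope BIPA exposure estimate. The class size is a
# hypothetical; the per-violation amounts come from the statute:
# $1,000 per negligent violation, $5,000 per intentional or reckless one.
NEGLIGENT_DAMAGES = 1_000
INTENTIONAL_DAMAGES = 5_000

def statutory_exposure(class_members: int, intentional: bool = False) -> int:
    """Aggregate statutory damages assuming one violation per class member."""
    per_violation = INTENTIONAL_DAMAGES if intentional else NEGLIGENT_DAMAGES
    return class_members * per_violation

# e.g., a hypothetical class of 1 million users, one violation each:
print(statutory_exposure(1_000_000))         # 1000000000 if negligent
print(statutory_exposure(1_000_000, True))   # 5000000000 if intentional
```

The arithmetic explains why even a modest certified class can generate nine- or ten-figure exposure under the statute.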

Last week, Facebook filed its motion to dismiss, arguing, among other things, that based on the choice of law provision in its terms of service, California, not Illinois, law should apply (thereby precluding users from bringing a claim under BIPA), and that, regardless, Section 10 of BIPA expressly “excludes both ‘photographs’ and ‘information derived from photographs’ from its reach.”

Those wanting a preview of the plaintiffs' response to Facebook's motion should look to a similar privacy action against Shutterfly currently being litigated in Illinois federal court.  (See Norberg v. Shutterfly, Inc., No. 15-05351 (N.D. Ill. filed June 17, 2015)).  There, the plaintiff brought claims under BIPA against the photo storage service Shutterfly for allegedly collecting faceprints from user-uploaded photos for a tag suggestion feature without express written consent and "without consideration for whether a particular face belongs to a Shutterfly user or unwitting nonuser."  In its motion to dismiss, Shutterfly, like Facebook, argued that scans of face geometry derived from uploaded photographs are not "biometric identifiers" under BIPA because the statute excludes information derived from photographs.

In his rebuttal, the plaintiff Norberg claimed that if the intermediation of a photograph before processing face geometry excluded such data from the definition of a biometric identifier, then the statute would be meaningless:

“Defendants’ interpretation of the BIPA as inapplicable to face scans of photographs is contrary to the very nature of biometric technology and thus would undermine the statute’s core purpose. A photograph of a face is exactly what is scanned to map out the unique geometric patterns that establish an individual’s identity. Taken to its logical conclusion, Defendants’ argument would exclude all the biometric identifiers from the definition of biometric identifiers, because they are all based on the initial capture of a photograph or recording.”

We will be watching both disputes closely – if the suits are not dismissed on procedural or contractual grounds, this will be the first time a court will have the opportunity to interpret the contours of the Illinois biometric privacy statute with respect to facial recognition technology.

Important Circuit Court Ruling Limits Scope of VPPA Liability

Posted in Geofencing, Mobile, Privacy, Video, Video Privacy Protection Act

The Eleventh Circuit issued a notable ruling this week limiting a mobile app’s liability under the Video Privacy Protection Act (VPPA), 18 U.S.C. § 2710, a law enacted in 1988 to preserve “consumer” personal privacy with respect to the rental or purchase of movies on VHS videotape, and which has been regularly applied to streaming video sites and apps.  However, in a significant decision which potentially limits the applicability of the VPPA, the Eleventh Circuit held in Ellis v. The Cartoon Network, Inc., 2015 WL 5904760 (11th Cir. Oct. 9, 2015), that a person who downloads and uses a free mobile app to view freely available content, without more, is not a “subscriber” (and therefore not a “consumer”) under the VPPA.

Subject to certain exceptions, the VPPA generally prohibits "video tape service providers" from knowingly disclosing, to a third party, "personally identifiable information concerning any consumer." 18 U.S.C. §2710(b).  Under the VPPA, the term "consumer" means any "renter, purchaser, or subscriber of goods or services from a video tape service provider." 18 U.S.C. §2710(a)(1).

In Ellis, a user who watched video clips on a free app claimed a violation of the VPPA when the app allegedly disclosed his personally identifiable information – his Android ID and video viewing records – to a third-party analytics company for digital tracking and advertising purposes.  The plaintiff claimed that the analytics company identifies and tracks specific users across multiple devices and applications and can "automatically" link an Android ID to a particular person by using information previously collected from other sources.  The lower court had ruled that Ellis was a "subscriber," and therefore a "consumer" under the Act able to bring a cause of action, but that Ellis's Android ID was not "personally identifiable information" under the VPPA.  The Eleventh Circuit affirmed the dismissal of the action, but under different reasoning.

While the Eleventh Circuit agreed with the district court that payment is not a necessary element of subscription, it took a narrower view of the definition of a “subscriber” under the Act.

“Payment, therefore, is only one factor a court should consider when determining whether an individual is a “subscriber” under the VPPA. […] But his merely downloading the CN app for free and watching videos at no cost does not make him a ‘subscriber’ either.”

As the court pointed out, the plaintiff Ellis did not sign up for or establish an account with Cartoon Network, did not provide any personal information to CN, did not make any payments to use the CN app and did not make any commitment or establish any relationship that would allow him to have access to exclusive or restricted content.  Indeed, CN app users can log in with their television provider information to view additional content, but it is not required – if a user simply wants to view freely available content, he or she does not have to create an account.  Thus, the court dismissed the action because it concluded that the plaintiff Ellis, as merely a user of a free app, was not a subscriber under the Act:

“[D]ownloading an app for free and using it to view content at no cost is not enough to make a user of the app a ‘subscriber’ under the VPPA. The downloading of an app, we think, is the equivalent of adding a particular website to one’s Internet browser as a favorite, allowing quicker access to the website’s content.”

In deciding the case on the “subscriber” issue, the appeals court offered no opinion on whether an Android ID was “personally identifiable information” under the VPPA, an issue that continues to be litigated.

Ellis, an appellate-level decision, could be an important ruling for companies that develop mobile apps that feature video and collect data for targeted advertising purposes.  The 15-page decision deserves a close reading for companies deciding on a business model for mobile apps and offers a level of clarity on what features might allow a free app or an app with a “freemium” pricing strategy to remain outside the scope of the VPPA.

There are still a number of pending VPPA cases focused on the intersection of online video and consumer privacy.  In fact, Ellis is merely one decision in a series of appeals court VPPA-related rulings that are expected in the coming year. Stay tuned!

Section 230 of the Communications Decency Act: More Lessons to Be Learned

Posted in Internet, Online Content

Courts continue to struggle with the application of CDA immunity to shield service provider defendants from liability in extreme cases. In the latest example, the Washington Supreme Court, in a 6-3 decision, affirmed the lower court's decision to allow a suit to proceed against the classifieds service Backpage.com surrounding the sexual assault of several minors by adult customers who responded to advertisements placed in the "Escorts" section of the website. (See J.S. v. Village Voice Media Holdings, L.L.C., 2015 WL 5164599 (Wash. Sept. 3, 2015)).  This opinion is notable in that courts have usually (although sometimes reluctantly) resolved these struggles by extending broad immunity, even when the facts presented are unsympathetic or, as the dissent in J.S. noted, "repulsive."   Indeed, in a case from earlier this year, Backpage was granted CDA immunity in a dispute resting on similar facts. (See Doe No. 1 v. Backpage.com, LLC, No. 14-13870 (D. Mass. May 15, 2015)).  Why was this case decided differently?

The issue in this case turns on whether Backpage merely hosted the advertisements that featured the minor plaintiffs, in which case Backpage is protected by CDA immunity, or whether Backpage also helped develop the content of those advertisements, in which case it is not.  Viewing the plaintiffs' allegations in a favorable light at this early stage of the litigation, the majority of the court found that the plaintiffs alleged facts that, if proved true, would show that Backpage did more than simply maintain neutral policies prohibiting or limiting certain content, and instead acted as an "information content provider" in surreptitiously guiding pimps on how to post illegal, exploitative ads.

The dissenting justices would have ruled that Backpage qualified for CDA immunity because a person or entity does not qualify as an information content provider merely by facilitating a user’s posting of content, if it is the user alone who selects the content.  In the dissent’s view, the plaintiffs are seeking to hold Backpage liable for its publication of third-party content and harms flowing from the dissemination of that content (i.e., Backpage’s alleged failure to prevent or remove certain posted advertisements), a situation that should fall under the CDA.  The dissent also pointed out that Backpage provides a neutral framework that could be used for proper or improper purposes and does not mandate that users include certain information as a condition of using the website.

What are the lessons learned?  CDA immunity is generally a robust affirmative defense against claims related to the publication of third-party content.  However, as this case illustrates, courts may look for ways to circumvent the CDA in certain unsavory cases, particularly in the early stages of the litigation.  Even if the interpretation of CDA immunity in this case may turn out to be an outlier and the CDA ultimately is deemed to protect Backpage.com, the opinion – issued from a state supreme court – should prompt service providers to take heed.  In light of this decision, website operators that provide forums for user-generated content might reexamine their policies related to the creation of user-generated content and the filtering out or management of illegal content to determine whether the site itself could be reasonably alleged to be “inducing” and perhaps even “developing” any questionable content posted by users.

Clickwrap Agreement Available Only Through Hyperlink Enforceable Under New York Law

Posted in Contracts, Online Commerce

Last week, the Southern District of New York followed a long line of precedent under New York law and upheld the enforceability of a website clickwrap agreement, granting a website operator’s motion to compel arbitration pursuant to a clause contained in the agreement. (Whitt v. Prosper Funding LLC, 2015 WL 4254062 (S.D.N.Y. July 14, 2015)).

In Whitt, the defendant filed a motion to dismiss the plaintiff’s lawsuit and compel arbitration pursuant to the terms of a website clickwrap agreement.  The defendant ran a website that accepted loan applications and alleged that the plaintiff agreed to an arbitration provision in the website’s “borrower registration agreement.”  The website required applicants to click a box adjacent to the following bolded text: “Clicking the box below constitutes your acceptance of . . . the borrower registration agreement.”  The term “borrower registration agreement” in the text was a blue, underlined hyperlink to the actual agreement.  This acknowledgement appeared near the bottom of the webpage, immediately above a “Continue” button, and the applicant could not complete a loan application without clicking the box indicating acceptance of the agreement.

The defendant contended, among other things, that the plaintiff accepted the agreement and its arbitration provision when he applied for a loan through the website. The plaintiff countered that he was not even constructively aware of the terms of the agreement because it was only accessible via hyperlink.

The court held that the plaintiff at least had constructive knowledge of the terms of the agreement and had assented to them.  The court cited to a long line of precedent under New York law where courts had previously enforced agreements which were only accessible via hyperlink and where such hyperlink was made apparent to the average user during registration.

On the other issue in the case, the question of whether the arbitration clause was unconscionable, the court held in favor of the defendant, finding that the clause was not unconscionable and was enforceable.

Supreme Court Rejects Google’s Appeal in Java API Dispute

Posted in Copyright, Software

On Monday, the Supreme Court denied certiorari in Google's appeal of the Federal Circuit's 2014 ruling that the declaring code and the structure, sequence, and organization of 37 Java API packages are entitled to copyright protection. (See Oracle America, Inc. v. Google Inc., 750 F.3d 1339 (Fed. Cir. 2014)). [A detailed discussion of the original lower court ruling can be found here.]

As we explained in a prior post, Google had argued that, contrary to the Federal Circuit's interpretation, the Copyright Act excludes systems and methods of operation from copyright protection and that the appeals court "erased a fundamental boundary between patent and copyright law." Tech law watchers were hoping that the Supreme Court might take the case to resolve this important copyright issue, something the court hasn't examined since its 4-4 vote (Justice Stevens having recused himself) in the 1996 Borland case, which affirmed the circuit court's ruling on the copyrightability of a spreadsheet program's menu command hierarchy.

With the Supreme Court’s action, the case will be sent back to the district court in San Francisco to determine the viability of Google’s fair use defense.

Facial Recognition Technology: Social Media and Beyond, an Emerging Concern

Posted in Biometrics, Privacy, Social Media, Technology

This week, a major self-regulatory initiative intended to address privacy concerns associated with facial recognition technology hit a significant stumbling block.  Nine consumer advocacy groups withdrew from the National Telecommunications and Information Administration (NTIA) initiative due to a lack of consensus on a minimum standard of consent.  The NTIA initiative had been ongoing since early 2014.  Consumer advocacy and civil liberties groups were participating with industry trade groups in NTIA-sponsored meetings intended to create guidelines on the fair commercial use of facial recognition technology.  Advocates and industry groups were attempting to develop a voluntary, enforceable code of conduct for the use of facial recognition technology and generally define the contours of transparency and informed consent.

The deadlock and the withdrawal of key participants highlight how difficult the resolution of these issues will be.

Facial recognition technology is a powerful tool with many potential uses.  Some are relatively well-known, particularly those which identify people in online social networks and photo storage services. For example, Facebook and others have for years employed “tag suggestion” tools that scan uploaded photos and identify network friends and suggest that the member “tag” them. How does the technology work? Facebook explains: “We currently use facial recognition software…to calculate a unique number (“template”) based on someone’s facial features, like the distance between the eyes, nose and ears. This template is based on your profile pictures and photos you’ve been tagged in on Facebook. We use these templates to help you tag photos by suggesting tags of your friends.”
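The "template" idea Facebook describes can be sketched in simplified form. The landmark coordinates, names, and distance-based signature below are toy assumptions for illustration only; they are not Facebook's actual algorithm or data.

```python
import math

# Toy face "template": a vector of pairwise distances between facial
# landmarks, echoing Facebook's description of a unique number computed
# from features like the distance between the eyes, nose, and ears.
# Illustrative only -- not any real system's implementation.
LANDMARKS = {
    "left_eye": (30.0, 40.0),
    "right_eye": (70.0, 40.0),
    "nose": (50.0, 60.0),
    "left_ear": (10.0, 45.0),
    "right_ear": (90.0, 45.0),
}

def distance(a, b):
    """Euclidean distance between two landmark points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def face_template(landmarks):
    """Return a tuple of pairwise landmark distances, in a fixed order."""
    names = sorted(landmarks)
    return tuple(
        round(distance(landmarks[n1], landmarks[n2]), 2)
        for i, n1 in enumerate(names)
        for n2 in names[i + 1:]
    )

template = face_template(LANDMARKS)
print(template)  # a numeric signature that can be compared across photos
```

Because the template is just a vector of numbers, two photos of the same face yield similar vectors, which is what lets a service match a newly uploaded photo against stored templates to generate a tag suggestion.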

Taking it a step further, earlier this month Facebook introduced “Moments,” a new app that syncs photos stored on a user’s phone based, in part, on which friends are depicted in the photos.

Other uses of the technology are not as familiar.  Facial recognition technology and “faceprints” have been used, for example, in retailer anti-fraud programs, for in-store analytics to determine an individual’s age range and gender to deliver targeted advertising, to assess viewers’ engagement in a videogame or movie or interest in a retail store display, to facilitate online images searches, and to develop virtual eyeglass fitting or cosmetics tools.  While these capabilities may be quite useful, many users consider the technology to be uncomfortably creepy.

The technology has been the focus of privacy concerns for quite a while. In October 2012, the Federal Trade Commission (the "FTC") issued a report, "Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies."  The FTC staff made a number of specific recommendations with respect to the use of the technology, including, without limitation, providing clear notice about collection and use, giving users opt-out rights, and obtaining express consent before using a consumer's image in a materially different manner than originally collected.

Individual states have also legislated in this area. For example, Texas and Illinois have existing biometric privacy statutes that may apply to the collection of facial templates for online photo tagging functions.  Illinois’s “Biometric Information Privacy Act,” (“BIPA”) 740 ILCS 14/1, enacted in 2008, provides, among other things, that a company cannot “collect, capture, purchase, receive through trade, or otherwise obtain a person’s… biometric information, unless it first: (1) informs the subject … in writing that a biometric identifier or biometric information is being collected or stored; (2) informs the subject … in writing of the specific purpose and length of term for which a biometric identifier or biometric information is being collected, stored, and used; and (3) receives a written release executed by the subject of the biometric identifier or biometric information.” 740 ILCS 14/15(b).  The Texas statute, Tex. Bus. & Com. Code Ann. §503.001(c), enacted in 2007, offers similar protections.

Neither statute has been interpreted by a court with respect to modern facial recognition tools, but that may change in the coming months.  In April of this year, a putative class action complaint was filed in Illinois state court against Facebook over a tagging feature that rolled out in 2010 and has been used to create what the plaintiffs term "the world's largest privately held database of consumer biometrics data."  See Licata v. Facebook, Inc., No. 2015CH05427 (Ill. Cir. Ct. Cook Cty. filed Apr. 1, 2015).  The plaintiffs brought claims against Facebook for allegedly collecting and storing biometric data without adequate notice and consent and failing to provide a retention schedule and guidelines for permanent deletion, or otherwise comply with BIPA with respect to Illinois users.  The complaint seeks an injunction and statutory damages for each violation (note: BIPA provides for $1,000 in statutory damages for each negligent violation, and $5,000 for intentional violations, plus attorney's fees).   Facebook counters that users can turn off tag suggestion, which deletes a facial recognition template.

A month later, a similar suit alleging violations of BIPA was filed against Facebook.  (See Patel v. Facebook, Inc., No. 15-04265 (N.D. Ill. filed May 14, 2015)).  Moreover, last week, a putative class action suit was brought against the photo storage service Shutterfly in Illinois federal court alleging violations of BIPA for collecting faceprints from user-uploaded photos in conjunction with a tag suggestion feature. (Norberg v. Shutterfly, Inc., No. 15-05351 (N.D. Ill. filed June 17, 2015)).

Based on the NTIA discussions, the FTC report, and the issues raised in the Illinois litigations, it is clear that there are numerous considerations for companies to think about before rolling out facial recognition features, including:

  • How should the concepts of transparency and consent apply to the use of facial recognition tools?
  • What level of control should individuals have over when and how a faceprint is stored and used?
  • Should companies obtain prior affirmative consent before collecting such data, as most apps do before collecting geolocation data?
  • Does facial recognition technology clash with a consumer’s rights in a way that “manual” tagging of photographs by social media users would not?
  • How should a company’s policy regarding facial recognition deal with non-members of a service or anonymous members of the public captured in a photo?
  • What level of transparency is appropriate when a company combines facial profiles with third-party data for analytics or secondary uses?
  • How should a company address retention periods for faceprints?
  • What should happen to the faceprint of a user who unsubscribes from a service?
  • Should faceprint data be the subject of heightened data security (e.g. encryption)?
  • Should additional restrictions be placed on the use of commercial facial recognition technology by teens?
  • Would a standard electronic contracting process, whereby a user consents to a service’s terms of service and privacy policy via a clickwrap agreement, constitute appropriate notice and consent for the use of facial recognition? Or must there be a distinct, written notice and consent process for facial recognition data collection practices as well as a formal, posted facial recognition policy?

These questions and more are yet to be answered. Companies that are planning on using facial recognition technology – whether in mobile apps, online, or even in offline applications – should be aware of the emerging legal landscape in this area.  More litigation in this area is likely.

Meeting of the Minds at the Inbox: Some Pitfalls of Contracting via Email

Posted in Contracts, E-mail

We have had a number of clients run into issues relating to whether or not an email exchange constituted a binding contract.  This issue comes up regularly when informality creeps into negotiations conducted electronically, bringing up the age-old problem that has likely been argued before judges for centuries: one party thinks “we have a deal,” the other thinks “we’re still negotiating.”  While email can be useful in many contract negotiations, care should be taken to avoid having to run to court to ask a judge to interpret an agreement or enforce a so-called “done deal.”

With limited exceptions, under the federal electronic signature law, 15 U.S.C. § 7001, and, as adopted by the vast majority of states, the Uniform Electronic Transactions Act (UETA), most signatures, contracts and other records relating to any transaction may not be denied legal effect solely because they are in electronic form.  Still, a signed email message does not necessarily evidence intent to electronically sign the document attached to the email. Whether a party has electronically signed an attached document depends on the circumstances, including whether the attached document was intended to be a draft or final version.

There have been a number of recent cases on this issue, which I’ve discussed further below, but the bottom-line, practical takeaways are as follows:

  • Consider an express statement in the agreement that performance is not a means of acceptance and that the agreement must be signed by both parties to be effective.
  • If you do not believe the agreement is final and accepted, do not begin to perform under the agreement unless there is an express written (email is ok) agreement by the parties that performance has begun but the contract is still being negotiated.
  • When exchanging emails relating to an agreement, be prudent when using certain loaded terms such as “offer,” “accept,” “amendment,” “promise,” or “signed,” or phrases of assent (e.g., “I’m ok with that”, “Agreed”) without limitations or a clear explanation of intent.
  • Terms of proposed agreements communicated via email should explicitly state that they are subject to any relevant conditions, as well as to the further review and comment of the sender’s clients and/or colleagues. To avoid ambiguity, to the extent finalizing an agreement is subject to a contingency (e.g., upper management or outside approval, or a separate side document signed by both parties), be clear about that in any email exchange that contains near-final versions of the agreement.
  • Parties wishing to close the deal with an attachment should mutually confirm their intent and verify assent when the terms of a final agreement come together.
  • While it is good practice to include standard email disclaimers stating that the terms of an email are not an offer capable of acceptance and do not evidence an intention to enter into any contract, do not rely on such a disclaimer to prevent an email exchange – one which otherwise has all the indicia of a final agreement – from being considered binding.
  • Exercise extreme caution when using text messaging for contract negotiations – the increased informality, as well as the inability to attach a final document to a text, is likely to lead to disputes down the road.

While courts have clearly become more comfortable with today’s more informal, electronic methods of contracting, judges still examine the parties’ communications closely to see if an enforceable agreement has been reached.

Now, for those who are really interested in this subject and want more, here comes the case discussion….

Last month, a Washington D.C. district court jury found in favor of MSNBC host Ed Schultz in a lawsuit filed by a former business partner who had claimed that the parties had formed a partnership to develop a television show and share in the profits based, in part, upon a series of emails that purported to form a binding agreement.  See Queen v. Schultz, 2014 WL 1328338 (D.C. Cir. Apr. 4, 2014), on remand, No. 11-00871 (D. D.C. Jury Verdict May 18, 2015).  And, earlier last month, a New York appellate court ruled that emails between a decedent and a co-owner of real property did not evince an intent of the co-owner to transfer the parcel to the decedent’s sole ownership because, even though the parties discussed the future intention to do so, the material term of consideration for such a transfer was fatally absent.  See Matter of Wyman, 2015 NY Slip Op 03908 (N.Y. App. Div., 3rd Dept. May 7, 2015).  Another recent example includes Tindall Corp. v. Mondelēz Int’l, Inc., No. 14-05196 (N.D. Ill. Mar. 3, 2015), where a court, on a motion to dismiss, had to decide whether a series of ambiguous emails that contained detailed proposals and were a follow-up to multiple communications and meetings over the course of a year created a binding contract or rather, whether this was an example of fizzled negotiations, indefinite promises and unreasonable reliance.  The court rejected the defendant’s argument that the parties anticipated execution of a memorialized contract in the future and that it “strains belief that these two companies would contract in such a cavalier manner,” particularly since the speed of the project may have required that formalities be overlooked.

Enforceability of Electronic Signatures

A Minnesota appellate court decision from last year highlights that, unless circumstances indicate otherwise, parties cannot assume that an agreement attached to an email memorializing discussions is final, absent a signature by both parties.  See SN4, LLC v. Anchor Bank, fsb, 848 N.W.2d 559 (Minn. App. 2014) (unpublished). The court found that although the bank representatives intended to electronically sign their e-mail messages, the evidence was insufficient to establish that they intended to electronically sign the attached agreement or that the attached document was intended to be a final version (“Can you confirm that the agreements with [the bank] are satisfactory[?] If so, can you have your client sign and I will have my client sign.”).

A California decision brings up similar contracting issues. In JBB Investment Partners, Ltd. v. Fair, 182 Cal. Rptr. 974 (Cal. App. 2014), the appellate court reversed a trial court’s finding that a party that entered his name at the end of an email agreeing to settlement terms had electronically “signed” off on the deal under California law. The facts in JBB Investment offered a close case – with the defendant sending multiple emails and text messages with replies such as “We clearly have an agreement” and that he “agree[d] with [plaintiff’s counsel’s] terms” – yet the court found it unclear whether that agreement was merely a rough proposal or an enforceable final settlement.  It was clear that the emailed offer was conditioned on a formal writing (“[t]he Settlement paperwork would be drafted . . .”).

Performance as Acceptance

Another pitfall of contracting via email occurs when parties begin performance prior to executing the governing agreement – under the assumption that a formal deal “will get done.”  If the draft agreement contains terms that are unfavorable to a party and that party performs, but the agreement is never executed, that party may have to live with those unfavorable terms. In DC Media Capital, LLC v. Imagine Fulfillment Services, LLC, 2013 WL 46652 (Cal. App. Aug. 30, 2013) (unpublished), a California appellate court held that a contract electronically sent by a customer to a vendor and not signed by either party was nevertheless enforceable where there was performance by the offeree.  The court held that the defendant’s performance constituted acceptance of the contract, particularly because the agreement did not specifically preclude acceptance by performance or expressly require a signature to be effective.