
New Media and Technology Law Blog

Facial Recognition Technology: Social Media and Beyond, an Emerging Concern

Posted in Privacy, Social Media, Technology

This week, a major self-regulatory initiative intended to address privacy concerns associated with facial recognition technology hit a significant stumbling block.  Nine consumer advocacy groups withdrew from the National Telecommunications and Information Administration (NTIA) initiative due to a lack of consensus on a minimum standard of consent.  The NTIA initiative had been ongoing since early 2014.  Consumer advocacy and civil liberties groups were participating with industry trade groups in NTIA-sponsored meetings intended to create guidelines on the fair commercial use of facial recognition technology.  Advocates and industry groups were attempting to develop a voluntary, enforceable code of conduct for the use of facial recognition technology and generally define the contours of transparency and informed consent.

The deadlock and the withdrawal of key participants highlight how difficult resolving these issues will be.

Facial recognition technology is a powerful tool with many potential uses.  Some are relatively well-known, particularly those that identify people in online social networks and photo storage services. For example, Facebook and others have for years employed “tag suggestion” tools that scan uploaded photos, identify network friends, and suggest that the member “tag” them. How does the technology work? Facebook explains: “We currently use facial recognition software…to calculate a unique number (“template”) based on someone’s facial features, like the distance between the eyes, nose and ears. This template is based on your profile pictures and photos you’ve been tagged in on Facebook. We use these templates to help you tag photos by suggesting tags of your friends.”
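
To make the idea of a “template” concrete, the toy sketch below (in Python, with purely hypothetical landmark coordinates) builds a scale-invariant vector of distances between facial landmarks and compares two photos – a vastly simplified stand-in for the proprietary models that services like Facebook actually use.

```python
# Toy illustration of a facial "template": a vector of normalized distances between
# landmarks (eyes, nose, ears). Real systems use far more sophisticated models; the
# landmark coordinates below are hypothetical.
import numpy as np

PAIRS = [("left_eye", "right_eye"), ("left_eye", "nose"), ("right_eye", "nose"),
         ("nose", "left_ear"), ("nose", "right_ear"), ("left_ear", "right_ear")]

def build_template(landmarks: dict) -> np.ndarray:
    """Turn (x, y) landmark coordinates into a scale-invariant feature vector."""
    dists = np.array([np.linalg.norm(np.subtract(landmarks[a], landmarks[b]))
                      for a, b in PAIRS])
    return dists / dists[0]  # normalize by inter-eye distance so image scale drops out

def likely_same_person(t1: np.ndarray, t2: np.ndarray, threshold: float = 0.1) -> bool:
    """Crude similarity test between two templates."""
    return float(np.linalg.norm(t1 - t2)) < threshold

# Hypothetical landmarks from two photos (the second is the first at twice the scale)
photo_a = {"left_eye": (120, 95), "right_eye": (180, 95), "nose": (150, 140),
           "left_ear": (95, 120), "right_ear": (205, 120)}
photo_b = {k: (2 * x, 2 * y) for k, (x, y) in photo_a.items()}

print(likely_same_person(build_template(photo_a), build_template(photo_b)))  # True
```

The legal questions discussed below arise precisely because such a template, once computed, is a durable biometric identifier tied to a specific person.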

Taking it a step further, earlier this month Facebook introduced “Moments,” a new app that syncs photos stored on a user’s phone based, in part, on which friends are depicted in the photos.

Other uses of the technology are not as familiar.  Facial recognition technology and “faceprints” have been used, for example, in retailer anti-fraud programs, for in-store analytics to determine an individual’s age range and gender to deliver targeted advertising, to assess viewers’ engagement in a videogame or movie or interest in a retail store display, to facilitate online image searches, and to develop virtual eyeglass fitting or cosmetics tools.  While these capabilities may be quite useful, many users consider the technology to be uncomfortably creepy.

The technology has been the focus of privacy concerns for quite a while. In October 2012, the Federal Trade Commission (the “FTC”) issued a report, “Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies.”  The FTC staff made a number of specific recommendations with respect to the use of the technology, including, without limitation, providing clear notice about collection and use, giving users opt-out rights, and obtaining express consent before using a consumer’s image in a materially different manner than originally collected.

Individual states have also legislated in this area. For example, Texas and Illinois have existing biometric privacy statutes that may apply to the collection of facial templates for online photo tagging functions.  Illinois’s “Biometric Information Privacy Act,” (“BIPA”) 740 ILCS 14/1, enacted in 2008, provides, among other things, that a company cannot “collect, capture, purchase, receive through trade, or otherwise obtain a person’s… biometric information, unless it first: (1) informs the subject … in writing that a biometric identifier or biometric information is being collected or stored; (2) informs the subject … in writing of the specific purpose and length of term for which a biometric identifier or biometric information is being collected, stored, and used; and (3) receives a written release executed by the subject of the biometric identifier or biometric information.” 740 ILCS 14/15(b).  The Texas statute, Tex. Bus. & Com. Code Ann. §503.001(c), enacted in 2007, offers similar protections.

Neither statute has been interpreted by a court with respect to modern facial recognition tools, but that may change in the coming months.  In April of this year, a putative class action complaint was filed in Illinois state court against Facebook over a tagging feature that rolled out in 2010 and has been used to create what the plaintiffs term “the world’s largest privately held database of consumer biometrics data.”  See Licata v. Facebook, Inc., No. 2015CH05427 (Ill. Cir. Ct. Cook Cty. filed Apr. 1, 2015).  The plaintiffs brought claims against Facebook for allegedly collecting and storing biometric data without adequate notice and consent, failing to provide a retention schedule and guidelines for permanent deletion, and otherwise failing to comply with BIPA with respect to Illinois users.  The complaint seeks an injunction and statutory damages for each violation (note: BIPA provides for $1,000 in statutory damages for each negligent violation, and $5,000 for intentional violations, plus attorney’s fees).  Facebook counters that users can turn off tag suggestions, which deletes the user’s facial recognition template.

A month later, a similar suit alleging violations of BIPA was filed against Facebook.  (See Patel v. Facebook, Inc., No. 15-04265 (N.D. Ill. filed May 14, 2015)).  Moreover, last week, a putative class action suit was brought against the photo storage service Shutterfly in Illinois federal court alleging violations of BIPA for collecting faceprints from user-uploaded photos in conjunction with a tag suggestion feature. (Norberg v. Shutterfly, Inc., No. 15-05351 (N.D. Ill. filed June 17, 2015)).

Based on the NTIA discussions, the FTC report, and the issues raised in the Illinois litigations, it is clear that there are numerous considerations for companies to think about before rolling out facial recognition features, including:

  • How should the concepts of transparency and consent apply to the use of facial recognition tools?
  • What level of control should individuals have over when and how a faceprint is stored and used?
  • Should companies obtain prior affirmative consent before collecting such data, as most apps do before collecting geolocation data?
  • Does facial recognition technology clash with a consumer’s rights in a way that “manual” tagging of photographs by social media users would not?
  • How should a company’s policy regarding facial recognition deal with non-members of a service or anonymous members of the public captured in a photo?
  • What level of transparency is appropriate when a company combines facial profiles with third-party data for analytics or secondary uses?
  • How should a company address retention periods for faceprints?
  • What should happen to the faceprint of a user who unsubscribes from a service?
  • Should faceprint data be the subject of heightened data security (e.g., encryption, as sketched in the example following this list)?
  • Should additional restrictions be placed on the use of commercial facial recognition technology by teens?
  • Would a standard electronic contracting process, whereby a user consents to a service’s terms of service and privacy policy via a clickwrap agreement, constitute appropriate notice and consent for the use of facial recognition? Or must there be a distinct, written notice and consent process for facial recognition data collection practices as well as a formal, posted facial recognition policy?
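
On the data-security question above, here is a minimal sketch of what encrypting a stored faceprint template at rest might look like, assuming the third-party Python cryptography package; the key handling and template values are illustrative only, not a recommendation of any particular architecture.

```python
# Hedged sketch of encrypting a faceprint template at rest, using the third-party
# "cryptography" package (pip install cryptography). Key management here is
# deliberately naive; a production system would use a key-management service.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in practice, fetched from a key-management service
cipher = Fernet(key)

template = [1.0, 0.74, 0.77, 0.61, 0.60, 1.83]   # hypothetical faceprint feature vector
ciphertext = cipher.encrypt(json.dumps(template).encode())

# Only the ciphertext is written to storage; the plaintext template never is.
print(ciphertext[:40], b"...")

restored = json.loads(cipher.decrypt(ciphertext))
assert restored == template
```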

These questions and more are yet to be answered. Companies that are planning on using facial recognition technology – whether in mobile apps, online, or even in offline applications – should be aware of the emerging legal landscape.  More litigation in this area is likely.

Meeting of the Minds at the Inbox: Some Pitfalls of Contracting via Email

Posted in Contracts, E-mail

We have had a number of clients run into issues relating to whether or not an email exchange constituted a binding contract.  This issue comes up regularly when informality creeps into negotiations conducted electronically, bringing up the age-old problem that has likely been argued before judges for centuries: one party thinks “we have a deal,” the other thinks “we’re still negotiating.”  While email can be useful in many contract negotiations, care should be taken to avoid having to run to court to ask a judge to interpret an agreement or enforce a so-called “done deal.”

With limited exceptions, under the federal electronic signature law, 15 U.S.C. § 7001, and, as adopted by the vast majority of states, the Uniform Electronic Transactions Act (UETA), most signatures, contracts and other records relating to any transaction may not be denied legal effect solely because they are in electronic form.  Still, a signed email message does not necessarily evidence intent to electronically sign the document attached to the email. Whether a party has electronically signed an attached document depends on the circumstances, including whether the attached document was intended to be a draft or final version.

There have been a number of recent cases on this issue, which I’ve discussed further below, but the bottom-line, practical takeaways are as follows:

  • Consider an express statement in the agreement that performance is not a means of acceptance and that the agreement must be signed by both parties to be effective.
  • If you do not believe the agreement is final and accepted, do not begin to perform under the agreement unless there is an express written (email is ok) agreement by the parties that performance has begun but the contract is still being negotiated.
  • When exchanging emails relating to an agreement, be prudent when using certain loaded terms such as “offer,” “accept,” “amendment,” “promise,” or “signed” or  phrases of assent (e.g., “I’m ok with that”, “Agreed”) without limitations or a clear explanation of intent.
  • Terms of proposed agreements communicated via email should explicitly state that they are subject to any relevant conditions, as well as to the further review and comment of the sender’s clients and/or colleagues. To avoid ambiguity, to the extent finalizing an agreement is subject to a contingency (e.g., upper management or outside approval, or a separate side document signed by both parties), be clear about that in any email exchange that contains near-final versions of the agreement.
  • Parties wishing to close the deal with an attachment should mutually confirm their intent and verify assent when the terms of a final agreement come together.
  • While it is good practice to include standard email disclaimers that say that the terms of an email are not an offer capable of acceptance and do not evidence an intention to enter into any contract, do not rely on this disclaimer to prevent an email exchange – which otherwise has all the indicia of a final agreement – from being considered binding.
  • Exercise extreme caution when using text messaging for contract negotiations – the increased informality, as well as the inability to attach a final document to a text, is likely to lead to disputes down the road.

While courts have clearly become more comfortable with today’s more informal, electronic methods of contracting, judges still examine the parties’ communications closely to see if an enforceable agreement has been reached.

Now, for those who are really interested in this subject and want more, here comes the case discussion….

Last month, a Washington, D.C. district court jury found in favor of MSNBC host Ed Schultz in a lawsuit filed by a former business partner who had claimed that the parties had formed a partnership to develop a television show and share in the profits based, in part, upon a series of emails that purported to form a binding agreement.  See Queen v. Schultz, 2014 WL 1328338 (D.C. Cir. Apr. 4, 2014), on remand, No. 11-00871 (D. D.C. Jury Verdict May 18, 2015).  And, earlier last month, a New York appellate court ruled that emails between a decedent and a co-owner of real property did not evince an intent of the co-owner to transfer the parcel to the decedent’s sole ownership because, even though the parties discussed the future intention to do so, the material term of consideration for such a transfer was fatally absent.  See Matter of Wyman, 2015 NY Slip Op 03908 (N.Y. App. Div., 3rd Dept. May 7, 2015).  Another recent example is Tindall Corp. v. Mondelēz Int’l, Inc., No. 14-05196 (N.D. Ill. Mar. 3, 2015), where a court, on a motion to dismiss, had to decide whether a series of ambiguous emails that contained detailed proposals and were a follow-up to multiple communications and meetings over the course of a year created a binding contract or was instead an example of fizzled negotiations, indefinite promises and unreasonable reliance.  The court rejected the defendant’s argument that the parties anticipated execution of a memorialized contract in the future and that it “strains belief that these two companies would contract in such a cavalier manner,” particularly since the speed of the project may have required that formalities be overlooked.

Enforceability of Electronic Signatures

A Minnesota appellate court decision from last year highlights that, unless circumstances indicate otherwise, parties cannot assume that an agreement attached to an email memorializing discussions is final, absent a signature by both parties.  See SN4, LLC v. Anchor Bank, fsb, 848 N.W.2d 559 (Minn. App. 2014) (unpublished). The court found that although the bank representatives intended to electronically sign their e-mail messages, the evidence was insufficient to establish that they intended to electronically sign the attached agreement or that the attached document was intended to be a final version (“Can you confirm that the agreements with [the bank] are satisfactory[?] If so, can you have your client sign and I will have my client sign.”).

A California decision brings up similar contracting issues. In JBB Investment Partners, Ltd. v. Fair, 182 Cal. Rptr. 974 (Cal. App. 2014), the appellate court reversed a trial court’s finding that a party who entered his name at the end of an email agreeing to settlement terms had electronically “signed” off on the deal under California law. The facts in JBB Investment presented a close case – with the defendant sending multiple emails and text messages with replies such as “We clearly have an agreement” and that he “agree[d] with [plaintiff’s counsel’s] terms” – yet the court found it unclear whether that agreement was merely a rough proposal or an enforceable final settlement.  It was clear, however, that the emailed offer was conditioned on a formal writing (“[t]he Settlement paperwork would be drafted . . .”).

Performance as Acceptance

Another pitfall of contracting via email occurs when parties begin performance prior to executing the governing agreement – under the assumption that a formal deal “will get done.”  If the draft agreement contains terms that are unfavorable to a party and that party performs, but the agreement is never executed, that party may have to live with those unfavorable terms. In DC Media Capital, LLC v. Imagine Fulfillment Services, LLC, 2013 WL 46652 (Cal. App. Aug. 30, 2013) (unpublished), a California appellate court held that a contract electronically sent by a customer to a vendor and not signed by either party was nevertheless enforceable where there was performance by the offeree.  The court held that the defendant’s performance constituted acceptance of the contract, particularly because the agreement did not specifically preclude acceptance by performance or expressly require a signature to be effective.

New York Releases Final BitLicense Rules

Posted in Digital Currency, Regulatory

Readers of this blog will know that we have been following the recent legal developments relating to bitcoin and other virtual currency systems [also here and here].  Yesterday, in a significant development reflecting the general maturation of virtual currencies as a recognized payment system, Benjamin M. Lawsky, Superintendent of the New York State Dept. of Financial Services (NYSDFS), announced New York’s final “BitLicense” rules.

The NYSDFS is the first state agency to release a comprehensive framework for regulating digital currency-related businesses.  The final rules are the culmination of a two-year period of drafting and public comment, with the initial proposed rules having been published last July.  The keystones of the regulations are consumer protections, anti-money laundering compliance and cybersecurity rules that are designed to place appropriate “guardrails” around the industry without “stifling beneficial innovation.”

In his remarks at the BITS Emerging Payments Forum in Washington, D.C., Lawsky stated that the agency had listened to the industry’s concerns that called for a more hands-off approach, and made a few minor changes from the prior drafts.

These changes include the following:

  • Companies will not need prior approval for standard software or app updates – only for material changes to their products or business models.
  • The agency will not be regulating software developers – only financial intermediaries. Under the regulations, the definition of “Virtual Currency Business Activity” generally covers entities that act as: (1) virtual currency transmitters; (2) digital wallet services; (3) those “buying and selling Virtual Currency as a customer business”; (4) virtual currency exchange services; or (5) controllers or issuers of a virtual currency.  The regulations explicitly state that “the development and dissemination of software in and of itself does not constitute Virtual Currency Business Activity.”
  • Companies will not be required to file a duplicative set of application submissions if they want both a BitLicense and a money transmitter license. Companies will be able to work with the agency to have a “one-stop” application submission.
  • Companies that already file suspicious activity reports (also called “SARs”) with federal regulators such as FinCEN will not have to file a duplicate set of SARs with the NYSDFS.
  • Companies won’t need prior approval from the NYSDFS for every new round of venture capital funding – a company would only need prior approval if the new investor wants to “direct the management and policies of the firm” as opposed to merely being a “passive investor.”

While Mr. Lawsky has announced his departure from the NYSDFS, he has fulfilled his promise to take steps towards bringing some regulatory oversight to bitcoin and other digital currencies. While the future of these currencies is still a matter of speculation, it will be interesting to see whether other states follow New York’s lead. We will keep you updated.

Upcoming Summer Blockbuster: Impending Shortage of IPv4 IP Addresses

Posted in Internet, Technology

There is an interesting article in today’s Wall Street Journal about the impending shortage of IPv4 IP addresses (forcing tech companies and cloud providers to scramble to secure the remaining stock for U.S. users) and the IPv6 solution.

Hate to say “I told you so” but…see our prior coverage here and here.

But what should you be doing now? To the extent you are involved in negotiating internet-related infrastructure transactions, you should be thinking about adding IPv6-capability provisions, including, as appropriate, commitments regarding upgrades.  There is a real cost to becoming IPv6-compatible, as companies have to purchase new network switches and routers. According to sources quoted in the article, only 9% of the Internet community has made that transition so far, and such a company-wide migration may cost as much as 7% of a company’s annual IT budget. Thus, since this can be a significant financial issue, best practice would be to capture any understandings in the transaction documents.
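
For software (as opposed to network hardware), IPv6 readiness largely means handling both address families gracefully. The short sketch below uses only Python’s standard ipaddress module and placeholder documentation addresses to show the difference between the two address spaces.

```python
# Minimal sketch of dual-stack address handling using only the standard library.
# Hardware readiness (switches, routers) is a separate question.
import ipaddress

def ip_family(addr: str) -> str:
    ip = ipaddress.ip_address(addr)   # accepts either family; raises ValueError otherwise
    return f"{addr} is IPv{ip.version}"

print(ip_family("192.0.2.10"))    # IPv4 documentation address
print(ip_family("2001:db8::1"))   # IPv6 documentation address

# The scale of the two address spaces:
print(ipaddress.ip_network("0.0.0.0/0").num_addresses)  # 2**32, about 4.3 billion
print(ipaddress.ip_network("::/0").num_addresses)       # 2**128
```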

Who Exactly Is a ‘User’ under the DMCA Safe Harbor?

Posted in Copyright, Online Content

The DMCA was enacted in 1998 to preserve “strong incentives for service providers and copyright owners to cooperate to detect and deal with copyright infringements that take place in a digital networked environment.”  As part of this implicit bargain, Title II of the DMCA offers safe harbors for qualifying service providers to limit their liability for claims of copyright infringement.  The Section 512(c) safe harbor protects storage providers (and has been the subject of much litigation over the past decade).

Specifically, Section 512(c) applies to infringements that occur “by reason of the storage at the direction of a user of material” on a service provider’s system or network.  The statute does not define “user” and it seems, until recently, no court had interpreted the term.  Is a “user” simply anyone who uses an online storage platform, or should the definition exclude those persons who may have an independent contractor or similar relationship with the service provider?  In the typical situation, a website or app might offer to host content and facilitate online sharing and viewing of uploaded photos and videos.  But what if the online site only allows selected applicants to post content on the site on specific topics and offers financial incentives based upon the number of clicks?

In BWP Media USA, Inc. v. Clarity Digital Group, LLC, No. 14-00467 (D. Colo. Mar. 31, 2015), the court was compelled to take a deep dive into what “storage at the direction of a user” means in the context of new media — specifically, the business model of Examiner.com, a so-called “content farm”-style site that posts articles written by third parties on popular news, entertainment and lifestyle topics.  Unlike the popular video sharing sites, Examiner.com has more involvement in what goes up on its site. The site does not assign stories and asserts that it does not pre-screen or edit the work of its contributors.  However, the site conducts background checks on individuals who apply to be “Examiners” on a chosen topic, requires contributors to comply with its Editorial Requirements, and may decide to feature a contributor’s work outside his or her topic page (such as on the Examiner.com main page). In some circumstances, contributors may be paid based upon an article’s page views, traffic and similar metrics.  In addition, contributors must enter into the “Examiners Independent Contractor Agreement and License” before receiving permission to post to the site.  The agreement holds contributors to certain editorial requirements and obligates them to “regularly create and post new Works to the Web Page and update the Web Page as often as reasonably needed.” The agreement also states that contributors must not include copyrighted content on their pages without permission.

The plaintiffs alleged that the Examiner hosted user-submitted articles that contained their copyrighted photos without an appropriate license.  The defendant admitted that the site displayed 75 of plaintiffs’ copyrighted photographs without permission, but argued that the Examiners who included those images in posted content did so without involvement from Examiner.com staff.  In claiming protection under the DMCA safe harbor, the defendant countered that it had no involvement or specific knowledge of the infringing use of plaintiffs’ photographs before they were posted to the site, that it didn’t pre-screen or approve the articles at issue and that it otherwise complied with the requirements of the DMCA and removed the photos at issue in a timely manner after it received a takedown notice.

The dispute centered on whether the defendant was entitled to protection under the § 512(c) safe harbor.  More specifically, the question became whether the contributors to the Examiner were “users” under § 512(c), that is, were the plaintiffs’ photographs stored on defendant’s system at the direction of the site’s contributors or stored at the direction of defendant.  In rejecting the plaintiffs’ copyright claims, the court found no evidence that a content manager, review team, or any Examiner.com staff had any actual control or influence over the content of the articles containing plaintiffs’ photographs so as to render the use of plaintiffs’ photographs at the direction of defendant.  Moreover, the court stated that in the absence of evidence that the defendant directed the contributors to upload plaintiffs’ photographs to the site, defendant’s policies (e.g., prohibiting use of infringing content in the user agreement, having a repeat infringer policy and offering contributors free access to a licensed photo library) further supported the conclusion that plaintiffs’ photographs were not stored on the site at the direction of defendant.

Finding that the defendant complied with the remaining requirements of the statute, the court ruled that the defendant was entitled to § 512(c) safe harbor protection, dismissing the case.

The ruling appears to be the first to interpret the term “user” under the § 512(c) safe harbor, and its broad reading is certainly notable for providers and distributors of new media.  Today’s online “storage” providers that rely on the DMCA for legal protection are so much more than simple web hosts.  Indeed, the BWP Media ruling echoes prior, expansive readings of the safe harbor, where courts have avoided rigid interpretations and ruled that certain automated functions that make files accessible – such as transcoding or converting uploaded content to certain file formats, extracting metadata to aid searching or assigning permalinks – fell within the ordinary meaning of “storage.”

Emergence of Live Streaming Apps Brings Up Copyright, Privacy, Legal Concerns

Posted in Copyright, Online Content, Social Media

The big fight may be over, but the implications of Mayweather vs. Pacquiao with respect to real-time, one-to-many streaming of video through apps like Meerkat and Periscope are still rippling through the media industry. In short, livestreaming apps allow anyone with a smartphone to effortlessly broadcast live video to social media followers and the wider internet – everything from ordinary life activities (e.g., an individual walking down the street), to live action (e.g., events, protests), to the redistribution of content (e.g., streaming a popular cable show).

This past week, the media reported widespread streaming of the pay-per-view broadcast of the fight by individuals who had paid to view the fight at home. While Periscope’s streams expire after 24 hours and Meerkat does not archive streams, new platforms are being rolled out to support the users of these types of apps, suggesting that this may be a growing phenomenon. Expect the delicate push and pull involving DMCA takedown notices to continue between content owners and these new streaming apps. That said, the intellectual property rights issues associated with real-time streaming through these apps are not straightforward, particularly when dealing with the stream of a live event directly from the venue of that event.

Stay tuned for further developments.

U.S. Dept. of Commerce Releases Multistakeholder Guidance on DMCA Notice and Takedown Best Practices

Posted in Copyright

On Tuesday, the U.S. Dept. of Commerce’s Internet Policy Task Force released guidance containing a list of best practices (and notable “bad” practices), all designed to improve the DMCA’s notice and takedown system for both senders and recipients of notices [See “DMCA Notice-and-Takedown Processes: List of Good, Bad, and Situational Practices”]. The document was developed as part of a multistakeholder forum involving rights holders, creators, service providers and consumer protection advocates, all of whom have an interest in an effective notice and takedown system that balances the interests of both rights holders and online services.

For example, some “Good Practices” for service providers include:

  • Making DMCA takedown and counter-notice mechanisms easy to find and understand.  Web forms should have clearly labeled fields, with help buttons and instructions.
  • Implementing efficient processes for receiving notices that are commensurate with the volume of claims (e.g., allowing multiple URLs to be submitted at one time).
  • Providing confirmation of receipt of a notice or counter-notice that includes a method for referencing past notices in further communications (see the sketch following this list).
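
As a purely illustrative sketch of the two service-provider practices above – accepting multiple URLs per notice and returning a confirmation ID that can be referenced later – the field names and ID format below are assumptions, not anything prescribed by the guidance.

```python
# Hypothetical takedown-notice intake: a submission can list multiple URLs, and the
# sender receives a confirmation ID to cite in later correspondence.
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TakedownNotice:
    sender: str
    copyrighted_work: str
    infringing_urls: list[str]                      # multiple URLs per notice
    confirmation_id: str = field(default_factory=lambda: uuid.uuid4().hex[:12])
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

NOTICES: dict[str, TakedownNotice] = {}

def receive_notice(sender: str, work: str, urls: list[str]) -> str:
    """Record the notice and return a confirmation ID for future reference."""
    notice = TakedownNotice(sender, work, urls)
    NOTICES[notice.confirmation_id] = notice
    return notice.confirmation_id

conf = receive_notice("rights@example.com", "Photo #123",
                      ["https://example.net/post/1", "https://example.net/post/2"])
print(f"Received notice {conf} covering {len(NOTICES[conf].infringing_urls)} URLs")
```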

The document also offers good general practices for Notice Senders:

  • Taking reasonable measures to determine the online location of the infringing material and to “appropriately consider” whether use of the material identified in the notice is unauthorized.
  • When using automated tools, conducting a human review of the site where the notices will be directed, as well as performing periodic spot checks to evaluate whether the automated search parameters are returning the expected results.

Lastly, the document offers guidance on “Situational Practices,” such as Trusted Submitter Programs, which provide efficiencies for rights holders who have a track record of submitting accurate notices.

At only 7 pages, the Dept. of Commerce document is certainly worth the read for both copyright holders and service providers.  Improved notice and takedown practices can result in streamlined procedures, a better reputation in the online community, and fewer disagreements over posted content (which will decrease the likelihood of litigation over copyright infringement or wrongful takedown notices).

FCC Adopts Net Neutrality Rules, Reclassifies Broadband Access under Title II

Posted in Internet, Regulatory

After nearly 4 million public comments, and months of vigorous public, industry, and Congressional debate, the FCC, by a 3-2 vote, approved revised net neutrality rules to “protect the Open Internet.”  As signaled by the Chairman’s statements in the lead-up to the vote, the FCC’s Open Internet Order reclassifies broadband internet access as a “telecommunications service” (or common carrier service) under Title II of the Communications Act.  The new rules cover both wired and wireless broadband.

The principal aspects of the Open Internet Order are:

  • No blocking lawful content.
  • No throttling lawful content.
  • No discrimination against lawful content.
  • No paid prioritization.
  • New FCC authority to examine interconnection agreements.
  • Transparency requirements regarding rates, data caps, and network management practices.
  • Reasonable network management permitted to manage the technical and engineering aspects of a provider’s broadband networks.
  • The Order cites the legal foundation for the rules as both Title II of the Communications Act and Section 706 of the Telecommunications Act of 1996.
  • Forbearance from many Title II regulations, including rate regulation, tariffs, or network unbundling.

A clearer picture of the net neutrality rules will emerge in the next few weeks when they are officially published in the Federal Register, and presumably take effect 60 days later.  However, based upon multiple reports, broadband providers are expected to challenge the Order in a federal appeals court (the prior 2010 Open Internet Order was challenged and largely overturned in the U.S. Court of Appeals for the D.C. Circuit).  Only time will tell whether broadband providers will win a stay of the regulations or successfully challenge part or all of the Order and the Title II classification, or whether the court will find that the FCC had adequate statutory authority to enact the rules.  Moreover, Congress has also expressed interest in preempting the FCC and passing its own net neutrality legislation.  Stay tuned.  In the meantime, with the growth of over-the-top streaming video programming and other changes in the video and broadband marketplace, it remains to be seen how the new net neutrality rules will affect emerging business models, relationships with content providers, and future investments in technology.

Virginia Court Dismisses Webcaster’s Suit Concerning Geofencing Workaround to Copyright Royalty Obligations

Posted in Copyright, Licensing, Technology

We previously wrote about a Virginia federal magistrate judge’s report recommending dismissal of a declaratory judgment action brought by several radio stations asking the court to rule that webcasts limited in scope via geofencing technology to 150 miles from the site of the transmitter should be exempt from liability for copyright royalties under section 114 of the Copyright Act. This past month, the district court agreed with the magistrate’s report and dismissed the action for lack of a justiciable case or controversy between the radio stations and SoundExchange, an organization designated by the Copyright Royalty Board to collect royalties from broadcasters on behalf of copyright owners.

On February 13, 2015, the district court adopted the Magistrate’s report and dismissed the plaintiffs’ complaint for lack of standing to sue.  (WTGD 105.1 FM v. Sound Exchange, Inc., No. 14-00015 (W.D. Va. Feb. 13, 2015)).  Tracking the reasoning of the Magistrate’s decision, the court ruled that the radio stations’ allegations against SoundExchange were “too speculative, indefinite and hypothetical” and that the suit sought an impermissible advisory opinion about whether the proposed geofenced broadcasts would result in copyright infringement.  The court noted that the radio stations had not demonstrated that using geofencing technology to limit the range of a webcast was “anything more than a pipe dream,” pointing out that the stations had only consulted with experts and had not done anything to “implement the technology or demonstrate that geofenced retransmissions will meet the § 114 exemption.”  The court also noted that the real injury at issue in the dispute is the radio stations’ fear of liability for copyright infringement – an injury not traceable to SoundExchange, a collector and distributor of royalties due under statutory licenses.  In fact, SoundExchange’s lawyers confirmed in open court that SoundExchange (as opposed to the copyright holders themselves) would have no role in asserting copyright claims should the plaintiffs implement geofenced broadcasts in the future.
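
For context on the technology at issue, geofencing a webcast essentially means checking whether a listener’s reported location falls within 150 miles of the transmitter before serving the stream. The sketch below shows that distance check with the haversine formula; the coordinates are hypothetical, and this is not a description of the stations’ planned implementation.

```python
# Hedged sketch of a 150-mile geofence check using the haversine formula.
# A real system would also have to handle unreliable or spoofed location data,
# which is part of why the issue remains contested.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def within_geofence(listener, transmitter, limit_miles=150.0):
    return haversine_miles(*listener, *transmitter) <= limit_miles

transmitter = (38.45, -78.87)                           # hypothetical Virginia transmitter
print(within_geofence((38.03, -78.48), transmitter))    # nearby listener, well inside 150 miles -> True
print(within_geofence((40.71, -74.01), transmitter))    # New York City listener -> False
```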

QVC Sues Shopping App for Web Scraping That Allegedly Triggered Site Outage

Posted in Contracts, Internet, Online Commerce

Operators of public-facing websites are typically concerned about the unauthorized, technology-based extraction of large volumes of information from their sites, often by competitors or others in related businesses.  The practice, usually referred to as screen scraping, web harvesting, crawling or spidering, has been the subject of many questions and a fair amount of litigation over the last decade.

However, despite the litigation in this area, the state of the law on this issue remains somewhat unsettled: neither scrapers looking to access data on public-facing websites nor website operators seeking remedies against scrapers that violate their posted terms of use have very concrete answers as to what is permissible and what is not.

In the latest scraping dispute, the e-commerce site QVC objected to the Pinterest-like shopping aggregator Resultly’s scraping of QVC’s site for real-time pricing data.  In its complaint, QVC claimed that Resultly “excessively crawled” QVC’s retail site (purportedly sending search requests to QVC’s website at rates ranging from 200-300 requests per minute to up to 36,000 requests per minute), causing a crash that wasn’t resolved for two days and resulting in lost sales.  (See QVC Inc. v. Resultly LLC, No. 14-06714 (E.D. Pa. filed Nov. 24, 2014)). The complaint alleges that the defendant disguised its web crawler to mask its source IP address and thus prevented QVC technicians from identifying the source of the requests and quickly repairing the problem.  QVC brought several of the causes of action often alleged in this type of case, including violations of the Computer Fraud and Abuse Act (CFAA), breach of contract (based on QVC’s website terms of use), unjust enrichment, tortious interference with prospective economic advantage, conversion and negligence.  Of these and other causes of action typically alleged in these situations, the breach of contract claim is often the clearest source of a remedy.

This is a particularly interesting scraping case because QVC is seeking damages for the unavailability of its website, which QVC alleges was caused by Resultly.  This is an unusual theory of recovery in these types of cases.  For example, this past summer, LinkedIn settled a scraping dispute with Robocog, the operator of HiringSolved, a “people aggregator” employee recruiting service, over claims that the service employed bots to register false accounts in order to scrape LinkedIn member profile data and thereafter post it to its service without authorization from LinkedIn or its members.  LinkedIn brought various claims under the DMCA and the CFAA, as well as state law claims of trespass and breach of contract, but did not allege that its service was unavailable due to the defendant’s activities.  The parties settled the matter, with Robocog agreeing to pay $40,000, cease crawling LinkedIn’s site and destroy all LinkedIn member data it had collected.  (LinkedIn Corp. v. Robocog Inc., No. 14-00068 (N.D. Cal. Proposed Final Judgment filed July 11, 2014)).

However, in one of the early, yet still leading cases on scraping, eBay, Inc. v. Bidder’s Edge, Inc., 100 F. Supp. 2d 1058 (N.D. Cal. 2000), the district court touched on the foreseeable harm that could result from screen scraping activities, at least when taken in the aggregate.  In the case, the defendant Bidder’s Edge operated an auction aggregation site and accessed eBay’s site about 100,000 times per day, accounting for between 1 and 2 percent of the information requests received by eBay and a slightly smaller percentage of the data transferred by eBay. The court rejected eBay’s claim that it was entitled to injunctive relief because of the defendant’s unauthorized presence alone, or because of the incremental cost the defendant had imposed on operation of the eBay site, but found sufficient proof of threatened harm in the potential for others to imitate the defendant’s activity.

It remains to be seen whether the parties will reach a resolution, whether the court will have a chance to interpret QVC’s claims, and whether QVC can provide sufficient evidence of causation between Resultly’s activities and the website outage.

Companies concerned about scraping should make sure that their website terms of use are clear about what is and isn’t permitted, and that the terms are positioned on the site to support their enforceability. In addition, website owners should ensure they are using “robots.txt,” crawl delays and other technical means to communicate their intentions regarding scraping.  Companies that are interested in scraping should evaluate the terms at issue and other circumstances to understand the limitations in this area.
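
On the technical side of that last point, a scraper that wants to respect a site’s stated preferences can consult robots.txt and honor any declared crawl delay before issuing requests. The sketch below uses only Python’s standard library and placeholder URLs; honoring robots.txt is a courtesy mechanism and says nothing about whether particular scraping is contractually or legally permitted.

```python
# Minimal sketch of a "polite" crawler check using only the standard library:
# consult robots.txt, honor any Crawl-delay directive, and skip disallowed paths.
# URLs are placeholders; this illustrates technical courtesy, not legal compliance.
import time
import urllib.robotparser

ROBOTS_URL = "https://www.example.com/robots.txt"
USER_AGENT = "example-price-bot"

rp = urllib.robotparser.RobotFileParser()
rp.set_url(ROBOTS_URL)
rp.read()                                   # fetch and parse robots.txt

delay = rp.crawl_delay(USER_AGENT) or 1.0   # fall back to a conservative 1-second delay

for url in ["https://www.example.com/products/1", "https://www.example.com/checkout"]:
    if rp.can_fetch(USER_AGENT, url):
        print(f"Fetching {url} (waiting {delay}s between requests)")
        time.sleep(delay)                   # rate-limit instead of hammering the server
    else:
        print(f"Skipping {url}: disallowed by robots.txt")
```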