2015 and 2016 saw a wave of transactions among cable, satellite, and other linear programming distributors: AT&T and DirecTV, Altice and Suddenlink, and others. That transactional wave is beginning to spawn a litigation wave, principally over the interpretation and application of pre-existing licenses and contracts between networks and distributors. A recent

In Nghiem v. Dick’s Sporting Goods, Inc., No. 16-00097 (C.D. Cal. July 5, 2016), the Central District of California held browsewrap terms to be unenforceable because the hyperlink to the terms was “sandwiched” between two links near the bottom of the third column of links in a website footer. Website developers – and their lawyers – should take note of this case, part of an emerging trend of judicial scrutiny over how browsewrap terms are presented. Courts have, in many instances, refused to enforce browsewraps after finding a lack of user notice and assent. In this case, the most recent example of a court’s close analysis of website design, the court suggests that what has become a fairly standard approach to browsewrap presentment fails to achieve its intended purpose.

For years, craigslist has aggressively used technological and legal methods to prevent unauthorized parties from scraping, linking to, or accessing user postings for their own commercial purposes. In a prior post, we briefly discussed craigslist’s action against a certain aggregator that was scraping content from the craigslist site (despite having

We live in a world that has rapidly redefined and blurred the role of the “creator” of content as compared with the roles of the “publisher” and “distributor” of such content. A recent case touches on some of the important legal issues associated with this change. Among other things, the

Operators of public-facing websites are typically concerned about the unauthorized, technology-based extraction of large volumes of information from their sites, often by competitors or others in related businesses.  The practice, usually referred to as screen scraping, web harvesting, crawling or spidering, has been the subject of many questions and a fair amount of litigation over the last decade.

However, despite the litigation in this area, the state of the law on this issue remains somewhat unsettled: neither scrapers looking to access data on public-facing websites nor website operators seeking remedies against scrapers that violate their posted terms of use have very concrete answers as to what is permissible and what is not.

In the latest scraping dispute, the e-commerce site QVC objected to the Pinterest-like shopping aggregator Resultly’s scraping of QVC’s site for real-time pricing data. In its complaint, QVC claimed that Resultly “excessively crawled” QVC’s retail site (purportedly sending search requests to QVC’s website at rates ranging from 200-300 requests per minute to as many as 36,000 requests per minute), causing a crash that was not resolved for two days and resulted in lost sales. (See QVC Inc. v. Resultly LLC, No. 14-06714 (E.D. Pa. filed Nov. 24, 2014)). The complaint alleges that the defendant disguised its web crawler to mask its source IP address, thus preventing QVC technicians from identifying the source of the requests and quickly repairing the problem. QVC brought several of the causes of action often alleged in this type of case, including violations of the Computer Fraud and Abuse Act (CFAA), breach of contract (based on QVC’s website terms of use), unjust enrichment, tortious interference with prospective economic advantage, conversion, and negligence. Of these and other causes of action typically alleged in these situations, the breach of contract claim is often the clearest source of a remedy.