New Media and Technology Law Blog

Section 230 of the Communications Decency Act: More Lessons to Be Learned

Posted in Internet, Online Content

Courts continue to struggle with the application of CDA immunity to shield service provider defendants from liability in extreme cases. In this case, the Washington Supreme Court, in a 6-3 decision, affirmed the lower court’s decision to allow a suit to proceed against the classified ad service Backpage.com arising from the sexual assault of several minors by adult customers who responded to advertisements placed in the “Escorts” section of the website. (See J.S. v. Village Voice Media Holdings, L.L.C., 2015 WL 5164599 (Wash. Sept. 3, 2015)).  This opinion is notable in that courts have usually (although sometimes reluctantly) resolved these struggles by extending broad immunity, even when the facts presented are unsympathetic or, as the dissent in J.S. noted, “repulsive.”  Indeed, in a case from earlier this year, Backpage was granted CDA immunity in a dispute resting on similar facts. (See Doe No. 1 v. Backpage.com, LLC, No. 14-13870 (D. Mass. May 15, 2015)).  Why was this case decided differently?

The issue in this case turns on whether Backpage merely hosted the advertisements that featured the minor plaintiffs, in which case Backpage is protected by CDA immunity, or whether Backpage also helped develop the content of those advertisements, in which case it is not.  Viewing the plaintiffs’ allegations in a favorable light at this early stage of the litigation, the majority of the court found that the plaintiffs alleged facts that, if proved true, would show that Backpage did more than simply maintain neutral policies prohibiting or limiting certain content, and instead acted as an “information content provider” by surreptitiously guiding pimps on how to post illegal, exploitative ads.

The dissenting justices would have ruled that Backpage qualified for CDA immunity because a person or entity does not qualify as an information content provider merely by facilitating a user’s posting of content, if it is the user alone who selects the content.  In the dissent’s view, the plaintiffs are seeking to hold Backpage liable for its publication of third-party content and harms flowing from the dissemination of that content (i.e., Backpage’s alleged failure to prevent or remove certain posted advertisements), a situation that should fall under the CDA.  The dissent also pointed out that Backpage provides a neutral framework that could be used for proper or improper purposes and does not mandate that users include certain information as a condition of using the website.

What are the lessons learned?  CDA immunity is generally a robust affirmative defense against claims related to the publication of third-party content.  However, as this case illustrates, courts may look for ways to circumvent the CDA in certain unsavory cases, particularly in the early stages of the litigation.  Even if the interpretation of CDA immunity in this case may turn out to be an outlier and the CDA ultimately is deemed to protect Backpage.com, the opinion – issued from a state supreme court – should prompt service providers to take heed.  In light of this decision, website operators that provide forums for user-generated content might reexamine their policies related to the creation of user-generated content and the filtering out or management of illegal content to determine whether the site itself could be reasonably alleged to be “inducing” and perhaps even “developing” any questionable content posted by users.

Clickwrap Agreement Available Only Through Hyperlink Enforceable Under New York Law

Posted in Contracts, Online Commerce

Last week, the Southern District of New York followed a long line of precedent under New York law and upheld the enforceability of a website clickwrap agreement, granting a website operator’s motion to compel arbitration pursuant to a clause contained in the agreement. (Whitt v. Prosper Funding LLC, 2015 WL 4254062 (S.D.N.Y. July 14, 2015)).

In Whitt, the defendant filed a motion to dismiss the plaintiff’s lawsuit and compel arbitration pursuant to the terms of a website clickwrap agreement.  The defendant, which ran a website that accepted loan applications, alleged that the plaintiff had agreed to an arbitration provision in the website’s “borrower registration agreement.”  The website required applicants to click a box adjacent to the following bolded text: “Clicking the box below constitutes your acceptance of . . . the borrower registration agreement.”  The term “borrower registration agreement” in the text was a blue, underlined hyperlink to the actual agreement.  This acknowledgement appeared near the bottom of the webpage, immediately above a “Continue” button, and the applicant could not complete a loan application without clicking the box indicating acceptance of the agreement.

The defendant contended, among other things, that the plaintiff accepted the agreement and its arbitration provision when he applied for a loan through the website. The plaintiff countered that he was not even constructively aware of the terms of the agreement because it was only accessible via hyperlink.

The court held that the plaintiff at least had constructive knowledge of the terms of the agreement and had assented to them.  The court cited a long line of precedent under New York law in which courts had enforced agreements that were accessible only via hyperlink, where the hyperlink was made apparent to the average user during registration.

On the other issue in the case, the question of whether the arbitration clause was unconscionable, the court held in favor of the defendant, finding that the clause was not unconscionable and was enforceable.

Supreme Court Rejects Google’s Appeal in Java API Dispute

Posted in Copyright, Software

On Monday, the Supreme Court denied certiorari in Google’s appeal of the Federal Circuit’s 2014 ruling that the declaring code and the structure, sequence, and organization of 37 Java API packages are entitled to copyright protection. (See Oracle America, Inc. v. Google Inc., 750 F.3d 1339 (Fed. Cir. 2014)). [A detailed discussion of the original lower court ruling can be found here.]

As we explained in a prior post, Google had argued that, contrary to the Federal Circuit’s interpretation, the Copyright Act excludes systems and methods of operation from copyright protection and that the appeals court “erased a fundamental boundary between patent and copyright law.” Tech law watchers were hoping that the Supreme Court might take the case to resolve this important copyright issue, something the court hasn’t examined since its 4-4 vote (Justice Stevens having recused himself) in the 1996 Borland case, which affirmed the circuit court’s ruling regarding the copyrightability of a spreadsheet program’s menu command hierarchy.

With the Supreme Court’s action, the case will be sent back to the district court in San Francisco to determine the viability of Google’s fair use defense.

Facial Recognition Technology: Social Media and Beyond, an Emerging Concern

Posted in Privacy, Social Media, Technology

This week, a major self-regulatory initiative intended to address privacy concerns associated with facial recognition technology hit a significant stumbling block.  Nine consumer advocacy groups withdrew from the National Telecommunications and Information Administration (NTIA) initiative due to a lack of consensus on a minimum standard of consent.  The NTIA initiative had been ongoing since early 2014.  Consumer advocacy and civil liberties groups were participating with industry trade groups in NTIA-sponsored meetings intended to create guidelines on the fair commercial use of facial recognition technology.  Advocates and industry groups were attempting to develop a voluntary, enforceable code of conduct for the use of facial recognition technology and generally define the contours of transparency and informed consent.

The deadlock in the discussions and the withdrawal of key participants highlight how difficult the resolution of these issues will be.

Facial recognition technology is a powerful tool with many potential uses.  Some are relatively well-known, particularly those that identify people in online social networks and photo storage services. For example, Facebook and others have for years employed “tag suggestion” tools that scan uploaded photos, identify network friends, and suggest that the member “tag” them. How does the technology work? Facebook explains: “We currently use facial recognition software…to calculate a unique number (“template”) based on someone’s facial features, like the distance between the eyes, nose and ears. This template is based on your profile pictures and photos you’ve been tagged in on Facebook. We use these templates to help you tag photos by suggesting tags of your friends.”
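For readers curious about the mechanics, Facebook’s description can be sketched in toy form: a “template” is just a numeric vector derived from facial landmark geometry, and tag suggestion reduces to nearest-neighbor matching of those vectors. The landmark points, distance features, and matching threshold below are illustrative assumptions only, not Facebook’s actual algorithm:

```python
import math

def faceprint(landmarks):
    """Toy 'template': pairwise distances between facial landmarks,
    normalized by inter-eye distance so the vector is scale-invariant."""
    names = sorted(landmarks)
    scale = math.dist(landmarks["left_eye"], landmarks["right_eye"])
    return [math.dist(landmarks[a], landmarks[b]) / scale
            for i, a in enumerate(names) for b in names[i + 1:]]

def match(template, known, threshold=0.1):
    """Return the best-matching known identity, or None if nothing is close."""
    best, best_d = None, threshold
    for name, ref in known.items():
        d = math.dist(template, ref)  # Euclidean distance between templates
        if d < best_d:
            best, best_d = name, d
    return best

# Hypothetical landmark coordinates for two photos of the same person
# (the second is shifted and scaled, so raw pixel positions differ).
photo_a = {"left_eye": (30, 40), "right_eye": (70, 40),
           "nose": (50, 60), "mouth": (50, 80)}
photo_b = {"left_eye": (61, 81), "right_eye": (141, 82),
           "nose": (101, 121), "mouth": (101, 161)}

known = {"alice": faceprint(photo_a)}
print(match(faceprint(photo_b), known))  # prints: alice
```

Because the template is a small vector of ratios rather than the photo itself, it survives resizing and cropping — which is also why privacy advocates treat it as a persistent biometric identifier rather than a disposable artifact of a single image.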

Taking it a step further, earlier this month Facebook introduced “Moments,” a new app that syncs photos stored on a user’s phone based, in part, on which friends are depicted in the photos.

Other uses of the technology are not as familiar.  Facial recognition technology and “faceprints” have been used, for example, in retailer anti-fraud programs, for in-store analytics to determine an individual’s age range and gender to deliver targeted advertising, to assess viewers’ engagement in a videogame or movie or interest in a retail store display, to facilitate online image searches, and to develop virtual eyeglass fitting or cosmetics tools.  While these capabilities may be quite useful, many users consider the technology to be uncomfortably creepy.

The technology has been the focus of privacy concerns for quite a while. In October 2012, the Federal Trade Commission (the “FTC”) issued a report, “Facing Facts: Best Practices for Common Uses of Facial Recognition Technologies.”  The FTC staff made a number of specific recommendations with respect to the use of the technology, including, without limitation, providing clear notice about collection and use, giving users opt-out rights, and obtaining express consent before using a consumer’s image in a materially different manner than originally collected.

Individual states have also legislated in this area. For example, Texas and Illinois have existing biometric privacy statutes that may apply to the collection of facial templates for online photo tagging functions.  Illinois’s “Biometric Information Privacy Act,” (“BIPA”) 740 ILCS 14/1, enacted in 2008, provides, among other things, that a company cannot “collect, capture, purchase, receive through trade, or otherwise obtain a person’s… biometric information, unless it first: (1) informs the subject … in writing that a biometric identifier or biometric information is being collected or stored; (2) informs the subject … in writing of the specific purpose and length of term for which a biometric identifier or biometric information is being collected, stored, and used; and (3) receives a written release executed by the subject of the biometric identifier or biometric information.” 740 ILCS 14/15(b).  The Texas statute, Tex. Bus. & Com. Code Ann. §503.001(c), enacted in 2007, offers similar protections.

Neither statute has been interpreted by a court with respect to modern facial recognition tools, but that may change in the coming months.  In April of this year, a putative class action complaint was filed in Illinois state court against Facebook over a tagging feature that rolled out in 2010 and has been used to create what the plaintiffs term “the world’s largest privately held database of consumer biometrics data.”  See Licata v. Facebook, Inc., No. 2015CH05427 (Ill. Cir. Ct. Cook Cty. filed Apr. 1, 2015).  The plaintiffs brought claims against Facebook for allegedly collecting and storing biometric data without adequate notice and consent, and for failing to provide a retention schedule and guidelines for permanent deletion or to otherwise comply with BIPA with respect to Illinois users.  The complaint seeks an injunction and statutory damages for each violation (note: BIPA provides for $1,000 in statutory damages for each negligent violation and $5,000 for each intentional violation, plus attorney’s fees).  Facebook counters that users can turn off tag suggestion, which deletes a facial recognition template.
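Back-of-the-envelope math on those per-violation figures shows why BIPA class actions carry such enormous potential exposure. The class sizes and violation counts below are purely hypothetical:

```python
# BIPA statutory damages per violation (740 ILCS 14/20).
NEGLIGENT, INTENTIONAL = 1_000, 5_000

def bipa_exposure(class_size, violations_per_member, intentional=False):
    """Aggregate statutory-damages exposure, excluding attorney's fees."""
    per_violation = INTENTIONAL if intentional else NEGLIGENT
    return class_size * violations_per_member * per_violation

# Hypothetical: one million Illinois users, one negligent violation each.
print(f"${bipa_exposure(1_000_000, 1):,}")  # prints: $1,000,000,000
```

Even at the negligent-violation rate, a class in the millions puts ten-figure sums in play — which helps explain both the plaintiffs’ bar’s interest and defendants’ vigorous litigation of threshold issues like consent and standing.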

A month later, a similar suit alleging violations of BIPA was filed against Facebook.  (See Patel v. Facebook, Inc., No. 15-04265 (N.D. Ill. filed May 14, 2015)).  Moreover, last week, a putative class action suit was brought against the photo storage service Shutterfly in Illinois federal court alleging violations of BIPA for collecting faceprints from user-uploaded photos in conjunction with a tag suggestion feature. (Norberg v. Shutterfly, Inc., No. 15-05351 (N.D. Ill. filed June 17, 2015)).

Based on the NTIA discussions, the FTC report, and the issues raised in the Illinois litigations, it is clear that there are numerous considerations for companies to think about before rolling out facial recognition features, including:

  • How should the concepts of transparency and consent apply to the use of facial recognition tools?
  • What level of control should individuals have over when and how a faceprint is stored and used?
  • Should companies obtain prior affirmative consent before collecting such data, as most apps do before collecting geolocation data?
  • Does facial recognition technology clash with a consumer’s rights in a way that “manual” tagging of photographs by social media users would not?
  • How should a company’s policy regarding facial recognition deal with non-members of a service or anonymous members of the public captured in a photo?
  • What level of transparency is appropriate when a company combines facial profiles with third-party data for analytics or secondary uses?
  • How should a company address retention periods for faceprints?
  • What should happen to the faceprint of a user who unsubscribes from a service?
  • Should faceprint data be the subject of heightened data security (e.g. encryption)?
  • Should additional restrictions be placed on the use of commercial facial recognition technology by teens?
  • Would a standard electronic contracting process, whereby a user consents to a service’s terms of service and privacy policy via a clickwrap agreement, constitute appropriate notice and consent for the use of facial recognition? Or must there be a distinct, written notice and consent process for facial recognition data collection practices as well as a formal, posted facial recognition policy?

These questions and more are yet to be answered. Companies that are planning on using facial recognition technology – whether in mobile apps, online, or even in offline applications – should be aware of the emerging legal landscape in this area.  More litigation in this area is likely.

Meeting of the Minds at the Inbox: Some Pitfalls of Contracting via Email

Posted in Contracts, E-mail

We have had a number of clients run into issues relating to whether or not an email exchange constituted a binding contract.  This issue comes up regularly when informality creeps into negotiations conducted electronically, bringing up the age-old problem that has likely been argued before judges for centuries: one party thinks “we have a deal,” the other thinks “we’re still negotiating.”  While email can be useful in many contract negotiations, care should be taken to avoid having to run to court to ask a judge to interpret an agreement or enforce a so-called “done deal.”

With limited exceptions, under the federal electronic signature law, 15 U.S.C. § 7001, and the Uniform Electronic Transactions Act (UETA), as adopted by the vast majority of states, most signatures, contracts and other records relating to a transaction may not be denied legal effect solely because they are in electronic form.  Still, a signed email message does not necessarily evidence intent to electronically sign the document attached to the email. Whether a party has electronically signed an attached document depends on the circumstances, including whether the attached document was intended to be a draft or a final version.

There have been a number of recent cases on this issue, which I’ve discussed further below, but the bottom-line, practical takeaways are as follows:

  • Consider an express statement in the agreement that performance is not a means of acceptance and that the agreement must be signed by both parties to be effective.
  • If you do not believe the agreement is final and accepted, do not begin to perform under the agreement unless there is an express written (email is ok) agreement by the parties that performance has begun but the contract is still being negotiated.
  • When exchanging emails relating to an agreement, be prudent when using certain loaded terms such as “offer,” “accept,” “amendment,” “promise,” or “signed” or  phrases of assent (e.g., “I’m ok with that”, “Agreed”) without limitations or a clear explanation of intent.
  • Terms of proposed agreements communicated via email should explicitly state that they are subject to any relevant conditions, as well as to the further review and comment of the sender’s clients and/or colleagues. To avoid ambiguity, to the extent finalizing an agreement is subject to a contingency (e.g., upper management or outside approval, or a separate side document signed by both parties), be clear about that in any email exchange that contains near-final versions of the agreement.
  • Parties wishing to close the deal with an attachment should mutually confirm their intent and verify assent when the terms of a final agreement come together.
  • While it is good practice to include standard email disclaimers that say that the terms of an email are not an offer capable of acceptance and do not evidence an intention to enter into any contract, do not rely on this disclaimer to prevent an email exchange – which otherwise has all the indicia of a final agreement – from being considered binding.
  • Exercise extreme caution when using text messaging for contract negotiations – the increased informality, as well as the inability to attach a final document to a text, is likely to lead to disputes down the road.

While courts have clearly become more comfortable with today’s more informal, electronic methods of contracting, judges still examine the parties’ communications closely to see if an enforceable agreement has been reached.

Now, for those who are really interested in this subject and want more, here comes the case discussion….

Last month, a Washington, D.C. district court jury found in favor of MSNBC host Ed Schultz in a lawsuit filed by a former business partner who had claimed that the parties had formed a partnership to develop a television show and share in the profits based, in part, upon a series of emails that purported to form a binding agreement.  See Queen v. Schultz, 2014 WL 1328338 (D.C. Cir. Apr. 4, 2014), on remand, No. 11-00871 (D.D.C. Jury Verdict May 18, 2015).  And, earlier last month, a New York appellate court ruled that emails between a decedent and a co-owner of real property did not evince an intent of the co-owner to transfer the parcel to the decedent’s sole ownership because, even though the parties discussed the future intention to do so, the material term of consideration for such a transfer was fatally absent.  See Matter of Wyman, 2015 NY Slip Op 03908 (N.Y. App. Div., 3rd Dept. May 7, 2015).  Another recent example is Tindall Corp. v. Mondelēz Int’l, Inc., No. 14-05196 (N.D. Ill. Mar. 3, 2015), where a court, on a motion to dismiss, had to decide whether a series of ambiguous emails that contained detailed proposals and were a follow-up to multiple communications and meetings over the course of a year created a binding contract, or whether this was instead an example of fizzled negotiations, indefinite promises and unreasonable reliance.  The court rejected the defendant’s argument that the parties anticipated execution of a memorialized contract in the future and that it “strains belief that these two companies would contract in such a cavalier manner,” particularly since the speed of the project may have required that formalities be overlooked.

Enforceability of Electronic Signatures

A Minnesota appellate court decision from last year highlights that, unless circumstances indicate otherwise, parties cannot assume that an agreement attached to an email memorializing discussions is final, absent a signature by both parties.  See SN4, LLC v. Anchor Bank, fsb, 848 N.W.2d 559 (Minn. App. 2014) (unpublished). The court found that although the bank representatives intended to electronically sign their e-mail messages, the evidence was insufficient to establish that they intended to electronically sign the attached agreement or that the attached document was intended to be a final version (“Can you confirm that the agreements with [the bank] are satisfactory[?] If so, can you have your client sign and I will have my client sign.”).

A California decision brings up similar contracting issues. In JBB Investment Partners, Ltd. v. Fair, 182 Cal. Rptr. 974 (Cal. App. 2014), the appellate court reversed a trial court’s finding that a party who typed his name at the end of an email agreeing to settlement terms had electronically “signed” off on the deal under California law. The facts in JBB Investment presented a close case – the defendant sent multiple emails and text messages with replies such as “We clearly have an agreement” and that he “agree[d] with [plaintiff’s counsel’s] terms” – yet the court found it unclear whether that agreement was merely a rough proposal or an enforceable final settlement.  It was clear that the emailed offer was conditioned on a formal writing (“[t]he Settlement paperwork would be drafted . . .”).

Performance as Acceptance

Another pitfall of contracting via email occurs when parties begin performance prior to executing the governing agreement – under the assumption that a formal deal “will get done.”  If the draft agreement contains terms that are unfavorable to a party and that party performs, but the agreement is never executed, that party may have to live with those unfavorable terms. In DC Media Capital, LLC v. Imagine Fulfillment Services, LLC, 2013 WL 46652 (Cal. App. Aug. 30, 2013) (unpublished), a California appellate court held that a contract electronically sent by a customer to a vendor and not signed by either party was nevertheless enforceable where there was performance by the offeree.  The court held that the defendant’s performance was acceptance of the contract, particularly because the agreement did not specifically preclude acceptance by performance or expressly require a signature to be effective.

New York Releases Final BitLicense Rules

Posted in Digital Currency, Regulatory

Readers of this blog will know that we have been following the recent legal developments relating to bitcoin and other virtual currency systems [also here and here].  Yesterday, in a significant development reflecting the general maturation of virtual currencies as a recognized payment system, Benjamin M. Lawsky, Superintendent of the New York State Dept. of Financial Services (NYSDFS), announced New York’s final “BitLicense” rules.

The NYSDFS is the first state agency to release a comprehensive framework for regulating digital currency-related businesses.  The rules are the culmination of a two-year period of drafting and public comment, with the initial proposed rules having been published last July.  The keystone of the regulations is a set of consumer protection, anti-money laundering compliance and cybersecurity rules designed to place appropriate “guardrails” around the industry without “stifling beneficial innovation.”

In his remarks at the BITS Emerging Payments Forum in Washington, D.C., Lawsky stated that the agency had listened to the industry’s concerns that called for a more hands-off approach, and made a few minor changes from the prior drafts.

These changes include the following:

  • Companies will not need prior approval for standard software or app updates – only for material changes to their products or business models.
  • The agency will not be regulating software developers – only financial intermediaries. Under the regulations, the definition of “Virtual Currency Business Activity” generally covers entities that act as: (1) virtual currency transmitters; (2) digital wallet services; (3) those “buying and selling Virtual Currency as a customer business”; (4) virtual currency exchange services; and (5) controllers or issuers of a virtual currency.  The regulations explicitly state that “the development and dissemination of software in and of itself does not constitute Virtual Currency Business Activity.”
  • Companies will not be required to file a duplicative set of application submissions if they want both a BitLicense and a money transmitter license. Companies will be able to work with the agency to have a “one-stop” application submission.
  • Companies that already file suspicious activity reports (also called “SARs”) with federal regulators such as FinCEN will not have to file a duplicate set of SARs with the NYSDFS.
  • Companies won’t need prior approval from the NYSDFS for every new round of venture capital funding – a company would only need prior approval if the new investor wants to “direct the management and policies of the firm” as opposed to merely being a “passive investor.”

While Mr. Lawsky has announced his departure from the NYSDFS, he has fulfilled his promise to take steps towards bringing some regulatory oversight to bitcoin and other digital currencies. While the future of these currencies is still a matter of speculation, it will be interesting to see whether other states follow New York’s lead. We will keep you updated.

Upcoming Summer Blockbuster: Impending Shortage of IPv4 IP Addresses

Posted in Internet, Technology

There is an interesting article in today’s Wall Street Journal about the impending shortage of IPv4 IP addresses (forcing tech companies and cloud providers to scramble to secure the remaining stock for U.S. users) and the IPv6 solution.

Hate to say “I told you so” but…see our prior coverage here and here.

But what should you be doing now? To the extent you are involved in negotiating internet-related infrastructure transactions, you should be thinking about adding IPv6-capability provisions, including, as appropriate, commitments regarding upgrades.  There is a real cost to becoming IPv6-compatible, as companies have to purchase new network switches and routers. According to sources quoted in today’s Wall Street Journal article, only 9% of the Internet community has done so thus far, and such a company-wide migration may cost as much as 7% of a company’s annual IT budget. Thus, since this can be a significant financial issue, best practice would be to capture any understandings in the transaction documents.
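As a practical aside for technically inclined readers doing diligence on a counterparty’s IPv6 readiness, a rough first check is whether its hostnames even publish IPv6 (AAAA) addresses. The sketch below uses only Python’s standard socket module; the hostname is a placeholder, and DNS resolution shows only that an IPv6 address is published, not that the service is actually reachable over IPv6 from your network:

```python
import socket

def supports_ipv6(hostname):
    """Return True if the hostname resolves to at least one IPv6 (AAAA) address.

    A crude diligence check only: a published AAAA record does not guarantee
    the service works over IPv6, and its absence does not rule out a roadmap.
    """
    try:
        # Restricting the family to AF_INET6 asks the resolver for AAAA records.
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET6)
        return len(infos) > 0
    except socket.gaierror:  # no AAAA record, or the name does not resolve
        return False

# Example (placeholder hostname -- substitute your vendor's endpoint):
# supports_ipv6("www.example.com")
```

A contract provision can then be tied to something verifiable: for instance, a commitment that all customer-facing endpoints will be dual-stack by a stated date.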

Who Exactly Is a ‘User’ under the DMCA Safe Harbor?

Posted in Copyright, Online Content

The DMCA was enacted in 1998 to preserve “strong incentives for service providers and copyright owners to cooperate to detect and deal with copyright infringements that take place in a digital networked environment.”  As part of this implicit bargain, Title II of the DMCA offers safe harbors for qualifying service providers to limit their liability for claims of copyright infringement.  The Section 512(c) safe harbor protects storage providers (and has been the subject of much litigation over the past decade).

Specifically, Section 512(c) applies to infringements that occur “by reason of the storage at the direction of a user of material” on a service provider’s system or network.  The statute does not define “user” and it seems, until recently, no court had interpreted the term.  Is a “user” simply anyone who uses an online storage platform, or should the definition exclude those persons who may have an independent contractor or similar relationship with the service provider?  In the typical situation, a website or app might offer to host content and facilitate online sharing and viewing of uploaded photos and videos.  But what if the online site only allows selected applicants to post content on the site on specific topics and offers financial incentives based upon the number of clicks?

In BWP Media USA, Inc. v. Clarity Digital Group, LLC, No. 14-00467 (D. Colo. Mar. 31, 2015), the court was compelled to take a deep dive into what “storage at the direction of a user” means in the context of new media — specifically, the business model of Examiner.com, a so-called “content farm” style site which posts articles written by third parties on popular news, entertainment and lifestyle topics.  Unlike the popular video sharing sites, Examiner.com has more involvement in what goes up on its site. The site does not assign stories and asserts that it does not pre-screen or edit the work of its contributors.  However, the site conducts background checks on individuals who apply to be “Examiners” on a chosen topic, requires contributors to comply with its Editorial Requirements, and may decide to feature a contributor’s work outside his or her topic page (such as on the Examiner.com main page). In some circumstances, contributors may be paid based upon an article’s page views, traffic and similar metrics.  In addition, contributors must also enter into the “Examiners Independent Contractor Agreement and License” before receiving permission to post to the site.  The agreement holds contributors to certain editorial requirements and obligates them to “regularly create and post new Works to the Web Page and update the Web Page as often as reasonably needed.” The agreement also states that contributors must not include copyrighted content on their pages without permission.

The plaintiffs alleged that the Examiner hosted user-submitted articles that contained their copyrighted photos without an appropriate license.  The defendant admitted that the site displayed 75 of plaintiffs’ copyrighted photographs without permission, but argued that the Examiners who included those images in posted content did so without involvement from Examiner.com staff.  In claiming protection under the DMCA safe harbor, the defendant countered that it had no involvement or specific knowledge of the infringing use of plaintiffs’ photographs before they were posted to the site, that it didn’t pre-screen or approve the articles at issue and that it otherwise complied with the requirements of the DMCA and removed the photos at issue in a timely manner after it received a takedown notice.

The dispute centered on whether the defendant was entitled to protection under the § 512(c) safe harbor.  More specifically, the question became whether the contributors to the Examiner were “users” under § 512(c) – that is, whether the plaintiffs’ photographs were stored on defendant’s system at the direction of the site’s contributors or at the direction of defendant.  In rejecting the plaintiffs’ copyright claims, the court found no evidence that a content manager, review team, or any Examiner.com staff had any actual control or influence over the content of the articles containing plaintiffs’ photographs so as to render the storage of plaintiffs’ photographs “at the direction of” defendant.  Moreover, the court stated that in the absence of evidence that the defendant directed the contributors to upload plaintiffs’ photographs to the site, defendant’s policies (e.g., prohibiting use of infringing content in the user agreement, having a repeat infringer policy and offering contributors free access to a licensed photo library) further supported the conclusion that plaintiffs’ photographs were not stored on the site at the direction of defendant.

Finding that the defendant complied with the remaining requirements of the statute, the court ruled that the defendant was entitled to § 512(c) safe harbor protection, dismissing the case.

The ruling appears to be the first to interpret the term “user” under the § 512(c) safe harbor, and the broad reading is certainly notable for providers and distributors of new media.  Today’s online “storage” providers that rely on the DMCA for legal protection are so much more than simple web hosts.  Indeed, the BWP Media ruling echoes prior, expansive readings of the safe harbor, where courts have avoided rigid interpretations and ruled that certain automated functions that make files accessible – such as transcoding or converting uploaded content to certain file formats, extracting metadata to aid searching or assigning permalinks – fell within the ordinary meaning of “storage.”
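For readers less familiar with the automated functions the courts have described, a simplified upload pipeline illustrates the point: the provider’s software converts, indexes and links the content, but the user alone directs what is stored. This is only an illustrative sketch — the function names, the fake “transcode” step and the permalink scheme are assumptions, not any actual platform’s implementation:

```python
import hashlib

def handle_upload(filename: str, data: bytes) -> dict:
    """Illustrative automated pipeline of the kind courts have treated as
    within the ordinary meaning of 'storage': the provider transcodes,
    extracts metadata, and assigns a permalink, all without any human
    choosing or editing the content."""
    # Stand-in for transcoding: normalize uploads to one container format.
    stored_name = filename.rsplit(".", 1)[0] + ".mp4"
    # Extract minimal metadata to aid searching.
    metadata = {"original_name": filename, "size_bytes": len(data)}
    # Assign a stable permalink derived from a hash of the content.
    permalink = "/media/" + hashlib.sha256(data).hexdigest()[:12]
    return {"stored_name": stored_name, "metadata": metadata, "permalink": permalink}
```

The legal significance is that each step runs automatically in response to the user’s upload, which is why courts have treated such processing as storage “at the direction of a user.”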

Emergence of Live Streaming Apps Brings Up Copyright, Privacy, Legal Concerns

Posted in Copyright, Online Content, Social Media

The big fight may be over, but the implications of Mayweather vs. Pacquiao with respect to real-time, one-to-many streaming of video through apps like Meerkat and Periscope are still rippling through the media industry. In short, livestreaming apps allow anyone with a smartphone to effortlessly broadcast live video to social media followers and the wider internet – everything from ordinary life activities (e.g., an individual walking down the street), to live action (e.g., events, protests), to the redistribution of content (e.g., streaming a popular cable show).

This past week, the media reported widespread streaming of the pay-per-view broadcast of the fight by individuals who had paid to view the fight at home. While Periscope’s streams expire after 24 hours and Meerkat does not archive streams, new platforms are being rolled out to support the users of these types of apps, suggesting that this may be a growing phenomenon. Expect the delicate push and pull of DMCA takedown notices between content owners and these new streaming apps to continue. That said, the intellectual property rights issues associated with real-time streaming through these apps are not straightforward, particularly when dealing with the stream of a live event directly from the venue of that event.

Stay tuned for further developments.

U.S. Dept. of Commerce Releases Multistakeholder Guidance on DMCA Notice and Takedown Best Practices

Posted in Copyright

On Tuesday, the U.S. Dept. of Commerce’s Internet Policy Task Force released guidance containing a list of best practices (and notable “bad” practices), all designed to improve the DMCA’s notice and takedown system for both senders and recipients of notices [See “DMCA Notice-and-Takedown Processes: List of Good, Bad, and Situational Practices”]. The document was developed as part of a multistakeholder forum held among rights holders, creators, service providers and consumer protection advocates, all of whom have an interest in an effective notice and takedown system that balances the interests of both rights holders and online services.

For example, some “Good Practices” for service providers include:

  • Making DMCA takedown and counter-notice mechanisms easy to find and understand.  Web forms should have clearly labeled fields, with help buttons and instructions.
  • Implementing efficient processes for receiving notices that are commensurate with the volume of claims (e.g., allowing multiple URLs to be submitted at one time).
  • Providing confirmation of receipt of a notice or counter-notice that includes a method for referencing past notices in further communications.
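As a rough illustration of how a service provider might implement the practices above, an intake routine could validate the statutory elements of a notice, accept multiple URLs at once, and return a confirmation with a reference ID for use in further communications. This is a hedged sketch only — the `TakedownNotice` structure and field names are hypothetical, modeled loosely on the notice elements in 17 U.S.C. § 512(c)(3), and do not come from the Commerce document:

```python
import uuid
from dataclasses import dataclass

# Hypothetical intake record; field names are illustrative, modeled on
# the notice elements listed in 17 U.S.C. § 512(c)(3).
@dataclass
class TakedownNotice:
    work_identified: str        # the copyrighted work claimed to be infringed
    infringing_urls: list       # multiple URLs may be submitted at once
    contact_info: str           # complainant's contact information
    good_faith_statement: bool  # belief that the use is unauthorized
    accuracy_statement: bool    # accuracy attested under penalty of perjury
    signature: str              # physical or electronic signature

def receive_notice(notice: TakedownNotice) -> dict:
    """Validate the statutory elements and, if complete, issue a
    reference ID so later communications can cite the original notice."""
    missing = []
    if not notice.work_identified:
        missing.append("work_identified")
    if not notice.infringing_urls:
        missing.append("infringing_urls")
    if not notice.contact_info:
        missing.append("contact_info")
    if not (notice.good_faith_statement and notice.accuracy_statement):
        missing.append("required statements")
    if not notice.signature:
        missing.append("signature")
    if missing:
        return {"accepted": False, "missing": missing}
    return {"accepted": True, "reference_id": str(uuid.uuid4())}
```

A response that names the missing fields (rather than silently rejecting the notice) serves the guidance’s goal of making the mechanism easy to understand for senders.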

The document also offers good general practices for Notice Senders:

  • Taking reasonable measures to determine the online location of the infringing material and “appropriately consider[ing]” whether use of the material identified in the notice is unauthorized.
  • When using automated tools, conducting a human review of the site where the notices will be directed, as well as performing periodic spot checks to evaluate whether the automated search parameters are returning the expected results.
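The periodic spot-check recommendation above could be implemented as a simple random sampler that pulls a subset of automated matches for human review before notices go out. A minimal sketch, assuming a list of matched URLs and an arbitrary 5% review rate (both assumptions, not part of the Commerce guidance):

```python
import random

def sample_for_review(matches, rate=0.05, seed=None):
    """Select a random subset of automated matches for human spot checks,
    so a reviewer can verify the automated search parameters are still
    returning the expected (actually infringing) results."""
    if not matches:
        return []
    rng = random.Random(seed)  # seed allows reproducible audits
    k = max(1, int(len(matches) * rate))  # always review at least one match
    return rng.sample(matches, k)
```

Keeping a human in the loop this way addresses the guidance’s concern that purely automated tools can generate inaccurate notices at scale.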

Lastly, the document offers guidance on “Situational Practices,” such as Trusted Submitter Programs, which provide efficiencies for rights holders who have a track record of submitting accurate notices.

At only 7 pages, the Dept. of Commerce document is certainly worth the read for both copyright holders and service providers.  Improved notice and takedown practices can result in streamlined procedures, a better reputation in the online community, and fewer disagreements over posted content (which will decrease the likelihood of litigation over copyright infringement or wrongful takedown notices).