In April, we wrote about how OpenAI had eased the procedure by which ChatGPT users can opt out of their inputs being used for model training purposes (click here for that post). While neither web scraping nor the collection of user data to improve services is a new concept, AI did…
Generative AI Providers Subject to Reduced CDA Immunity Under Proposed Legislation
One of the many legal questions swirling around in the world of generative AI (“GenAI”) is to what extent Section 230 of the Communications Decency Act (CDA) applies to the provision of GenAI. Can CDA immunity apply to GenAI-generated output and protect GenAI providers from potential third-party liability?
On June 14, 2023, Senators Richard Blumenthal and Josh Hawley introduced the “No Section 230 Immunity for AI Act,” bipartisan legislation that would expressly remove most immunity under the CDA for a provider of an interactive computer service if the conduct underlying the claim or charge “involves the use or provision of generative artificial intelligence by the interactive computer service.” The bill would eliminate “publisher” immunity under § 230(c)(1) for such claims. It would not, however, affect immunity for so-called “Good Samaritan” blocking under § 230(c)(2)(A), which protects service providers and users from liability for claims arising out of good faith actions to screen or restrict access to “objectionable” material on their services.
Interoperability of Artificial Intelligence and Copyright Law Examined by Congress
Within the rapidly evolving artificial intelligence (“AI”) legal landscape (as explored in Proskauer’s “The Age of AI” Webinar series), there is an expectation that Congress may come together to draft some form of AI-related legislation. The focus stems from how generative AI (“GenAI”) has, in the last six months or so, already created new legal, societal, and ethical questions.
Intellectual property (“IP”) protection – and, in particular, copyright – has been a forefront issue. Given the boom in GenAI, some content owners and creators have lately begun to feel that AI developers have been free riding by training GenAI models on a vast swath of web content (some of it copyrighted) without authorization, license, or reasonable royalty. Regardless of whether certain GenAI tools’ use of web-based training data and the tools’ output to users could be deemed infringement (such legal questions do not have simple answers), it is evident that the rollout of GenAI has already begun to affect the vocations of creative professionals and the value of IP for content owners, as AI-created works (or hybrid works of human/AI creation) are already competing with human-created works in the marketplace. In fact, one of the issues in the Writers Guild of America strike currently affecting Hollywood concerns provisions that would govern the use of AI on projects.
On May 17, 2023, the House of Representatives Subcommittee on Courts, Intellectual Property, and the Internet held a hearing on the interoperability of AI and copyright law. There, most of the testifying witnesses agreed that Congress should consider enacting careful regulation in this area that balances innovation and creators’ rights in the context of copyright. All acknowledged the transformative potential of AI across industries, but the overall view was that AI should be used as a tool for human creativity rather than a replacement for it. In his opening remarks, Subcommittee Chair Darrell Issa stated that one of the purposes of the hearing was to “address properly the concerns surrounding the unauthorized use of copyrighted material, while also recognizing that the potential for generative AI can only be achieved with massive amounts of data, far more than is available outside of copyright.” The Subcommittee’s Ranking Member, Representative Henry Johnson, expressed openness to finding middle-ground solutions that balance IP rights with innovation but voiced one of the quandaries raised by many copyright holders as to GenAI training methods: “I am hard-pressed to understand how a system that rests almost entirely on the works of others, and can be commercialized or used to develop commercial products, owes nothing, not even notice, to the owners of the works it uses to power its system.”
That Was Close! The Supreme Court Declines Opportunity to Address CDA Immunity in Social Media
Back in October 2022, the Supreme Court granted certiorari in Gonzalez v. Google, an appeal that challenged whether YouTube’s targeted algorithmic recommendations qualify as “traditional editorial functions” protected by the CDA — or, rather, whether such recommendations are not the actions of a “publisher” and thus fall outside of…
OpenAI Eases Procedure to Opt-Out of Inputs Being Used for Training Purposes
A quick update on a new development with OpenAI’s ChatGPT. One of the concerns raised by users of ChatGPT is the ability of OpenAI to use queries for the training of the GPT model, and therefore potentially expose confidential information to third parties. In our prior post on ChatGPT risks…
Amazon Acts Against DMCA Abuse
Competition between Amazon’s third-party merchants is notoriously fierce. The online retail giant often finds itself playing the role of referee, banning what it considers unfair business practices (such as offering free products in exchange for perfect reviews, or targeting competitors with so-called “review bombing”). Last month, in the latest round…
ChatGPT Risks and the Need for Corporate Policies
ChatGPT has quickly become the talk of business, media and the Internet – reportedly, the application had over 100 million monthly active users in January alone.
While there are many stories of the creative, humorous, apologetic, and in some cases unsettling interactions with ChatGPT,[1] the potential business applications for ChatGPT and other emerging generative artificial intelligence applications (generally referred to in this post as “GAI”) are plentiful. Many businesses see GAI as a potential game-changer. But, as with other foundational technology developments, GAI presents new issues and possible areas of risk.
ChatGPT is being used by employees and consultants in business today. Thus, businesses are well advised to evaluate the issues and risks to determine what policies or technical guardrails, if any, should be imposed on GAI’s use in the workplace.
New York Enacts First State “Right-to-Repair” Law
At the close of 2022, New York Governor Kathy Hochul signed the “Digital Fair Repair Act” (S4101A/A7006-B) (to be codified at N.Y. GBL §399-nn) (the “Act”). The law makes New York the first state in the country to pass a consumer electronics right-to-repair law.[1] Similar bills are pending in other states. The Act is a slimmed down version of the bill that was first passed by the legislature last July.
Generally speaking, the Act will require original equipment manufacturers (OEMs), or their authorized repair providers, to make the parts, tools, and diagnostic and repair information required for the maintenance and repair of “digital electronic equipment” available to independent repair providers and consumers on “fair and reasonable terms” (subject to certain exceptions). The law only applies to products that are both manufactured for the first time and sold or used in the state for the first time on or after the law’s effective date of July 1, 2023 (thus exempting electronic products currently owned by consumers).
hiQ and LinkedIn Reach Settlement in Landmark Scraping Case
UPDATE: On December 8, 2022, the court issued an order granting the Consent Judgment and Permanent Injunction.
On December 6, 2022, the parties in the long-running litigation between now-defunct data analytics company hiQ Labs, Inc. (“hiQ”) and LinkedIn Corp. (“LinkedIn”) filed a Stipulation and Proposed Consent Judgment (the “Stipulation”) with the California district court, indicating that they have reached a confidential settlement agreement resolving all outstanding claims in the case.
This case has been a litigation odyssey of sorts, to the Supreme Court and back: it started with the original district court injunction in 2017, followed by the Ninth Circuit’s affirmance in 2019, the Supreme Court’s vacating of that order in 2021, a new Ninth Circuit order in April 2022 affirming the original injunction, and then, back where we started, the lower court’s August 2022 order dissolving the preliminary injunction and its most recent mixed ruling on November 4, 2022. It certainly has been one of the most heavily litigated scraping cases in recent memory and has been closely followed on our blog. Practically speaking, though, the dispute had essentially reached its logical end with the last court ruling in November – hiQ had prevailed on the Computer Fraud and Abuse Act (CFAA) “unauthorized access” issue related to public website data but was facing a ruling that it had breached LinkedIn’s User Agreement through its scraping and creation of fake accounts (subject to its equitable defenses).
Data Scraper’s Declaratory Action Seeking Green Light to Scrape LinkedIn Survives Motion to Dismiss
On November 15, 2022, a California district court declined to dismiss a declaratory judgment action brought by a data scraper, 3taps, Inc. (“3taps”), against LinkedIn Corp. (“LinkedIn”). (3taps, Inc. v. LinkedIn Corp., No. 18-00855 (N.D. Cal. Nov. 15, 2022)). 3taps is seeking an order to clarify whether the federal Computer Fraud and Abuse Act (CFAA) (or its California state law counterpart) prevents it from accessing and using publicly-available data on LinkedIn, and whether scraping such data would also subject it to an action brought by LinkedIn for breach of contract or trespass.
This is not 3taps’s first experience with scraping litigation (see prior post). But if this dispute sounds strangely familiar and reminiscent of the long-running dispute between hiQ Labs and LinkedIn (which we’ve followed closely), it is. The 3taps action traces its origin, in part, to the original hiQ ruling in August 2017, where this same judge first granted a preliminary injunction in favor of hiQ, enjoining LinkedIn from blocking hiQ’s access to LinkedIn members’ public profiles. Following that ruling, 3taps sent a letter to LinkedIn stating that it also intended to scrape publicly-available data from LinkedIn. LinkedIn responded that it was not considering legal action against 3taps but cautioned that “any further access by 3taps to the LinkedIn website and LinkedIn’s servers is without LinkedIn’s or its members’ authorization.” Thus, the hiQ ruling, 3taps’s letter to LinkedIn, and LinkedIn’s reply were the genesis of the current declaratory judgment action filed by 3taps against LinkedIn.[1]