New Media and Technology Law Blog

ChatGPT Risks and the Need for Corporate Policies

ChatGPT has quickly become the talk of business, media and the Internet – reportedly, the application had over 100 million monthly active users in January 2023 alone.

While there are many stories of the creative, humorous, apologetic, and in some cases unsettling interactions with ChatGPT,[1] the potential business applications for ChatGPT and other emerging generative artificial intelligence applications (generally referred to in this post as “GAI”) are plentiful. Many businesses see GAI as a potential game-changer. But, as with other foundational technology developments, GAI presents new issues and possible areas of risk.

ChatGPT is being used by employees and consultants in business today.  Thus, businesses are well advised to evaluate the issues and risks to determine what policies or technical guardrails, if any, should be imposed on GAI’s use in the workplace.

What are the risks?

Confidentiality. While it may be tempting to use GAI to further develop or refine business strategies, software or other proprietary information, the input of confidential information into ChatGPT and other GAI tools presents a number of risks (a sketch of one possible technical guardrail follows this list):

  • ChatGPT may train on the input that is provided,[2] and thus it is possible that portions of that inputted confidential information may be provided, in some form, to a subsequent user. Indeed, it was reported that at least one company advised employees not to input confidential code into the application due to data security concerns.[3]
  • Some confidential business information may be licensed from third parties and may be subject to confidentiality requirements or restrictions on use, and by putting such information into ChatGPT, a company may be in violation of those restrictions.
  • Trade secret law requires the owner to take reasonable steps to protect the secrecy of information claimed to be a trade secret, and putting information into ChatGPT may weaken a company’s position that such information is actually, as a matter of law, protectable as a trade secret.
  • Privacy laws may restrict the submission of personal information of employees, clients, affiliates or consumers into any GAI.
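For organizations weighing the “technical guardrails” mentioned above, the following is a minimal sketch, in Python, of a hypothetical pre-submission filter that screens prompts for confidentiality markers before they reach any GAI service. The marker patterns and function names are illustrative assumptions, not any vendor’s actual tooling, and a real filter would be tailored to the organization’s own data-classification scheme.

```python
import re

# Illustrative patterns that may signal confidential material; these are
# assumptions for the sketch, not a complete or recommended list.
CONFIDENTIALITY_MARKERS = [
    r"\bconfidential\b",
    r"\bproprietary\b",
    r"\btrade\s+secret\b",
    r"\b\d{3}-\d{2}-\d{4}\b",  # pattern resembling a U.S. Social Security number
]

def prompt_appears_safe(prompt: str) -> bool:
    """Return True if no confidentiality marker is found in the prompt."""
    return not any(re.search(p, prompt, re.IGNORECASE) for p in CONFIDENTIALITY_MARKERS)

# Example: block a prompt before it is ever sent to a GAI service.
prompt = "Refine this CONFIDENTIAL product roadmap: ..."
if prompt_appears_safe(prompt):
    print("OK to submit")   # in practice, the call to the GAI service goes here
else:
    print("Blocked: prompt may contain confidential information")
```

A filter of this kind is, of course, only one layer: it cannot catch confidential content that carries no telltale marker, which is one reason the policy measures discussed below remain important.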

Regulatory Issues.

To the extent a regulated business is using ChatGPT or other GAI in its business operations, thought should be given to whether some or all of that use is subject to regulatory requirements. For example, should or must some of the interactions be logged, recorded, or archived in some manner (as sketched below)? The analysis of this issue will likely be informed by applicable law, contracts, and insurance-based requirements, as well as a company’s own internal policies.
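By way of illustration only, here is a minimal Python sketch of one way GAI interactions might be logged to an append-only archive. The file path and record fields are assumptions for the sketch; what must actually be captured, and for how long, would be dictated by the applicable law, contracts and internal policies noted above.

```python
import json
import datetime

AUDIT_LOG_PATH = "gai_audit_log.jsonl"  # hypothetical archive location

def log_gai_interaction(user_id: str, prompt: str, output: str) -> None:
    """Append one GAI interaction to an append-only JSON Lines audit log."""
    record = {
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "user_id": user_id,
        "prompt": prompt,
        "output": output,
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage after a GAI response is received:
log_gai_interaction("employee-123", "Summarize Q4 results", "Q4 revenue rose ...")
```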

Intellectual Property.  GAI presents a number of interesting and new intellectual property issues:

  • Does training of GAI via scraping the web constitute intellectual property infringement or a DMCA violation for the removal of copyright management information (CMI), and if so, can the user of that GAI be found to be liable in any way?
  • What is the IP status of the output of GAI? For example, if a software developer uses ChatGPT to create software, can that developer represent to its user that the developer owns all IP rights in that software? Can the developer indemnify the user for infringement issues? And what is the status of GAI-generated images, which often bear a recognizable similarity to one or more of their human-created sources?
  • To the extent the use of GAI is infringing, is the fair use or implied license doctrine relevant?
  • Can a GAI or the user of GAI be an “inventor” under patent law or an owner of a U.S. copyright in GAI-generated material?

These intellectual property issues are, to varying degrees, all open questions, with litigants just beginning to bring suit and ask some of these questions.  However, a few basic principles are clear:

  • It is best practice to avoid claiming copyright in GAI-generated content (particularly in AI-generated artwork or images). ChatGPT’s terms are instructive. The terms cover rights in content: “As between the parties and to the extent permitted by applicable law, you own all Input, and subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output.” While this assignment is a broad grant of OpenAI’s rights in the Output, it is not definitive that OpenAI has any rights in the Output to grant at all.
  • Consideration should be given as to whether third party software developers or content creators of any kind should be permitted to use ChatGPT or any GAI in their deliverables. This is an issue that should be addressed in development agreements with those third parties.
  • Copyright Office policy, as currently stated in the Compendium of U.S. Copyright Office Practices (3d ed. 2021), is that the Copyright Office “will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author. The crucial question is ‘whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work…were actually conceived and executed not by man but by a machine.’” (See also Trade-Mark Cases, 100 U.S. 82, 94 (1879) (copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind”).) Thus, based on this policy, GAI-generated content produced without human creative input or intervention would not be subject to copyright protection.

Quality and Output Issues.

The nature of GAI’s output presents a number of issues:

  • ChatGPT and other GAI tools are still works in progress, with limitations. As OpenAI has advised: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.” Thus, while the current ChatGPT interface is ready to use “out of the box,” the accuracy and truth of any output must be confirmed before finalizing or publishing any work product.
  • GAI-generated analysis may reflect biased or discriminatory content on which it was trained.[4] Along with fact-checking the veracity of ChatGPT and other GAI output, users should be attuned to any discriminatory or biased statements or conclusions resulting from the algorithmic mining of such source materials. This could be a particular concern in the context of employment discrimination laws and laws regulating the use of artificial intelligence in employment decisions.
  • Publishers and other content creators often procure “Errors and Omissions” insurance to cover exposure based on infringement and other risks. Often the underwriting of those policies involves a review of internal content creation practices. Will GAI-generated content be within the scope of traditional errors and omissions policies?
  • Section 230 of the Communications Decency Act is highly controversial in its scope and application. To the extent GAI-generated content is used in an online business, it is unclear if and to what extent the CDA would apply with respect to that content. CDA § 230 prohibits a “provider or user of an interactive computer service” from being held responsible “as the publisher or speaker of any information provided by another information content provider.” Are there any situations where GAI-generated content would not be considered “information provided by another information content provider”? These types of third-party content issues are especially fraught, as the Supreme Court just heard argument on February 21, 2023 in a case examining the applicability of the CDA to algorithmic functions.
  • Thought should be given to whether GAI-generated content should be identified as such when made public. This may be an issue if the content is generated in a real-time fashion, e.g., in a bot conversation with a customer or employee.  Organizations should also consider whether such disclosures are appropriate to clients, business partners or the public.
  • Are GAI interactions discoverable in litigation? Should a company’s document retention policy specifically address GAI-generated content?

Artificial Intelligence Compliance Issues

There are a number of laws and regulations in place, and in various stages of enactment, in the United States and abroad that address the use of artificial intelligence. For example, California’s chatbot law (Bus. and Prof. Code § 17940) requires, among other things, that in certain consumer interactions, a company provide clear and conspicuous disclosure that the consumer is interacting with a bot (a minimal illustration appears below). Moreover, New York City and several states have regulations impacting automated decision-making in the employment context, and the FTC and state attorneys general have enforcement powers against “unfair or deceptive” trade practices. Organizations must ensure that their use of GAI is compliant with such laws.
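As a purely illustrative sketch of the disclosure concept (and not a statement of what would satisfy § 17940’s “clear and conspicuous” standard), a bot’s replies could be wrapped so that the consumer is told up front that no human is on the other end. The wording and function name below are assumptions.

```python
BOT_DISCLOSURE = "You are chatting with an automated bot, not a human agent."

def disclose_and_reply(generated_reply: str, first_message: bool) -> str:
    """Prepend a bot disclosure to the first customer-facing reply."""
    if first_message:
        return f"{BOT_DISCLOSURE}\n\n{generated_reply}"
    return generated_reply

print(disclose_and_reply("Thanks for contacting support! How can I help?", first_message=True))
```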

Thoughts on Policies

ChatGPT is being used today. Organizations cannot ignore it, or the inevitability of even wider use of these technologies in the near future. Every organization should be evaluating the issues GAI presents to determine the degree to which they pose material risk to the organization. Each entity must approach GAI from its own particular risk profile. Indeed, as outlined in the National Institute of Standards and Technology’s (NIST) recently published Artificial Intelligence Risk Management Framework 1.0, risk tolerances can change over time as AI systems, policies, and norms evolve.[5]

Possible courses of action include the following:

  • Messaging to the relevant community that the use of GAI is permitted, but outlining the power and risks of GAI and asking the community to be vigilant.
  • Enacting policies that may do some or all of the following:
    • Precluding certain uses of GAI (news reports suggest that some companies have already taken actions to restrict employee use of ChatGPT)
    • Identifying permitted uses of GAI, and the cases in which prior approval is required
    • Requiring internal tracking of the use of GAI and additional human review of selected GAI-generated content
    • Addressing external disclosures of the use of GAI and GAI output
    • Regulating the uses of GAI by external business partners and vendors
    • Addressing the possibility of embedding GAI applications on the company’s website

We are likely just at the start of a cycle of innovation surrounding generative AI technology and its application for businesses and consumers, much like the early days of e-commerce and web 2.0, or the current days of web 3.0. Of course, this post highlights just some of the preliminary issues and concerns associated with GAI — there will likely be many more issues to unpack as the technology evolves. To the extent an organization perceives GAI to present any of the risks highlighted above, or views GAI to present other issues for its business, putting appropriate policies in place now may be helpful.

_________________________

[1] A Feb. 16, 2023 post on the OpenAI Blog acknowledged that the company has received reports of biased or offensive outputs from users, and noted that in many cases the responses reflected limitations of the system that will be addressed: “Many are rightly worried about biases in the design and impact of AI systems. We are committed to robustly addressing this issue and being transparent about both our intentions and our progress.”

[2] As per the ChatGPT terms: “To help OpenAI provide and maintain the Services, you agree and instruct that we may use Content to develop and improve the Services.”

[3] Organizations that are using ChatGPT’s API and are concerned with such issues might consider using ChatGPT’s opt-out procedure (outlined here).

[4] See generally: The White House, “Blueprint for an AI Bill of Rights” (“Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex…, religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections”).

[5] The NIST framework recommends that organizations develop enhanced processes for governing, mapping, measuring, and managing AI risk and clearly define the roles and responsibilities for the personnel overseeing AI system usage and performance.

New York Enacts First State “Right-to-Repair” Law

At the close of 2022, New York Governor Kathy Hochul signed the “Digital Fair Repair Act” (S4101A/A7006-B) (to be codified at N.Y. GBL §399-nn) (the “Act”). The law makes New York the first state in the country to pass a consumer electronics right-to-repair law.[1] Similar bills are pending in other states. The Act is a slimmed-down version of the bill that was first passed by the legislature last July.

Generally speaking, the Act will require original equipment manufacturers (OEMs), or their authorized repair providers, to make the parts, tools, and diagnostic and repair information required for the maintenance and repair of “digital electronic equipment” available to independent repair providers and consumers, on “fair and reasonable terms” (subject to certain exceptions). The law only applies to products that are both manufactured for the first time and sold or used in the state for the first time on or after the law’s effective date of July 1, 2023 (thus exempting electronic products currently owned by consumers).

District Court Decision Brings New Life to CFAA to Combat Unwanted Scraping

On October 24, 2022, a Delaware district court held that certain claims under the Computer Fraud and Abuse Act (CFAA) relating to the controversial practice of web scraping were sufficient to survive the defendant’s motion to dismiss. (Ryanair DAC v. Booking Holdings Inc., No. 20-01191 (D. Del. Oct. 24, 2022)). The opinion potentially breathes life into the use of the CFAA to combat unwanted scraping.

In the case, Ryanair DAC (“Ryanair”), a European low-fare airline, brought various claims against Booking Holdings Inc. (and its well-known suite of online travel and hotel booking websites) (collectively, “Defendants”) for allegedly scraping the ticketing portion of the Ryanair site. Ryanair asserted that the ticketing portion of the site is only accessible to logged-in users and therefore the data on the site is not public data.

The decision is important as it offers answers (at least from one district court) to several unsettled legal issues about the scope of CFAA liability related to screen scraping. In particular, the decision addresses:

  • the potential for vicarious liability under the CFAA (which is important as many entities retain third party service providers to perform scraping)
  • how a data scraper’s use of evasive measures (e.g., spoofed email addresses, rotating IP addresses) may be considered under a CFAA claim centered on an “intent to defraud”
  • clarification as to the potential role of technical website-access limitations in analyzing CFAA “unauthorized access” liability

To find answers to these questions, the court’s opinion distills the holdings of two important recent CFAA rulings – the Supreme Court’s holding in Van Buren, which adopted a narrow interpretation of “exceeds unauthorized access” under the CFAA, and the Ninth Circuit’s holding in the screen scraping hiQ case, where that court found that the concept of “without authorization” under the CFAA does not apply to “public” websites.

Amazon’s Recent Acquisitions Highlight the Value of Consumer Data (and the Evolving Privacy Issues)

Roughly two weeks apart, on July 21, 2022 and August 5, 2022, respectively, Amazon made headlines for agreeing to acquire One Medical, “a human-centered and technology-powered primary care organization,” for approximately $3.9 billion and iRobot, a global consumer robot company, known for its creation of the Roomba vacuum, for approximately $1.7 billion. These proposed acquisitions have drawn the scrutiny of the Federal Trade Commission (FTC), which following President Biden’s 2021 Executive Order on antitrust and competition, has taken a more aggressive stance toward acquisitions by large technology companies in an effort to, in FTC Chair Lina Khan’s words, “prevent incumbents from unlawfully capturing control over emerging markets.”

Beyond antitrust issues, Amazon’s recent acquisition decisions bring the discussion of the collection of consumer information and its secondary uses, specifically location and health data (which we have previously written about), to the forefront, yet again.

Read the full post on our Privacy Law Blog.

Important CDA Section 230 Case Lands in Supreme Court: Level of Protection Afforded Modern Online Platforms at Stake

Since the passage of Section 230 of the Communications Decency Act (“CDA”), the majority of federal circuits have interpreted the CDA to establish broad federal immunity to causes of action that would treat service providers as publishers of content provided by third parties. The CDA was passed in the early days of e-commerce and was written broadly enough to cover not only the online bulletin boards and not-so-very interactive websites that were common then, but also more modern online services, web 2.0 offerings and today’s platforms that might use algorithms to organize, repackage or recommend user-generated content.

Over 25 years ago, the Fourth Circuit, in the landmark Zeran case, the first major circuit court-level decision interpreting Section 230, held that Section 230 bars lawsuits that, at their core, seek to hold a service provider liable for its exercise of a publisher’s “traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter content.” Courts have generally followed this reasoning ever since to determine whether an online provider is being treated as a “publisher” of third party content and thus entitled to immunity under the CDA. The scope of “traditional editorial functions” is at the heart of a case currently on the docket at the Supreme Court. On October 3, 2022, the Supreme Court granted certiorari in an appeal challenging whether a social media platform’s targeted algorithmic recommendations fall under the umbrella of “traditional editorial functions” protected by the CDA, or whether such recommendations are not the actions of a “publisher” and thus fall outside of CDA immunity. (Gonzalez v. Google LLC, No. 21-1333 (U.S. cert. granted Oct. 3, 2022)).

App Store Protected by CDA Immunity (and Limitation of Liability) for Losses from Fraudulent Crypto Wallet App

In a recent ruling, a California district court held that Apple, as operator of the App Store, was protected from liability for losses resulting from a fraudulent crypto wallet app offered on its platform. (Diep v. Apple Inc., No. 21-10063 (N.D. Cal. Sept. 2, 2022)). This case is important in that, in a motion to dismiss, a platform provider was able to use both statutory and contractual protections to avoid liability for the acts of third party cyber criminals.

Read the full post on our Blockchain and the Law blog.

FTC Sues Data Provider over the Collection and Sale of Geolocation Data

On August 29, 2022, the Federal Trade Commission (FTC) announced that it had filed a complaint against Kochava, Inc. (“Kochava”), a digital marketing and analytics firm, seeking an order halting Kochava’s alleged acquisition and downstream sale of “massive amounts” of precise geolocation data collected from consumers’ mobile devices.

The complaint alleges that the data is collected in a format that would allow third parties to track consumers’ movements to and from sensitive locations, including those related to reproductive health, places of worship, and their private residences, among others. The FTC alleged that “consumers have no insight into how this data is used” and that they do not typically know that inferences about them and their behaviors will be drawn from this information. The FTC claimed that the sale or license of this sensitive data, which could present an “unwarranted intrusion” into personal privacy, was an unfair business practice under Section 5 of the FTC Act.
