Even by the standards of the rapidly evolving AI space, the past week brought a flurry of significant developments. Seven AI companies agreed to voluntary guidelines covering AI safety and security, ChatGPT rolled out a custom instructions feature to streamline usage, and Microsoft issued a transparency note for its Azure OpenAI Service. The week also saw announcements of a number of generative AI commercial ventures, which are beyond the scope of this post.

AI Voluntary Guidelines

The White House announced that it had secured voluntary commitments from seven major AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) focusing on what the White House termed “three principles that must be fundamental to the future of AI – safety, security, and trust.”  According to the announcement, the Voluntary AI Commitments (the “commitments”) are “consistent with existing laws and regulations,” reflect current safety practices, and are intended to remain in effect until regulations covering AI safeguards are enacted.  Two observations are worth highlighting: (1) the commitments contain no enforcement mechanism (though the FTC could conceivably investigate a false claim of compliance with the commitments under its authority over unfair or deceptive acts or practices); and (2) the commitments apply only to generative AI models that are more powerful than the current industry models (i.e., more powerful than GPT-4, DALL-E 2, Claude 2, PaLM 2 and Titan).

The commitments include statements on the following principles:

  • Safety: The companies committed to internal and external security testing of AI systems before their release (including testing for misuse, societal risks and national security concerns).  The companies also committed to sharing information regarding safety risks, dangerous capabilities and attempts to circumvent safeguards.
  • Security: The companies agreed to invest in cybersecurity and insider threat safeguards to protect AI training processes from bad actors, including by establishing incentive programs (e.g., bug bounties) to encourage the discovery and reporting of vulnerabilities.
  • Trust: The companies agreed to develop technical processes that label AI-generated audio or visual content and to develop tools that allow others to determine whether a piece of content was created with their systems.  One example of such a label is a watermarked AI-generated photo (note: DALL-E generated images contain a watermark in the bottom right corner; however, such watermarks can be removed relatively easily by users, as illustrated in the sketch following this list). Another commitment requires companies to publish reports for all new “significant model public releases within scope” that include safety evaluations and limitations on performance and intended uses (note: there appears to be no requirement to publish information about a model’s training data). Finally, under the commitments, the companies would prioritize research on societal risks posed by AI and support research on solving the consequential challenges of our age, such as climate change, cancer detection and cybersecurity challenges.
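
To make the fragility of visible watermarks concrete, the short sketch below (in Python, using the Pillow imaging library) shows how a corner watermark can be removed with a single crop. The file names and the 16-pixel strip height are hypothetical placeholders, not a description of any provider’s actual watermark format.

```python
# A minimal sketch of why a visible corner watermark is a weak provenance
# signal: cropping the image simply discards it. File names and the strip
# height are hypothetical placeholders.
from PIL import Image

img = Image.open("dalle_output.png")  # hypothetical AI-generated image
w, h = img.size

# If the visible watermark occupies a strip along the bottom edge,
# cropping that strip off removes the label entirely.
cropped = img.crop((0, 0, w, h - 16))  # drop the bottom 16 pixels
cropped.save("dalle_output_no_watermark.png")
```

Because visible marks are so easy to strip, more durable provenance approaches (e.g., cryptographically signed metadata) are the likely direction for the detection tools the commitments contemplate.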

ChatGPT Custom Instructions

ChatGPT introduced a beta feature that allows users to save custom instructions or preferences to steer ChatGPT output or help improve the performance of ChatGPT plugins.  For example, a software developer could save a preference for a particular coding language so that, by default, the platform produces output in that language.  Regarding privacy concerns, OpenAI states that it may use custom instructions to improve model performance, but that users can disable this sharing through their data control settings.
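
Custom instructions are a feature of the ChatGPT interface rather than of the underlying API, but a rough analogue is a standing system message prepended to every request. The sketch below uses the openai Python library’s pre-1.0 interface; the model name and instruction text are illustrative assumptions.

```python
# A rough API analogue of ChatGPT custom instructions: a standing system
# message sent with every request. Uses the openai Python library
# (pre-1.0 interface); model name and instruction text are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

CUSTOM_INSTRUCTIONS = (
    "I am a software developer. Unless I say otherwise, "
    "answer coding questions with examples in Python."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": CUSTOM_INSTRUCTIONS},  # the saved "preference"
        {"role": "user", "content": "Show me how to parse a CSV file."},
    ],
)
print(response["choices"][0]["message"]["content"])
```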

Transparency Note for Azure OpenAI Service

Microsoft released a transparency note for its Azure OpenAI Service that describes the basics of the Azure OpenAI models for businesses implementing enterprise-grade functions. The note lists some intended uses (e.g., chat and conversation creation, writing assistance, code generation and sentiment analysis) and some considerations for customers when choosing a use case (e.g., which uses might be ill-suited for the service, such as high-stakes scenarios or open-ended, unconstrained content generation).  The note also lists best practices for improving model outputs, such as human review of outputs and measurement of model quality and fairness, and offers some reminders about the technical limitations of the system.  All in all, this is the type of information that should inform business decisions on AI implementation and help identify which potential risks should be prioritized during contract negotiations with the provider.
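
For context, the sketch below shows what one of the note’s intended uses (sentiment analysis) might look like against an Azure OpenAI deployment, again using the openai Python library’s pre-1.0 Azure configuration. The endpoint, deployment name and API version are hypothetical placeholders, and, consistent with the note’s best practices, output like this would warrant human review before informing any decision.

```python
# A hypothetical sentiment-analysis call against an Azure OpenAI deployment,
# using the openai Python library's pre-1.0 Azure configuration. The
# endpoint, deployment name, and API version are placeholders.
import openai

openai.api_type = "azure"
openai.api_base = "https://example-resource.openai.azure.com/"  # placeholder
openai.api_version = "2023-05-15"  # placeholder API version
openai.api_key = "YOUR_AZURE_KEY"  # placeholder

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo-deployment",  # placeholder deployment name
    messages=[
        {"role": "system",
         "content": "Classify the sentiment of the user's text as "
                    "positive, negative, or neutral."},
        {"role": "user", "content": "The onboarding process was painless."},
    ],
)
# Per the transparency note's best practices, a human should review
# model output before it informs any consequential decision.
print(response["choices"][0]["message"]["content"])
```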

* * *

The White House’s interest in generative AI and the flurry of activity by major generative AI platform providers are not surprising given the extensive attention focused on this technology. While there has been some discussion of the benefits afforded by this technology, other activity in the area, such as pending litigation, ongoing labor actions, congressional discussions and international developments, has brought extensive attention to the risks the technology presents.

Despite recent testimony before Congress, it seems plausible that some of these developments by the platform providers are intended to forestall legislation on the topic.  Whether this divided Congress can come together to enact meaningful regulation in this area remains to be seen.


Jeffrey Neuburger is co-head of Proskauer’s Technology, Media & Telecommunications Group, head of the Firm’s Blockchain Group and a member of the Firm’s Privacy & Cybersecurity Group.

Jeff’s practice focuses on technology, media and intellectual property-related transactions, counseling and dispute resolution. That expertise, combined with his professional experience at General Electric and academic experience in computer science, makes him a leader in the field.

As one of the architects of the technology law discipline, Jeff continues to lead on a range of business-critical transactions involving the use of emerging technology and distribution methods. For example, Jeff has become one of the foremost private practice lawyers in the country for the implementation of blockchain-based technology solutions, helping clients in a wide variety of industries capture the business opportunities presented by the rapid evolution of blockchain. He is a member of the New York State Bar Association’s Task Force on Emerging Digital Finance and Currency.

Jeff counsels on a variety of e-commerce, social media and advertising matters; represents many organizations in large infrastructure-related projects, such as outsourcing, technology acquisitions, cloud computing initiatives and related services agreements; advises on the implementation of biometric technology; and represents clients on a wide range of data aggregation, privacy and data security matters. In addition, Jeff assists clients on a wide range of issues related to intellectual property and publishing matters in the context of both technology-based applications and traditional media.