In the rapidly evolving AI space, the last few days of this week brought significant developments perhaps even faster than usual. For example, seven AI companies agreed to voluntary guidelines covering AI safety and security, and ChatGPT rolled out a custom preferences tool to streamline usage. Relatedly, Microsoft issued a transparency note for the Azure OpenAI service. This week also saw announcements of a number of generative AI commercial ventures, which are beyond the scope of this particular post.

Within the rapidly evolving artificial intelligence (“AI”) legal landscape (as explored in Proskauer’s “The Age of AI” Webinar series), there is an expectation that Congress may come together to draft some form of AI-related legislation, particularly given how generative AI (“GenAI”) in the last six months or so has already raised new legal, societal, and ethical questions.

Intellectual property (“IP”) protection – and, in particular, copyright – has been a forefront issue. Given the boom in GenAI, some content owners and creators have lately begun to feel that AI developers have been free riding by training GenAI models on a vast swath of web content (some of it copyrighted) without authorization, license, or reasonable royalty. Regardless of whether certain GenAI tools’ use of web-based training data and the tools’ output to users could be deemed infringement (such legal questions do not have simple answers), it is evident that the rollout of GenAI has already begun to affect the vocations of creative professionals and the value of IP for content owners, as AI-created works (or hybrid works of human/AI creation) are already competing with human-created works in the marketplace. In fact, one of the issues in the Writers Guild of America strike currently affecting Hollywood concerns provisions that would govern the use of AI on projects.

On May 17, 2023, the House of Representatives Subcommittee on Courts, Intellectual Property, and the Internet held a hearing on the interoperability of AI and copyright law. There, most of the testifying witnesses agreed that Congress should consider enacting careful regulation in this area that balances innovation and creators’ rights in the context of copyright. All acknowledged the transformative potential of AI across industries, but the overall view was that AI should be used as a tool for human creativity rather than a replacement for it. In his opening remarks, Subcommittee Chair Darrell Issa stated that one of the purposes of the hearing was to “address properly the concerns surrounding the unauthorized use of copyrighted material, while also recognizing that the potential for generative AI can only be achieved with massive amounts of data, far more than is available outside of copyright.” The Ranking Member of the Subcommittee, Representative Henry Johnson, expressed openness to finding middle-ground solutions that balance IP rights with innovation but voiced one of the quandaries raised by many copyright holders as to GenAI training methods: “I am hard-pressed to understand how a system that rests almost entirely on the works of others, and can be commercialized or used to develop commercial products, owes nothing, not even notice, to the owners of the works it uses to power its system.”

On January 7, 2020, the federal Office of Management and Budget (OMB) released a draft memorandum setting forth guidance to assist federal agencies in developing regulatory and non-regulatory approaches to artificial intelligence (AI). The draft guidance will be available for public comment for sixty days, after which it will be finalized and issued to federal agencies.

According to the draft, the guidance was developed with the intent to reduce barriers to innovation while balancing privacy and security concerns and respect for IP. The proposed guidance features ten principles to guide regulatory approaches to AI applications. In addition, in what may be a boon for private-sector developers of AI infrastructure, the OMB reinforces the objective of making federal data and models generally available to the private sector for non-federal use in developing AI systems.

Initial responses to the proposed guidance have been mixed, and it remains to be seen how the principles in the guidance (when finalized) will be put into practice. Notably, however, those who intend to invest significant resources in AI-based infrastructure should be aware of what may prove to be the emerging blueprint for AI regulation in the near future.