AI developers say it’s not their fault that their machine learning programs produce copyrighted material, even though they are the ones who trained their systems on copyrighted material. Instead, they want users to take legal responsibility for material generated by their systems.
The U.S. Copyright Office is mulling new regulations regarding generative AI and published a request for comment on artificial intelligence and copyright in August. The responses to the request are publicly available on the agency's website.
Among the respondents were companies such as Google, DALL-E developer OpenAI and Microsoft, which argued that only the unlicensed production of copyrighted material violates existing protections. According to them, AI software is like audio or video recording equipment, photocopiers or cameras, all of which can be used to infringe copyrights. Manufacturers of those products are not held accountable when that happens, so why should AI companies be held responsible, the thinking goes.
Microsoft, which has a multi-billion dollar partnership with OpenAI, wrote:
[U]sers must take responsibility for using the tools responsibly, as designed. … To address rightsholders’ concerns, AI developers have taken steps to reduce the risk of AI tools being misused for copyright infringement. Microsoft integrates many such measures and safeguards to limit potentially harmful uses of our AI tools. These measures include meta-prompts and classifiers, controls that add additional instructions to a user prompt to limit harmful or infringing results.
It should be noted that the safeguards Microsoft has supposedly put in place have done little to prevent large-scale trademark and copyright infringement. The Walt Disney Company recently asked the tech giant to prevent users from infringing on its trademarks.
Google, meanwhile, argued:
The possibility that a generative AI system could, through prompt engineering, replicate the contents of its training data raises questions about the appropriate boundary between direct and secondary infringement. When an AI system is induced by a user to produce an infringing output, any resulting liability should rest with the user as the party whose voluntary conduct substantially caused the infringement. … A rule that would hold AI developers directly (and strictly) liable for infringing output that users create would impose crushing liability on AI developers even if they have taken reasonable steps to prevent users’ infringing activity. If that standard had been applied in the past, we would not have legal access to photocopiers, personal audio and video recording equipment, or personal computers – all of which can be used for infringement and for substantial beneficial purposes.
And OpenAI wrote:
When assessing infringement claims regarding outputs, the analysis starts with the user. After all, there is no output without a user prompt, and the nature of the output is directly affected by what is requested.
It’s worth pointing out that all of the above companies have used copyrighted material and trademarks without permission to train their software, and OpenAI is currently being sued by more than a dozen major authors who accuse the company of infringing on their copyrights.
And to muddy the waters even further, despite telling the U.S. government that users should be liable for the output of their systems, many of these companies, including Google, OpenAI, Microsoft and Amazon, are offering to cover their customers’ legal costs in copyright infringement lawsuits.
But ultimately, the companies argue that current copyright law is on their side and that the Copyright Office doesn’t need to change it, at least not right now. They say that if the agency cracks down on developers and changes copyright law, it could hinder the emerging technology. In its letter, OpenAI “urges the Copyright Office to proceed cautiously in calling for new legislative solutions that, in retrospect, may prove premature or misguided as technology rapidly evolves.”
Perhaps surprisingly, the major movie studios are on the side of big tech here, even if they approach it from a different angle. In its submission to the Copyright Office, the Motion Picture Association (MPA) made a distinction between generative AI and the use of artificial intelligence in the film industry, stating: “AI is a tool that supports, but does not replace, human creation of the members’ works.” The MPA also argued against updating the current legislation:
MPA members have a uniquely balanced perspective on the interplay between AI and copyright. The members’ copyrighted content is extremely popular and valuable. Strong copyright protection is the backbone of their industry. At the same time, MPA members have a strong interest in developing creator-driven tools, including AI technologies, to support the creation of world-class content. AI, like other tools, supports and enhances creativity, drawing audiences to the stories and experiences that define the entertainment industry. MPA’s overarching view, based on the current state of play, is that while AI technologies raise a host of new questions, those questions implicate established doctrines and principles of copyright law. At this time, there is no reason to conclude that these existing doctrines and principles will be inadequate to provide courts and the Copyright Office with the tools they need to answer AI-related questions as and when they arise.
While the MPA writes that existing copyright law is sufficient, it has expressed strong objections to the idea that AI companies should be able to freely train their systems on its members’ material. In its letter, the MPA wrote:
MPA currently believes that existing copyright law should be able to answer these questions. A copyright owner who finds infringement should be able to pursue the existing remedies available in §§ 502-505, including monetary damages and injunctive relief. … At this time, there is no reason to believe that copyright owners and companies engaged in training generative AI models and systems cannot enter into voluntary licensing agreements, such that government intervention would be unnecessary.