Who owns art generated by a computer?
The emergence of artificial intelligence systems that can generate computer code, artwork, essays, medical diagnoses, and more in response to simple text prompts is reigniting an unresolved legal debate: Who owns the rights to computer-generated creations?
Careers and professional futures will turn on the answer.
US law does not expressly preclude nonhuman authorship. But courts and copyright registration offices have refused to accord intellectual property rights to nonhuman authors or creators. Their hesitance reflects a reluctance to create a new class of rights-holder without a firm legal basis. Should that change? Should we go ahead and provide that legal basis?
Before answering, let’s consider the other parties, starting with the user. Say you tell your favorite text-to-image generator to gin up a cat made of carrots. You’re delighted with the result. Are you the artist? Sure, you might think, the AI was just a tool — like a paintbrush or a very fancy version of Photoshop. Artists always use tools, right? Problem is, you haven’t done much work. All you really contributed was an idea — and copyright doesn’t protect ideas, only expressions of ideas. Copyright might cover what the machine pushed out but not what you dropped in. You just placed an order.
In fact, you did so little work that you don’t need the incentive of exclusive rights to motivate your next request to the AI. Here, copyright would serve no socially useful purpose. Quite the opposite — it would let you prevent others from creating their own carroty kitties with the same few words, merely because you ordered first.
Suppose, however, that the AI’s first effort was abysmal. You had to go back and forth with the system, progressively refining your prompts, cleverly leading the image generator down the path to a much more original vision. Now you have done some real work and contributed much more meaningfully to the final product. You have a stronger claim at least to co-authorship of the output image. You might see yourself as akin to the artist Sol LeWitt, who issued detailed instructions to assistants who actually produced the finished works. But LeWitt never sought to stop or limit others from doing the same thing — nor could he have. Copyright might cover the precise words of his instructions, as it would a recipe in a cookbook, but it prevents no one from reproducing the work itself (or baking the cake). Only copying the words themselves is prohibited. Sorry, user: Regardless of your efforts, you don’t have a good claim to copyright.
What about the creator of the text-to-image AI program? Certainly copyright covers the literal code. But even here protection is limited. It doesn’t cover the underlying methodology, the algorithms, the ideas — only the expression of these concepts in code. Sell knockoff copies of Microsoft Office, and copyright law will crush you like a bug. Write your own productivity software with similar functionality, and the result is LibreOffice, which Microsoft is powerless to stop with copyright.
Copyright cannot possibly protect the images that the AI creates merely because it protects the literal code. The connection between the originator of the AI and the work the AI generates is far too tenuous. It would be like an art teacher claiming ownership of their students’ subsequent work, forever. The teacher is free to turn students away and charge high tuition, but when class is over, the students’ work is their own.
Can a machine be creative? Asking the question philosophically sends us running in circles, since you can argue forcefully, and somewhat pointlessly, either way. More practically, recognizing copyright in machine-produced works would mean rewarding the owner of the AI. And traditional principles of intellectual-property law already provide ample motivation to the originators of AI systems. They can limit access and charge for each use if they so choose; why allow them to grab pieces of downstream sales as well? And do we really want to dive into messy allocation questions when both user and AI make identifiable contributions to the final work?
Finally, enshrining machine creativity in a legal framework designed for human activity might be a step beyond what the public is ready for. ChatGPT and its ilk may have already administered a troubling dose of humility to writers, artists, and other professionals who see their efforts as expressions of high human purpose and capacity. Searching out sources, synthesizing information, constructing coherent arguments — does it matter whether an AI is in some epistemic sense “creative” in doing these things if the results are indistinguishable from human output? Purely in terms of utility, maybe not. But erasing legal distinctions between human and what is, at bottom, mechanical activity exalts the machine less than it denigrates the human.
There is more to a work of art than paint on a canvas or pixels on a screen. At least for art, intellectual property rights implicitly reward the fashioning of intent, experience, emotion, and message into a perceivable expression. Whether the results can be spoofed by an AI is irrelevant to the incentives we choose to provide in intellectual property law. If we attach social value to specifically human creativity, we should confine legal protection accordingly.
Of course, whether human artists can keep the marketplace convinced of their works’ monetary value is a different matter. But maybe the best answer to who owns the rights in machine-produced art is: no one.