Two Courts, Two Verdicts
The contradictory copyright landscape that will define AI art for a generation
“The law has not yet decided what AI training is. And until it does, everyone is guessing.”
On November 4, 2025, the UK High Court issued its ruling in Getty Images v. Stability AI. On November 11, a Munich court issued its ruling in a case involving AI training on copyrighted lyrics. The two decisions, separated by seven days and a body of water, reached diametrically opposite conclusions about the same fundamental question: does training an AI model on copyrighted content constitute infringement?
The UK said no. Germany said yes. And the rest of the world said: well, now what?
The Getty Images case had been the great hope of the rights-holder community. Filed in early 2023, it accused Stability AI of infringing the copyright in over twelve million photographs by using them to train Stable Diffusion. The argument was intuitive and powerful: Stability scraped Getty’s library, fed those images into a model, and now sells a product that can generate images in the style of—and occasionally bearing traces of—Getty’s copyrighted work. That looks like theft. It feels like theft.
The court largely disagreed. Much of the training had taken place outside the UK, beyond the court’s reach; Getty was unable to secure sufficient evidence about Stability’s specific training process; and the technical arguments about whether a trained model “contains” its training data in a legally meaningful sense proved slippery. The ruling was a major blow to rights holders who had hoped to establish a bright-line rule: use our content to train, pay us a license.
Seven days later, a Munich court looked at essentially the same question—can an AI model trained on copyrighted content be considered to have “embodied” that content?—and answered yes. The model in question had been trained on copyrighted lyrics and could reproduce them. The court found that because the model could output the copyrighted content, the training process constituted a use that required licensing.
The technical distinction matters. Stable Diffusion generates new images that are stylistically influenced by but not identical to its training data. The Munich model reproduced lyrics verbatim. One could argue, reasonably, that these are different situations warranting different legal treatments. But the underlying principle—does training equal use?—remains unresolved, and the two rulings pull in opposite directions.
Meanwhile, the case pipeline is staggering. Over fifty lawsuits between IP owners and AI developers are pending in U.S. federal courts. Disney and Universal have discovery deadlines against Midjourney set for August 2026. The music industry settled one case—UMG v. Udio—with a licensing agreement, suggesting that at least some rights holders see negotiation as more productive than litigation.
There’s an additional wrinkle that deserves more attention than it gets. In the United States, the Copyright Office has held that purely AI-generated artwork cannot be copyrighted. The Colorado State Fair controversy—where an AI-generated image won an art competition in 2022—came full circle when the Office denied registration. The logic: copyright requires human authorship, and a prompt is not authorship.
This creates a remarkable paradox. Using copyrighted works to train an AI may or may not be infringement. But the outputs of that AI—no matter how commercially valuable—receive no copyright protection. The person who prompts a million-dollar advertising image owns nothing. The person whose copyrighted work helped train the model that generated it may have no recourse. Nobody’s rights are clear, and everybody is exposed.
The emerging licensing frameworks suggest where this might land. Shutterstock established a Contributor Fund to compensate contributors whose work was used in AI training. HarperCollins negotiated deals worth $2,500 to $5,000 per book. The music industry is pushing for micropayments triggered each time an AI system uses a work. These are pragmatic solutions, but they presume a legal framework that doesn’t yet exist.
For the AI image generation industry, the uncertainty is both shield and sword. It’s a shield because no clear precedent means no clear liability—companies can continue training on web-scraped data without definitive legal risk. It’s a sword because the uncertainty itself chills investment, partnerships, and enterprise adoption. No Fortune 500 company wants to build a marketing pipeline on technology whose legal foundation could shift overnight.
For artists, the situation is bleak in a specific way. The tools are trained on their work. The outputs compete with their work. And the legal system hasn’t decided whether any of this entitles them to compensation, protection, or even acknowledgment. The few who have the resources to litigate face years of uncertainty and the real possibility of adverse precedent.
What would clarity look like? Probably something like the music industry’s approach: compulsory licensing with standardized rates. Train on whatever you want, but pay a per-work or per-parameter fee into a pool distributed to rights holders. It’s imperfect, bureaucratic, and slow. It’s also the model that allowed radio, streaming, and sampling to coexist with creator compensation.
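To make the arithmetic concrete, here is a minimal sketch of such a pooled scheme, assuming a flat per-work fee paid by developers and a pro-rata split by each rights holder’s share of registered works. Every number, name, and rate in it is a hypothetical illustration, not anything drawn from a ruling, statute, or existing license.

```python
# Toy model of a compulsory-licensing pool. All fees, names, and counts
# below are hypothetical assumptions for illustration only.

PER_WORK_FEE = 0.01  # hypothetical flat fee (USD) per work trained on


def collect_pool(training_counts: dict[str, int]) -> float:
    """Total fees owed: each developer pays a flat fee per work it trained on."""
    return sum(count * PER_WORK_FEE for count in training_counts.values())


def distribute_pool(pool: float, works_by_holder: dict[str, int]) -> dict[str, float]:
    """Split the pool pro rata by each rights holder's share of registered works."""
    total_works = sum(works_by_holder.values())
    return {holder: pool * n / total_works for holder, n in works_by_holder.items()}


if __name__ == "__main__":
    # Two hypothetical developers report how many registered works they trained on.
    pool = collect_pool({"model_dev_a": 12_000_000, "model_dev_b": 3_000_000})
    # Three hypothetical rights holders contributed works to the registry.
    payouts = distribute_pool(
        pool, {"agency": 10_000_000, "label": 4_000_000, "indie": 1_000_000}
    )
    print(f"pool: ${pool:,.2f}")
    for holder, amount in payouts.items():
        print(f"{holder}: ${amount:,.2f}")
```

The design choice worth noticing is that nothing here requires tracing any single output back to any single source; like radio royalties, the scheme trades per-use precision for administrability.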
Until then, we live in the gap between two courtrooms—one in London, one in Munich—that looked at the same technology and reached opposite conclusions. The gap is not just legal. It is philosophical. Is an AI model a derivative work, or a tool? Is training a form of reading, or a form of copying? Does a model that has ingested twelve million photographs “contain” those photographs, or has it extracted something more abstract—patterns, relationships, aesthetics—that exists independent of any single source?
These are not questions the law was built to answer. They are questions the law is being forced to answer anyway. And the answers, when they come, will determine not just the future of AI art, but the future of copyright itself.