AI Ethics · 21 min read

Copyright and AI: Can You Own AI-Generated Content? (2026)

Can you copyright AI art? Learn what the law actually says about AI-generated content ownership, major court cases, and practical steps to protect your work.

ai copyright, ai art legal, ai content ownership, intellectual property, ai law

Last week, I generated an image using Midjourney that turned out exactly how I pictured it. Clean lines, perfect composition, exactly the style I wanted. I was thrilled—until a thought hit me: Do I actually own this thing?

That question sent me down a rabbit hole of legal documents, court rulings, and Copyright Office reports. And honestly? The answer isn’t as simple as I hoped it would be. In fact, some recent court cases have made things even more complicated.

If you’ve ever created something with AI—whether it’s an image, a blog post, or marketing copy—you’ve probably wondered the same thing. Can anyone just take it and use it? Do you have any legal protection? What happens if you want to sell it commercially?

Here’s the short version: You cannot copyright purely AI-generated content in the United States—the U.S. Copyright Office requires human authorship as a “bedrock requirement.” However, if you provide substantial creative input that shapes the final work, those human-authored elements may be eligible for protection.

But the details matter a lot, and the stakes are getting higher as more creators rely on AI tools for their livelihood. Let’s break down exactly what the law says, what the major court cases mean for you, and what practical steps you can take to protect your work.

Let me give you the direct answer first, then we’ll unpack the nuances.

If an AI generates something entirely on its own with minimal human input, that work cannot be copyrighted in the United States. It falls into the public domain, meaning anyone can copy it, use it, or sell it without your permission.

But—and this is a big but—if you provide what the Copyright Office calls “sufficient creative input,” things change.

The U.S. Copyright Office released Part 2 of its comprehensive report on AI and copyright in January 2025, and they were crystal clear: human authorship isn’t just preferred, it’s required. They called it a “bedrock requirement” that has been consistent since copyright law’s inception.

Here’s how they draw the line:

  • AI-generated (no protection): You type a prompt, the AI generates an image, you save it. That’s it. No copyright.
  • AI-assisted (potentially protected): You generate an image, then you edit it, combine it with other elements, modify specific parts, or use it as a starting point for substantial original work. The human-authored portions may be copyrightable.

The key court case that cemented this approach was Thaler v. Perlmutter in 2023, where Dr. Stephen Thaler tried to register copyright for an image created entirely by an AI system he calls the “Creativity Machine.” The court said no—copyright law protects only human authorship. That ruling was affirmed by the D.C. Circuit in 2025, and as of October 2025, there’s a petition pending before the U.S. Supreme Court.

When I first learned all this, my reaction was honestly frustration. I spent time crafting that prompt! I had a vision in my head! But legally, prompting alone typically isn’t considered “sufficient creative input.” The AI, not you, is executing the creative expression—the actual brushstrokes, the color choices, the composition decisions.

It took me a while to accept that distinction, but it makes sense when you think about it from the law’s perspective. Copyright protects the expression of an idea, not the idea itself. When you prompt an AI, you’re providing the idea. The AI provides the expression.

Let’s dig deeper into what the law actually says right now, because the nuances matter more than ever.

The Copyright Office has been remarkably consistent on this point. Their 2025 report stated that “traditional elements of authorship”—the actual creative expression like brushstrokes, word choices, or composition decisions—must be executed by a human.

Here’s what qualifies as human authorship in their view:

  • Selecting and arranging AI-generated elements in a creative way
  • Substantially editing or modifying AI outputs
  • Using AI-generated content as one component of a larger human-created work
  • Making creative choices that materially influence the final expression

Here’s what doesn’t qualify:

  • Simply typing a prompt, even a detailed one
  • Running multiple generations until you get something you like
  • Making minor tweaks or color corrections
  • Upscaling or enhancing AI outputs without substantial modification

They’ve also implemented disclosure requirements that are worth paying attention to. If you’re registering a work that contains AI-generated material, you’re now required to disclose the extent of AI involvement. This isn’t optional—failure to disclose can result in your registration being canceled.

I actually find this disclosure requirement reasonable. It creates transparency about what’s human-authored and what isn’t, which helps everyone—courts, other creators, and the public—understand what they’re looking at.

Key Court Cases You Should Know

Beyond Thaler v. Perlmutter, several cases are shaping how courts interpret AI and copyright.

The Supreme Court petition (pending as of October 2025) could potentially change everything. If the Court takes the case and rules differently, we’d see a major shift in how AI authorship is treated. But most legal experts I’ve read don’t expect them to overturn the human authorship requirement—it’s too fundamental to copyright law.

The D.C. Circuit’s 2025 affirmation made clear that this isn’t just a quirk of the Copyright Office’s policy. Federal courts agree: copyright exists to incentivize human creativity. Extending it to AI outputs would change the very purpose of the law.

Here’s a quote from the ruling that stuck with me: copyright has never stretched “so far as to protect works generated by new forms of technology operating absent any guiding human hand.” That phrase—“guiding human hand”—is key to understanding where the line is drawn.

The Gray Area: AI-Assisted Work

Here’s where things get genuinely tricky, and I’ll admit I don’t have a perfect answer because neither do the courts.

The standard is “sufficient creative input,” but what exactly does that mean? The Copyright Office intentionally left this vague, saying it depends on the circumstances of each case.

Some scenarios that likely create copyrightable human authorship:

  • You generate 20 AI images, select and arrange them into a cohesive collage with original framing and design
  • You use an AI to generate a rough concept, then extensively paint over it, modify elements, and create a final piece that’s substantially different
  • You write original text and use AI to suggest edits, but you make the final decisions on every sentence
  • You create an AI image, then hand-paint additional elements and integrate it into a larger artwork

Some scenarios that probably don’t:

  • You generate hundreds of images and pick your favorite
  • You make minor color corrections or crop an AI image
  • You upscale or slightly enhance an AI-generated output
  • You adjust the contrast or saturation without changing the composition

The frustrating truth? There’s no bright line. And until more cases work through the courts, we’re all operating in uncertainty. I’ve talked to artists who are keeping detailed records of their process just in case they ever need to prove their level of involvement.

The Court Cases Changing Everything

While we debate who owns AI outputs, some of the biggest lawsuits are actually about AI inputs—specifically, what happens when companies train AI models on copyrighted work.

Disney & Universal vs. Midjourney (June 2025)

This is the blockbuster case that everyone in the AI art world is watching. In June 2025, Disney and Universal filed a landmark lawsuit against Midjourney, alleging both direct and secondary copyright infringement.

Their claims are serious: they allege Midjourney trained its model on their copyrighted works without consent and that the system can generate images that clearly resemble their characters—think Star Wars, Shrek, Spider-Man. They submitted examples showing Midjourney outputs that were strikingly similar to their copyrighted characters.

Midjourney’s defense? Fair use. They argue that training a model by extracting statistical information from copyrighted works is “transformative”—the model doesn’t store copies of the images, it learns patterns.

Interestingly, Midjourney also claimed the plaintiffs themselves use generative AI tools—a kind of “you do it too” defense that probably won’t carry much legal weight but makes for good headlines.

What does this mean for you? If Midjourney loses, we could see significant changes to how image generators operate. They might need licensing deals with content creators, which could mean higher prices or more restricted outputs. But if they win on fair use, it validates the current model.

I’m watching this one closely because the outcome could reshape the entire AI art industry.

Getty Images vs. Stability AI

Getty Images sued Stability AI in both the UK and US, alleging the company used over 12 million Getty images to train Stable Diffusion without permission. This case gets at a fundamental question: can you train an AI on copyrighted material without a license?

The UK case reached a significant ruling in November 2025: the High Court largely rejected Getty’s copyright infringement claims. The court found that Stable Diffusion’s model weights—the mathematical representations the AI learned—were not “infringing copies” of the original images.

This is a big deal. It suggests that in the UK, at least, training an AI on copyrighted images might not constitute copyright infringement, even without permission.

But Getty had already dropped direct copyright infringement claims in the UK case earlier in 2025, focusing instead on trademark infringement (specifically, watermarks appearing in some Stable Diffusion outputs). The US case continues with similar claims.

The trademark angle is interesting—even if training on images is legal, reproducing recognizable watermarks in outputs might not be.

Bartz v. Anthropic

This case took a different turn but established important principles. Anthropic (the company behind Claude) was sued for training on books, and in 2025 the court ruled that training on lawfully acquired books was “quintessentially transformative” and thus fair use.

But here’s the catch: using pirated copies of books was explicitly not fair use. The distinction between legal and illegal sources for training data matters significantly.

The case eventually settled for a reported $1.5 billion—a massive number that shows how much is at stake in these disputes.

What Artists Have Won

It’s not all corporate vs. corporate. Individual artists have also had some wins that matter for the creative community.

Courts have ruled against AI tools that secretly used artist work for training without disclosure, requiring companies to maintain public dataset records and pay damages.

This suggests a shift away from “scrape and go” practices toward more transparency about training data sources. If you’re an artist concerned about your work being used to train AI, public dataset records and opt-out mechanisms might become more common.

Some artists I know have started including explicit terms in their online portfolios about AI training, though how enforceable those are remains uncertain.

Who Owns What When You Use AI Tools?

Okay, but what about the terms of service? When you use AI image generators like Midjourney, DALL-E, and Stable Diffusion, what do the platforms say you own?

OpenAI (DALL-E, ChatGPT)

OpenAI’s terms generally grant you ownership of outputs generated for your use. For paid users, they assign rights to you, subject to their usage policies.

But here’s the critical distinction: OpenAI saying “you own it” doesn’t mean copyright law protects it. You own it in the sense that OpenAI isn’t claiming ownership—but if it’s purely AI-generated with minimal human input, copyright law still won’t protect it from others copying it.

Think of it this way: OpenAI is saying “we have no claim on this”—but that’s different from “the law will protect your exclusive rights.”

Midjourney

Midjourney’s terms give paid subscribers commercial rights to use, sell, and reproduce images generated with their service. Free users have more limited rights.

Again, this is about your relationship with Midjourney, not about copyright law itself. Midjourney won’t come after you for selling generated images—but that doesn’t mean a competitor can’t freely copy them.

I actually think Midjourney does a decent job of being clear about this in their terms, though I wish they were even more explicit about the distinction between platform rights and legal protection.

Anthropic (Claude)

Anthropic takes a similar approach, granting users rights to outputs. For text-based AI, the analysis is similar: if you’re generating text that you then substantially edit and modify, your human-authored contributions may be protectable. Pure AI output without modification? Probably not.

The Key Distinction

I want to be really clear about this because I’ve seen a lot of confusion online: license from the platform ≠ copyright protection under the law.

When Midjourney says you have commercial rights, they’re saying they won’t claim your outputs or restrict your commercial use. That’s a contractual agreement between you and them.

Copyright law is a different question entirely. It’s asking whether the government will enforce your exclusive rights against third parties who copy your work. For purely AI-generated content, the answer under current U.S. law is no.

This confused me initially, and I think it confuses a lot of people. The platforms understandably focus on what they control—their relationship with you. But they can’t grant you rights under copyright law that don’t exist.

The legal situation gets even more complex internationally, which matters if you’re working with clients or audiences in different countries.

United Kingdom

The UK has an interesting provision in its Copyright, Designs and Patents Act 1988—Section 9(3) assigns authorship of “computer-generated works” to the person who made the “necessary arrangements” for their creation.

This potentially gives AI users more protection than in the US, though the provision was written before modern generative AI and is under review. The Getty v. Stability AI ruling suggests UK courts are taking a more permissive approach to AI training, but the output ownership question remains less clear.

Some legal experts think the “necessary arrangements” language could include prompting, which would be a very different outcome than the US approach.

European Union

The EU AI Act, with various provisions becoming effective throughout 2025, primarily focuses on AI safety and transparency rather than copyright. It requires transparency about AI-generated content and has some copyright-adjacent rules for “General-Purpose AI models.”

But EU copyright law generally still requires human authorship, similar to the US. Future amendments might add more explicit rules for generative AI, potentially including licensing requirements for training data.

The EU tends to be more protective of individual rights, so I wouldn’t be surprised to see stronger regulations there eventually.

China

China’s approach is notably more permissive. In November 2023, the Beijing Internet Court recognized copyright for an AI-generated image where there was “demonstrable originality and human intellectual effort.”

This is basically the opposite of the US approach. Rather than requiring substantial human contribution to the expression itself, China seems to focus more on whether any human effort went into the process.

If you’re working in or for Chinese markets, this is worth noting—you might have protections there that don’t exist elsewhere.

The Practical Implication

If you’re creating content that might be used or distributed internationally, you’re dealing with a patchwork of different legal frameworks. The World Intellectual Property Organization (WIPO) maintains resources on how different countries approach AI and intellectual property. Something that has no copyright protection in the US might have some protection in the UK or China.

For most creators, this means: don’t assume your US-based understanding of copyright applies everywhere. If you’re doing significant international business, getting professional legal advice specific to those jurisdictions is probably worth it.

What This Actually Means for Your Work

Let’s get practical. How does all this affect what you’re actually doing day-to-day?

If You’re Selling AI Art

The legal landscape is… complicated but probably okay for basic commercial use.

You can generally sell AI-generated art commercially. The platforms allow it, and there’s no law against selling public domain content. What you can’t do is stop someone else from taking that same image and selling it themselves.

Some practical considerations:

  • Price your work knowing that exclusivity isn’t enforceable
  • Consider adding substantial human modification to strengthen your rights
  • Document your creative process in case you ever need to demonstrate human involvement
  • Be transparent with buyers about AI involvement to avoid disputes later
  • Build your value proposition around curation, selection, and presentation rather than exclusivity

If You’re Using AI for Business Content

Marketing materials, blog posts, social media graphics—businesses are using AI for all of this. What should you know?

The biggest risk is that purely AI-generated content isn’t exclusive. A competitor could theoretically use the same content if they somehow obtained it. Is this likely? For most businesses, probably not. But it’s something to consider for high-value content.

Practical steps:

  • Have humans review, edit, and modify AI-generated content
  • Document your editorial process
  • Mix AI-generated elements with original human-created content
  • Keep records of prompts, iterations, and modifications
  • For high-value content, consider having legal review

If You’re an Artist or Designer

This is where it gets personal. Many artists are grappling with how AI fits into their creative practice.

Using AI as a starting point for further human work is probably fine from a copyright perspective—your human modifications can be protected even if the AI base isn’t.

My honest take? The gray area is genuinely frustrating. You might create something that’s 80% AI and 20% your modifications, and there’s no clear answer on whether that 20% creates copyright protection for the whole work or just that portion.

Some artists I’ve seen discussing the broader implications of AI on creative work are choosing to document everything—screenshots of their process, dated files, notes on their creative decisions—even if they’re not sure it will matter legally.

I think that’s honestly the smartest approach right now. Documentation costs very little but could be valuable later.

Practical Steps to Strengthen Your Rights

Given all this uncertainty, what can you actually do to protect your work?

Document Your Creative Process

This is the single most practical thing you can do. Even if you’re not sure what level of human involvement creates copyright, having documentation helps.

  • Take screenshots of your prompt drafts and iterations
  • Save multiple versions showing your modifications
  • Write notes explaining your creative decisions
  • Date-stamp everything
  • Keep records of time spent on human modifications

If you ever need to prove substantial human involvement, this documentation could be critical.
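If you want to automate part of that record-keeping, here’s a minimal sketch of the idea: hash every file in a folder of drafts and iterations, and timestamp the result so you have a dated record of what existed when. The folder name, manifest filename, and `note` field are all hypothetical—adapt them to your own workflow. This is an illustration of the documentation habit, not legal advice; a manifest like this is supporting evidence, not proof of authorship on its own.

```python
"""Sketch: build a dated, hashed manifest of creative-process files."""
import hashlib
from datetime import datetime, timezone
from pathlib import Path


def build_manifest(process_dir: str, note: str = "") -> dict:
    """Hash every file under process_dir and timestamp the snapshot."""
    entries = []
    for path in sorted(Path(process_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            entries.append({
                "file": path.name,
                "sha256": digest,
                # File's last-modified time, recorded in UTC
                "modified": datetime.fromtimestamp(
                    path.stat().st_mtime, tz=timezone.utc
                ).isoformat(),
            })
    return {
        "generated_at": datetime.now(tz=timezone.utc).isoformat(),
        "note": note,  # e.g. "v3: painted over AI base, replaced background"
        "files": entries,
    }
```

You could run this after each working session and save the dict with `json.dumps(...)`, so each manifest captures one dated snapshot of prompts, drafts, and edited versions.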

Add Substantial Human Input

The more you modify AI output, the stronger your potential copyright claim. Consider:

  • Using AI outputs as rough starting points, then substantially reworking them
  • Combining multiple AI elements with original human-created elements
  • Making meaningful creative decisions at every stage
  • Treating AI as one tool in your process, not the entire process
  • Keeping AI at the conceptual stage while you execute the final expression

Know When to Get Legal Advice

For most casual creators, you probably don’t need a lawyer. But consider getting professional advice if:

  • You’re creating high-value commercial work
  • You need to defend against someone copying your work
  • You’re entering licensing agreements
  • You’re building a business around AI-generated content
  • You have concerns about potential infringement of others’ copyrights

Legal advice isn’t cheap, but neither is losing rights to valuable intellectual property.

Consider Alternative Protections

Copyright isn’t the only form of intellectual property protection. Depending on your situation, you might consider:

  • Trade secrets: Your specific prompts, workflows, and techniques might be protectable as trade secrets
  • Trademark: Brand elements, logos, and distinctive marks can be trademarked (though AI-generated logos face similar authorship questions)
  • Contracts: You can require clients and buyers to agree to certain usage restrictions contractually
  • Practical barriers: Simply not publishing your work in high resolution can deter some copying

For tools to help identify AI-generated content, watermarking and authentication solutions are also evolving.

What’s Coming Next

The legal landscape is actively evolving. Here’s what’s on the horizon:

Key cases expected in summer 2026: We’re waiting on major decisions in the Disney/Midjourney case and continuing developments in Getty’s US lawsuit. These could significantly clarify fair use and training data questions.

USCO guidance continues: The Copyright Office has indicated they’ll release more guidance as issues arise. Part 3 of their report, released in May 2025, addressed fair use in AI training, and more is likely coming.

California’s SB 942 (effective January 1, 2026): California enacted legislation requiring AI developers to provide tools for detecting AI-generated content and ensure transparency. This doesn’t directly address copyright, but it signals regulatory momentum.

Potential Congressional action: Various bills have been introduced related to AI and intellectual property, though none have passed. If Congress acts, they could potentially create new frameworks for AI copyright.

Industry responses: Major AI companies are increasingly entering licensing deals with content creators and publishers. This might become the norm—companies paying for training data rather than relying on fair use arguments.

One thing I’ve learned from following this closely: even experts aren’t sure how this will shake out. The technology is moving faster than the law, and courts are making decisions about technology they’re still trying to understand. I’m not going to pretend I know exactly where this ends up.

Frequently Asked Questions

Can I sell AI-generated art legally?

Yes, you can sell AI-generated art. There’s no law against selling public domain content, and the major AI platforms grant commercial rights to users. However, you cannot prevent others from copying or reselling AI-generated content if it lacks copyright protection.

What if my AI image looks like someone else’s copyrighted work?

This is a real risk. If an AI generates something that’s substantially similar to copyrighted work (like a recognizable character), using that output could constitute copyright infringement—even if you didn’t intend it. The ongoing Disney/Midjourney lawsuit addresses exactly this concern.

Do I need to disclose AI use when selling content?

Legally, there’s no universal requirement to disclose AI use in sales (though some platforms and contracts may require it). However, the Copyright Office requires disclosure when registering works that contain AI-generated material. Ethically and professionally, transparency is increasingly expected.

Can I trademark AI-generated logos?

Trademark law has different requirements than copyright—it focuses on distinctiveness and use in commerce rather than authorship. An AI-generated logo might be registrable as a trademark if it meets distinctiveness requirements, but you might not have copyright protection for the design itself.

What if I significantly edit AI output?

This is exactly the scenario where you might gain copyright protection. If your editing constitutes “sufficient creative input,” the human-authored modifications may be copyrightable. The more substantial and creative your edits, the stronger your claim.

Is AI content plagiarism?

Plagiarism is an ethical/academic concept, not a legal one. Whether using AI content constitutes plagiarism depends on the context and expectations—many schools and publications have policies requiring disclosure of AI assistance. Copyright law doesn’t address plagiarism directly.

Here’s where I land after all this research:

The law is clear on the extremes: purely AI-generated content with minimal human input can’t be copyrighted. Substantially human-modified work that uses AI as one tool probably can be protected. The middle ground is genuinely uncertain.

What I’d tell a friend: Don’t let copyright fear paralyze you from using AI tools. For most uses—selling prints, marketing content, social media graphics—the practical risks are relatively low. Yes, someone could theoretically copy your purely AI-generated work, but in practice, most people won’t.

But if you’re creating high-value content or building a business around AI-generated work, treat documentation as your insurance policy. Keep records, add meaningful human input, and understand that the law is still catching up to the technology.

The cases being decided in 2025 and 2026 will shape this landscape for years to come. Disney v. Midjourney alone could rewrite the rules. Stay informed, because what’s true today might not be true in a year.

And maybe that’s the most honest thing I can tell you: we’re all learning this together, in real time, as the technology outpaces the law. That’s uncomfortable, but it’s also an accurate reflection of where we are.

Understanding copyright is just one piece of the AI ethics puzzle. If you’re interested in AI’s other limitations and biases, those are worth exploring too. The technology is powerful, but it’s not perfect—and neither are the laws governing it.


Vibe Coder

AI Engineer & Technical Writer
5+ years experience

AI Engineer with 5+ years of experience building production AI systems. Specialized in AI agents, LLMs, and developer tools. Previously built AI solutions processing millions of requests daily. Passionate about making AI accessible to every developer.

AI Agents, LLMs, Prompt Engineering, Python, TypeScript