Using Generative AI in Content Production
Posted about 2 months ago · Active about 2 months ago
partnerhelp.netflixstudios.com · Tech story · High profile
Tone: heated/mixed debate · 80/100
Key topics
- Generative AI
- Content Production
- Copyright Law
Netflix releases guidelines for using Generative AI in content production, sparking debate among commenters about the implications for creative industries and copyright law.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 2h after posting
- Peak period: 84 comments in 0-12h
- Avg / period: 21.4
- Comment distribution: 150 data points (based on 150 loaded comments)
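The summary numbers above hang together arithmetically. A minimal sketch of how such stats could be derived from binned comment counts — the function name and all counts after the first bin are illustrative assumptions, not taken from the page:

```python
# Sketch: derive snapshot activity stats from per-period comment counts.
def activity_stats(comment_counts_per_period):
    total = sum(comment_counts_per_period)          # loaded comments
    peak = max(comment_counts_per_period)           # busiest window
    avg = round(total / len(comment_counts_per_period), 1)  # avg / period
    return total, peak, avg

# 150 comments split across 7 periods, with a hypothetical tail after the
# known 84-comment peak, reproduces the page's figures (150, 84, 21.4).
counts = [84, 30, 15, 10, 6, 3, 2]
print(activity_stats(counts))
```

An average of 21.4 over 150 comments implies the chart used 7 periods (150 / 7 ≈ 21.4).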
Key moments
1. Story posted — Nov 10, 2025 at 2:28 PM EST (about 2 months ago)
2. First comment — Nov 10, 2025 at 4:56 PM EST (2h after posting)
3. Peak activity — 84 comments in 0-12h (hottest window of the conversation)
4. Latest activity — Nov 17, 2025 at 5:23 PM EST (about 2 months ago)
ID: 45879793 · Type: story · Last synced: 11/20/2025, 5:23:56 PM
Later on they do have a note suggesting that the following might be OK if you use judgement and get their approval: "Using GenAI to generate background elements (e.g., signage, posters) that appear on camera"
They do want to save money by cheaply generating content, but it's only cheap if no expensive lawsuits result. Hence the need for clear boundaries and legal review of uses that may be risky from a copyright perspective.
> GenAI is not used to replace or generate new talent performances or union-covered work without consent.
But what word should we coin as buzzword for “Netflix-Muzak”?
And when we're saturated with it all, we'll start buying DVDs (or other future media) again.
How would one ever know that the GenAI output is not influenced by or based on copyrighted content?
If you take a model trained on Getty and ask it for Indiana Jones or Harry Potter, what does it give you? These things are popular enough that it's likely to be present in any large set of training data, either erroneously or because some specific works incorporated them in a way that was licensed or fair use for those particular works even if it isn't in general.
And then when it conjures something like that by description rather than by name, how are you any better off than something trained from random social media? It's not like you get to make unlicensed AI Indiana Jones derivatives just because Getty has a photo of Harrison Ford.
Getting sued occasionally is a cost of doing business in some industries. It’s about risk mitigation rather than risk elimination.
And if you really wanted insurance then why not get it from an actual insurance company?
In particular, in the US, the legal apparatus has been gamified to the point that the expectation is that people will sue whenever their expected value is positive, even if the case is insane on its merits, because a defendant facing enough risk and cost is much more likely to settle as the cheaper option.
And in that world, there is nothing that completely eliminates the risk of being sued in bad faith - but the more things you put in your mitigation basket, the narrower the error bars are on the risk even if the 99.999th percentile is still the same.
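The expected-value reasoning above can be made concrete with a toy model — all numbers and function names below are hypothetical, purely to illustrate why weak-merits suits still get filed and settled:

```python
# Toy model of nuisance-litigation incentives (illustrative numbers only).

def plaintiff_ev(p_win, damages, filing_cost):
    """Plaintiff files when expected recovery exceeds their cost."""
    return p_win * damages - filing_cost

def defendant_best_response(p_win, damages, defense_cost, settlement_offer):
    """Defendant settles when fighting costs more in expectation."""
    expected_loss_fighting = p_win * damages + defense_cost
    return "settle" if settlement_offer < expected_loss_fighting else "fight"

# Even a 5%-merit suit over $1M is positive-EV for a plaintiff whose
# filing cost is $20k, and settling for $100k beats a defendant's
# expected $250k cost of fighting.
print(plaintiff_ev(0.05, 1_000_000, 20_000))
print(defendant_best_response(0.05, 1_000_000, 200_000, 100_000))
```

This is why mitigation measures narrow the distribution of outcomes without ever driving the probability of being sued to zero.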
This doesn't seem to do the first one. It doesn't actually stop you from doing the things you could get in trouble for doing, even though that was ostensibly the original point.
And the second one is where you want to shave off the high side, not the low side. This is why normal insurance is typically a policy large enough to cover whatever plausible amount of damages there could be and then a deductible so you don't have tons of people filing claims for amounts small enough they could reasonably cover themselves instead of paying the insurance company a margin to do it. But this is the opposite of that, they'd cover a little claim you'd have been able to shrug off regardless but not the one where you'd most want coverage.
That is, if a well-established company offers you a service explicitly promising no human bones in the soup, when all the other services don't promise that, you can make a reasonable argument that, in the event you were required by circumstances to pick one of the services to use, you have done the best you can in that circumstance, even if human bones ended up in the soup, as long as the company doesn't have a known history of not honoring its commitments.
And in that case, it becomes easier to make an argument to your management chain that you didn't deliberately put in human bones to save on cost, you picked the one that explicitly promised no human bones.
So if you put obviously copyrighted things in the prompt you’ll still be on your own.
How do you handle this kind of prompt:
“Generate an image of a daring, whip-wielding archaeologist and adventurer, wearing a fedora hat and leather jacket. Here's some back-story about him: With a sharp wit and a knack for languages, he travels the globe in search of ancient artifacts, often racing against rival treasure hunters and battling supernatural forces. His adventures are filled with narrow escapes, booby traps, and encounters with historical and mythical relics. He’s equally at home in a university lecture hall as he is in a jungle temple or a desert ruin, blending academic expertise with fearless action. His journey is as much about uncovering history’s secrets as it is about confronting his own fears and personal demons.”
Try copy-pasting it into any image generation model. The result looks an awful lot like Indiana Jones in all my attempts, yet I've not referenced Indiana Jones even once!
It gives you an image of Harrison Ford dressed like Indiana Jones.
https://stock.adobe.com/ca/images/adventurer-with-a-whip-and...
Disclaimer: I used to work at Adobe GenAI. Opinions are my own, of course.
Consumers have long wanted a single place to access all content. Netflix was probably the closest anyone ever got, and even then it had regional difficulties. As competitors rose, they stopped licensing their content to Netflix, and Netflix is now arguably just another face in the crowd.
Now they want to go and leverage AI to produce more content and bam, stung by the same bee. No one is going to license their content for training, if the results of that training will be used in perpetuity. They will want a permanent cut. Which means they either need to support fair use, or more likely, they will all put up a big wall and suck eggs.
This is for studios and companies that are producing content for Netflix.
If you want to sell to Netflix, you have to play by Netflix's rules.
Netflix has all kinds of rules and guidelines, including which camera bodies and lenses are allowed [1].
[1] https://partnerhelp.netflixstudios.com/hc/en-us/articles/360...
... Of course it is. As the distributor, Netflix obviously has a fairly broad ability to control what it distributes.
The Gooner Association?
That’s likely to be the middle ground going forward for the smarter creative companies, and I’m personally all for it. Sure, use it for a pitch, or a demo, or a test - but once there’s money on the line (copyright in particular), get that shit outta there because we can’t own something we stole from someone else.
How does anyone prove it though? You can say "does that matter?" but once everybody starts doing it, it becomes a different story.
But since many of these models will blurt out very obviously infringing material without targeted prompting, it’s also an active, continuous thief.
The scenario looks like this:
* Be Netflix. Own some movie or series where the main elements (plot, characters, setting) were GenAI-created.
* See someone else using your plot/characters/setting in their own for-profit works.
* Try suing that someone else for copyright infringement.
* Get laughed out of court because the US Copyright Office has already said that GenAI is not copyrightable. [1]
[1] https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...
If there's even a hint that you used AI output in the work and you failed to disclose it to the US Copyright Office, they can cancel your registration.
Now you can sue
- https://arstechnica.com/tech-policy/2025/02/meta-torrented-o... - https://news.bloomberglaw.com/ip-law/openai-risks-billions-a...
Other than that, just a bit of common sense tells you all you need to know about where the data comes from (datasets never released, outputs of the LLMs suspiciously close to original copyrighted content, AI founders openly saying that paying for copyrighted content is too costly, etc.).
And yes I know they do legal and agreed partnerships like with the Predator franchise, or the Beavis and Butt-Head franchise (yes they exist in CoD now...), and those only count for a tiny number of the premium skins.
(I use "stole" in a non derogatory way here - 90% of good game design is cribbing together stuff that worked elsewhere in a slightly new form)
Which in turn was likely quite inspired by Starsiege: Tribes
Personally, I was in the top 10% of HL2DM players but because I couldn't master the inputs for skating I wasn't able to compete with the truly elite tier players who would zip around the map at breakneck speeds.
Arc Raiders is a ton of fun though. Also recommend Helldivers 2 if you just want a PvE shooter. It tends to be buggy as hell but the core game experience is hilariously fun.
>Some of the voice lines were created using generative artificial intelligence tools, using an original sample of voice lines from voice actors hired specifically with an AI-use contractual clause, similar to the studio's production process in The Finals.
https://en.wikipedia.org/wiki/ARC_Raiders
Great game though, I'm really enjoying it too
(platinum rating on protondb too woohoo)
CoD4 in some ways was the beginning of the end for a lot that we took for granted in gaming up to that point. I remember when it released and a couple of us went to my friend's house to play it. Boy were we in for a shock when there was no co-op multiplayer like Halo 3.
So just as there's no procedural difference between an AI getting something right and an AI "hallucinating", if the word "slop" describes anything AI generates, it describes all of it.
Either everything generative AI creates is slop or nothing is. So everything is.
Also I know stealing is not the same thing as copyright infringement. I'm talking about stealing livelihoods as much as stealing art.
Values aren't required for something to be good or bad. Outcomes are. A giant meteor strike causing a global firestorm & brief ice age causing mass death is bad, but giant meteors have no values.
But then I vote Giant Meteor/Pestilence 2028. They will deliver what they promise.
AI is just a wrapper around a tool - it doesn't need intention or creativity because those come from the user in the form of prompts (which are by definition intentional)
It's just a Natural Language Interface for calling CLI tools mostly, just like how GUIs are just graphical interfaces for calling CLI tools, but no one thinks a GUI has no intentionality or creativity even when using stochastic/probabilistic tools
Anything a user can do with an AI they could also do with a GUI, it would just take longer and more practice
>Either everything generative AI creates is slop or nothing is. So everything is.
But then how do you know something is slop before you know if it's made with GenAI? Does all art exist as Schrodinger's Slop until you can prove GenAI was used? (if that's even possible)
It goes beyond just IP law compliance. Creativity is their core competency and competitive differentiator. If you replace that with AI slop, then your product becomes almost indistinguishable from that of everyone else producing AI slop.
IMO, they're striking exactly the right balance - use AI as a creative aid and productivity booster not something to make the critical aspects of the final product.
This is 100% a lie.
Studios will use this to replace humans. In fact, the idea is for the technology – AI in general – to be so good you don't need humans anywhere in the pipeline. Like, the best thing a human could produce would only be as good as the average output of their model, except the model would be far cheaper and faster.
And... that's okay, honestly. I mean, it's a capitalism problem. I believe with all my strength that this automation is fundamentally different from the ones from back in the day. There won't be new jobs.
But the solution was never to ban technology
In this case, for instance, Netflix still has a relationship with their partners that they don't want to damage at this moment, and we are not at the point of AI being able to generate a whole feature-length film indistinguishable from a traditional one. Also, they might be apprehensive regarding legal risks and copyrightability at this exact moment; big companies' lawyers are usually pretty conservative about taking any "risks," so they probably want to wait for the dust to settle as far as legal precedents and the like.
Anyway, the issue here is:
"Does that statement actually reflect what Netflix truly think and that they actually believe GenAI shouldn't be used to replace or generate new talent performances?"
Because they believe in the sanctity of human authorship or whatever? And the answer is: no, no, hell no, absolutely no. That is a lie.
The if-statement "If you want to do X, you need to get approval." probably does actually reflect what Netflix truly think, but it doesn't mean they believe X shouldn't be done. It means they believe X is risky and they want to be in control of whether X is done or not.
I don't see how you could read the article and come away with the impression that Netflix believe GenAI shouldn't be used to replace or generate new talent performances.
However, this statement is a hell of a lot better than I expected to see, and suggests to me that the actors' strike a few years ago was necessary and successful. It may, as you say, only be holding back the "capitalism problem" dike, but... At least it's doing that?
When AI gets good enough, 2, 3, 5, 10 years from now, they simply reverse path, and this statement wouldn't delay Netflix embracing AI films that much, if anything.
> I would somewhat disagree with this statement being a sign the strike was a success because, like, AI is not at the point of generating a whole movie in human quality today, so Netflix issuing this statement like this now, in November 2025, costs them literally nothing, and feels more like a consolation prize: "Here, take this statement, so you guys can pretend the strike achieved anything."
>
> When AI gets good enough, 2, 3, 5, 10 years from now, they simply reverse path, and this statement wouldn't have delayed Netflix embracing AI films that much, if anything.
Any studio that isn't playing ostrich has realized this (so possibly none of them) and should just be trying to extract as much value as possible as quickly as possible before everything goes belly up.
Of course timelines are still unclear. It could be 5 years or 20, but it is coming.
>> This is 100% a lie.
We’ve had CGI for decades and generally don’t mind. However, at the point where AI usage becomes a negative (e.g., the content appears low quality), I’d expect some backlash and pulling back in the industry.
In film and TV, customers have so much choice. If a film or show is low effort, it’s likely going to get low ratings.
Every business and industry is obviously incentivized to cut costs, but if those cost cuts directly affect the reputation and image of your final product, you probably want to choose wisely which things you cut.
The irony is rich: they built their empire on disrupting old Hollywood gatekeeping, and now they’re recreating it in AI form. Instead of letting creators experiment freely with these tools, Netflix wants control over every brushstroke of AI creativity.
I do agree Netflix wants to crush creators.
They do not want to be disrupted.
Just look at early 20s people. They don't watch shows/movies. They only watch short form videos. Short form videos will mostly be created using GenAI tools as early as 2026.
Each time I scroll LinkedIn and I see some obviously AI produced images, with garbled text, etc. it immediately turns me off to whatever the content was associated with the image.
I'd be very disappointed to see the arts, including film making, shift away from the core of human expression.
“You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” - Joanna Maciejewska
It's that not every one has the talent to produce something of quality.
If you give a professional passionate chef, the same ingredients for a full meal, as your average home cook the results will NOT be the same by a far stretch.
Much of "AI slop" is to content what McDonald's is to food: it's technically edible, but not high quality.
Do we want a society where everyone can masquerade as an “artist”, flooding society with low-quality content using AI trained on the work product of actual artists?
The people doing so do not have the talent they desire, nor did they do anything to upskill themselves. It's a shortcut to an illusion of competency.
Change the statement to: Do we want a society where everyone can masquerade as a “photographer”, flooding society with low-quality photos using cell phones, never having to learn to develop film, or use focus, or understand lenses...
Do we want a society where everyone can masquerade as a “painter”, flooding society with low-quality paintings because acrylics are cheap? The old masters made their own paint, after all...
Why does it matter how it was created? It wasn't Bob Ross's "Joy of Making Incredible Art", it was simply the "Joy of Painting".
And people do enjoy content that, for lack of a better word, is disposable. Look at the "short dramas" or "vertical dramas" industry that is making money hand over fist. The content isn't highbrow, but people enjoy it all the same.
> AI trained on the work product of actual artists?
Should we teach people how to play guitar without using the songs of other artists? Should those artists be compensated for inspiring others?
Some of this is an artifact of our ability to sell reproductions (and I would argue that the economics were all around distribution).
There is a long (possibly decades-long) conversation that we're going to have on this topic.
- take a photo of a subject
- paint something
- pick up a guitar
Whereas asking the computer, with the lowest effort possible, to do said thing for you (“draw me this”, “make a song that sounds like this”) requires zero effort/skill and results in no improvement of your own ability.
By that rationale, AutoCAD is bad because it doesn't require the same skill as draftsmen.
Effort, labor, is not a reflection of creativity or skill.
I have a decent sense of design, and I can tell you if something looks off, but I say to designers all the time "I can't do what you do, take what I say with a grain of salt". I have rubber-ducked non-technical people and gotten questions that steered me in better directions...
AI Doesn't make me a good designer, AI doesn't make me a better coder.
Who is going to do a better job framing a house: someone with 20 years of experience with a hammer and a hand saw, or someone who has never built a house armed with a nail gun and a circular saw?
The AI has no taste, no talent, it simply does what it's told. The crappy content is a result of it producing what it has been asked to produce.
The internet was sold to us with the promise that everyone could publish, and wasn't that great? So many voices, we will hear wonderful new things!
What happened? Enshittification. The rise of the antivax community. Empowerment of far-right white nationalists across what had been the most (lower-case "l") liberal governments in the world. Signal drowned amidst the noise of a bot-driven ad-hellscape internet.
No. We do not want a society where everyone can masquerade as an artist.
Unless, that is, we hate art.
I'm not for censorship, it's more just a reflection on human nature. I'm fairly pessimistic on AI "hopes" given what we've turned the internet into.
Is that just because we are at the very beginning stages of the technology, though? It is just going to keep getting better, will the bias against AI generated content remain? I know people like to talk as if AI will always have the quality issues it has now, but I wouldn't count on that.
Like, I gather that prompt adherence has improved somewhat, but the actual output still looks _very_ off.
I think _maybe_ there's an uncanny valley problem (and it may vary person to person). I found Stable Diffusion 1.5's output quite _bad_, say, but not as, I dunno, objectionable and wrong-looking as current models.
Video has always been a complete mess, remains a complete mess, I don't see any real path towards it not being a complete mess. It is, in fairness, a _much_ harder problem.
I've also found the proliferation of "AI film director" created "films" funny.
The output is a strung-together set of 2-3 second clips telling some story: the characters change between clips, the scenery changes drastically, etc. There is nothing cohesive. I imagine it's a similar "context window"-like problem if you have to keep many minutes of visual context.
For every person who gets to make creative decision, there are hundreds upon hundreds of people whose sole purpose is slavish adherence to those decisions. Miyazaki gets to design his beautiful characters - but the task of getting those characters to screen must be carried out by massive team of illustrators for whom "creative liberty" is a liability to their career.
(And this example is only for the creative aspects of film-making. There is a lot of normal corporate and logistical stuff that never even affects what you see)
That's not to say I'm looking forward to the wave of lazy AI-infused slop that is heading our way. But I also don't necessarily agree with the grandstanding that AI is inherently anti-creative or only destructive. I reserve the right to be open-minded.
The irony is that movies and TV themselves represented a cheaper, industrialized and commoditized alternative to theater. And theater is still around and just as good as it ever was.
I'm curious if the parent poster thinks this is unique to film production, because I think you can make the same argument for pretty much any trade. Software engineering is 1% brilliance and 99% grunt work. This doesn't mean that software engineers are going to enjoy a world where 99% of their job goes away.
Further, I'm not sure the customers will, because the fact that human labor is comparatively expensive puts some checks and balances in place. If content generation is free, the incentive is to produce higher-volume but lower-quality output, and it's a race to the bottom. In the same way, when content-farming and rage-baiting became a way to make money, all the mainstream "news" publishers converged on that.
As it happens, I don't think "AI" is close to replacing many SEs or animators but in a world where it could, we should celebrate this huge boon to society.
Yes, but at least those decisions come from one person or a few people, not just an algorithm.
Some skills, like framing, values, balance, etc. become even more important differentiators. Yes, it is much different. But as long as humans are in the loop, there is an opportunity for human communication.
I agree. I think many artists in the future will be closer to directors/cinematographers/editors than performers
Many of the skills artists have today will still be necessary and transferable, but what will separate the good artists from the bad artists will be their ability to communicate their ideas to agents / other humans
Same with software developers I suspect - communication will be the most important skill of all, and the rockstar loner devs who don't work well in teams will slowly phase out
This is vastly oversimplifying and is misleading. Key animators have a highly creative role. The small decisions in the movements, the timings, the shapes, even scene layouts (Miyazaki didn't draw every layout in The Boy and the Heron), are creative decisions that Miyazaki handpicked his staff on the basis of. Miyazaki conceived of the opening scene [0] in that film with Shinya Ohira as the animator in mind [1]. Even in his early films, when he was known to exert more control, animator Yoshinori Kanada's signature style is evident in the movements and effects [2].
[0]: https://www.sakugabooru.com/post/show/260429
[1]: https://fullfrontal.moe/takeshi-honda-the-boy-and-the-heron-...
[2]: Search for "Kanada animated many sequences of the movie, but let’s just focus on the most famous one, the air battle scene." in https://animetudes.com/2021/05/15/directing-kanada/
“Creative Output” has an entirely different meaning when you start to think about them in the way they actually work.
* Fully-generated content is public domain and copyright cannot be applied to it.
Make sure any AI content gets substantially changed by humans, so that the result can be copyrighted.
More importantly: don't brag and shut up about which parts are fully AI generated.
Otherwise: public domain.
Simpler yet - and inevitable, on sufficiently long time scales - is to dispense entirely with the notion of intellectual property and treat _all_ content this way.
The internet _wants_ to copy bytes. That's what it does. Right now you're reading this because bytes were copied from my local machine to the HN server, and then to a cache, and then to your machine.
Copying bytes is, in some low-level sense, the entire function of interconnection in the human species.
The idea of trying to bolt-on little machines to restrict the flow of bytes at every vector of network connectivity, just to satisfy some abstract claim of "property" - it's completely nuts. It's never going to work. Tomorrow it will work worse than it does today, and every day going forward until states realize that the laws they want to enforce around this concept are simply not possible on the internet.
And you know who will celebrate that? Musicians like me. Nobody will be happier to see a lubricated copying machine become the human identity more than people who are trying to get their music out there, and trying to be inspired by the music of others.
Last year, I cut Drowsy Maggie with David Grier (something about which I boast every chance I get :-) ), and part of our journey was listening to aging, nearly-forgotten versions to find melodic and harmonic ideas to harvest and revive. For this, we of course made heavy use of archive.org's Great 78 project - and at the very same time, the RIAA (who is supposed to represent us?!) was waging aggressive lawfare against the Great 78 project, to try to take it down.
It was just the height of absurdity.
Consider that since at least 2020, every Grammy winner in both the bluegrass and Americana categories (and almost every nominee) has been released DRM-free. And many of the up-and-coming bluegrass and jam bands are now releasing all of their shows, directly off the board, licensed with Creative Commons-compatible licenses.
https://pickipedia.xyz/wiki/DRM-free
The only leverage you have to stop Spotify from taking your music and publishing it without your permission is your copyright of the music.
In fact, every time I see a complaint about copyright it's always "we tried to do something at small scale for some noble purpose and couldn't because of pesky copyright laws," and it completely ignores the massive scale of abuse for profit purpose that would occur if copyright didn't exist.
Think of how AI scraped everyone's books without permission using the flimsy excuse that it's transformative work, except they wouldn't even need that excuse or the transformation. Amazon could just take everyone's books and sell it on Kindle, then kick out all authors because they only need to buy 1 book to resell it as if they were the owner of the book.
There are a lot of challenges facing a band, including the frustrations of CDBaby and Distrokid. If you told me my music would just magically appear on Spotify without my having to lift a finger (and without having to implicitly endorse them by putting it there), that'd be a huge relief.
> it completely ignores the massive scale of abuse for profit purpose that would occur if copyright didn't exist.
"abuse"? If you can somehow make money by playing music that I've made, nothing will make me prouder. And whatever you're doing that's generating that profit, it will almost certainly increase the likelihood that I can plan a series of shows around it, which will in turn generate income for me. Who exactly is losing here? Where is the "abuse"?
> Think of how AI scraped everyone's books without permission using the flimsy excuse that it's transformative work, except they wouldn't even need that excuse or the transformation.
I'm already sold, you don't have to keep making it sound sweeter and sweeter.
I don't believe that most creators would willingly let go of their right as you would.
And it's far from a foregone conclusion that what you call "their right" is real - I don't believe I have a right to stop you from copying bytes on your device. That's insane.
And yes, I'm sure that the top .01% of pop musicians for whom the system is working well will hang on, and many more hoping to hit whatever lottery they've hit.
But as I pointed out above, some genres which are experiencing thunderous revivals right now are embracing DRM-free very hard, and even CC as well.
As John Perry Barlow said about his band, The Grateful Dead, when he found himself facing down record labels and movie studios as the only person on the stage who was actually in a band:
"We gave away our so-called intellectual property and became the most popular performing band in the United States, and we're making one hell of a lot of money giving it away. It was not required that we be absolutely firm in our hold on this material, because we recognize something important, which is that in an information economy the normal sense of an economy based on scarcity is turned on its head. Value in an information economy is based on familiarity and attention. These are very different principles, and trying to optimize towards scarcity, as you are by all of your methods, is not going to be to the benefit of creation."
You can buy our stuff on bandcamp if you want:
https://justinholmes.bandcamp.com/
In my mental model, participating in bandcamp (and getting your supporter badge) is a sort of "merch". You aren't really buying the music - the music is something you hear as the result of a FLAC file being decoded, and that FLAC file can be endlessly and freely copied.
Even better than the bandcamp model, in my thinking, is for you to get the music via pirate channels like bit torrent or IPFS, listen to it, and if you want to, you can buy album-related merch on the ethereum blockchain. That's the long-term vision.
As far as affording concerts: on sufficiently long time scales, I'm wanting to make our shows free-to-enter, buy-a-ticket-stub-if-you-want. Sadly, a lot of great rooms around the country are locked behind contracts with Ticketmaster/AEG and are prohibited from hosting such a show.
> If you release all your work under permissive licenses, how do you expect to be supported?
I believe that a huge majority of our fans are like you: they _want_ to support us, and the spotify model doesn't really give them a channel for that. Permissive licenses don't prevent people from supporting us on bandcamp and similar models.
You are welcome to "steal" anything I've ever made if it pleases you. And encourage your friends to steal it from you. If this process keeps repeating, look me up and let's book a show in your area, and we'll play our music _and_ demo our source code _and_ get you all dancin' and trippin' and having a merry old time.
Some people keep saying this but it seems obviously wrong to me.
At least in the United States, “sweat of the brow” has zero bearing on whether a work is subject to copyright[1]. You can spend years carefully compiling an accurate collection of addresses and phone numbers, but anyone else can republish that information, because facts are not a creative work.
But the output of an AI system is clearly not factual! By extension, it doesn’t matter how little work you put in—if the work is creative in nature, it is still subject to copyright.
1: https://en.wikipedia.org/wiki/Sweat_of_the_brow#United_State...
(IANAL, yadda yadda.)
U.S. Copyright Office, Compendium of U.S. Copyright Office Practices § 306 (3d ed. 2021)¹ is quite explicit:
> The U.S. Copyright Office will register an original work of authorship, provided that the work was created by a human being.
> The copyright law only protects “the fruits of intellectual labor” that “are founded in the creative powers of the mind.” Trade-Mark Cases, 100 U.S. 82, 94 (1879). Because copyright law is limited to “original intellectual conceptions of the author,” the Office will refuse to register a claim if it determines that a human being did not create the work. Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53, 58 (1884). For representative examples of works that do not satisfy this requirement, see Section 313.2 below.
This has not yet been litigated to conclusion, but it seems likely to me that LLM-generated outputs are not subject to copyright.
¹https://www.copyright.gov/comp3/chap300/ch300-copyrightable-...
How does this square with encryption keys being copyrightable?
I wonder if we're going to see a push back by media companies around copyright over AI-generated content. Though I don't see how; copyright is explicitly an artificial legal protection of human works.
https://www.equity.org.uk/advice-and-support/know-your-right...
Common-sense, practical, and covers a lot of the shifting ground around an artist’s ability to withdraw consent under GDPR and the ways they can properly use this to prevent their likenesses being used to train their digital replacements.
(Equity is the UK equivalent of the AEA and SAG-AFTRA combined)