Clair Obscur Stripped of Its Indie Game of the Year Award Due to AI Use
Key topics
The indie game Clair Obscur is at the center of a heated debate after being stripped of its Game of the Year award due to its use of AI-generated assets, sparking a broader discussion on the ethics of AI in creative fields. Commenters are divided, with some drawing a distinction between AI-assisted coding and AI-generated art, while others point out a double standard in criticizing AI art for "stealing" from artists while seemingly accepting AI code generation that learns from developers. As one commenter astutely noted, the key issue lies in whether AI is "learning" or "copying," with many arguing that AI should be allowed to learn and produce, but not infringe on copyright. The conversation highlights the complexities and nuances of AI's role in creative industries.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 1h after posting
- Peak period: 156 comments in Day 1
- Avg / period: 40
- Based on 160 loaded comments
Key moments
- Story posted: Dec 21, 2025 at 2:16 AM EST (13 days ago)
- First comment: Dec 21, 2025 at 3:18 AM EST (1h after posting)
- Peak activity: 156 comments in Day 1 (hottest window of the conversation)
- Latest activity: Dec 29, 2025 at 2:21 PM EST (4d ago)
The use of generative AI for art is being rightfully criticised because it steals from artists. Generative AI for source code learns from developers - who mostly publish their source with licenses that allow this.
The quality suffers in both cases and I would personally criticise generative AI in source code as well, but the ethical argument is only against profiting from artists' work without their consent.
As far as I'm concerned, not at all. FOSS code that I have written is not intended to enrich LLM companies and make developers of closed source competition more effective. The legal situation is not clear yet.
The double standard here is too much. Notice how one is stealing while the other is learning from? How are diffusion models not "learning from all the previous art"? It's literally the same concept. The art generated is not a 1-1 copy in any way.
Code is an abstract way of soldering cables in the correct way so the machine does a thing.
Art eludes definition while asking questions about what it means to be human.
Code can be artisanal and beautiful, or it can be plumbing. The same is true for art assets.
I consider some code I write art.
If some creator with intentionality uses an AI generated rock texture in a scene where dialogue, events, characters and angles interact to tell a story, the work does not ask questions about what it means to be human anymore because the rock texture was not made by him?
And in the same vein, all code is soldering cables so the machine does a thing? Intentionality of game mechanics represented in code, the technical bits to adhere or work around technical constraints, none of it matters?
Your argument was so bad that it made me reflexively defend Gen AI, a technology that for multiple reasons I think is extremely damaging. Bad rationale is still bad rationale though.
The game is art according to that definition while the individual assets in it are not.
All art? Those CDs full of clip art from the 90's? The stock assets in Unity? The icons on your computer screen? The designs on your wrapping paper? Some art surely does "[elude] definition while asking questions about what it means to be human", and some is the same uninspired filler that humans have been producing ever since the first teenagers realized they could draw penis graffiti. And everything else is somewhere in between.
Art is an abstract way of manipulating aesthetics so that the person feels or thinks a thing.
Doesn't sound very elusive nor wrong to me, while remaining remarkably similar to your coding definition.
> while asking questions about what it means to be human
I'd argue that's more Philosophy's territory. Art only really goes there to the extent coding does with creativity, which is to say
> the machine does a thing
to the extent a programmer has to first invent this thing. It's a bit like saying my body is a machine that exists to consume water and expel piss. It's not wrong, just you know, proportions and timing.
This isn't to say I classify coding and art as the same thing either. I think one can even say that it is because art speaks to the person while code speaks to the machine, that people are so much more uppity about it. Doesn't really hit the same as the way you framed this though, does it?
Ok but that's just a training issue then. Have model A be trained on human input. Have model A generate synthetic training data for model B. Ensure the prompts used to train B are not part of A's training data. Voila, model B has learned to produce rather than copy.
Many state of the art LLMs are trained in such a two-step way since they are very sensitive to low-quality training data.
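To make the two-stage idea described above concrete, here is a toy sketch of my own (not from the thread): a teacher "model A" is fit on human-written text, samples a synthetic corpus, and a student "model B" is fit only on that synthetic corpus. A tiny bigram counter stands in for a real LLM purely to show the data flow; all names and the corpus are illustrative assumptions, not how any production system is built.

```python
# Toy sketch of the two-stage setup: teacher (model A) learns from human text,
# generates synthetic sentences, and the student (model B) trains only on those.
import random
from collections import defaultdict, Counter

def train_bigram(sentences):
    """'Training' here is just counting which token follows which."""
    counts = defaultdict(Counter)
    for sentence in sentences:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    return counts

def generate(model, max_len=12, rng=random):
    """Sample one sentence from the bigram counts."""
    token, out = "<s>", []
    for _ in range(max_len):
        nxt = model.get(token)
        if not nxt:
            break
        token = rng.choices(list(nxt), weights=list(nxt.values()))[0]
        if token == "</s>":
            break
        out.append(token)
    return " ".join(out)

# Stage 1: teacher (model A) sees the human-written corpus.
human_corpus = [
    "the artist painted the rock texture by hand",
    "the studio shipped the game with placeholder textures",
    "the placeholder texture was replaced before release",
]
model_a = train_bigram(human_corpus)

# Stage 2: the teacher emits synthetic sentences; the student (model B)
# never touches the human corpus, only the teacher's outputs.
synthetic_corpus = [s for s in (generate(model_a) for _ in range(200)) if s]
model_b = train_bigram(synthetic_corpus)

print(generate(model_b))
```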
Yeah right. AI art models can and have been used to basically copy any artist’s style, in ways that make the original artist’s hard work and effort in honing their craft irrelevant.
Who profits? Some tech company.
Who loses? The artists who now have to compete with an impossibly cheap copy of their own work.
This is theft at a massive scale. We are forcing countless artists whose work was stolen from them to compete with a model trained on their art without their consent and paying them NOTHING for it. Just because it is impressive doesn’t make it ok.
Shame on any tech person who is okay with this.
Concerns about the livelihood of artists or the accumulation of wealth by large tech megacorporations are valid but aren’t rooted in AI. They are rooted in capitalism. Fighting against AI as a technology is foolish. It won’t work, and even if you had a magic wand to make it disappear, the underlying problem remains.
I'm not sure that LLMs respect that restriction (since they generally don't attribute their code).
I'm not even really sure if that clause would apply to LLM generated code, though I'd imagine that it should.
Note that this tends to require specific license exemptions. In particular, GCC links various pieces of functionality into your program that would normally trigger the GPL to apply to the whole program, and for this reason, those components had to be placed under the "GCC Runtime Library Exception"[1]
[1]: https://www.gnu.org/licenses/gcc-exception-3.1.html
In the end it doesn’t matter. Here “learning” means observing an existing work and using it to produce something that is not a copy.
so... all of them
There are artists who would (and have) happily consented, licensed, and been compensated and credited for training. If that's what LLM trainers had led with when they went commercial, if anything a sector of the creative industry would've at least considered it. But companies led with mass training for profit without giving back until they were caught being sloppy (in the previous usage of "slop").
This reasoning is invalid. If AI is doing nothing but simply "learning from" like a human, then there is no "stealing from artists" either. A person is allowed to learn from copyright content and create works that draw from that learning. So if the AI is also just learning from things, then it is not stealing from artists.
On the other hand if you claim that it is not just learning but creating derivative works based on the art (thereby "stealing" from them), then you can't say that it is not creating derivative works of the code it ingests either. And many open source licenses do not allow distribution of derivative works without condition.
Analogy: the common area had grass for grazing which local animals could freely use. Therefore, it's no problem that megacorp has come along and created a massive machine which cuts down all the trees and grass which they then sell to local farmers. After all, those resources were free, the end product is the same, and their machine is "grazing" just like the animals. Clearly animals graze, and their new "gazelle 3000" should have the same rights to the common grazing area -- regardless of what happens to the other animals.
The analogy isn't really helpful either. It's trivially obvious that they are different things without the analogy, and the details of how they are different are far too complex for it to help with.
But code is complicated, and hallucinations lead to bugs and security vulnerabilities so it's prudent to have programmers check it before submitting to production. An image is an image. It may not be as nice as a human drawn one, but for most cases it doesn't matter anyway.
The AI "stole" or "learned" in both cases. It's just that one side is feeling a lot more financial hardship as the result.
There is a problem with negative incentives, I think. The more generative AI is used and relied upon to create images (to limit the argument to image generation), the less incentive there is for humans to put in the effort to learn how to create images themselves.
But generative AI is a deadend. It can only generate things based on what already exists, remixing its training data. It cannot come up with anything truly new.
I think this may be the only piece of technology humans created that halts human progress instead of being something that facilitates further progress. A dead end.
The argument seems to be that it's different when the learner is a machine rather than a human, and I can sort of see the 'if everyone did it' argument for making that distinction. But even if we take for granted that a human should be allowed to learn from prior art and a machine shouldn't, this just guarantees an arms race for machines better impersonating humans, and that also ends in a terrible place if everyone does it.
If there's an aspect I haven't considered here I'd certainly welcome some food for thought. I am getting seriously exasperated at the ratio of pathos to logos and ethos on this subject and would really welcome seeing some appeals to logic or ethics, even if they disagree with my position.
According to your omnivision?
1. There is tons of public domain or similarly licensed artwork to learn from, so there's no reason a generative AI for art needs to have been trained on disallowed content anymore than a code generating one.
2. I have no doubt that there exist both source code AIs that have been trained on code whose licenses disallow such use and art AIs that have been trained only on art that allows such use. So, it feels flawed to just assume that AI code generation is in the clear and AI art is in the wrong.
As always the market decides.
I think you’ll find most of the small teams making popular indie video games aren’t going to be interested in winning a pro-AI award.
Are you sure? Maybe not in gaming, but I'm sure most large companies create awards just to get them and mention them in marketing.
I wouldn't be surprised if the likes of EA and Ubisoft create a "best use of AI in gaming" award for next year.
Do they count procedural level generation as generative AI? Am I crazy that this doesn't seem clear to me?
> Games developed using generative AI are strictly ineligible for nomination.
I haven't found anything more detailed than that; I'm not sure if anything more detailed actually exists, or needs to.
And, second, what counts as generative AI? A lot of people wouldn't include procedural generative techniques in that definition, but, AFAIK, there's no consensus on whether traditional procedural approaches should be described as "generative AI".
Sure there is. "Generative AI" is just a marketing label applied to LLMs - intended specifically to muddy these particular waters, I might add.
No one is legitimately confused about the difference between hand-built procedural generation techniques, and LLMs.
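For readers outside game development, here is a minimal sketch (my own example, not from the thread) of what hand-built procedural generation looks like in this distinction: a seeded drunkard's-walk level generator that carves floor tiles out of a solid grid using plain random numbers, with no trained model anywhere. Function and parameter names are illustrative assumptions.

```python
# Hand-built procedural generation, no ML anywhere: a seeded drunkard's walk
# that carves floor tiles ('.') out of a solid wall grid ('#').
import random

def carve_level(width=40, height=15, floor_target=200, seed=1234):
    rng = random.Random(seed)               # same seed -> same level, every run
    grid = [["#"] * width for _ in range(height)]
    x, y = width // 2, height // 2
    carved = 0
    while carved < floor_target:
        if grid[y][x] == "#":
            grid[y][x] = "."
            carved += 1
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x = min(max(x + dx, 1), width - 2)   # stay inside the outer wall
        y = min(max(y + dy, 1), height - 2)
    return "\n".join("".join(row) for row in grid)

print(carve_level())
```

Because the output is fully determined by the seed and the hand-written rules, this kind of technique is usually treated as ordinary programming rather than "generative AI" in the sense the award rules seem to target.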
So I think Gen AI is an umbrella. The question is, do older techniques like GANs fall under Gen AI? It's technically a generative technique that can upscale images, so it's generating those extra pixels, but I don't know if it counts.
A bunch of 'if' is an "expert system", but I'm old enough to remember when that was groundbreaking AI.
I wonder whether, had the game's directors actually made their case beforehand, they would perhaps have been allowed to keep the award.
AI OK: Code
AI Bad: Art, Music.
It's a double standard because people don't think of code as creative. They still think of us as monkeys banging on keyboards.
Fuck 'em. We can replace artists.
It's more like the code is the scaffolding and support, the art and experience is the core product. When you're watching a play you don't generally give a thought to the technical expertise that went into building the stage and the hall and its logistics, you are only there to appreciate the performance itself - even if said performance would have been impossible to deliver without the aforementioned factors.
Games always carry their game engine's touch, and for indie games that is often a good part of the process. See for example Clair Obscur here, which clearly has the UE5 character hair.
Then the gameplay itself depends a lot on how the code was made.
- Final Fantasy 7 Rebirth clearly had two completely decoupled teams working on the main game and the open world design respectively
- Cyberpunk 2077 is filled with small shoeboxes of interactable content
Also pretty sure some programmers like Jonathan Blow avoid AI generated code like the plague.
Which LLM told you that?
> Almost all games currently being made would have programmers using VSCode.
Which clearly isn't the case.
I think that is almost certainly untrue, especially among indie games developers, who are often the most stringent critics of gen ai.
https://xkcd.com/610/
Look at how easy it is to make the argument in the other direction:
> People were told by large companies to like LLMs and so they did, then told other people themselves.
Those add nothing to the discussion. Treat others like human beings. Every other person on the planet has an inner life as rich as yours and the same ability to think for themselves (and inability to perceive their own bias) that you do.
What you derogatorily call normies are the rest of the world caring about their lives until one day some tech wiz came around to say “hey, I have built a machine to replace all of you! Our next goal is to invent something even smarter than you. Isn’t it neat?” No wonder the average person isn’t really keen on this sort of development.
> “hey, I have built a machine to replace all of you! Our next goal is to invent something even smarter under our control. Wouldn’t that be neat?” No wonder the average person isn’t really keen on this sort of development.
Nope, most are just annoyed by AI slop bombarding them at every corner, AI scams making the news for claiming another poor grandma, and the AI tech industry making shit expensive. Most people's jobs are not currently under direct threat, unless you work in tech or art.
Amongst many other legitimate reasons.
- Annoyance at stupid AI features being pushed on them
- Playing around with them like a toy (especially image generation)
- Using them for work (usually writing tasks), with results ranging from pretty helpful to actively harmful depending on how much of a clue they have in the first place
Discussion or angst about the morality of training or threats to jobs doesn't really enter much into it.
No, I don't think I am.
> AI hype had been common (but not the majority position) in tech contexts for a while, especially from those that have something to sell you.
There has been a whole lot of that targeting normie contexts for quite a long time, too; in fact, the hate in normie contexts is directly responsive to it. The hype in normie contexts is a lot of particularly clumsy grifting plus the nontechnical PR of the big AI vendors (categories which overlap quite a bit, especially in Sam Altman’s case), and the hate in normie contexts shows basically zero understanding of even what AI is beyond what could be gleaned from that hype, plus some critical pieces on broad (e.g., total water and energy use, RAM prices) and localized (e.g., fossil fuel power plants in poor neighborhoods directly tied to demand from data centers) economic and environmental impacts.
> What you derogatorily call normies
I am not using “normie” derogatorily, I am using it to contrast to tech contexts.
I for one cannot wait for a future where grandparents get targeted ads showing their grandchildren, urging them to buy some product or service so their loved ones have something to remember them by...
Whereas AI seemed to have a pretty good run for around a decade, with lots of positive press around breakthroughs and genuine interest if you showed someone AI Dungeon, DALL-E 2, etc., before it became a polarized topic.
There was a time that I remember when you could gripe at a party about banner ads showing up on the internet and have a lot of blank stares. Or ask someone for their email address and get a quizzical look.
I pointed my dad to ChatGPT a few days ago and instructed him on how to upload/create an AI image. He was delighted to later show me his AI "American Gothic" version of a photo of him and his current wife. This was all new to him.
The pushback, though, I think is going to be short-lived in the way other push-backs were short-lived. (I remember the self-checkout kiosks in grocery stores were initially a hard sell, as an example.)
Programmers criticized the code output. Artists and art enjoyers criticized cutting out the artist.
This is not a winning PR move when most normal people are already pretty pro-artist and anti tech bro
- along with news about "AI" causing electricity bills to rise
- every form of media overrun and infested with poor quality slop
- garbage products (Microsoft Copilot) forced on them, and being told by their bosses to use it, or else

Gee, I wonder why normal people hate it.
If that tangible result doesn't occur, then people will begin to criticize everything. Rightfully so.
I.e., the future of LLMs is now wobbly. That doesn't necessarily mean a phase shift in opinion, but wobbly is a prerequisite for a phase shift.
(Personal opinion at the moment: LLMs needs a couple of miracles in the same vein as the discovery/invention of transformers. Otherwise, they won't be able to break through the current fault-barrier which is too low at the moment for anything useful.)
For instance, see Luddites: https://en.wikipedia.org/wiki/Luddite
> But the Luddites themselves “were totally fine with machines,” says Kevin Binfield, editor of the 2004 collection Writings of the Luddites. They confined their attacks to manufacturers who used machines in what they called “a fraudulent and deceitful manner” to get around standard labor practices. “They just wanted machines that made high-quality goods,” says Binfield, “and they wanted these machines to be run by workers who had gone through an apprenticeship and got paid decent wages. Those were their only concerns.”[1]
[1] https://www.smithsonianmag.com/history/what-the-luddites-rea...
The issue is not the technology per se, it's how it's applied. If it eliminates vast swathes of jobs and drives wages down for those left, then people start to have a problem with it. That was true in the time of the Luddites and it's true today with AI.
They just don't like it when the machines are able to do the mediocre job they get paid to do.
Imagine if we had listened to the Luddites back in the day...
By all means, I use it. In some instances it is useful. I think it is mostly a technology that causes damages to humanity though. I just don't really care about it.
https://english.elpais.com/culture/2025-07-19/the-low-cost-c...
> Sandfall Interactive further clarifies that there are no generative AI-created assets in the game. When the first AI tools became available in 2022, some members of the team briefly experimented with them to generate temporary placeholder textures. Upon release, instances of a placeholder texture were removed within 5 days to be replaced with the correct textures that had always been intended for release, but were missed during the Quality Assurance process.
When someone goes three miles per hour over the speed limit they are literally breaking the law, but that doesn’t mean they should get a serious fine for it. Sometimes shit happens.
Nobody is preventing the studio from working, or from continuing to make (ostensibly) tons of money from their acclaimed game. Their game didn't meet the requirements for one particular GOTY award, boo hoo
I understand where you’re coming from, but it’s perfectly sane if your legal system recognizes and accepts that speed detection methodologies have a defined margin of error; every ticket issued for speeding within that MoE would likely be (correctly) rejected by a court if challenged.
The buffer means, among other things, that you don’t have to bog down your traffic courts with thousands of cases that will be immediately thrown out.
The other way around seems more clear in a legal sense to me because we want to prove with as little doubt as possible that the person actually went above the speed limit. Innocent until proven guilty and all that. So we accept people speeding a little to not falsely convict someone.
Like, using automatic lipsync is "generative AI"; should that be banned? Do we really want to fight over that purely work-saving feature?
241 more comments available on Hacker News