Ask HN: Why GenAI is immoral but vibe coding is ok?
Mood
controversial
Sentiment
mixed
Category
ask_hn
Key topics
Artificial Intelligence
Copyright
Developer Ethics
LLM
GenAI
Every AI CEO is selling the disappearance of developers within 6 months. But I don't see any developer complaining about LLMs stealing our jobs, or that LLMs infringe copyright and are bad for humanity.
I can't see any fundamental difference between GenAI and vibe coding; they are the same thing.
Discussion Activity
Light discussion
First comment: 15m after posting
Peak period: 5 comments in Hour 5
Avg / period: 2.4
Based on 12 loaded comments
Key moments
- Story posted: Nov 23, 2025 at 6:49 AM EST (20h ago)
- First comment: Nov 23, 2025 at 7:04 AM EST (15m after posting)
- Peak activity: 5 comments in Hour 5, the hottest window of the conversation
- Latest activity: Nov 23, 2025 at 6:04 PM EST (8h ago)
At any rate, developers complain about vibe coding on here all the time: there are complaints about stealing code from open-source projects in ways that don't honor the license, which is a copyright violation, and complaints that subtle bugs in AI-produced code worsen products.
The rants against GenAI are generally about copyright but also aesthetic in nature. The aesthetic rants don't really apply to code, though they do somewhat, since there are complaints about bad code generated by AI.
I obviously only see the rants of my own echo chamber, but the complaints about GenAI are mostly about copyright (as you said), though they also make it a moral stand. Game devs using GenAI are accused of being "lazy", of having no vision, etc... unquantifiable things.
Your remark about aesthetics maps 1:1 to poor code quality. (LLM code quality can be pretty bad; it depends on the prompt, as always.)
"Stealing" is a word often used to criticize GenAI.
Not sure if you're just trolling, but um... just look at every AI thread for the last year.
GenAI and vibe coding aren't the same thing, because the end product is different. While you can find art and poetry in code, generally the code is a means to an end. It's more the medium than the message.
I don't want to completely discount GenAI, because I think it has its place, but it's also a pollutant. People value human feelings, originality, and authenticity, which are cornerstones of art. GenAI is none of those things, and feels like a fake to many people. It's muzak vs. music. The industry plant that looks good but has no real talent.
Personally, I don't think LLMs are the death of developers. Most software is a big steaming pile, held together with band-aids and duct tape, and the demand for it is never ending. Any tools that help us improve on the current situation are welcome IMHO.
Artists, on the other hand, already have a hard enough time just scraping by. Many take on soulless work, making corporate stock art or editorial copy, just to pay the rent. It's not what they want to be doing, but at least it's using their skill set. GenAI is arguably fine for generating that kind of content, but now the artists are out of a job and probably will just stop being artists in order to survive. Software isn't going anywhere, but artists of all kinds are dwindling. GenAI isn't helping on that front.
Then there's the whole "reality" issue... Code is just code, but genAI is making it harder to tell what's real, which probably isn't helpful for society in general.
From your feedback, GenAI exposes some important problems our society is facing. The LLM's result is technical while GenAI's result is emotional, because despite being the exact same tools, one produces text and the other produces images.
If someone isn't great at grammar or writing in general, using an LLM to compose an email seems fine to me. I also don't feel strongly about using it for advertising or marketing content, or purely informational cases like technical documentation, as long as it's accurate. Telling an LLM to write a novel in the style of another author however... that's not cool. The use case and intent matters greatly.
I'm not in game dev, so I'm curious where you see people drawing the line for GenAI usage. I do have friends in that circle, but their complaints have mostly been about the industry in general (brutal) and the offshoring of work, particularly in the 3D modeling/asset creation areas.
I will say, however, that I've long had dreams of making a game, but asset generation has always been too big a hurdle to overcome, so GenAI gives me some hope that maybe one day I might be able to attempt something.
1. AI CEOs oversell, by a lot. The OpenAI CFO's admission that they are cooked unless the US government bails them out is a tell.
2. The (almost) purely utilitarian nature of software code is in contrast with the more personally meaningful aim of art in general (although the two converge when we're talking about purpose-fit artwork: design/music for ads or shopping centres, for instance). That makes, in my view, most of the difference, given the following.
Vibe coding is mostly a very well evolved (albeit not perfect or deterministic) code completion/linting/review tool. But it bypasses (for the user/coder) a LOT of the intellectual work needed to reach the same result, and by that I mean it is highly detrimental to the coder's intellect; because of this, it becomes highly detrimental to the employer too, especially if the employer reduces its own workforce.
A software company that extensively uses AI instead of hiring competent (and junior) people faces the same fate as a company that simply stops hiring: it goes out of business or gets bought in a few years, because it outsourced control over its own process, or over the process/product it sells. That's also why treating Engineering or R&D as a cost center only makes sense in the "accounting sense", not in the "common sense", but that's just one example of how MBAs fucked up the world.
It certainly trained on existing open-source codebases, whose code reuse is encouraged, although the license of the output code is indeed a question. Did it train on closed-source/proprietary codebases? That's an open question. Does it threaten developer jobs? I'm not sure; see above.
"Art" GenAI is a whole other beast, operating on (and trained on) an entirely different order of magnitude of artwork that is highly opinionated and original, and whose authors/owners have NOT given their consent to its use, either in training or in the output. People promoting GenAI dismiss the objections and practice of those owners, showing a poor understanding of the process that is art and a glaring contempt for copyright law. Did it train on copyrighted works? Yes. Does it track how? No. Does it compensate people? No.
Does it produce work of comparable quality? No, because it's automated and completely misses the point.
Does this threaten original artists who put in the work, then? Yes, because a lot of people who have money (hence power) but shit taste and no understanding of the art process believe it does replace real people trained in and dedicated to this process and the particular media they work with. And they invest their money where they believe it will further this replacement and make them more money.
But it literally, from start to finish, makes no sense. And that's precisely the point of the process that is art: through actual personal and group work, making sense out of something.
A machine, an algorithm, does not do so. The art is mangled in the training/labelling process. The prompt is crap, and always will be, compared to the specifics and accidents of the original works used in the training step.
2. "Trained on open source" vs. "trained on copyrighted work" is a solid rationale. But yes, LLMs have been trained on copyrighted code: as an Unreal Engine user, it's pretty obvious that any LLM has knowledge of the engine's source code and patterns, not only the docs. But it might be marginal, since the quantity of open-source code is gigantic.
Compensating copyright holders for artistic work would make a lot of sense. It reminds me of the shift from eMule/Kazaa-everything towards Spotify. Legal streaming has almost totally replaced music piracy. And it looks beneficial for artists, since we've never had this much production in human history.
I don't believe in LLMs replacing devs either. They increase my scope, but at no point do they let me prompt in the morning and collect the money in the evening. It's still a job requiring full focus, even if I'm thinking less. I feel I've moved to a managing position instead of being a crafter. Pretty happy to have left the webdev world for gamedev, since LLMs are years away from handling complex abstractions and producing clean code.
Yes, but it does not solve everything, because an artist does have (at least in music) a say in how their work is used. That's usually contractual; sometimes it's supported at the law level (in France, there's the author's "droit moral").
Legal streaming could have replaced music piracy. But what Spotify became is a threat to musicians too: all the pro and non-pro musicians I know want to remove their tracks from Spotify, because Spotify pays nothing. Yet UX-wise, Spotify is the best thing on the market for exposure (broadly, if people don't find you on Spotify, they assume you don't exist). This becomes an even bigger issue now that Spotify trains its own GenAI on music and streams generated tracks (whose royalties go back to... Spotify, instead of the original artists). Legal streaming still "could" be a win for artists; it just isn't today. You either tour and sell your merch ridiculously well, or you go broke (or you have the funds to sustain a negative balance for the rest of your life).
The thing, back to software, is that things get abstracted and automated, but on top of that you will still need: people to understand and manage the abstraction (developers), people to understand/manage the translation machine (more developers), and people to understand/manage the basics/foundations (more developers).
The lie at the heart of GenAI is this: that we can replace human work (because it makes us reliant on and vulnerable to humans) with a cheaper alternative (which isn't actually cheaper, given the externalities, both environmental and societal).
With other content, people get to see it and feel something ever so slightly discordant. Sometimes it looks good at first glance, but some details are off when you look properly: errors that humans would not make.
In either case, there are plenty of people who generate content and do not care; they simply want the output. But for those who do want some certainty of quality, it is much the same.