LLMs Are Not Fun
Key topics
The debate rages on: are Large Language Models (LLMs) sucking the joy out of programming and writing? Some argue that LLMs, like autocomplete, are just tools that can aid or hinder creativity, while others insist that LLMs are fundamentally different, generating novel ideas and paths that challenge traditional notions of craft and authorship. As commenters weigh in, a consensus emerges that the real issue lies not with LLMs themselves, but with being forced to use them in ways that stifle creativity or autonomy. Meanwhile, personal anecdotes, like using LLMs to spark ideas or generate bibliographies, highlight the complex, context-dependent nature of these tools.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 14m after posting. Peak period: 0-2h (121 comments). Avg / period: 13.3 comments. Based on 160 loaded comments.
Key moments
- Story posted: Dec 29, 2025 at 2:06 PM EST (11 days ago)
- First comment: Dec 29, 2025 at 2:20 PM EST (14m after posting)
- Peak activity: 121 comments in 0-2h, the hottest window of the conversation
- Latest activity: Dec 30, 2025 at 6:55 PM EST (10 days ago)
LLMs code for you. They write for you.
LLMs define paths, generate ideas, choose routes, analyze, and so on. They don't just autocomplete. They create the entire poem.
Hard to define but feels similar to the "I know it when I see it" or "if it walks like a duck and quacks like a duck" definitions.
"LLMs" are like "screens" or "recording technology". They are not good or bad by themselves - they facilitate or inhibit certain behaviors and outcomes. They are good for some things, and they ruin some things. We, as their users, need to be deliberate and thoughtful about where we use them. Unfortunately, it's difficult to gain wisdom like this a priori.
For example, it seems reasonable that using a good programming editor like Emacs or VI would offer a 2x (or more) productivity boost over using Notepad or Nano. Why hasn't Nano been banned, forbidden from professional use?
A good proxy for understanding this reality is that wealthy people who pay people to do all of these things for them have almost uniformly terrible ideas.
And dishes and laundry can be enjoyable zen moments. One only suffers by perceiving them as chores.
Some people want all yang without any yin.
Anyway, we've had machines that do our dishes and laundry for a long while now.
However you feel about LLMs or AI right now, there are a lot of people with way more money and power than you have who are primarily interested in further enriching and empowering themselves and that means bad news for you. They're already looking into how to best leverage the technology against you, and the last thing they care about is what you want.
Whether the LLM could do a better job than me at writing the essay is a separate question...I suspect it probably could. But it wouldn't be as fun.
Screens are absolutely not neutral and are bad by themselves. Might be a bad we've become used to, but they are a bad.
A director is the most important person to the creation of a film. The director delegates most work (cameras, sets, acting, costumes, makeup, lighting, etc.), but can dive in and take low-level/direct control of any part if they choose.
Because, IME, you're completely wrong.
I mean, I get where you're coming from if you imagine it like the literal vibe coding this all started with, but that's just a party trick and falls off quickly as the project gets more complex.
To be clear, simple features in an existing project can often be done simply - with a single prompt making changes across multiple files - but that only works under _some circumstances_, and bigger features / more in-depth architecture still need you to get the project to work according to your ideas.
And that part needs you to tell the LLM how it should do it - because otherwise you're rolling the dice on whether it's gonna be a clusterfuck after the next 5 changes.
Garbage collection and managed types are for idiots who don't know what the hell they're doing; I'm leet af. You don't need to worry about accidentally writing heartbleed if you simply don't make mistakes in the first place.
If you're doing anything UI-based, it hasn't performed well for me, but for certain areas of software development, it's been an absolute dream.
My place for that is in the shower.
I had one of those shower epiphanies a couple mornings ago... And I fed it into a couple LLMs while I was playing a video game (taking some time over the holidays to do that), and by the afternoon I had that idea as working code: ~4500 LOC with that many more in tests.
People keep saying "I want LLMs to do the laundry so I can do art, not to do the laundry while LLMs do art." This is an example of LLMs doing the coding so I can rekindle a joy of gaming, which feels like it's leaning in the right direction.
I have some sympathy for them, but AI is here to stay, and it's getting better, faster, and there's no stopping it. Adapt and embrace change and find joy in the process where you can, or you're just going to be "right" and miserable.
The sad truth is that nobody is entitled to a perpetual advantage in the skills they've developed and sacrificed for. Expertise and craft and specialized knowledge can become irrelevant in a heartbeat, so your meaning and joy and purpose should be in higher principles.
AI is going to eat everything - there will be no domain in which it is better for humans to perform work than it will be to have AI do it. I'd even argue that for any given task, we're pretty much already there. Pick any single task that humans do and train a multibillion dollar state of the art AI on that task, and the AI is going to be better than any human for that specific task. Most tasks aren't worth the billions of dollars, but when the cost drops down to a few hundred dollars, or pennies? When the labs figure out the generalization of problem categories such that the entire frontier of model capabilities exceeds that of all humans, no matter how competent or intelligent?
AI will be better, cheaper, and faster in any and every metric of any task any human is capable of performing. We need to figure out a better measure of human worth than the work they perform, and it has to happen fast, or things will get really grim. For individuals, that means figuring out your principles and perspective, decoupling from "job" as meaning and purpose in life, and doing your best to surf the wave.
There's still just something magical about speaking with a machine - "put the man's face from the first picture onto the cookie tin in the second picture, make sure he still looks like Santa!" You can have a vague idea or inkling about a thing, throw it at the AI, and you've got a sounding board to refine your thoughts and chase down intuitions. I totally understand the frustration people are having, but at some point, you gotta put down the old tools and learn to use the new. You're only hurting yourself if you stay angry and frustrated with the new status quo.
Now back to computing, since I've been doing this for 25 years as my main job and it's probably what you thought I had in mind:
> at some point, you gotta put down the old tools and learn to use the new
I have the habit of learning new tools out of curiosity and only keep the ones that actually solve problems I have. Over time I have kept some (example: dvcs) and ditched others I was told were the best thing since sliced bread (example: containers). So far, conversational AI has been very good at replacing google/stack overflow. But that's about it.
I'm sure I'll use more of this stuff as time goes by, but there is really no need to rush things. I'll let early adopters adopt and I'll harvest mature solutions in due time.
My meaning could be in higher purposes; however, I still need a job to enable/pursue those things. If AI takes the meaning out of your craft, it takes away the ability to use it to pursue higher-order principles as well for most people, especially if you aren't in the US/big tech scene with significant equity to "make hay while the sun is still shining".
Isn't there something good about being embodied and understanding a medium of expression rather than attempting to translate ideas directly into results as quickly as possible?
My family eats out at a nice steak restaurant every Christmas because no one wants to cook. None of us like to cook.
As an aside from just this piece: I feel like we are in a period of low empathy, understanding, and caring for others.
Advanced tools are never "merely tools".
Tools that are pushed onto people, come to be expected to even participate in social/professional life, and take over knowledge-based tasks and creative aspects, are even less "merely tools".
We are not talking of a hammer or a pencil here. An LLM user doesn't outsource typing, they outsource thinking.
I guess I did not waste time learning the failure-prone arcana of how to schedule training jobs on HuggingFace, but that also seems to me like a net benefit.
I feel like it may be something inherently wrong with the interface more than the actual expression of the tool. I'm pretty sure we are in some painful era where LLMs, quite frankly, help a ton with an absurd amount of stuff - underlining "a ton" and "stuff" because it really is about "everything".
But they also generate a lot of frustration; I'm not convinced by the conversational status quo, for example, and I could easily see something inspired directly by what you said about drawing. There is something here about the experience - and it's really difficult to work on because it's inherently personal and may require actually spending time and accumulating frustration to finally be able to express it through something else.
Ok time to work lmao
Standard distribution says some minority of IT projects are tragi-bad… I’ve worked with dudes who would copy and paste three different JavaScript frameworks onto the same page, as long as it worked…
AirFryers are great household tabletop appliances that help people cook extraordinary dishes their ovens normally wouldn’t faster and easier than ever before. A true revolution. A proper chef can use one to craft amazing food. They’re small and economical, awesome for students.
Chefs just call it “convection cooking” though. It’s been around for a minute. Chefs also know to go hot (when and how), and can use an actual deep fryer if and when they want.
The frozen food bags here have AirFryer instructions now. The Michelin star chefs are still focusing on shit you could buy books about 50 years ago…
If I can keep adding new features without introducing big regressions that is good design and good code quality. (Of course there will come a time when it will not be possible and it will need a rewrite. Same like software created by top paid developers from the best universities.)
As long as we can keep new bugs in LLM-written code to the same level as in hand-written code, I think LLMs writing code is much superior, just because of the speed with which it allows us to implement features.
We write software to solve (mostly) business efficiency problems. The businesses which will solve those problems faster than their competitors will win.
I have no idea what the code quality is like in any of the software I use, but I can tell you all about how well they work, how easy to use they are, and how fast they run.
The code is as good or even better than I would have written. I gave Claude the right guidelines and made sure it stayed in line. There are a bunch of Playwright tests ensuring things don't break over time, and proving that things actually work.
I didn't have to mess with any of the HTML/css which is usually what makes me give up my personal projects. The result is really, really good, and I say that as someone who's been passionate about programming for about 15 years.
3 days for a complete webshop with Stripe integration, shipping labels and tracking automation, SMTP emails, admin dashboard, and all the custom features that I used to dream of.
The future is both really exciting and scary.
But it does save me time in many other aspects, so I can't complain.
I just wish I could have competent enough local LLMs and not rely on a company.
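As a concrete illustration of the kind of Playwright test the webshop commenter above describes, here is a minimal sketch in Python; the URL, selectors, and cart behaviour are hypothetical placeholders for the example, not details from the actual project.

    # Minimal Playwright smoke-test sketch (hypothetical shop URL and selectors).
    # It only checks that adding a product updates the cart badge.
    from playwright.sync_api import sync_playwright

    def test_add_to_cart_updates_badge():
        with sync_playwright() as p:
            browser = p.chromium.launch()
            page = browser.new_page()
            page.goto("http://localhost:8000/shop")       # assumed local dev server
            page.click("text=Add to cart")                # assumed product button
            assert page.inner_text("#cart-count") == "1"  # assumed cart badge element
            browser.close()

    if __name__ == "__main__":
        test_add_to_cart_updates_badge()

A suite of checks like this, run on every change, is what lets generated code be trusted to "actually work" without re-reading every diff.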
This right there, in your very own comment, is the crux. Unless you're rich or run your own business, your employer (and many other employers) are right now counting down the days till they can think of YOU as boilerplate they want to FARM out to an LLM. At the very least, where they currently employ 10 they are salivating about reducing it to 2.
This means painful change for a great many people. Appeals by analogy to historical changes like motorised vehicles etc. miss the qualitative change occurring this time.
Many HN users may point to Jevons paradox; I would like to point out that it may very well work up until the point that it doesn't. After all, a chicken has always seen the farmer as a benevolent provider of food, shelter and safety, that is until of course THAT day when he decides he doesn't.
AI may make low-ROI projects more viable now (e.g. internal tooling in a company, or a business website), but in general the high-ROI projects (the ones that can therefore justify high salaries) would have been done anyway.
That is exactly the moment when you cannot say anything about the code and cannot fix a single line by yourself.
Years ago it was Programmer -> Code -> Compile -> Runtime. Now the Programmer is divided into two entities:
Intention/Prompt Engineer -> AI -> Code -> Compile -> Runtime.
We have entered the 'sudo make me a sandwich' world where computers are now doing our bidding via voice and intent. Despite knowing how low level device drivers work I do not care how a file is stored, in what format, or on what medium. I do want it to function with .open and .write which will work as expected with a working instruction set.
Those who can dive deep into software and hardware problems will retain their jobs or find work doing that which AI cannot. The days of requiring an army of six-figure polyglots have passed. As for the ability to do production- or kernel-level work, that is just a matter of time.
It may be more extreme than what you are suggesting here, but there are definitely people out there who think that code quality no longer matters. I find that viewpoint maddening. I was already of the opinion that the average quality of software is appalling, even before we start talking about generated code. Probably 99% of all CPU cycles today are wasted relative to how fast software could be.
Of course there are trade-offs: we can’t and shouldn’t all be shipping only hand-optimised machine code. But the degree to which we waste these incredible resources is slightly nauseating.
Just because something doesn’t have to be better, it doesn’t mean we shouldn’t strive to make it so.
For those who have swallowed the AI panacea hook, line, and sinker - those who say it's made me more productive, or that I no longer have to do the boring bits and can focus on the interesting parts of coding - I say follow your own line of reasoning through. It demonstrates that AI is not yet powerful enough to NOT need to empower you, to NOT need to make you more productive. You're only ALLOWED to do the 'interesting' parts presently because the AI is deficient. Ultimately AI aims to remove the need for any human intermediary altogether. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be that for you right now it is a comfortable position financially or socially, but your future self just a few short months from now may be dramatically impacted.
As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder; the law secretary whose dream job is being automated away, a dream dreamt from a young age; the journalist whose value has been substituted by a white text box connected to an AI model.
I don't have any ideas as to what should be done or, more importantly, what can be done. Pandora's box has been opened; Humpty Dumpty has fallen and he can't be put back together again. AI feels like it has crossed the Rubicon. We must all collectively wait to see where the dust settles.
Economically it's been a mistake to let wealth get stratified so unequally; we should have and need to reintroduce high progressive tax rates on income and potentially implement wealth taxes to reduce the necessity of guessing a high-paying career over 5 years in advance. That simply won't be possible to do accurately with coming automation. But it is possible to grow social safety nets and decrease wealth disparity so that pursuing any marginally productive career is sufficient.
Practically, once automation begins producing more value than 25% or so of human workers we'll have to transition to a collective ownership model and either pay dividends directly out of widget production, grant futures on the same with subsidized transport, or UBI. I tend to prefer a distribution-of-production model because it eliminates a lot of the rent-seeking risk of UBI; your landlord is not going to want 2X the number of burgers and couches you get distributed as they'd happily double rent in dollars.
Once full automation hits (if it ever does; I can see augmented humans still producing up to 50% of GDP indefinitely [so far as anyone can predict anything past human-level intelligence] especially in healthcare/wellness) it's obvious that some kind of direct goods distribution is the only reasonable outcome; markets will still exist on top of this but they'll basically be optional participation for people who want to do that.
Career being the core of one's identity is so ingrained in society. Think about how schooling is directed towards producing what 'industry' needs. Education for education's sake isn't a thing. Capitalism sees to this and ensures so many avenues are closed to people.
Perhaps this will change but I fear it will be a painful transition to other modes of thinking and forming society.
Another problem is hoarding. Wealth inequality is one thing, but the unadulterated hoarding by the very wealthy means that wealth is unable to circulate as freely as it ought to. This burdens a society.
The main reason for the transformer architecture, and many other AI advancements really, was that "big tech" has lots of cash that it doesn't know what to do with. It seems the US system punishes dividends tax-wise as well, so companies are incentivized to become like VCs -> buy lots of opportunities hoping one makes it big, even if many end up losing.
Put it this way - having a project where people have the luxury to scratch their heads for a while and bet on something that may not actually be possible yet is something most companies can't justify financing. Listening to the story of the transformer invention, it sounds like one of these projects to me.
They may stand on the shoulders of giants that is true (at the very least they were trained in these institutions) but putting it together as it was - that was done in a commercial setting with shareholder funds.
In addition, given the disruption LLMs have caused to Google in general, I would say that, despite Gemini, it may have been better cost/benefit-wise for Google NOT to invent the transformer architecture at all/yet, or at least not to publish a white paper for the world to see. As a use of shareholders' funds, the activity above probably isn't a wise one.
People seem to have a visceral reaction towards AI, where it angers them enough that even the idea that people might like it upsets them.
For you, maybe. In my experience, the constant need for babysitting it to avoid the generation of verbose, unmaintainable slop is exhausting and I'd rather do everything myself. Even with all the meticulously detailed instructions, it feels like a slot machine - sometimes you get lucky and the generated code is somewhat usable.
There's currently an enormous pressure on developers to pay lip service to loving AI tools. Expressing a differing opinion easily gets someone piled on for being outdated or not understanding things, from people who sometimes mainly do it to virtue-signal and perform their own branding exercise.
Open self-expression takes guts, and is hard to substitute for with AI assistance.
It's amazing and scary. I was wondering what takeoff would look like, and I'm living it, for better or worse.
>The joy of management is seeing my colleagues learn and excel, carving their own paths as they grow. Watching them rise to new challenges. As they grow, I learn from their growth; mentoring benefits the mentor alongside the mentee.
I fail to grasp how using LLMs precludes either of these things. If anything, doing so allows me to more quickly navigate and understand codebases. I can immediately ask questions or check my assumptions against anything I encounter.
Likewise, I don’t find myself doing less mentorship, but focusing that on higher-level guidance. It’s great that, for example, I can tell a junior to use Claude to explore X,Y, or Z design pattern and they can get their own questions answered beyond the limited scope of my time. I remember seniors being dicks to me in my early career because they were overworked or thought my questions were beneath them. Now, no one really has to encounter stuff like that if they don’t want to.
I’m not even the most AI-pilled person I know or on my team, but it just seems so staggeringly obvious how much of a force multiplier this stuff has become over the last 3-6 months.
The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do. Things that give them meaning, many of which are tied to earning money and producing value for doing just that thing. Software/coding is one of these activities. One can do coding for fun, but doing the same coding where it provides value to others/society and financial upkeep for you and your family is far more meaningful.
If that is what you've been doing, a love for coding, I can well empathise how the world is changing underneath your feet.
Turns out that a lot of code is fine with this. Some parts of the industry still have more stringent standards however.
Andrej Karpathy is one of the best engineers in the country, George Hotz is one of the best engineers in the country, etc.
They're also people who have founded AI companies, so of course they would push for it.
It feels like you're taking all the claims of the AI hype bubble at face value.
> Andrej Karpathy is one of the best engineers in the country, George Hotz is one of the best engineers in the country, etc.
You have citations of them explicitly making this claim on behalf of all SWEs in all domains/langs? I'd find that surprising, if so.
This is the same situation we were in decades ago, just before AI, and still are in.
AI changes nothing about this statement; humans do not write perfect code.
Some projects are built by very small close-knit teams which get to drive everything.
In that scenario the developers have tight control over every little detail.
Absolutely disagree. I use LLM to speed up the process and ONLY accept code that I would write myself.
At the end of the day, guys like the author, for better or worse, are going to be replaced by the next generation of developers who don't care for the 'aesthetics' in the same way.
Now, basically every new "AI" feature feels like a hack on top of yet another LLM. And sure the LLMs seem to keep getting marginally better, but the only people with the resources to actually work on new ones anymore are large corporate labs that hide their results behind corporate facades and give us mere mortals an API at best. The days of coding a unique ML algorithm for a domain specific problem are pretty much gone -- the only thing people pay attention to is shoving your domain specific problem into an LLM-shaped box.
It seems like there's more excitement around AI for the average person, which is probably a good thing I suppose, but for a lot of people that were into the field it's not really that fun anymore.
LLM user here with no experience of ML besides fine-tuning existing models for image classification.
What are the exciting AI fields outside of LLMs? Are there pending breakthroughs that could change the field? Does it look like LLMs are a local maximum and other approaches will win through - even just for other areas?
Personally I'm looking forward to someone solving 3D model generation as I suck at CAD but would 3D print stuff if I didn't have to draw it. And better image segmentation/classification models. There's gotta be other stuff that LLMs aren't the answer to?
There's a lot of problems LLMs are really useful for because generating text is what you want to do. But there's tons of problems which we would want some sort of intelligent, learning behaviour that do not map to language at all. There's also a lot of problems that can "sort of" be mapped to a language problem but make pretty extraneous use of resources compared to a (existing or potential) domain specific solution. For purposes of AGI, you could argue that trying to express "general intelligence" via language alone is fundamentally flawed altogether -- although that quickly becomes a debate about what actually counts as intelligence.
I pay less attention to this space lately so I'm probably not the most informed. Everyone seems so hyped about LLMs that I feel like a lot of other progress gets buried, but I'm sure it's happening. There are some problem domains that are obviously solved better with other paradigms currently: self-driving tech, recommendation systems, robotics, game AIs, etc. Some of the exciting stuff that can likely solve some problems better in the future is the work on world models, graph neural nets, multimodality, reinforcement learning, alternatives to gradient descent, etc. I think it's a debate whether or not LLMs are a local maximum, but many of the leading AI researchers seem to think so - Yann LeCun, for example, recently said LLMs 'are not a path to human-level AI'.
My role changes from coming up with solutions to babysitting a robotic intern. Not 100% of course. And of course an agent can be useful like 'intellisense on steroids'. Or an assistant who 'ripgreps' for me. There are advantages for sure. But for me the advantages don't match the disadvantages. LLMs take the heart out of what made me like programming: building stuff yourself with your near infinite lego box of parts and coming up with ideas yourself.
I'm only half convinced the LLMs will become as important to coding as they seem. And I'm hoping a sane balance will emerge at the other end of the hype. But if it goes where OpenAI etc. want it to go, I think I'll have to re-school to become an electrician or something...
I feel like that's all I'm doing with LLMs. Just in the last hour I realized that I wanted an indexed string intern pool instead of passing string literals. The LLM refactored everything, and then I didn't have to worry about that lego piece anymore.
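For readers unfamiliar with the term, here is a minimal sketch of what an indexed string intern pool typically looks like, written generically in Python (the names are illustrative, not the commenter's actual code):

    # Generic sketch of an indexed string intern pool: each distinct string is
    # stored once and referred to everywhere else by a small integer index.
    class InternPool:
        def __init__(self):
            self._index = {}    # string -> index
            self._strings = []  # index -> string

        def intern(self, s: str) -> int:
            """Return the existing index for s, assigning a new one if needed."""
            idx = self._index.get(s)
            if idx is None:
                idx = len(self._strings)
                self._index[s] = idx
                self._strings.append(s)
            return idx

        def lookup(self, idx: int) -> str:
            return self._strings[idx]

    pool = InternPool()
    a = pool.intern("payload.kind")
    b = pool.intern("payload.kind")
    assert a == b                            # duplicates map to the same index
    assert pool.lookup(a) == "payload.kind"

Passing small indices instead of raw literals makes comparisons cheap and keeps all the strings in one place, which is the "lego piece" the commenter no longer has to track by hand.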
I guess mechanics must feel the same about modern computerized cars, where suddenly the injection timing is no longer a mechanical gadget you can tweak by experience, but some locked down black box you don't have control over.
Also I really dislike that (for now) using an LLM means selling your soul to some dubious company. Even if you use only the free tier, you still need to upload your code and let the LLM do whatever with it. If an LLM is an indispensable part of being a programmer, everybody will be held hostage by the large tech firms (even more...).
here's my current project, judge for yourself:
https://github.com/ityonemo/clr
Tbh, if it wasn't for the coding disruption, I don't think the AI boom would have really been that hyped up.
For one thing, LLMs aren't terrible at grammar.
a) People who gain value from the process of creating content.
b) People who gain value from the end result itself.
I personally am more of a (b): I did my time learning how to create things with code, but when I create things such as open-source software that people depend on, my personal satisfaction from the process of developing is less relevant. Getting frustrated with code configuration and writing boilerplate code is not personally gratifying.
Recently, I have been experimenting more with Claude Code and 4.5 Opus and have had substantially more fun creating utterly bizarre projects that I suspect would have involved more frustration than fun if implemented the normal way. It does still require brainpower to QA, identify problems, and identify potential fixes: it's not all vibes. The code quality, despite intuition, has none of the issues or bad code smells that are expected of LLM-generated code, and with my approach it actually runs substantially more performantly. (I'll do a full writeup at some point)
I'm in the second camp, and I think the author is as well. For those of us, LLMs are kind of boring.
For me, the fun part of programming is having the freedom to get my computer to do whatever I want. If I can't find a tool to do something, I can write it myself! That has been a magical feeling since I first discovered it all those years ago.
LLMs gives me the ability to do even more things I want, faster. I can conceptualize what I want to create, I can specify the details as much as I want, and then use an LLM to make it happen.
It is truly magical. I feel like I am programming in Star Trek, with the computer as an ally instead of as a receptacle for my code.
I have become a general and a master of a multitude of skeleton agents. My attention has turned to the realm of effectively managing the unreproducible results of running the same incantations.
Like a sailor through the waters of a coastline he has roamed plenty of times: the currents are there, yet the waves are new every day.
Whatever limitation is removed, I should approach the market and test my creations swiftly and enrich myself, before the first legion of lich kings appears - they, better masters than I would ever be.
If you’re letting the LLM do things you aren’t spending the time to understand in depth, you are shirking your professional responsibilities
People like this have a great deal to personally lose from LLMs. It makes them substantially less "special". Or so they think, but it is actually not true at all.
I think some of them resent having to level up again to stay relevant. Like when video games add more levels to a game you thought you had already beaten. Fair enough, but such is life and natural competition.
When they come at LLMs with this attitude (gritting their teeth while prompting) it is no wonder they are grossly offended and disgusted by its outputs.
I've been tempted at times to hold these attitudes myself, but my approach for now is to see how much I can learn about this tool and use it for as much as I can while tokens are subsidized. Either it all pops with the bubble or I have gained new, marketable skills. And no, your hand-coding skills don't just evaporate. In fact, I now have a newfound love of hand coding as a hobby, since that part of my brain is no longer used up by the end of the day with coding tasks for work.
100%. The fun is in understanding, creating, explaining. It's not in typing, boilerplating, fixing missing imports, API mismatches, etc.
But if you are in a work situation where LLMs are forced upon you in very high doses, then yes -- I understand the feeling.
26 more comments available on Hacker News