Believe the Checkbook
Key topics
The debate around the future of software engineering labor is heating up, with some commenters arguing that advancements in AI will significantly reduce the demand for engineers, while others contend that new productivity multipliers will continue to drive up demand. The phrase "revealed preferences" is tossed around, suggesting that the market will ultimately decide the fate of software engineers. As one commenter notes, "People speak in relative terms and hear in absolutes," highlighting the complexity of predicting the impact of AI on the industry. While some see AI as a force multiplier for high-judgment tasks, others point out that past productivity gains were tied to expanding markets, which may be reaching saturation.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 46m after posting
- Peak period: 72 comments (Day 1)
- Avg / period: 26.7 comments
Based on 80 loaded comments
Key moments
- Story posted: Dec 19, 2025 at 10:51 AM EST (22 days ago)
- First comment: Dec 19, 2025 at 11:37 AM EST (46m after posting)
- Peak activity: 72 comments in Day 1, the hottest window of the conversation
- Latest activity: Dec 31, 2025 at 2:25 AM EST (11 days ago)
All the previous productivity multipliers in programming led to increased demand. Do you really think the market is saturated now? And that what saturated it is one of the least impactful "revolutionary" tools we got in our profession?
Keep in mind that looking at statistics won't lead to any real answer; everything is manipulated beyond recognition right now.
This AI craze swooped in at the right time to help hold up the industry and is the only thing keeping it together right now. We're quickly trying to build all the low-hanging fruit for it, which keeps many developers busy (although not like it used to be), but it doesn't currently look like there is much low-hanging fruit left to build. LLMs don't appear to have the breadth of need that previous computing revolutions had. Once we've added chat interfaces to everything, which is far from being a Herculean task, all the low-hanging fruit will be gone. That's quite unlike previous revolutions, where we effectively had to build all the software from scratch, not just slap some lipstick on existing software.
If we want to relive the past, we need a new hardware paradigm that needs everything rewritten for it again. Not impossible, but all the low-hanging hardware directions have also been checked off at this point.
They didn't. But it may be a relevant point that all of those spread slowly enough that we can't clearly separate them.
Anyway, the idea that any one of those large markets is at a saturation point requires some data. AFAIK, everything from mainframe software to phones has (relatively) exploded in popularity every time somebody made it cheaper, so this amounts to a claim that all of those markets just changed (too recently to measure), without any large event to correlate the change with.
> That's quite unlike previous revolutions where we had to build all the software from scratch
We have rewritten everything from scratch exactly once since high-level languages were created in the 70s.
Also I do hold a belief that most tech companies are taking a cost/labor reduction strategy for a reason, and I think that’s because we’re closing a period of innovation. Keeping the lights on, or protecting their moats, requires less labor.
It always surprises me that this isn't obvious to everyone. If AI wrote 100% of the code that I do at work, I wouldn't get any more work done because writing the code is usually the easy part.
All OSS has been ingested, along with all the discussion about it in forums like this, and the personal blog posts and newsletters about it, and the bug tracking, and the pull requests, and...
and training etc. is only going to get better at filtering out what is "best."
At best, what I find online are basic day 1 tutorials and proof-of-concept stuff. None of it could be used in production, where we actually need to handle errors and possible failure situations.
There is barely anything that qualifies as documentation among what they are willing to provide under NDA, whether for lock-in reasons or laziness (an ERPish sort of thing narrowly designed for the specific sector, and more or less in a duopoly).
The difficulty in developing solutions is 95% understanding business processes/requirements. I suspect this kind of thing becomes more common the further you get from a "software company" into specific industry niches.
How many hours per week did you spend coding on your most recent project? If you could do something else during that time, and the code still got written, what would you do?
Or are you saying that you believe you can't get that code written without spending an equivalent amount of time describing your judgments?
So reducing the part where I go from abstract system to concrete implementation only saves me time spent typing, while at the same time decoupling me from understanding whether the code actually implements the system I have in mind. To recover that coupling, I need to read the code and understand what it does, which is often slower than just typing it myself.
And to even express the system to the code generator in the first place still requires me to mentally bridge the gap between the goal and the system that will achieve that goal, so it doesn't save me any time there.
The exceptions are things where I literally don't care whether the outputs are actually correct, or they're things that I can rely on external tools to verify (e.g. generating conformance tests), or they're tiny boilerplate autocomplete snippets that aren't trying to do anything subtle or interesting.
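To make the externally verifiable category concrete, here is a minimal sketch of the kind of output a tool can check for you: a round-trip property test run by pytest with Hypothesis rather than reviewed by hand. The rle_encode/rle_decode pair is a hypothetical stand-in, not something from the discussion.

```python
# Minimal sketch (assumptions: pytest and hypothesis are installed; the
# encode/decode pair is a hypothetical stand-in). The point is that the
# generated test is verified by an external tool, not by eyeballing it.
from hypothesis import given, strategies as st


def rle_encode(s: str) -> list[tuple[str, int]]:
    """Run-length encode a string into (char, count) pairs."""
    runs: list[tuple[str, int]] = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs


def rle_decode(runs: list[tuple[str, int]]) -> str:
    """Inverse of rle_encode."""
    return "".join(ch * count for ch, count in runs)


@given(st.text())
def test_round_trip(s: str) -> None:
    # If either function is wrong, the tool finds a counterexample for me.
    assert rle_decode(rle_encode(s)) == s
```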
My judgement is built into the time it takes me to code. I think I would be spending the same amount of time doing that while reviewing the AI code to make sure it isn't doing something silly (even if it does technically work).
A friend of mine recently switched jobs from Amazon to a small AI startup where he uses AI heavily to write code. He says it's improved his productivity 5x, but I don't really think that's the AI. I think it's (mostly) the lack of bureaucracy in his small 2 or 3 person company.
I'm very dubious about claims that AI can improve productivity so much because that just hasn't been my experience. Maybe I'm just bad at using it.
Speed of typing code is not all that different than the speed of typing English, even accounting for the volume expansion of English -> <favorite programming language>. And then, of course, there is the new extra cost of then reading and understanding whatever code the AI wrote.
Okay, you've switched to English. The speed of typing the actual tokens is just about the same but...
THE STANDARD LIBRARY IS FUCKING HUGE!
Every concept that you have ever read about? Every professional term, every weird thing that gestures at a whole chunk of complexity/functionality ... Now, if I say something to my LLM like:
> Consider the dimensional twins problem -- how're we gonna differentiate torque from energy here?
I'm able to ... "from physics import Torque, Energy, dimensional_analysis" And that part of the stdlib was written in 1922 by Bridgman!
And extremely buggy, and impossible to debug, and does not accept or fix bug reports.
AI is like an extremely enthusiastic junior engineer that never learns or improves in any way.
I love working with junior engineers. One of the best parts about working with junior engineers is that they learn and become progressively more experienced as time goes on. AI doesn't.
And come on: AI definitely will become better as time goes on.
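Circling back to the "dimensional twins" aside a few comments up: torque and energy share the same SI dimensions, so dimensional analysis alone can't tell them apart. A minimal sketch of how distinct types recover the distinction, using hypothetical hand-rolled classes rather than any real physics stdlib:

```python
# Hedged sketch: torque and energy share SI dimensions (kg*m^2/s^2), so a
# purely dimensional check treats them as the same quantity. Distinct wrapper
# types (hypothetical, not a real "physics" stdlib) restore the distinction.
from dataclasses import dataclass


@dataclass(frozen=True)
class Energy:
    joules: float

    def __add__(self, other: "Energy") -> "Energy":
        return Energy(self.joules + other.joules)


@dataclass(frozen=True)
class Torque:
    newton_meters: float

    def __add__(self, other: "Torque") -> "Torque":
        return Torque(self.newton_meters + other.newton_meters)


def work_done(torque: Torque, angle_radians: float) -> Energy:
    # Crossing from torque to energy is only allowed through an explicit
    # physical relationship (W = tau * theta), never by accident.
    return Energy(torque.newton_meters * angle_radians)


if __name__ == "__main__":
    print(work_done(Torque(10.0), 3.14159))  # ~31.4 J of work
    # Energy(5.0) + Torque(5.0) would be rejected by a type checker, even
    # though both quantities are dimensionally identical.
```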
I guess we find out which software products just need to be 'good enough' and which need to match the vision precisely.
It’s sort of the opposite: You don’t get to the proper judgement without playing through the possibilities in your mind, part of which is accomplished by spending time coding.
A shift to not writing code (which is apparently sometimes possible now) and managing AI agents instead is a pretty major industry change.
It's like how every job requires math if you make it far enough.
If they could write exactly what I wanted but faster, I'd probably stop writing code any other way at all because that would just be a free win with no downside even though the win might be small! They don't write exactly what I want though, so the tradeoff is whether the amount of time they save me writing it is lost from the extra time debugging the code they wrote rather than my own. It's not clear to me that the code produced by an LLM right now is going to be close enough to correct enough of the time that this will be a net increase in efficiency for me. Most of the arguments I've seen for why I might want to consider investing more of my own time into learning these tools seem to be based on extrapolation of trends to up to this point, but it's still not clear to me that it's likely that they'll become good enough to reach a positive ROI for me any time soon. Maybe if the effort to actually start using them more heavily was lower I'd be willing to try it, but from what I can tell, it would take a decent amount of work for me to get the point where I'm even producing anything close to what I'm currently producing, and I don't really see the point of doing that if it's still an open question if it will ever close the remaining gap.
Never is a very strong word. I'm not a terribly fast typist but I intentionally trained to be faster because at times I wanted to whip out some stuff and the thought of typing it all out just annoyed me since it took too long. I think typing speed matters and saying it doesn't is a lie. At the very least if you have a faster baseline then typing stuff is more relaxing instead of just a chore.
All I had to do was write a two-line prompt and accept the pull request. It probably took 10 minutes out of my day, most of which was the people I was helping explaining what they thought was wrong. It might've taken me all day if I had to go through all the code and the documentation and fix it myself. It might even have taken a couple of days, because I probably would've made it less insane.
For other tasks, like when I'm working on embedded software using AI would slow me down significantly. Except when the specifications are in German.
Imperfectly fixing obvious problems in our processes could gain us 20%, easy.
Which one are we focusing on? AI. Duh.
> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished... The Bun acquisition blows a hole in that story.”
But what the article actually discusses and shows by the end is that the aspects of engineering beyond writing the code are where the value in engineers really lies. That's not a revealed preference; it matches the original claim exactly: AI can write the code, and the value of human engineering around software development is different because of it.
And what about vibe coding? The whole point and selling point of many AI companies is that you don’t need experience as a programmer.
So they sell something that isn’t true, it’s not FSD for coding but driving assistance.
The house of the feeble minded: https://www.abelard.org/asimov.php
I don’t know if HN has made me hyper-sensitized to AI writing, but this is becoming unbearable.
When I find myself thinking “I wonder what the prompt was they used?” while reading the content, I can’t help but become skeptical about the quality of the thinking behind the content.
Maybe that’s not fair, but it’s the truth. Or, put differently: “Fair? No. Truthful? Yes.” Ugh.
> Tighten the causal claim: “AI writes code → therefore judgment is scarce”
As one of the first suggestions, so it's not something inherent to whether the article used AI in some way. Regardless, I care less what did the writing and more whether the result was good.
This argument requires us to believe that AI will just asymptote and not get materially better.
Five years from now, I don't think anyone will make these kinds of acquisitions anymore.
That's not what asymptote means. Presumably what you mean is the curve levelling off, which it already is.
It hasn't gotten materially better in the last three years. Why would it do so in the next three or five years?
I assume this is at least partially a response to that. They wouldn't buy a company now if it would actually happen that fast.
> Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished.”
Software engineering pre-LLMs will never, ever come back. Lots of folks are not understanding that. What we're doing at the end of 2025 looks so much different than what we were doing at the end of 2024. Engineering as we knew it a year or two ago will never return.
I use AI as a smart autocomplete - I’ve tried multiple tools on multiple models and I still _regularly_ have it dump absolute nonsense into my editor - in the best case it’s gone on a tangent, but in the most common case it’s assumed something (oftentimes directly contradicting what I’ve asked it to do), gone with it, and lost the plot along the way. Of course, when I correct it, it says “you’re right, X doesn’t exist so we need to do X”…
Has it made me faster? Yes. Has it changed engineering? Not even close. There’s absolutely no world where I would trust what I’ve seen out of these tools to run in the real world, even with supervision.
In startups I’ve competed against companies with 10x and 100x the resources and manpower on the same systems we were building. The amount of code they theoretically could push wasn’t helping them, they were locked to the code they actually had shipped and were in a downwards hiring spiral because of it.
I'm also not building webapps. I work in data engineering on a large legacy airflow project, internal python libraries, infrastructure with terraform, etc.
Assume you're writing code manually, and you personally make a mistake. It's often worthwhile to create a mechanism that prevents that class of mistake from cropping up again. Adding better LSP or refactoring support to your editor, better syntax highlighting, better type checking, etc.
That same exact game of whack-a-mole now has to be played for you and whatever agent you're building with. Some questions to ask: What caused the hallucination? Did you have the agent lay out its plan before it writes any code? Did it ask you questions and iterate on a spec before implementation? Have you given it all of the necessary tools, test harnesses, and context it needs to complete a request that you've made of it? How do you automate this so that it's impossible for these pieces to be missing from the next request? Are you using the right model for the task at hand?
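A minimal sketch of what automating that checklist could look like, assuming a hypothetical preflight check that refuses to dispatch a request until the plan, spec, and tool harness are present (all names here are illustrative, not from the thread):

```python
# Hypothetical sketch: turn the checklist above into an automated preflight
# so a request can't reach the agent while plan, spec, or tools are missing.
from dataclasses import dataclass, field


@dataclass
class AgentRequest:
    task: str
    plan: str | None = None          # agent's proposed plan, reviewed before coding
    spec: str | None = None          # spec iterated on with the human beforehand
    tools: list[str] = field(default_factory=list)
    model: str = "general-purpose"   # placeholder model name


REQUIRED_TOOLS = {"test_runner", "linter"}   # assumption: a minimal harness


def preflight(req: AgentRequest) -> list[str]:
    """Return the missing pieces; an empty list means ready to dispatch."""
    missing = []
    if not req.plan:
        missing.append("plan: have the agent lay out its approach first")
    if not req.spec:
        missing.append("spec: iterate on requirements before implementation")
    if not REQUIRED_TOOLS.issubset(req.tools):
        missing.append(f"tools: missing {REQUIRED_TOOLS - set(req.tools)}")
    return missing


if __name__ == "__main__":
    req = AgentRequest(task="fix flaky retry logic", tools=["linter"])
    for problem in preflight(req):
        print("blocked:", problem)
```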
I can’t see how buying a runtime for the sake of Claude Code makes sense.
But at this point I'm not confident that I can reliably identify LLM-generated text without missing a lot of it or making false positives.
Unlikely. AI keeps improving, and we are already at the point where real people are accused of being AI.
The point is still valid, although I've seen it made many times over.
Clever pitch. Don't alienate all the people who've hitched their wagons to AI, but push valuing highly-skilled ICs as an actionable leadership insight.
Incidentally, strategy and risk management sound like a pay grade bump may be due.
I don’t know why the acquisition happened, or what the plans are. But it did happen, and for this we don’t have to suspend disbelief. I don’t doubt Anthropic has plans that they would rather not divulge. This isn’t a big stretch of imagination, either.
We will see how things play out, but people are definitely being displaced by AI software doing work, and people are productive with them. I know I am. The user count of Claude Code, Gemini and ChatGPT don’t lie, so let’s not kid ourselves.