Software Development in the Time of Strange New Angels
Source: davegriffith.substack.com
Key topics
- AI in Software Development
- Future of Coding
- Impact of Automation
The article discusses how AI is changing software development, and the discussion revolves around whether AI will replace human developers or augment their capabilities.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 9 days after posting
- Peak period: 59 comments (Day 10)
- Average per period: 33.5 comments
- Comment distribution: 67 data points (based on 67 loaded comments)
Key moments
- Story posted: Nov 3, 2025 at 2:10 PM EST
- First comment: Nov 12, 2025 at 4:06 PM EST (9 days after posting)
- Peak activity: 59 comments in Day 10, the hottest window of the conversation
- Latest activity: Nov 14, 2025 at 1:41 PM EST
HN story ID: 45803071
"A bad [software engineer] can easily destroy that much value even faster (A developer at Knight Capital destroyed $440 million in 45 minutes with a deployment error and some bad configuration logic, instantly bankrupting the firm by reusing a flag variable). "
The biggest being that the only safe way to recycle feature flag names is to put ample time separation between the last use of the previous meaning for the flag and the first application of the new use. They did not. If they had, they would have noticed that one server was not getting redeployed properly in the time gap between the two uses.
They also did not do a full rollback. They rolled back the code but not the toggles, which ignited the fire.
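A minimal Python sketch of why reusing a flag name without a time gap is dangerous. The names (`power_peg`, `rlp`, `feature_x`) are illustrative, loosely echoing reporting on the incident, not the actual Knight Capital code:

```python
# Two server versions read the SAME flag name, but assign it different meanings.
OLD_MEANING = "power_peg"  # retired test routine the old binary still wires to the flag
NEW_MEANING = "rlp"        # new routing logic the redeployed binary wires to the flag

def old_server(flags):
    # Stale binary that missed the redeploy: the flag still triggers the dead code.
    return OLD_MEANING if flags.get("feature_x") else "normal"

def new_server(flags):
    # Freshly deployed binary: the same flag now enables the new feature.
    return NEW_MEANING if flags.get("feature_x") else "normal"

flags = {"feature_x": True}   # flag toggled on fleet-wide for the new feature
print(new_server(flags))      # -> "rlp": the intended behavior
print(old_server(flags))      # -> "power_peg": one stale server resurrects the dead code
```

With a time gap between retiring the old meaning and introducing the new one, the stale server would have shown up as the flag doing nothing, instead of doing the wrong thing.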
These are rookie mistakes. If you want to argue they are journeyman mistakes, I won’t fight you too much, but they absolutely demonstrate a lack of mastery of the problem domain. And when millions of dollars change hands per minute you’d better not be Faking it Til You Make It.
https://specbranch.com/posts/knight-capital/
The fact that they hoarded old code for NINE YEARS evaporates most of the remaining sympathy I had for them. Wtaf.
Google generates a lot of revenue per employee not because the employees are good (though many of them are of course), but because they own the front door to the web. And the Knight Capital story has many nuances left out by that summary.
In both cases the author needed a hard hitting but terse example. But as I said, both the claims are true, so in the voice of the courtroom judge, "I'll allow it."
Developers who get excited by agentic development put out posts like this. (I get excited too.)
Other developers tend to point out objections in terms of maintainability, scalability, overly complicated solutions, and so on. All of which are valid.
However, this part of AI evolves very quickly. So given these are known problems, why shouldn't we expect rapid improvements in agentic AI systems for software development, to the point where software developers who stick with the old paradigm will indeed be eroded in time? I'm genuinely curious because clearly the speed of advancement is significant.
Because writing code has always been the easy part. A senior isn't someone who's better at writing code than a junior - they might well be worse at writing code. AI can now do the easy part, sure. What grounds does that present for believing that it's soon going to be able to do the hard part?
So yeah, if you’re starting with a “write me code that does X Y Z” then you aren’t getting the most out of these tools, because you’re right, that’s not the hard part.
All I see from “excited” developers is denial that there is a problem. But why would it be a problem that you have to review code generated by a program with the same fine-tooth comb that you use for human review?
[1] Some things change fast, some things never change at all.
I've spent the bulk of my 30+ year career in various in-house dev/management roles, and small to medium sized digital agencies or IT consulting places.
In that time I have worked on many hundreds of projects, probably thousands.
There are maybe a few dozen that were still in production use without major rewrites on the way for more than 5 years.
I think for a huge number of commercial projects, "maintainability" is something that developers are passionate about, but that is of very little actual value to the client.
Back in the day when I spent a lot of time on comp.lang.perl.misc, there was a well-known piece of advice: "always throw away the first version". My career-long takeaway from that has been to always race to a production ready proof of concept quickly enough to get it in front of people - ideally the people who are then spending the money that generates the business profits. Then, if it turns out successful, rewrite it from scratch incorporating everything you've learned from the first version - do not be tempted to continually tweak the hastily written code. These days people call something very like that "finding product market fit", and a common startup plan is to prove a business model, and then sell or be acquired before you need to spend the time/money on that rewrite.
"You might be expecting that here is where I would start proclaiming the death of software development. That I would start on how the strange new angels of agentic AI are simply going to replace us wholesale in order to feast on that $150/hour, and that it's time to consider alternative careers. I'm not going to do that, because I absolutely don't believe it. Agentic AI means that anything you know to code can be coded very rapidly. Read that sentence carefully. If you know just what code needs to be created to solve an issue you want, the angels will grant you that code at the cost of a prompt or two. The trouble comes in that most people don't know what code needs to be created to solve their problem, for any but the most trivial problems. Who does know what code would be needed to solve complex problems? Currently that's only known by software developers, development managers and product managers, three job classifications that are going to be merging rapidly."
The downsides of code generation are only amplified with LLM code generation. Oh it’s just what I would have written. Now on the fifteenth iteration/rewrite. Generated idiomatic code for twelve years ago. squints oh yeah I would have written that back then... gosh it feels good to be in this exclusive club.
Now there are a few things I see that affect this
1. The only way Claude knew how to do this is because there was a stack of existing code, but it's probably in C, so you could regard Claude as an expert programming-language translator.
2. There is no way that Claude could integrate this into my current code base
3. Claude can't create anything new
4. It's often very wrong, and the bigger/more complex the code is, the wronger it gets.
So, what are the areas where Claude excels? It seems that CRUD web apps/web front ends are the sweet spot? (Not really sure about this - I don't do much web front end work.) I write graphics apps and Claude is handy for those things you'd have to look up and spend some time on, but that's about all.
An example - I asked it to make me some fancy paint brush code (to draw in a painterly style), this is hard, the code that it made was pretty bad, it just used very basic brush styles, and when pressed it went into crazy land.
So my point is - if something exists and it's not too hard, Claude is great; if you want something large and complex, then Claude can be a good helper. I really don't see how it can replace a good dev. There are a lot of code monkeys around gluing web sites together who could be replaced, but even then they are probably the same people who are vibe coding now.
If you really want some fun ask them to draw a circuit diagram for a simple amplifier, it's almost painful watching them struggle.
Plus, running AI tools is going to get much more expensive. The current prices aren't sustainable long term and they don't have any viable path to reducing costs. If anything, the cost of operations for the big companies is going to get worse. They're in the "get 'em hooked" stage of the drug deal.
There's a great example of that in the linked post itself:
> Let's build a property-based testing suite. It should create Java classes at random using the entire range of available Java features. These random classes should be checked to see whether they produce valid parse trees, satisfying a variety of invariants.
Knowing what that means is worth $150/hour even if you don't type a single line of code to implement it yourself!
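The quoted prompt describes property-based testing. A minimal stdlib-only Python sketch of the same idea, with random arithmetic expressions standing in for random Java classes and a parse/unparse round-trip as the invariant (the generator and helper names here are hypothetical, not from the article):

```python
import ast
import random

def random_expr(depth=0):
    """Build a random arithmetic expression string, the stand-in for 'random Java classes'."""
    if depth > 3 or random.random() < 0.3:
        return str(random.randint(0, 99))           # leaf: a literal
    op = random.choice(["+", "-", "*"])
    return f"({random_expr(depth + 1)} {op} {random_expr(depth + 1)})"

def check_invariants(n=1000):
    """Generate n random programs; each must parse, and parsing must round-trip."""
    for _ in range(n):
        src = random_expr()
        tree = ast.parse(src, mode="eval")          # invariant 1: produces a valid parse tree
        reparsed = ast.parse(ast.unparse(tree), mode="eval")
        assert ast.dump(tree) == ast.dump(reparsed) # invariant 2: unparse/parse round-trip
    return n

print(check_invariants())  # prints 1000 if every random program satisfied both invariants
```

Libraries like Hypothesis (Python) or jqwik (Java) do the generation, shrinking, and reporting for you; the point of the prompt is that knowing *this shape of test* is the valuable part.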
And to be fair, the author makes that point themselves later on:
> Agentic AI means that anything you know to code can be coded very rapidly. Read that sentence carefully. If you know just what code needs to be created to solve an issue you want, the angels will grant you that code at the cost of a prompt or two. The trouble comes in that most people don't know what code needs to be created to solve their problem, for any but the most trivial problems.
On your second point: I wouldn't recommend betting against costs continuing to fall. The cost reduction trend has been reliable over the past three years.
In 2022 the best available model was GPT-3 text-davinci-003 at $60/million input tokens.
GPT-5 today is $1.25/million input tokens - 48x cheaper for a massively more capable model.
... and we already know it can be even cheaper. Kimi K2 came out two weeks ago benchmarking close to (possibly even above) GPT-5 and can be run at an even lower cost.
I'm willing to bet there are still significantly more optimizations to be discovered, and prices will continue to drop - at least on a per-token basis.
We're beginning to find more expensive ways to use the models though. Coding Agents like Claude Code and Codex CLI can churn through tokens.
I said the same thing about Netflix in 2015 and Gamepass in 2020. It might have taken a while but eventually it happened. And they're gonna have to raise prices higher and faster at some point.
150% increase in 18 years is about triple the rate of inflation
I'd call that more than "a little bit", but I'd agree that if LLMs go the same way it probably doesn't change the equation much at all.
Plus they do still have an $8/mo ad-supported plan. After adjusting for inflation, that's actually cheaper than the original!
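A quick back-of-the-envelope check of the "triple the rate of inflation" claim, assuming roughly 2.5%/yr average US CPI inflation over the period (that rate is my assumption, not from the thread):

```python
# A 150% nominal price increase over 18 years, annualized and compared to inflation.
price_increase = 1.50                                   # +150% means a 2.5x multiplier
years = 18
annualized = (1 + price_increase) ** (1 / years) - 1    # ~5.2% per year
cumulative_inflation = 1.025 ** years - 1               # ~56% cumulative at 2.5%/yr

print(f"annualized increase: {annualized:.1%}")
print(f"cumulative inflation: {cumulative_inflation:.0%}")
```

So 150% nominal vs ~56% cumulative inflation is closer to 2.5-3x, which matches "about triple" as a rough figure.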
The incentives here are also fucking atrocious. They aren't incentivised to make the model as good as possible. It's in their best interest to tune so it's good enough to not drive you off, but bad enough to push you to spend more.
It won't: if/when LLMs start to get too expensive, people will just migrate to open models, run them locally, etc. I see no scenario where we are held hostage by the main providers.
Gamepass, same thing.
Those are completely incomparable businesses.
>GPT-5 today is $1.25/million input tokens - 48x cheaper for a massively more capable model.
Yes - but.
GPT-5 and all the other modern "reasoning models" and tools burn through way more tokens to answer the same prompts.
As you said:
> We're beginning to find more expensive ways to use the models though. Coding Agents like Claude Code and Codex CLI can churn through tokens.
Right now, it feels like the cost to use "frontier models" has stayed the same for the entire ~5 year history of the current LLM/AI industry. But older models these days are, comparatively, effectively free.
I'm wondering when/if there'll be an asymptotic flattening, where new frontier models are insignificantly better than older ones, and running some model off Huggingface on a reasonably specced-up Mac Mini or gaming PC will provide AI coding assistance at basically electricity and hardware depreciation prices?
gpt-oss-120b fits on a $4000 NVIDIA Spark and can be used by Codex - it's OK but still nowhere near the bigger ones: https://til.simonwillison.net/llms/codex-spark-gpt-oss
But... MiniMax M2 benchmarks close to Sonnet 4 and is 230B - too big for one Spark but can run on a $10,000 Mac Studio.
And Kimi K2 runs on two Mac Studios ($20,000).
So we are getting closer.
Trouble is, there's not even much hype surrounding the launch yet, much less shipping hardware. Which seems kind of ominous.
Yeah, but eventually there won't be enough people who actually do know all that.
The hope amongst the proponents is that by the time that happens they won't need anyone who knows that, because SOTA will have replaced those people too.
Not understanding that is something I've been seeing management repeatedly doing for decades.
This article reads like all the things I discovered and the mistakes the company I worked for made learning how to outsource software development back in the late 90s and early 2000s. The only difference is this is using AI to generate the code instead of lower paid developers from developing nations. And, just like software outsourcing as an industry created practices and working styles to maximise profit to outsourcing companies, anyone who builds their business relying on OpenAI/Anthropic/Google/Meta/whoever is going to need to address the risk of their chosen AI tool vendor ramping up the costs of using the tools to extract all the value of the apparent cost savings.
This bit matches exactly with my experience:
"The trouble comes in that most people don't know what code needs to be created to solve their problem, for any but the most trivial problems. Who does know what code would be needed to solve complex problems? Currently that's only known by software developers, development managers and product managers, three job classifications that are going to be merging rapidly."
We found that assuming the people you employ as "developers" weren't actually also doing the dev management and product management roles was wrong. At least for our business, where there were 6 or 8 devs who all understood the business goals and existing codebase and technology. When we eventually got successful outsourced development working, it was after we realised that writing code from lists of tasks/requirements was way less than 50% of what our in-house development team had been doing for years. We ended up saving a lot of money on that 30 or 40% of the work, but the 60 or 70% of the higher level _understanding the business and tech stack_ work still needed to be done by people who understood the whole business and had a vested interest in the business succeeding.
Also in the mix: Stuff involving B2B customers and integrating systems, where "developer" blurs a bit with sales-engineer or consultant, ex:
* Be on a call to ask significant questions (grounded in reading the code) to determine what the customer's real problem is.
* Help craft diplomatic but accurate explanations of what's going wrong.
* Explain what bugs or changes you can do for them versus which parts are fully on their end, sometimes with "here's what I think your engineers should consider doing" advice.
Not always fun, but sometimes enlightened self-interest means I'd rather spend an hour being "one of our developers" in a customer meeting, as opposed to 6 hours discovering everything is actually working as intended and the customer just misunderstood what feature they were using.
I wrote JavaScript in the corporate world for 15 years. Here is the reality:
* Almost nobody wants to do it. The people that get paid for it don't want to do it. They just want to get paid. The result is that everybody who does get paid for it completely sucks. Complete garbage, at least at work. There are a lot of amazing people writing JavaScript, just not at work, and why would they try harder? Delivering quality at work far outside the bell curve just results in hostility, aside from some very rare exceptions. My exception was when I was doing A/B testing for a major .com.
* Since everybody in the corporate JavaScript world completely sucks, every major project eventually fails from a business perspective or stalls into lifeless maintenance mode. It just gets too expensive to maintain 5+ years later, or too fragile to pivot to the next business demand. So it has to get refactored or rebuilt. Sometimes that means hoping the next-generation framework is ready, and the business is willing to train people on it, and willing to go through growing pains. More often this means calling in outside parties who can do it correctly the first time. It's not about scale. It's about the ability to actually build something original and justify every hour productively. I was on both sides of that fence.
* The reason why the corporate overlords hire outside parties to fix problems from internal teams isn't just about talent. Keep in mind it's tremendously expensive. Yes, those people are capable of producing something that doesn't suck, and of doing so faster. The bigger issue is that they will always deliver reliably, because they are executing under a contract with a work performance statement. The internal teams do not have a contract performance definition that will kill their careers or terminate their incomes. They just have to hope the business remains financially solvent so they don't get caught in a mass layoff. This breeds a lot of entitlement and false expectations that seem to grow on each other.
So, yes, in this case it really is about the ability to write code physically. Yes, you need to juggle client nonsense and have soft skills too, but those are layered on top of just being able to write the code. When your options are limited to a bunch of 0s that depend on copy/paste from predefined framework templates you need somebody who can actually justify their existence in a very practical solutions delivery way.
But to be fair, your point, and the original author's, isn't a bad one. And I think someone else in this thread said it too. If your only skill is typing out code against a very narrow spec that someone else did all the work to figure out, you're probably in trouble.
Energy. The new constraint is energy.
You're implying that people are selling inference at below cost right now. That's certainly not true for most third-party inference providers. I doubt API pricing at Anthropic or OpenAI is being sold below cost either.
The only place where you get what you're talking about are the fixed price plans OpenAI, Anthropic, Cursor, etc. sell.
As far as I can tell the inference provider landscape is a fucking mess, and I can't find any decent financial information on any of the ones I tried. So unless you have something showing those companies are profitable, I'm not buying it.
Inference itself will keep getting cheaper.
They have one method of monetization right now, and there is no clear evidence that their costs are suddenly going to decrease anytime soon. Despite claims to the contrary, no one has actually provided any evidence of a pathway to those costs magically cutting in half over the next few years.
The entire industry is being propped up by insane over investment and an obsession with growth at all costs. Investments will dry up sooner or later, and you can't grow forever.
So how was that ever going to be a problem?
The optimal choice for marginal costs, which will naturally drop on their own, at the beginning of a new tech cycle is to run in the red. It would be a sign of gross incompetence if they were fine tuning those costs already.
Training spend is the giant expense. And either training costs are unsustainable, and training spend will hit a pause, or it is not unsustainable and training spend will continue.
So, which is it?
Critical point: The majority of their costs are not required to serve the highest level of capability they have achieved at any given time.
That is unusual. In the sense that it is an exceptionally healthy cost control structure. Note that not even open source offers a cost advantage, for training or inference.
Why would the ai model makers charge less than $149/hr ?
Why hasn't outsourcing attacked equal chunks of that $150/hr all these years now?
If companies don't realize that if an employee is required 1 hour per week, they still need a full salary that covers all the rising costs of housing/food/health/necessities... then most knowledge workers just die. Even more so in up-and-coming countries; it's just massive world war and suffering if we don't change capitalism somehow. Why would AI employees keep working toward such destruction and death?
What if the 80/20 problem is more like reality, and machine learning and LLMs can never get that last 20 percent working right, but now nobody knows how to finish the last chunks? Seems more like the last 10% of coders that don't die should be charging hundreds of times more $/hr. Even this doesn't fix the problem, because the knowledgeable will die off and nobody hired juniors for years.
You think housing prices can sustain their value in this nonsense? Old people who are still in charge of everything and own everything will destroy all of this before it starts affecting their assets
Honestly this just feels like a roundabout way of saying software development is dead (this leaves aside the validity of the point, just to point out a contradiction in the author's message where the author seems to be saying that software development is dead in substance even while denying that at the surface).
Let me rewrite this entirely just using typists, which is a profession that has definitely been killed by technology.
> You might be expecting that here is where I would start proclaiming the death of typists as an industry.... I'm not going to do that, because I absolutely don't believe it. Voice transcription and/or personal computers mean that anything you know how to say can be transcribed very rapidly. Read that sentence carefully. If you know just what words need to be transcribed to solve an issue you want, the angels will grant you that text at the cost of some computer hardware.... for some typists, this revolution is not going to go well. Omelets are being made, which means that eggs will be broken.... Those that succeed in making this transition are going to be those with higher-order skills and larger vision. Those who have really absorbed what it means to be writers first and typing guys second.... Those that succeed in making this transition are going to need to accept that they are businessmen just as much as they are typists.
It still works, but only because of an extremely expansive definition of a "typist" that includes being an actual writer or businessman.
If your definition of "software developer" includes "businessman" I think that's simply too broad a definition to be useful. What the author seems to be saying is that software development will simply become another skill of an all-around businessman via the help of AI rather than a specialized role. Which sure, sounds plausible, but definitely qualifies as the death of software development as a profession in my book, in the same way that personal computers have made transcribing one's words simply another skill of an all-around businessman rather than a specialized role.
(Again leaving aside the question of whether that's going to actually happen. Just saying that the future world the author is talking about is pretty much one where software development is dead.)
So do you imagine that AI will reach the point where a business guy will say "make me a web site to blah blah blah", and the AI will say "sure boss" and it will appear? Sort of what a dev/team of devs/testers/product managers/BAs would do now? The current batch is a long way from this, afaik.
But that wasn't what I was trying to get at. My point is that this is what the author was predicting, and if that were to pass, that is more or less the death of software development as a profession, contrary to what the author says when he says "I'm not going to do that, because I absolutely don't believe it."
The other thing I've been thinking is that most corporates now are mainly software (so it's been said). If that's the case, and software becomes cheaper, the barrier to entry to compete with corporates lowers and they become ripe for disruption. Insurance comes to mind, same with banking, probably others. Search? It's already disrupted. New industries will probably arise too - data validation, for example, is going to be an issue in the age of AI. The idea that making web sites for a living would last the next century was probably always very wishful thinking, but the idea that software development will disappear is also naive imho. However, it's yet to be proved that software dev is cheaper with AI.
There are lots of founder-developers, and a lot of people who can configure lots of complex tech but are just not, strictly speaking, coders.
The multitude of freely self taught programmers would suggest otherwise.
Vibe coding can work if both the programming techniques and problem domain are well understood by the LLM. For the work I do, this means front end stuff.
Back end stuff is where the problem domain sits. I spend so much time explaining the problem domain to the LLM that it's best I just write it all myself instead of cleaning up the piles of code the LLM will spit out. So no vibing on the back end... just crack open the AI assistant for debugging.
I don't see companies replacing $150/hour programmers with AI. Not yet. I think what we are seeing is companies spending heaps of money/attention on AI to the point of not making hiring decisions on programmers.
Just prior to the first dot com boom, companies were reinventing themselves with systems replacing those built in the 60s through 80s. These new systems were sophisticated and, if done right, game changers. The dot com boom hit with very simple tech: click -> load next static page. This consumed all the attention. I think that's what we are experiencing now, more so than a clear job replacement.
Assuming this essay is prophetic: I'm glad I am no longer a professional programmer. I never wanted to be a businessman (which brings up visions of suits and midday bourbons).
It is incumbent on us as devs to use this tool and understand it.
The invention of the chainsaw did not eliminate the lumberjack as a profession. Lumberjacks learned how to become more productive with this dangerous new tool.
Not willing to accept ex-US devs can do a comparable job at half the price