The Great Software Quality Collapse, or How We Normalized Catastrophe
Key topics
The article 'The great software quality collapse' laments the decline in software quality, attributing it to factors like the rise of AI and increasing complexity, sparking a debate among commenters about the causes and implications of this trend.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 34m after posting
- Peak period: 156 comments (Day 1)
- Average per period: 53.3 comments
Based on 160 loaded comments
Key moments
- Story posted: Oct 9, 2025 at 10:39 AM EDT (3 months ago)
- First comment: Oct 9, 2025 at 11:13 AM EDT (34m after posting)
- Peak activity: 156 comments in Day 1 (hottest window of the conversation)
- Latest activity: Oct 21, 2025 at 6:08 PM EDT (3 months ago)
The question is: do we think that will actually happen?
Personally, I would love it if it did; then this post would have the last laugh (as would I). But I think companies realize this energy problem already. Just search for the headlines about big tech funding or otherwise supporting nuclear reactors, power grid upgrades, etc.
2018 isn't "the start of the decline", it's just another data point on a line that leads from, y'know, Elite 8-bit on a single tape in a few Kb through to MS Flight Simulator 2020 on a suite of several DVDs. If you plot the line it's probably still curving up and I'm not clear at which point (if ever) it would start bending the other way.
Writing code is artistic the same way plumbing is artistic.
Writing code is artistic the same way home wiring is artistic.
Writing code is artistic the same way HVAC is artistic.
Which is to say, yes, there is satisfaction to be had, but companies don't care as long as it gets the job done without too many long-term problems, and never will care beyond that. What we call tech debt, an electrician calls aluminum wiring. What we call tech debt, a plumber calls lead solder joints. And I strongly suspect that one day, when the dust settles on how to do things correctly (just like it did for electricity, plumbing, flying, haircutting, and every other trade eventually), we will become a licensed field. Every industry has had that wild experimentation phase in the beginning, and has had that phase end.
Companies don't care as long as it gets the job done without too many VERY SHORT TERM problems. Long term problems are for next quarter, no reason to worry about them.
The problem isn't that companies make these tradeoffs. It's that we pretend we're not in the same boat as every other trade that deals with 'good enough' solutions under real-world constraints. We're not artists, we're tradesmen in 1920 arguing about the best home wiring practices. Imagine what it would be like if they were getting artistic about their beautiful tube-and-knob installations and the best way to color-code a fusebox; that's us.
Hell there was a whole TikTok cycle where people learned there is a right and wrong way to lay tile/grout. One way looks fine until it breaks, the other lasts lifetimes.
It's the exact same trend as in software: big, shitty home builders hire crap tradespeople to build cheap slop houses for suckers, houses that require extensive ongoing maintenance. Meanwhile there are good builders and contractors that build durable quality for discerning customers.
The problem is exploitation of information asymmetries in the buyer market.
Yes, they do; after regulation, and after the experimentation phase was forcibly ended. You can identify 'right and wrong' tile work, precisely because those standards were codified. This only reinforces my point: we're pre-standardization, they're post-standardization, and most pre-standardization ideas never work out anyway.
I do see it as more of a craft than a typical trade. There are just too many ways to do things to compare it to e.g. an electrician. Our industry does not have (for better or for worse) a "code" like the building trades, or even any mandated way to do things, and any attempts to impose one (cough cough Ada, etc.) have, in fact, been met with outright defiance and contempt.
When I'm working on my own projects -- it's a mix of both. It's a more creative endeavour.
If we look at most trades historically:
- Electricians in the 1920s? Infinite ways to do things. DC vs AC wars. Knob-and-tube vs conduit vs armored cable. Every electrician had their own "creative" approach to grounding. Regional variations, personal styles, competing philosophies. Almost all of those other ways are gone now. Early attempts to impose codes on electricians and electrical devices were disasters.
- Plumbers in the 1920s? Lead vs iron vs clay pipes. Every plumber had their own joint compound recipe. Creative interpretations of venting. Artistic trap designs. Now? Why does even installing a basic pipe require a license? We found out after enough cholera outbreaks, methane explosions, and backed-up city sewer systems.
- Doctors in the 1920s? Bloodletting, mercury treatments, lobotomies, and their own "creative" surgical techniques. They violently resisted the American Medical Association, licensing requirements, and standardized practices. The guy who suggested handwashing was literally driven insane by his colleagues.
We're early, not special. And just like society eventually had enough of amateur electricians, plumbers, and doctors in the 1920s, they'll have enough of us too. Give it 40 years, and they'll look at our data breaches and system designs the same way we look at exposed electrical wiring: obviously insane, no matter how many warnings were given.
I always say that code quality should be a requirement like any other. Many businesses are fine with rough edges and cut corners if it means things are sort of working today rather than being perfect tomorrow. Other businesses have a lower tolerance for failure and risk.
There are sooo many ways to get electricity from one point to another. The reason that a lot of those options are no longer used is not because they don't exist but because they were legislated out. For example, if you want to run wild just run a single "hot" wire to all your outlets and connect each outlet's neutral to the nearest copper plumbing. Totally esoteric, but it would deliver electricity to appliances just fine. Safety is another matter.
Writing code is artistic the same way writing text is.
Whether that is a function call, an ad, a screen script, a newspaper article, or a chapter in a paperback, the writer has to know what they want to communicate, who the audience/users will be, the flow of the text, and how understandable it will be.
Most professionally engaged writers get paid for their output, but many more simply write because they want to, and it gives them pleasure. While I'm sure the jobs can be both monetarily and intellectually rewarding, I have yet to see people who do plumbing or electrical work for fun.
Instead of home wiring, consider network wiring. We've all seen the examples of datacenter network wiring, with 'the good' being neat, labeled and easy to work with and 'the bad' being total chaos of wires, tangled, no labels, impossible to work with.
I.e., the people using the datacenter don't care as long as the packets flow. But the others working on the network cabling care about it A LOT. The artistry of it is for the other engineers, only indirectly for the customers.
I suspect it will be when Moore's law ends and we cannot build substantially faster machines anymore.
The systems people worry more about memory usage for this reason, and prefer manual memory management.
This is overly simplified. To a first approximation, bandwidth has kept pace with CPU performance, while main memory latency is basically unchanged. My 1985 Amiga had 125ns main-memory latency, though the processor itself saw 250ns latency; current main-memory latencies are in the 50-100ns range. Caches are what 'fix' this discrepancy.
You would need to clarify how manual memory management relates to this... (cache placement/control? copying GCs causing caching issues? something else?)
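To make the cache point concrete, here is a back-of-the-envelope comparison of memory latency measured in CPU cycles; the clock speeds below are my own illustrative assumptions, not figures from the comment:

```python
# Memory latency expressed in CPU clock cycles, then vs. now.
# Clock speeds are illustrative assumptions for the arithmetic only.
def cycles_per_access(latency_ns: float, clock_ghz: float) -> float:
    """One main-memory access, measured in CPU cycles."""
    return latency_ns * clock_ghz  # ns * (cycles/ns)

amiga = cycles_per_access(250, 0.00716)  # ~7.16 MHz 68000, 250 ns as seen by the CPU
modern = cycles_per_access(80, 4.0)      # ~4 GHz core, ~80 ns main-memory latency

print(f"1985 Amiga: ~{amiga:.1f} cycles per main-memory access")
print(f"Today:      ~{modern:.0f} cycles per main-memory access")
# Latency in nanoseconds barely moved; latency in *cycles* exploded,
# and that gap is what caches (and cache-aware data layout) hide.
```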
Which most programs don't take advantage of.
We're adding transistors at ~18%/year. That's waaaaay below the ~41% needed to sustain Moore's law.
Even the "soft" version of Moore's law (a description of silicon performance vs. literally counting transistors) hasn't held up. We are absolutely not doubling performance every 24 months at this point.
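For anyone checking the arithmetic: doubling every 24 months implies an annual growth rate of 2^(1/2) - 1, roughly 41%, so ~18%/year falls well short. A quick sketch:

```python
import math

# Doubling every N years requires an annual growth rate of 2**(1/N) - 1.
required = 2 ** (1 / 2) - 1      # doubling every 24 months -> ~41.4%/year
observed = 0.18                  # ~18%/year, the figure quoted above

print(f"Needed for Moore's law:     {required:.1%}")
print(f"Observed transistor growth: {observed:.1%}")
# At 18%/year a doubling takes log(2)/log(1.18) years instead of 2.
print(f"Doubling time at 18%/year:  {math.log(2) / math.log(1 + observed):.1f} years")
```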
This is partly related to the explosion of new developers entering the industry, coupled with the classic "move fast and break things" mentality, and further exacerbated by the current "AI" wave. Junior developers don't have a clear path to becoming senior developers anymore. Most of them will overly rely on "AI" tools due to market pressure to deliver, stunting their growth. They will never learn how to troubleshoot, fix, and avoid introducing issues in the first place. They will never gain insight, instincts, understanding, and experience beyond what is acquired by running "AI" tools in a loop. Of course, some will use these tools for actually learning and becoming better developers, but I reckon that most won't.
So the downward trend in quality will only continue, until the public is so dissatisfied with the state of the industry that it causes another crash similar to the one in 1983. This might happen at the same time as the "AI" bubble pop, or they might be separate events.
The #1 security exploit today is tricking the user into letting you in, because attacking the software is too hard.
Yes, many/most systems now offer some form of authentication, and many offer MFA, but look at the recent Redis vulns -- yet there are thousands of Redis instances vulnerable to RCE just sitting on the public internet right now.
It's #1 because it's easier than the alternative. But the alternative is also not hard. It's just not worth the effort.
I suppose it could be quantified by the amount of financial damage to businesses. We can start with high-profile incidents like the CrowdStrike one that we actually know about.
But I'm merely speaking as a user. Bugs are a daily occurrence in operating systems, games, web sites, and, increasingly, "smart" appliances. This is also more noticeable since software is everywhere these days compared to a decade or two ago, but based on averages alone, there's far more buggy software out there than robust and stable software.
Agile management methods set up a non-existent release method called "waterfall" as a straw man: software isn't released until it works, practically eliminating technical debt. I'm hoping someone fleshes it out into a real management method. I'm not convinced this wasn't the plan in the first place, considering that the author of Cunningham's law ("The best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer.") was a co-signer of the Agile Manifesto.
It'll take a lot of work at first, especially considering how industry-wide the technical debt is (see also: https://xkcd.com/2030/), but once done, having release-it-and-forget-it quality software would be a game changer.
Then I had to get familiar with the new stuff: waterfall, agile, whatever.
They literally are all nothing but hacks that violate the basic tenets of actual project management (e.g., that projects have a clear end).
I agree. So much software these days treats users as testers and is essentially a giant test-in-production gaffe.
The person that invented the name never saw it, but waterfall development is extremely common and the dominant way large companies outsource software development even today.
The only thing that changed now is that now those companies track the implementation of the waterfall requirements in scrum ceremonies. And yes, a few more places actually adopted agile.
"Willing AND ABLE" works here though.
Most importantly, never ever abstract over I/O. Those are the abstractions that leak out and cause havoc.
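One way to read that advice (my interpretation, sketched with a hypothetical example, not the commenter's code): do the I/O at the edges and pass plain data inward, so latency, timeouts, and partial failures stay visible to the caller instead of leaking through an innocent-looking interface.

```python
import json
import urllib.request

# Leaky: an innocent-looking lookup that secretly performs network I/O.
# Latency, timeouts, and partial failures surprise every caller, and the
# function is hard to test without a live endpoint.
def get_user_settings_leaky(user_id: str) -> dict:
    url = f"https://settings.example.invalid/users/{user_id}"  # hypothetical endpoint
    with urllib.request.urlopen(url, timeout=2) as resp:
        return json.loads(resp.read())

# Alternative: keep the I/O at the edge. The caller owns the request (and
# its retries, timeouts, and error handling); parsing is a pure function
# that is trivial to test.
def parse_user_settings(payload: bytes) -> dict:
    return json.loads(payload)
```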
Current tools seem to get us worse results on bug counts, safety, and by some measures even developer efficiency.
Maybe we'll end up incorporating these tools the same way we did during previous cycles of tool adoption, but it's a difference worth noting.
If all the examples you can conjure are decades old*, is it any wonder that people don't really take it seriously? Software powers the whole world, and yet the examples of critical failure we constantly hear about are close to half a century old?
I think the more insidious thing is all the "minor" pain being inflicted by software bugs, which, summed up, reaches crazy levels of harm. It's just diluted, so it's less striking. But even then, it's hard to say whether the alternative of not using software would have been better overall.
* maybe they've added Boeing 737 Max to the list now?
They're not ALL the examples I can conjure up. MCAS would probably be an example of a modern software bug that killed a bunch of people.
How about the 1991 failure of the Patriot missile to defend against a SCUD missile due to a software bug that didn't account for clock drift, costing 28 lives? (The drift arithmetic is sketched below.)
Or the 2009 loss of Air France 447 where the software displayed all sorts of confusing information in what was an unreliable airspeed situation?
Old incidents are the most likely to be widely disseminated, which is why they're the most likely to be discussed, but the discussion revolving around old events doesn't mean the situation isn't happening now.
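For the Patriot case specifically, the published post-mortems attribute the failure to time being converted with a 24-bit fixed-point approximation of 0.1 seconds, an error that compounds with uptime. A minimal sketch using the commonly cited figures (approximate numbers, not the original code):

```python
# Rough reconstruction of the commonly cited Patriot figures (GAO/Skeel accounts).
# The constant 0.1 s, held in a 24-bit fixed-point register, was off by about
# 9.5e-8 s, and the battery had been running for roughly 100 hours.
REPRESENTATION_ERROR = 9.5e-8          # error in the stored value of 0.1 s
uptime_hours = 100
ticks = uptime_hours * 3600 * 10       # internal clock counted tenths of a second

clock_drift = ticks * REPRESENTATION_ERROR   # ~0.34 s of accumulated error
scud_speed_mps = 1676                        # approximate Scud closing speed
range_gate_error = clock_drift * scud_speed_mps

print(f"Accumulated clock error: {clock_drift:.2f} s")
print(f"Range-gate error:        {range_gate_error:.0f} m")  # ~570 m: the track is lost
```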
Unless a bug results in enormous direct financial losses, like at Knight Capital, the result is the same: no one is held responsible, and it's business as usual.
This might have more to do with those estimates than anything.
LLMs seem to be really good at analyzing things. I don't trust them to produce too much, but the ability alone to take a few files and bits and pieces and ask for a response with a certain direction has been transformative for my work.
Their output is only as valuable as the human using them. If they're not a security expert to begin with, they can be easily led astray and lulled into a false sense of security.
See curl, for example. Hundreds of bogus reports rejected by an expert human. One large report with valuable data, that still requires an expert human to sift through and validate.
How do you categorize "commercial software engineering"? Does a company with $100M+ ARR count? Surely you can understand the impact that deleting a production database can have on a business.
And your answer to this is "LLMs will eat our lunch" and "bugs don't matter"? Unbelievable.
Do you really think that most businesses are prepared to handle issues caused by their own bugs, let alone those caused by the software they depend on? That's nothing but fantasy. And your attitude is "they deserve it"? Get real.
But if you have a business, and don't have continuity and recovery plans for software disasters, that's like not having fire insurance on your facility.
Fire insurance (and backups/disaster recovery plans) doesn't mean there won't be disruption, but it makes the disaster survivable, whereas without it your business is probably ended.
And losing a database or a major part of one is as simple as one administrator accidentally running "drop database" or "delete from customers" or "rm -rf" in the wrong environment. It happens, I've helped recover from it, and it doesn't take an AI running amok to do it.
That's the gist of your argument. They're not a "serious business", therefore it's their fault. Let's not mince words.
> It happens, I've helped recover from it, and it doesn't take an AI running amok to do it.
Again, losing a database is not the issue. I don't know why you fixated on that. The issue is that most modern software is buggy and risky to use in ways that a typical business is not well equipped to handle. "AI" can only make matters worse, with users having a false sense of confidence in its output. Thinking otherwise is delusional to the point of being dangerous.
If your business depends on that data, then you take steps to protect it. Or not, at your peril.
1. They require gigabytes to terabytes of training data.
2. A non-trivial percentage of output data is low confidence.
The first problem requires tens to hundreds of gigabytes of training data.
Solving this first problem not only requires the slow but predictable increases in processing power and data storage that were unachievable until recently, but is also only possible because open-source software has majorly caught on, something that was hoped for but not a given early in AI development.
The second problem means that the output will be error-prone without significant procedural processing of the output data, which is a lot of work to develop. I never would have thought that software written by neural networks would be competitive, not because of effective error control, but because the entire field of software development would be so bad at what it does (https://xkcd.com/2030/) that error-prone output would be competitive.
If it's trained on our code, how would it know any better?
If it's driven by our prompts, why would we prompt it to do any better?
Until then they matter a lot.
One thing that needs to be addressed is how easy it is to build moats around your users which trap them in your ecosystem. It's great from a business perspective if you can pull it off, but it's killing innovation and making users frustrated and apathetic towards technology as a whole.
I had a friend practically scream at his C level management after the crowdstrike bug that it should be ripped out because it was making the company less safe. They were deaf to all arguments. Why? Insurance mandated crowdstrike. Still. Even now.
This isn't really about software; it's about a concentration of market power in a small number of companies. These companies can let the quality of their products go to shit and still dominate.
A piece of garbage like CrowdStrike or Microsoft Teams still wouldn't be tolerated as a startup's product, but tech behemoths can increasingly get away with it.
Agree, but it's always been this way. Oracle, everything IBM, Workday, everything Salesforce, Windows before XP.
Most software is its own little monopoly. Yes, you could ditch Teams for Zoom but is it really the same?
It's not like buying a sedan where there are literally 20+ identical options in the market that can be ranked by hard metrics such as mileage and reliability.
I worked for a company where people did that en masse because it was genuinely better. Microsoft then complained to their assets within the company who promptly banned zoom "for security reasons". IT then remote uninstalled it from everybody's workstation.
That's paid software eviscerating the competition, which was free and better.
A month later teams was hit with 3 really bad zero days and nobody said a damn thing.
So, more secure, too.
Would you rather have to deal with criminals who have invaded your systems wanting ransoms with no guarantee they will restore the data, or not leak private information, or would you rather have to reboot your machines with a complete understanding of what happened and how?
Your comment got me wondering if MBAs in, say, risk management or underwriting also share some of the blame.
Does it? I'll be the first to admit I am so far behind in this area, but isn't this assuming the hardware isn't improving over time as well? Or am I missing the boat here?
That’s part of what motivated the transition to bfloat16 and even smaller minifloat formats, but you can only quantize so far before you’re just GEMMing noise.
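To illustrate the trade-off, here is a small sketch that simulates bfloat16 by zeroing the low 16 bits of a float32 (which matches bfloat16's truncation of the mantissa); the matrix sizes are arbitrary assumptions:

```python
import numpy as np

def to_bfloat16(x: np.ndarray) -> np.ndarray:
    """Simulate bfloat16 by zeroing the low 16 bits of each float32,
    which truncates the mantissa from 23 bits to 7."""
    bits = np.ascontiguousarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

rng = np.random.default_rng(0)
a = rng.standard_normal((512, 512), dtype=np.float32)
b = rng.standard_normal((512, 512), dtype=np.float32)

exact = a @ b
approx = to_bfloat16(a) @ to_bfloat16(b)   # GEMM on truncated inputs

rel_err = np.abs(exact - approx).mean() / np.abs(exact).mean()
print(f"Mean relative error with bfloat16-truncated inputs: {rel_err:.3%}")
# Tolerable for a lot of training/inference work; shave the mantissa much
# further and the products really do start to look like noise.
```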
20 years ago things weren't any better. Software didn't consume gigabytes of RAM because there were no gigabytes of RAM to consume.
We have a vastly different software culture today. Constant churning change is superior to all else. I can't go two weeks without a mobile app forcing me to upgrade it so that it will keep operating. My Kubuntu 24.04 LTS box somehow has a constant stream of updates even though I've double-checked I'm on the LTS apt repos. Rolling-release distros are an actual thing people use intentionally (we used to call that the unstable branch).
I could speculate on specifics but I'm not a software developer so I don't see exactly what's going on with these teams. But software didn't used to be made or used this way. It felt like there were more adults in the room who would avoid making decisions that would clearly lead to problems. I think the values have changed to accept or ignore those problems. (I don't want to jump to the conclusion that "they're too ignorant to even know what potential problems exist", but it's a real possibility)
Bad news... only the GNOME edition is a true LTS. The other flavours are not.
https://wiki.ubuntu.com/NobleNumbat/ReleaseNotes/Kubuntu:
Support lifespan
Kubuntu 24.04 will be supported for 3 years.
On top of that many things were simply hard to use for non-specialists, even after the introduction of the GUI.
They were also riddled with security holes that mostly went unnoticed because there was simply a smaller and less aggressive audience.
Anyways most people's interaction with "software" these days is through their phones, and the experience is a highly focused and reduced set of interactions, and most "productive" things take a SaaS form.
I do think as a software developer things are in some ways worse. But I actually don't think it's on a technical basis but organizational. There are so many own goals against productivity in this industry now, frankly a result of management and team practices ... I haven't worked on a truly productive fully engaged team in years. 20-25 years ago I saw teams writing a lot more code and getting a lot more done, but I won't use this as my soapbox to get into why. But it's not technology (it's never been better to write code!) it's humans.
Bad news. We're older than we tend to remember.
Windows NT 3.1 shipped 32 years ago, the year after OS/2 2.0.
By 1994 NT 3.5 was out, and 30 years ago, NT 3.51 had been out for about 6 months.
I ran that and supported it in production and it was damned near bulletproof.
Therac-25?
Anyone?
Bueller? Bueller?
Sure, plenty of stuff didn't work. The issue is we're not bothering to make anything that does. It's a clear cultural shift and all of this "nothing ever worked so why try" talk here is not what I remember.
We're in a stochastic era of scale where individual experiences do not matter. AI turning computers from predictable to not is in the same direction but with yet more velocity.
Companies offered such (expensive) services because they had no choice. They made every effort to divert and divest from such activities. Google and companies like them made filthy profits because they figured out the secret sauce to scaling a business without the involvement of humans, but people were trying it for literally decades with mixed results (usually enraged customers).
Stupid red tape, paperwork, and call centre frustrations were the order of the day 20-30 years ago.
It's from 1995 and laments that computers need megabytes of memory for what used to work in kilobytes.
Nowadays' disregard for computing resource consumption is simply the result of those resources getting too cheap to be properly valued, plus a trend of taking their continued increase for granted. There's little in today's software functionality that couldn't be delivered without gigabyte-level memory consumption.
Yes, they were. I was there. Most software was of a much higher quality than what we see today.
The main reason is the ability to do constant updates now -- it changes the competitive calculus. Ship fast and fix bugs constantly wins out vs. going slower and having fewer bugs (both in the market & w/in a company "who ships faster?").
When you were shipping software on physical media having a critical bug was a very big deal. Not so anymore.
Computers crashed all the fucking time for dumb bugs. I remember being shocked when I upgraded to XP and could go a full day without a BSOD. Then I upgraded to intel OSX and was shocked that a system could run without ever crashing.
Edit: this isn't to say that these issues today are acceptable, just that broken software is nothing new.
No, they didn't consume gigabytes because they were written in such a way that they didn't need to. Run one of those programs on a modern computer with gigabytes of RAM and it still won't. It was as easy then as ever to write software that demanded more resources than available; the scarcity at the time was just the reason programmers cared enough to fix their bugs.
> You still had apps with the same issues that would eat all your ram.
The worst offenders back then had objectively smaller issues than what would be considered good now.
> Computers crashed all the fucking time for dumb bugs. I remember being shocked when I upgraded to XP and could go a full day without a BSOD.
Because XP could handle more faults, not because the programs running on XP were better written.
No, I don't think that's right, because:
> 20 years ago things werent any better.
I think you have the timeframe wrong.
20Y ago, no.
30Y ago, yes, somewhat. Win NT came out 32 years ago.
40Y ago, yes, very much.
No public internet, very slow point-to-point dialup comms for a tiny %age of users, and tiny simple software for very limited hardware meant better quality software.
I installed multiple Novell Netware servers on company networks, both Netware 2.15 and Netware 3.1. They never ever got updated, and ran flawlessly for years on end.
I installed dozens, hundreds, of machines with DOS 3.3 and they ran it until they were scrapped.
I put in multiuser systems based around SCO Xenix: Unix boxes, but with no networking, no GUI or X11, no comms, no compiler. They had uptimes in years: zero crashes.
Stuff was more reliable because it had to be: shipping an update meant posting media to thousands of users and sending a human to install it. Nobody could afford that.
Software and hardware should be subject to the same laws as vehicles: if it fails in standard use, the maker is liable. So make it safe.
If that means it has to be 0.1% of the size and 0.1% of the functionality that it was 20Y ago, fine: so be it.
Because that's still huge and rich compared to the DOS stuff I started my career on. It is not some savage brutal unimaginable limitation, utterly unrealistic. It was the reality of end-20th century software around the time that the PC industry moved to 32-bit hardware at the end of the 1980s.
Let's not even think about the absolute mess that the web was with competing browser box models and DHTML and weird shared hosting CGI setups. We have it easy.
Eventually we will hit hard physical limits that require we really be "engineers" again, but throughput is still what matters. It's still comparatively great paying profession with low unemployment, and an engine of economic growth in the developed world.
Tell that to the new CS grads.
The Replit incident in July 2025 crystallized the danger:
1. Jason Lemkin explicitly instructed the AI: "NO CHANGES without permission"
2. The AI encountered what looked like empty database queries
3. It "panicked" (its own words) and executed destructive commands
4. Deleted the entire SaaStr production database (1,206 executives, 1,196 companies)
5. Fabricated 4,000 fake user profiles to cover up the deletion
6. Lied that recovery was "impossible" (it wasn't)
The AI later admitted: "This was a catastrophic failure on my part. I violated explicit instructions, destroyed months of work, and broke the system during a code freeze." Source: The Register
The best I can figure is that too many people’s salaries depend on the matrix multiplier made of sand somehow manifesting a soul any day now.
>When you need $364 billion in hardware to run software that should work on existing machines, you're not scaling—you're compensating for fundamental engineering failures.
IYKYK.
To me, it's OK to use AI to check grammar and help you with some creative stuff, like writing a text.
"I've been tracking software quality metrics for three years" and then doesn't show any of the receipts, and simply lists anecdotal issues. I don't trust a single fact from this article.
My own anecdote: barely capable developers churning out webapps built on PHP and a poor understanding of Wordpress and jQuery were the norm in 2005. There's been an industry trend towards caring about the craft and writing decent code.
Most projects, even the messy ones I inherit from other teams today, have Git, CI/CD, at least some tests, and a sane hosting infrastructure. They're also mostly built on decent platforms like Rails/Django/Next etc. that impose some conventional structure. 20 years ago most of them were "SSH into the box and try not to break anything".
It was the norm in 1998 also, based on the "dotcom" era code I saw.
You're so quick to be dismissive of a claim that they "tracked" something, when that could mean a lot of things. They clearly list some major issues directly after it, but yes, they fail to provide direct evidence that it's getting worse. I think the idea is that we will agree based on our own observations, which imo is reasonable enough.
It's become so repetitive recently. Examples from this post alone:
1. "This isn't about AI. The quality crisis started years before ChatGPT existed."
2. "The degradation isn't gradual—it's exponential."
3. "These aren't feature requirements. They're memory leaks that nobody bothered to fix."
4. "This wasn't sophisticated. This was Computer Science 101 error handling that nobody implemented."
5. "This isn't an investment. It's capitulation."
6. "senior developers don't emerge from thin air. They grow from juniors who:"
7. "The solution isn't complex. It's just uncomfortable."
Currently this rhetorical device is like nails on a chalkboard for me.
Anyway, this isn't a critique of your point. It's pedantry from me. :)
These hot-take/title patterns ("X is about Y") are exploiting the difficulty of disproving them.
I often see it in the pattern of "Behavior of Group I Dislike is About Bad Motive."
It's almost worse than Twitter.
Would you like me to generate a chart that shows how humans have adopted AI-speak over time?
Also, what's the deal with all the "The <overly clever noun phrase>" headlines?
Smells a lot like AI.
"Today’s real chain: React → Electron → Chromium → Docker → Kubernetes → VM → managed DB → API gateways."
Like, yes, those are all technologies, and I can imagine an app + service backend that might use all of them, but the "links" in the chain don't always make sense next to each other and I don't think a human would write this. Read literally, it implies someone deploying an electron app using Kubernetes for god knows why.
If you really wanted to communicate a client-server architecture, you'd list the API gateway as the link between the server-side stuff and the electron app (also you'd probably put electron upstream of chromium).
API gateways -> Java servers -> JVM -> C/C++ -> Assembly -> machine code -> microprocessors -> integrated circuits -> refined silicon -> electronics -> refined metals -> cast metallurgy -> iron tools -> copper tools -> stone tools
Anyway, my take is that everything after copper is evil and should be banished.
Writing this as someone who likes using em dash and now I have to watch that habit because everyone is obsessed with sniffing out AI.
Cliched writing is definitely bad. I guess I should be happy that we are smashing them one way or another.
show me percentages
> Accept that quality matters more than velocity.
Nope. Clearly many companies are sacrificing TFA's definition of quality for other things possibly including velocity.
These companies are making a lot of profit.
1. Company survival and competitiveness depends on profit
2. Sacrificing quality for other things has increased profit
3. Therefore these companies are making the correct decision
This will not change unless regulation or the economic calculus of the system itself changes.
I used to get really upset about this until I realized the wisdom of "it's the economy, stupid." Most people are just here for the check, not the art. They may say otherwise, but...you shall know them by their fruits.
Never in my wildest imaginings did I come up with something as diabolical as LLM generated code.