The Software Engineers Paid to Fix Vibe Coded Messes
Key topics
The article discusses the rise of "vibe coding", where non-technical individuals use AI tools to generate code, and the subsequent need for skilled software engineers to fix the resulting messes, sparking a debate on the role of AI in software development.
Snapshot generated from the HN discussion
Discussion Activity
- Engagement: moderate
- First comment: 5h after posting
- Peak period: 10 comments in the 15-18h window
- Average per period: 3.6 comments
- Based on 40 loaded comments
Key moments
- Story posted: Sep 13, 2025 at 5:26 PM EDT
- First comment: Sep 13, 2025 at 10:07 PM EDT (5h after posting)
- Peak activity: 10 comments in the 15-18h window, the hottest stretch of the conversation
- Latest activity: Sep 15, 2025 at 12:17 PM EDT
Recurring complaints about LLM-generated code from the thread:
- Hallucinations
- Context limits
- Lack of test coverage and a testing-based workflow
- Lack of actual docs
- Lack of a spec
- Great README; cool emoji
You're better off feeding them a few files to work with, in isolation, if you can.
What does the (Copilot) /tests command do, compared to a prompt like "Generate tests for #symbolname, run them, then modify the FUT (function under test) and re-run the tests in a loop until they pass"?
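For comparison, the loop that prompt describes can be sketched in a few lines. This is a hypothetical illustration, not Copilot's actual behavior; `ask_llm` is a placeholder for whatever model call you wire up:

```python
# Hypothetical sketch of the "generate tests, run, fix, repeat" loop.
# ask_llm() is a placeholder, not a real API; wire it to your model.
import subprocess


def ask_llm(prompt: str) -> str:
    raise NotImplementedError("connect this to your model of choice")


def test_fix_loop(test_file: str, max_rounds: int = 5) -> bool:
    """Run tests; on failure, feed the output back to the model."""
    for _ in range(max_rounds):
        result = subprocess.run(
            ["pytest", test_file], capture_output=True, text=True
        )
        if result.returncode == 0:
            return True  # tests pass; stop iterating
        # This sketch only prints the suggested fix; a real agent would
        # apply it to the function under test and loop again.
        print(ask_llm(f"These tests fail:\n{result.stdout}\nSuggest a fix."))
    return False
```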
Documentation is probably key to the Django web framework's success, for example.
Resources useful for learning to write great docs: https://news.ycombinator.com/item?id=23945815
"Ask HN: Tools to generate coverage of user documentation for code" https://news.ycombinator.com/item?id=30758645
You have to tell it to validate its own work by adding to, refactoring, and running the tests before it replies.
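One way to phrase such a standing instruction, as an illustrative example rather than a quote from the thread: "Before you reply, add or update tests for the code you changed, run the full suite, and only answer once it passes, including the test output in your reply."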
Most junior developers do care, and would never dump partial solutions on a prompter as though they were sufficient, the way LLMs do.
It helps every time I remember to get `make test-coverage` working and have myself or the LLM focus on lines that aren't covered by tests.
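A minimal sketch of that coverage-driven step, assuming pytest with the pytest-cov plugin; the `src` path and the idea of handing the output to an LLM are assumptions, not details from the thread:

```python
# Illustrative sketch: run the suite under coverage and list untested
# lines, so a human or an LLM can be pointed at exactly what to cover.
# Assumes pytest + pytest-cov; "src" is a placeholder source directory.
import json
import pathlib
import subprocess


def uncovered_lines(src_dir: str = "src") -> dict[str, list[int]]:
    # --cov-report=json writes coverage.json in the working directory.
    subprocess.run(
        ["pytest", f"--cov={src_dir}", "--cov-report=json"],
        check=False,  # a failing suite should still produce a report
    )
    report = json.loads(pathlib.Path("coverage.json").read_text())
    return {
        path: data["missing_lines"] for path, data in report["files"].items()
    }


if __name__ == "__main__":
    for path, lines in uncovered_lines().items():
        if lines:
            print(f"{path}: untested lines {lines}")
```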
Junior or senior, an employee wouldn't turn in such incomplete, non-compiling assignments that percentage of the time, even given inadequate prompts as specifications.
A human software developer doesn't code in a void; they interact with others.
The same goes when you have an AI coder: you interact with it. It's not fire-and-forget.
I don't think this is entirely true. In a lot of cases, vibe coding can be a good way to prototype something and see how users respond. Obviously don't do it where security is a concern, but that vibe-coded skin-cancer-recognition quiz that was on the front page the other day is a good example.
This was a constant pattern in software engineering even before LLMs, but LLMs are making it much worse, and I think it's very head-in-the-sand behavior to ignore that. It's akin to going "well, you can't blame the Autopilot, because the person should have been fully attentive, ready to react at any millisecond." That's not how humans work, and good engineering is supposed to take real-world human behavior into consideration.
Right now, vibe coding just means there might be a lot more of this, assuming vibe coding succeeds well enough to compete with the situations I described, but at another scale.
I tell my CS students who ask whether there will be any junior positions left for them when they graduate:
There will be an entire new industry of people who vibed 1,000 lines of MVP and are now stuck with something they can't debug. The job won't be called "junior developer", but it will be someone who actually knows programming.
Also, they will continue to deliver code that is full of security holes, because programming teachers are often not competent to teach those aspects, and IT security professionals who teach tend to be poor programmers or paper pushers.
You clearly haven't worked with humans.
The first type of merge request is one that should be generated by an LLM and the second is one that should be generated by a human.
Instead I get neither, but I get "efficiency" so someone can deliver at the last minute. And so I go mop up the work later, or my job is hell the next time "we just need to get this out the door."
THANK YOU LLMS
> AI now lets anyone write software, but it has limits. People will call upon software practitioners to fix their AI-generated code.
https://www.slater.dev/about-that-gig-fixing-vibe-code-slop/
The real problem arises when non-technical people use an LLM to generate a full project from scratch. The code may work, but it’s often unmaintainable. These people sometimes believe they’re geniuses and view software engineers as blockers, dismissing their concerns as mere technical “mumbo jumbo.”
But unlike that six-year gap during the tech nuclear winter (2000-2006), when you could literally follow those over-confident $10/hr kids around, cleaning up one botched effort to port custom Windows apps to LAMP after another, this time it will be different. The LLMs are trained largely on the European-dominated code bases on GitHub, and that's just enough to keep the "vibe coders" out of real bad trouble (like porting a financial application from Visual BASIC to PHP, which has had different floating-point precision across distributions/releases, or de-normalizing structured customer data and storing it in KV pairs "because everybody is doing it, so relational databases must be obsolete"). The work to clean up their vibe-coded mess will not be as intense (especially since LLMs will help), but there will be a lot more of it this time around, and re-hosting it more economically will be a Thing.
Sadly, American businesses will discover they don't need trillion-parameter LLMs (thanks to MoE, quantization, agentic mini-models, etc.), the supply of acceptable vector-processing chips will catch up to demand (bringing prices down for "on-prem" deployments), and the "AI snake oil factor" (non-deterministic behavior and hallucinations) will become more than a concern expressed over weekend C-suite golf games and yacht excursions (you know, where someone always gets fired to set an example of what happens when you don't make your numbers).

AI had been dead for so long that the top C-suites can't even remember the details of how and why it died (hint: you could get fired for even saying "AI" right up until the 2000 crash, which gave rise to "ML" as a synonym for a more laser-focused application of AI); they just remember that they don't trust it. The astonishing demonstrations at OpenAI, Anthropic, xAI, Google, and Meta are enough to make C-suites write a few checks, causing a couple of ramps in the stock market, but those projects by and large are NOT working out, due to the same 'ole same 'ole, and I fear this entire paradigm will suffer the same fate as IBM Watson.

The stock market may well crash again because of this horsepucky, even though there IS true potential in this technology, just as with Web 1.0. (All it needs is a catalyst event: maybe not Bill Gates throwing a chair, maybe something in the dispute between Sammy and Elon.) Same as it ever was.