Vibe Coding Is Mad Depressing
Key topics
The world of "vibe coding" - where AI-assisted tools help developers get started quickly - is sparking heated debate, with some swearing by its prototyping benefits and others lamenting the convoluted, unmaintainable code it often produces. As one developer noted, using AI-assisted coding tools like Cursor can be a letdown, especially when the generated code becomes too complex to decipher. While some see "vibe coding" as a useful starting point, others warn that relying solely on it can lead to a dead end, with rewriting from scratch being the only viable option. The discussion highlights the tension between the excitement around AI-assisted coding and the practical challenges it poses.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 23m after posting
- Peak period: 128 comments (0-12h)
- Avg / period: 22.9
Based on 160 loaded comments
Key moments
- 01 Story posted: Dec 10, 2025 at 10:50 PM EST (27 days ago)
- 02 First comment: Dec 10, 2025 at 11:13 PM EST (23m after posting)
- 03 Peak activity: 128 comments in 0-12h, the hottest window of the conversation
- 04 Latest activity: Dec 16, 2025 at 1:04 PM EST (21 days ago)
When you write the code, you understand it. When you read the code produced by an agent, you may eventually feel like you understand it, but it's not at the same deep level as if your own brain created it.
I'll keep using new tools, I'll keep writing my own code too. Just venting my frustrations with agentic coding because it's only going to get worse.
As for the feeling you get when an LLM one-shots your project start (and does a pretty good job), the Germans have a word for it:
Automatisierungskummer
(automation sorrow) • Kummer is emotional heaviness, a mild-to-deep sadness.
It's hard to know what things will look like in 20 years, but people may miss the time when AI cost nothing, or very little, and was less fettered. I think probably not; it would be like being nostalgic for really low-res, low-frame-rate YouTube videos. But nostalgia is pretty unpredictable, and some people love those old FMV games.
I remember the feeling of realizing that I had terrible taste just like everyone else and I was putting huge amounts of effort into trying to do seamless tiling background images that still looked awful and distracting and ruined the contrast. And also the feeling of having no idea what to talk about or why anyone would care.
Now I have way too much to talk about — so much that I struggle to pick something and actually start writing — and I'm still not sure why anyone would care. But at least I've learned to appreciate plain, solid-colour backgrounds.
Put it into Google and you will see.
*as an aside, this reminds me of the classic joke where the client asks for the price list for a developer's services:
I do it: $500
I do it, but you watch: $750
I do it, and you help: $1,000
You do it yourself: $5,000
You start it, and you want me to finish it: $10,000
https://files.catbox.moe/1d87t7.jpg
I guess that, with vibe coding, it is very easy for every client to become like this.
Also the worst kind of tech line-manager: they were promoted from engineer, but still want to argue about architecture, having arrived at their strong opinion within the 15 minutes they perused the design document between meetings. If you're such a manager, you need to stop; if you're working with one, change teams or change jobs - you cannot win.
It's far from trivial when there are multiple, clearly-communicated trade-offs (documented in the design doc!) but you happen to land on opposite ends of a value judgement (e.g. pattern A is easier to maintain based on my experience with the codebase and the bugs that have popped up, but the manager thinks pattern B is simpler to implement, if brittle). The debate wastes time, and it signals mistrust when you're the staff engineer, or when the design was OK'd by staff and the rest of the team.
That isn't unique to "clients." It's human nature. Humans don't know what they don't know.
See: various exploits since computers were a thing.
“We use premiere.” Cool. I use Resolve. If we aren’t collaborating on the edit then this is an irrelevant conversation. You want a final product, that’s what you hired me for my dude. If you want me to slot into your existing editing pipeline that’s a totally different discussion.
If you pay me for a 30s highlight, you get a 30s highlight. If you don’t like the highlight itself that’s a different discussion.
2009 anyone? https://theoatmeal.com/comics/design_hell
For example, "Can't we just add a button that does this?"
I never faced or witnessed that in software dev.
History doesn’t repeat itself, but it definitely rhymes – I can’t wait for the modern versions of this.
We don't have to seek it out, it finds us.
I was involved in such an attempt but it never got off the ground.
DonHopkins on Feb 16, 2022 | prev | next [–]
When I implemented the pixelation censorship effect in The Sims 1, I actually injected some random noise every frame, so it made the pixels shimmer, even when time was paused. That helped make it less obvious that it wasn't actually censoring penises, boobs, vaginas, and assholes, because the Sims were actually more like smooth Barbie dolls or GI-Joes with no actual naughty bits to censor, and the players knowing that would have embarrassed the poor Sims.
[...]
The other nasty bug involving pixelization that we did manage to fix before shipping, but that I unfortunately didn't save any video of, involved the maid NPC, who was originally programmed by a really brilliant summer intern, but had a few quirks:
A Sim would need to go potty, and walk into the bathroom, pixelate their body, and sit down on the toilet, then proceed to have a nice leisurely bowel movement in their trousers. In the process, the toilet would suddenly become dirty and clogged, which attracted the maid into the bathroom (this was before "privacy" was implemented).
She would then stroll over to toilet, whip out a plunger from "hammerspace" [1], and thrust it into the toilet between the pooping Sim's legs, and proceed to move it up and down vigorously by its wooden handle. The "Unnecessary Censorship" [2] strongly implied that the maid was performing a manual act of digital sex work. That little bug required quite a lot of SimAntics [3] programming to fix!
[1] Hammerspace: https://tvtropes.org/pmwiki/pmwiki.php/Main/Hammerspace
[2] Unnecessary Censorship: https://www.youtube.com/watch?v=6axflEqZbWU
[3] SimAntics: https://news.ycombinator.com/item?id=22987435 and https://simstek.fandom.com/wiki/SimAntics
Exactly this. It's like when patients print off articles they read on WebMD to present to their doctor as a self-diagnosis. I'm glad they feel invested, but they aren't the professional.
Ask such clients: why are we here? What have previous attempts (because there have been attempts) provided and not provided, and why do you think they did or didn't have long-term viability?
This is less about coding and more about helping people learn how to think about where and how things can fit in.
It's great to go fast with vibe coding, especially if you like disposable code that you can iterate with. In the hands of a developer it might let them try more things or get more done in some way, but maybe not in all the ways, especially if the client isn't clear.
How well the client can explain what they want, with good external signals, and how well they know how to ask, will often be a huge indicator long before they try to pull you into their web of spider diagrams, like the webs spun by spiders that have taken something.
At this point, the level of puffery is on par with claiming a new pair of shoes will turn you into an Olympic athlete.
People are doing this because they’re told it works, and showing up to run a marathon with zero training because they were told the shoes are enough.
Some people may need to figure out the problem here for themselves.
Indeed, [1]
> researchers found that searching symptoms online modestly boosted patients’ ability to accurately diagnose health issues without increasing their anxiety or misleading them to seek care inappropriately [...] the results of this survey study challenge the common belief among clinicians and policy-makers that using the Internet to search for health information is harmful.
[0] https://www.cbc.ca/radio/whitecoat/man-googles-rash-discover...
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC8084564/
I have something that about a quarter percent of individuals have in the US. A young specialist would know how to treat based on guidelines but beyond that there's little benefit in keeping up to date with the latest research unless it's a special interest for them (unlikely).
Good physicians are willing to read what their patients send them and adjust the care accordingly. Prevention in particular is problematic in the US. Informed patients will have better outcomes.
But I bet what happens more often is patients showing up with random unsubstantiated crap they found on Reddit or a content farm, and I can understand health care providers getting worn down by that sort of thing. I have a family member who believed he had Morgellon’s Disease, and talking to him about it was exhausting.
Your family member... mistakenly believed that he had a psychiatric condition involving a mistaken belief?
Does your family member have the sores?
Similarly, it appears that some doctors are willing to accept that they have a limited amount of time to learn about specific topics, and that a research-oriented and intelligent patient who is very interested in a few topics can easily know more about them. In such a case, a conducive mutual learning experience may happen.
One doctor told me that what he is offering is statistical advice, because some diseases may be very rare and so it makes sense to rule out more common diseases first.
Other doctors may become defensive if they hold the idea that the doctor has the authority and patients should just accept that.
To me it's more like the board, in some small way, being shaken up, and what I mostly see is an opportunity for consultancies to excel at interfacing with clients who come to them with LLM code and LLM-generated ideas.
Now they can come with a broken prototype clearly displaying their four killer features, and I can wire up the rest, as well as include auth, an admin dashboard, a database, and stuff?
Yep. Sign me up.
Should make it easier to sort by what technical skill-level of client you want, too.
I'm not saying we need to dismiss people for using LLMs at all, for better or for worse we live in a world where LLMs are here to stay. The annoying people would have found a way to be annoying even without AI, I'm sure.
A freelance developer (or a doctor) is familiar with working within a particular framework and process flow. For any new feature, you start by generating user stories, work out a high-level architecture, think through the implementation, and only then write the code.
It's mostly a unidirectional flow. Now when the client starts giving you code, it turns into a bidirectional flow. You can't just copy/paste the code and call it done. You have to go in the reverse direction: read the code to parse out what the high level architecture is, which user stories it implements and which it does not. After that you have to go back in the forward direction to actually adapt and integrate the code. The client thinks they've made the developer's job easier, but in many ways they've actually doubled the cognitive load. This is stressful and frustrating for the developer.
Charge more and/or set expectations up front.
I treat my doctor as a subject matter expert/collaborator, which means that if I come to him with (for example) "what if it's lupus?" and he says "it's probably not lupus", I usually let the matter drop.
If I'm working on your project I'm usually dedicated to it 8 hours a day for months.
I do agree this is not new. I had clients with some development experience come up with off-the-cuff suggestions that just waste everyone's time and are really disrespectful (like, how bad at my job do you think I am if you think I didn't try the obvious approach you came up with while listening to the problem?). But AI is going to make this much worse.
There is no best practices anymore, no proper process, no meaningful back and forth.
There absolutely is and you need to work with the tools to make sure this happens. Else chaos will ensue.
Been working with these things heavily for development for 6-12 months. You absolutely must code with them.
Ah yes a supabase backed, hallucinated data model with random shit, and a copy paste UI. Zero access control or privacy, 1% of features, no files uploading or playback or calling.
“Can you scale this to 1M users by end of the week? Something similar to WhatsApp or Telegram or Signal”
Sybau mf
What does this mean?
Like a neck tattoo, but the text form.
Me: hey make this, detailed-spec.txt
AI: okidoki (barfs 9k lines in 15 minutes) all done and tested!
Me looks at the code, that has feature-sounding names, but all features are stubs, all tests are stubs, and it does not compile.
Me: it does not compile.
AI: Yes, but the code is correct. Now that the project is done, which of these features you want me to add (some crazy list)
Me: Please get it to compile.
AI: You are absolutely right! This is an excellent idea! (proceeds to stub and delete most of what it barfed). I feel really satisfied with the progress! It was a real challenge! The code you gave me was very poorly written!
... and so on.
Meaning: is the answer in a field where I'm not an expert actually good, or am I simply being fooled by emoji and nice grammar?
I have not experienced this level of malice and sweet-talking work avoidance from anyone else. It apologizes like an alcoholic, then proceeds to double down.
Can you force it to produce actually useful code? Yes, by repeatedly yelling at it to please follow the instructions. In the process, it will break, delete, or implement hard-to-find bugs in the rest of the codebase.
I'm really curious if anyone actually has this thing working, or if they simply haven't bothered to read the generated code.
With anything above a toy project, you need to be really good with context window management. Usually this means using subagents and scoping prompts correctly by placing the CLAUDE.md files next to the relevant code. Your main conversation's context window usage should pretty much never be above 50%. Use the /clear command between unrelated tasks. Consider if recurring sequences of tool calls could be unified into a single skill.
Instead of sending instructions to the agent straight away, try planning with it and prompting it to ask you questions about your plan. The planning phase is a good place to give Claude more space to think with "think > think hard > ultrathink". If you are still struggling with the agent not complying, try adding emphasis with "YOU MUST" or "IMPORTANT".
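To make the scoping advice above concrete, a `CLAUDE.md` placed next to the relevant code might look like the following. This is a hypothetical sketch for an imaginary `billing/` module, not a recommended template; the file names and commands in it are invented for illustration:

```markdown
# billing/ module notes

- Money amounts are integer cents; never introduce floats here.
- All DB access goes through `billing/store.py`; do not call the ORM directly.
- Run `make test-billing` before declaring a task done.
- Ask before changing any public function signature in `api.py`.
```

The point is that each file stays short and local: the agent only pulls these rules into context when it works under that directory, which keeps the main conversation's context window lean.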
I think, like any tool, it has its pros and cons, and the more you use it the more you figure out how to make the best use of it and when to give up.
It wasn't super bad at converting the code, but even so it struggled with some of the logic. Luckily, I had it design a test suite to compare the outputs of the old application and the new one. When it couldn't figure out why it was getting different results, it would start generating hex dump comparisons, writing small Python programs, and analyzing the results to figure out where it had gone wrong. It slowly iterated on each difference until it had resolved them: building the code, running the test suite, comparing the results. Some of the issues are likely bugs in the original code (that it fixed), but since I was going for byte-for-byte perfection it had to re-introduce them.
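The loop described above (run both versions, compare byte-for-byte, hex-dump the first mismatch) can also be driven by a plain script outside the agent. A minimal sketch; the generator commands and the idea of comparing raw stdout are assumptions for illustration, not the commenter's actual harness:

```python
import subprocess

def run(cmd, input_file):
    """Run a generator binary on one input and capture its raw output bytes."""
    return subprocess.run([cmd, input_file], capture_output=True, check=True).stdout

def first_difference(old, new):
    """Return the offset of the first differing byte, or None if identical."""
    for i, (a, b) in enumerate(zip(old, new)):
        if a != b:
            return i
    if len(old) != len(new):
        return min(len(old), len(new))
    return None

def compare(old_cmd, new_cmd, input_file):
    """Compare old vs new output for one input; print a hex window on mismatch."""
    old_out = run(old_cmd, input_file)
    new_out = run(new_cmd, input_file)
    diff = first_difference(old_out, new_out)
    if diff is None:
        print(f"{input_file}: identical ({len(old_out)} bytes)")
    else:
        lo = max(0, diff - 8)
        print(f"{input_file}: first mismatch at offset {diff}")
        print("  old:", old_out[lo:diff + 8].hex(" "))
        print("  new:", new_out[lo:diff + 8].hex(" "))
    return diff is None
```

Running this over a corpus of real inputs after every refactor step gives the same "byte-for-byte perfection" signal without having to trust the agent's own claims of success.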
The issues you describe I have seen, but not with the right technology, and not in a while.
I have seen AI agents fall into the exact loop that GP discussed and needed manual intervention to fall out of.
Also blindly having the AI migrate code from "spaghetti C" to "structured C++" sounds more like a recipe for "spaghetti C" to "fettuccine C++".
Sometimes it's hidden data structures and algorithms you want to formalize when doing a large-scale refactor. I have found that AIs are definitely able to identify that, but it's definitely not their default behaviour, and they fall out of that behaviour pretty quickly if not constantly reminded to do so.
What do you mean? Are you under the impression I'm not even reading the code? The code is actually the most important part because I already have working software but what I want is working software that I can understand and work with better (and so far, the results have been good).
There's a big difference between "This looks good" and "Oh, that is what this complex algorithm was".
Effectively, to verify that the code is not just being rewritten into the same code with C++ syntax and conventions, you need to understand the original C code. That means the hard part was not the code generation (via LLM or fingers) but the understanding, and I'm unsure the AI can do the high-level understanding, since I have never gotten it to produce said understanding without explicitly telling it.
Effectively, "x.c, y.c, z.c implement a DSL but are convoluted and not well structured; generate the same DSL in C++" works great. "Rewrite x.c, y.c, z.c into C++, building abstractions to make it more ergonomic" generally won't recognise the DSL and formalise it in a way that is very easy to do in C++; it will just make it "C++", but the same convoluted structure remains.
Ok. Let me be more specific then. I'm "understanding" the code since that's the point.
> I'm unsure the AI can do the high level understanding since I have never gotten it to produce said understanding without explicitly telling it.
My experience has been the opposite: it often starts by producing a usable high-level description of what the code is doing (sometimes imperfectly) and then proposes refactors that match common patterns -- especially if you give it enough context and let it iterate.
> "Rewrite x.c, y.c, z.c into C++, building abstractions to make it more ergonomic" generally won't recognise the DSL and formalise it in a way that is very easy to do in C++; it will just make it "C++", but the same convoluted structure remains.
That can happen if you ask for a mechanical translation or if the prompt doesn't encourage redesign. My instruction was literally to make it well-designed, idiomatic C++, and it did that. Inside the LLM's training data is a whole bunch of C++ code, and it seems to be leaning on that.
I did direct some goals (e.g., separating device-specific code and configuration into separate classes so adding a device means adding a class instead of sprinkling if statements everywhere). But it also made independent structural improvements: it split out data generation vs file generation into pipeline/stream-like components and did strict separation of dependencies. It's actually well designed for unit testing and mocking even though I didn't tell it I wanted that.
I'm not claiming it has human-level understanding or that it never makes mistakes — but "it can't do high-level understanding" doesn't match what I'm seeing in practice. At minimum, it can infer the shape of the application well enough to propose and implement a much more ergonomic architecture, especially with iterative guidance.
I had to have it introduce some "bugs" for byte-for-byte matching because it had generalized some of the file generation and the original C code generated slightly different file structures for different devices. There's no reason for this difference; it's just different code trying to do the same thing. I'll probably remove these differences when the whole thing is done.
So effectively it was at least partly guided refactoring. Not blind vibe coding.
It doesn't really matter what we told it to do; a task is a task. But clearly how each LLM performed that task was very different for me than for the OP.
You migrated code from one of the simplest programming languages to unarguably the most complex programming language in existence. I feel for you; I really do.
How did you ensure that it didn't introduce any of the myriad of footguns that C++ has that aren't present in C?
I mean, we're talking about a language here that has an entire book just for variable initialisation - choose the wrong one for your use-case and you're boned! Just on variable initialisation, how do you know it used the correct form in all of the places?
It's actually far easier for me to tell that it's not leaking memory or accessing unallocated data in the C++ version than in the C version.
A simple language just pushes complexity from the language into the code. Being able to represent things in a more high-level way is entirely the point of this exercise because the C version didn't have the tools to express it more cleanly.
I had a similar issue with gnuplot. The LLM-suggested scripts frequently had syntax errors. I say: LLMs are awesome when they work; otherwise they are a time suck / net negative.
Was this a local model?
The niche is "the same boring CRUD web app someone made in 2003 but with Tailwind CSS".
If you have a CLI, you can even script this yourself, if you don't trust your tool to actually try to compile and run tests on its own.
It's a bit like a PR on github from someone I do not know: I'm not going to actually look at it until it passes the CI.
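That gate can be scripted in a few lines: run the build and the tests, and only surface the change for human review when both pass. A sketch; the `make` / `make test` commands are placeholders for whatever your project actually uses:

```python
import subprocess

def gate(steps=(["make"], ["make", "test"])):
    """Run build and test steps in order.

    Returns (True, "") if everything passed, otherwise (False, output)
    so the failing output can be fed straight back to the agent instead
    of a human ever looking at the diff.
    """
    for step in steps:
        result = subprocess.run(step, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"gate failed at: {' '.join(step)}")
            return False, result.stdout + result.stderr
    return True, ""
```

Wiring this in front of your review step means agent output that doesn't compile or pass tests never costs you attention; it just loops back with the error output attached.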
What good is AI as a tool if it can't get on the same page as you?
Imagine negotiating with a hammer to get it to drive nails properly
These things suck as tools
I had to rewrite several vibe coded projects from scratch due to this effect. It's useful as a prototyping tool but not a complete productionizing tool.
Specifically: what if I just started downloading repos and aggressively copying and pasting to fit my needs… I'd get a whole bunch of code kinda quick, and it'd mostly work.
It feels less interactive, but shares a high level similarity in output and understanding.
As a developer who has spent far too much of my career maintaining or upgrading companies' legacy code, my biggest fear with the LLM mania is not that my skills go away, but that they come into much higher demand in an uncomfortable way: the turnaround time between launch and legacy code becomes much shorter, and the number of managers who understand why it is "legacy code"/"tech debt" shrinks, because the code is neither old nor in obviously dead technologies. "Can you fix this legacy application? It was launched two days ago and nobody knows what it does. Management says there's no budget for this. Good luck."
Being effective with the code to get the same things done is. That requires a new kind of driving for a new kind of vehicle.
https://www.uceprotect.net/en/index.php?m=7&s=8 -- "pay us to fix a problem that we've caused, and if you have the gall to call it what it is (extortion), then we'll publish your email and be massive dicks about it"
(To be clear, not all spam blacklists are scams - just UCEPROTECTL3 specifically)
What I'm saying is that I can't access this website from my work laptop - it shows me a branded blocked page.
I'm not 100% sure, but I think there is a policy set up in Zscaler blocking access to the domains defined in some sort of blacklist. The reason I assumed it's UCEPROTECTL3 is that it's the only positive result I got from an online blacklist lookup against gmnz.xyz.
And no, I don't feel comfortable sharing my employer.
I started wondering if this person was actually a developer here. Maybe just a typo, or maybe a dialect thing, but does anyone actually use "codes" as a plural?
It’s somehow ironic though that his written output could’ve been improved by running it through an AI tool.
I mean, it could've been homogenized by running it through an AI tool. I don't think there's a guarantee that it would've been an improvement. Yes, it probably could've helped refine away phrases that give away a non-native English speaker, but it also would've sanded down and ground away other aspects of the author's personality. Is that an improvement? I'm not so sure.
The thing is: I know you might read that and think I'm anti-AI. In this specific situation, at my company, we gave nuclear technology to a bunch of teenagers, then act surprised when they blow up the garage. This is a political/leadership problem, because everything, nine times out of ten, is a political/leadership problem.

But the incentives just aren't there yet for a generalized understanding of the responsibility it takes to leverage these tools in a product environment that's expected to last years to decades. I think it will get there, but along that road will be gallons of blood from products killed, ironically, by their inability to be dynamic and reliable under the weight of the additive-biased, purple-Tailwind-drenched world of LLM vibeput.

But there's probably an end to that road, and I hope when we get there I can still have an LLM, because it's pretty nice to be able to be like "heyo, i copy pasted this JSON but it has javascript single quotes instead of double quotes so its not technically JSON, can you fix that thanks".
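As an aside, that specific quote-fixing chore doesn't strictly need an LLM: when the pasted literal is also a valid Python literal (single quotes, no JS-only tokens like `true` or `undefined`), the standard library can re-emit strict JSON. A sketch, with the caveats noted in the docstring:

```python
import ast
import json

def js_to_json(text: str) -> str:
    """Parse a single-quoted literal and re-emit it as strict JSON.

    Only works when the pasted text is also a valid Python literal;
    JS-only tokens (true, null, comments, trailing commas in odd spots)
    will raise rather than be silently mistranslated.
    """
    return json.dumps(ast.literal_eval(text))
```

For example, `js_to_json("{'a': 1}")` yields `'{"a": 1}'`. The LLM version is still handier when the input is messier than a clean literal.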
The people who think FizzBuzz is a leetcode-level programming question are now vibecoding the same trash as always, except now they think they are smart 10x developers for forcing you to review and clean up their trash.
For those who have swallowed the AI panacea hook, line, and sinker, those who say it's made them more productive or that they no longer have to do the boring bits and can focus on the interesting parts of coding: I say follow your own line of reasoning through. It demonstrates that AI is not yet powerful enough NOT to need to empower you, NOT to need to make you more productive. You're only ALLOWED to do the 'interesting' parts presently because the AI is deficient. Ultimately AI aims to remove the need for any human intermediary altogether. Everything in between is just a stop along the way, so for those it empowers: stop and think a little about the long-term implications. It may be a comfortable position for you right now, financially or socially, but your future self just a few short months from now may be dramatically impacted.
As someone said "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes".
I can well imagine the blood draining from people's faces: the graduate coder who can no longer get on the job ladder, the law secretary whose dream job, a dream dreamt from a young age, is being automated away, the journalist whose value has been substituted by a white text box connected to an AI model.
I don't have any ideas as to what should be done or, more importantly, what can be done. Pandora's box has been opened; Humpty Dumpty has fallen and can't be put back together again. AI feels like it has crossed the Rubicon. We must all collectively wait to see where the dust settles.
AI is just the next step, and not even a particularly large leap. We already needed fewer law secretaries due to advances in technology. We killed most journalism two decades ago. Art and music had Photoshop and Auto-Tune. Now we've actually achieved something we've literally been striving for since the dawn of computing: the ability to speak natural language to a computer and have it do what we ask. But it's just one more step.
That pattern is bigger than any one of us and it's not a moral judgment. It's simply part of what technology does and has always done. AI is a continuation of that same trend we've all participated in, whether directly or indirectly. My point is that to stop now and say "look at all these jobs being eliminated by computers" is several decades too late.
I do think there is a qualitative difference in AI as compared to previous automation changes. This qualitative difference and its potential impacts beyond the obvious (job losses) is what is more worrying. The societal impact of AI slop, the impact on human intellectual efforts, pursuits, value and meaning are very concerning.
I wonder about that bit, TBH.
If you're 10x more productive at generating lines of code because you're mostly just reviewing, just how carefully are you reviewing? If you're taking the time to spec out stuff in great detail, then iterate on the many different issues with the LLM code, then finally reviewing when it passes the tests ... how are you getting to 10x and not 2x?
TBH, for those people who really are able to create 10x as much code with the LLM, their employment is actually more precarious than those who aren't doing that - it means your problem domain is so shallow that an LLM can hold both it and the code in a single context window.
Also, is it just me, or has the feeling of victory gone away completely since AI became a thing? I used to sweat and struggle, and finally have my breakthrough, the "I'm invincible!" Boris moment, before the next thing came into my task inbox.
I don't feel that high anymore. I only recently realized this.
It rarely builds good rapport with clients if you start explaining why their ideas for "improvements" are really not that good. Anyway, I would listen to them, nod, and do nothing about their ideas. I would just stick to my concept without wasting time on a client's random "improvements". The funny thing is that clients usually, after more consideration and time, would arrive on their own at the result I had already presented to them; they just needed time to understand that their "improvements" weren't relevant.
If they insisted on implementing their "improvements", I'd do it for an additional price, most often just for them to see that it wasn't a good idea to start with and to get back to what I had already done.
So, sometimes, ignoring a client's ideas really saves a lot of time.
[1] https://en.wikipedia.org/wiki/IKEA_effect
The worst was pushing the tail into the tree. My original code was pretty slow, but every time AI changed more than 4 lines it introduced subtle bugs.
unfortunately this problem precedes AI, and has been worsened by it.
i've seen instances of one-file, in-memory hashmap proof-of-concept implementations being requested to be integrated into semi-large evolving codebases, with "it took me 1 day to build this, how long will it take to integrate?" questions
This doesn't read like a vibe-coding problem, and more of a client boundaries problem. Surely you could point out they are paying you for your expertise, and to supersede your best practices with whatever AI churns out is making the job they are paying you to do even harder, and frankly a little disrespectful ("I know better").
Every. single. time. we hit an interface problem he would say “if you don’t understand the error feel free to use ChatGPT”. Dude it’s bare metal embedded software I WROTE the error. Also, telling someone that was hired because of their expertise to chatgpt something is crazy insulting.
We are in an era of empowered idiots. People truly feel that access to this near infinite knowledge base means it is an extension of their capabilities.
And I am now thinking of specializing in the field: they already know how f*d they are, and they are going to pay a lot (or: they have no other opportunity). Something that looked like a million-dollar idea created for pennies is, three months later, an unbearable, already-rotting pile of insanity which no junior human developer or even an AI code assistant is able to extend. But they already have investors or clients who use it.
And for me, with >20 years of coding experience, cleaning it up to the state where it is manageable is a lot of fun.
Reality check: none of that ever existed, unless either the client mandated it (as a way to tightly regulate output quality from cheaper developers) or the developer mandated it (justifying their much higher prices and value to the customer).
Other than that: average customer buying code from average developer means:
- git was never even considered
- if git was ever used, everything is merged into "master" in huge commits
- no scheduled reviews; they only saw each other when it was time for the next quarterly/monthly payment, and the client was shown (but not able to use) some preview of what was done so far
On the other hand, another customer of mine built a few internal tools with vibe code (and yes, he does have a subscription to my low-code service), but then when newer upgrade requests came in, that's where his vibe-coded app started acting up. His candid feedback was: for internal tools, vibe code doesn't work.
As a low-code service provider, we are now providing full-fledged vibe-code tooling on top, though I don't know how customers who do not wish to code and just want the software will be able to maintain it without needing professionals.
Not really worth working on any of these projects.