Vibe Engineering
Key topics
The article discusses "vibe engineering", a term coined for the practice of using AI tools such as LLMs to assist in software development. The term sparked a heated debate among commenters about its accuracy and implications.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 20m after posting
- Peak period: 89 comments in the 6-12h window
- Average per period: 16 comments
- Based on 160 loaded comments
Key moments
- Story posted: Oct 7, 2025 at 10:55 AM EDT (3 months ago)
- First comment: Oct 7, 2025 at 11:15 AM EDT (20m after posting)
- Peak activity: 89 comments in the 6-12h window, the hottest stretch of the conversation
- Latest activity: Oct 11, 2025 at 1:18 AM EDT (3 months ago)
> I’ve tried in the past to get terms like AI-assisted programming to stick, with approximately zero success. May as well try rubbing some vibes on it and see what happens.
Getting good results out of that team is hard, because the bottleneck is how quickly you can review their work and point them in new directions.
Understanding techniques like TDD, CI, linting, specification writing, research spikes etc turns out to be key to unlocking that potential. That's why experienced software engineers have such a big advantage, if they choose to use it.
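To make the TDD point concrete: a failing test written by the human first gives an agent an unambiguous, machine-checkable target to iterate against. A minimal sketch in Python (the module, function name, and behavior here are hypothetical, not from the article):

```python
# tests/test_slugify.py -- written by the human *before* asking the agent
# to implement slugify(); the agent then iterates until `pytest` passes.
from myproject.text import slugify

def test_lowercases_and_hyphenates():
    assert slugify("Hello, World!") == "hello-world"

def test_collapses_runs_of_separators():
    assert slugify("a   b--c") == "a-b-c"

def test_strips_leading_and_trailing_separators():
    assert slugify("  tidy  ") == "tidy"
```

The tests double as a review aid: instead of reading every generated line cold, you check that the suite is meaningful and green.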
First, and most important, I have actually started a number of projects that have only lived in my head historically. Instead of getting weighed down in “ugh I don’t want to write a PDF parser to ingest that data” or whatever, my attitude has become “well, why not see if an AI assistant can do this?” Getting that sort of initial momentum for a project is huge.
Secondly, AI assistants have helped me stretch outside of my comfort zone. I don’t know SwiftUI, but it’s easy enough to ask an AI assistant to put things together and see what happens.
Both these cases refer almost necessarily to domains I’m not an expert in. And I think that’s a bigger factor in side projects than in day jobs, since in your day job, it’s more expected that you are working in an area of expertise.
Perhaps an exception is when your day job is at a startup, where everyone ends up getting stretched into domains they aren’t experts in.
Anyways, my story is, of course, just another anecdote. But I do think the step function of “would never have started without AI assistance” is a really important part of the equation.
1. Learning curve: Just like any skill there is a learning curve on how to get high quality output from an LLM.
2. The change in capabilities since recent papers were authored. I started intensively using the agentic coding tools in May. I had dabbled with them before that, but the Claude 3.7 release really changed the value proposition. Since May, with the various Claude 4, 4.1, and 4.5 releases (and GPT-5), the utility of the agentic tools has exploded. You basically have to discard any utility measurement from before that inflection point; it just isn't very informative.
Likewise, real-world architects have the skills to design a building but neither know nor care how to build it, relying on engineers for that.
I think it’s important to distinguish because we’re increasingly seeing a trend towards final product over production, meaning these “vibe” people want the tool in the end and consider the steps in between to be just busywork that AI can do for them.
That’s closer to product design than to engineering. If I can imagine the Mona Lisa and put that thought to paper, communicating that thought and getting a painter to paint it for me does not make me Da Vinci.
Da Vinci himself likely had dozens of nameless assistants laboring in his studio on new experiments with light and color, new chemistry, etc. Da Vinci was Da Vinci because of his vision and genius, not because of his dexterity with his hands.
> The AI adds a ton. It really is like having a whole team of extra coders available, all of which can type faster than you.
Funny thing is, the least time consuming aspect of making programs is encoding solutions in source form. For example, a reasonable typist can produce thousands of text lines per workday if they know what must be typed (such as transcribing documents).
What takes longest when producing programmatic solutions is understanding what must be typed in the first place. After that, the rest is just an exercise in typing and picking good file/type/variable names.
Clearly I’m not in marketing.
Regardless, I’m delighted that this has gotten people to ‘independently discover’ software engineering best practices on their own.
“casing”?
Especially a term which comes across as so demeaning and devaluing to engineers (like me and yourself!)
I absolutely do not want my non-engineer friends and colleagues to think I am "vibe engineering"; it sounds trivial and dumbs down the discipline.
I personally believe being an engineer of any kind requires work, learning, patience, and discipline, and we should be proud of being engineers. There's no way in hell I would go around saying I'm a "vibe engineer" now. It would be like going around saying I'm a vibe architect! Who would want to live in a skyscraper designed by a "vibe architect"?
But the act of working on GitHub Actions could be referred to as "continuous integration engineering", just like the act of figuring out how best to build software engineering processes around LLM tools could be called "vibe engineering".
It's utterly ridiculous. It feels like the PMs and higher-ups have no idea how much tech debt we're creating right now. For the past few weeks, going to work has felt like going back to school, everyone's showing off their "homework", and whoever has the coolest vibecoded instructions.md or pipeline gets promoted.
I'm slowly burning out, and I was one of the people who actually liked the technology behind all this.
Meanwhile, I have a client project where my counterpart is definitely senior to me and excitedly shares how AI is helping them solve novel problems each week!
I am currently kind of an anti-AI black sheep in engineering department because I refuse to fully embrace the exponentials and give in to the vibes.
I avoid burnout by simply switching my brain off from all this noise about vibe coding. I have thought long and hard about it; I know the way this is being implemented is wrong; I know they will create problems for themselves down the road (they already have; the signs are already there); and I will be here to dig them out when the time comes.
So far I don't see anyone shipping faster or better with AI than I can manually, so I'm good.
I am on the stupid side, personally.
I've used GPT to rapidly get up to speed with just about every aspect of circuit design, CAD, CNC... the list is long. Coding is often involved in most of these domains now, but if everything is assumed to be code-first, it leaves people who are doing different work with a constrained and apparently shrinking adjective namespace.
I'm now imagining myself dying as a vibe-engineered truck has a steering/brake failure and crashes into me, sending me flying through the vibe-engineered papier-mâché bridge guardrails, and feeling sweet sweet release as I plummet to my doom.
Look, if you enjoy calculating a table of dozens of resistor value combinations for a feedback network, preferring reels you already have on your PnP, knock yourself out.
In the example I cited, verifying a ratio isn't the hard part. It's running the dozens of permutations (smart) or hundreds of permutations (naive) that an LLM can do in 90 seconds that saves me hours of boring work. It's actually so repetitive that I'm likely to have made the same kind of mistakes you're alluding to.
As always, I end with encouragement: if you want to do everything the long and hard way, I'm not here to change your mind. You will have to stop being upset that others are moving much faster than you, though. It's a choice.
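For the curious, the permutation search described above really is only a few lines of code; a minimal sketch, assuming a standard E24 value series and a target divider ratio (all names and values here are illustrative):

```python
from itertools import product

# E24 base values; real designs span several decades (three shown below).
E24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
       3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]
values = [base * mult for mult in (1e3, 1e4, 1e5) for base in E24]  # 1k to 910k

target = 0.167  # desired feedback-divider ratio R2 / (R1 + R2)

# Exhaustively rank every two-resistor combination by distance from the target.
ranked = sorted(
    ((abs(r2 / (r1 + r2) - target), r1, r2) for r1, r2 in product(values, repeat=2)),
    key=lambda t: t[0],
)

for err, r1, r2 in ranked[:10]:
    print(f"R1={r1:>8.0f}  R2={r2:>8.0f}  |ratio - target| = {err:.6f}")
```

Restricting `values` to the reels actually loaded on the PnP is a one-line filter, which is the "smart" version of the search the commenter contrasts with the naive one.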
You have not met my cow orkers...
The reality is the tools are really useful when used as tools, like a power drill vs. a screw driver.
Vibing implies backseat driving which isn't what using the tools proficiently is like. The better term would be 'assisted' or 'offloaded'.
Same thing with the term 'engineering'. That's a fairly new term that implies being engineers, which we are not. We haven't studied to be engineers, nor do we have real engineering degrees. We've called ourselves that because we were doing much more than the original job of programmer and felt like we deserved a raise.
'LLM extended programming' is not as catchy but more relevant to what I observe people doing. It's valuable, it saves time and allows us to learn quicker, very powerful if used properly. Calling it 'vibe engineering' is a risky proposition as it may just make people's eyes roll and restrict us to a lesser understanding.
I have fairly decent engineering credentials, but when the task fits, I prefer to vibe code.
Uh, speak for yourself. There are countries where being a software engineer does indeed imply that you studied engineering and hold a "real" engineering degree.
Also, Hillel Wayne's "Are We Really Engineers" is worth reading:
https://www.hillelwayne.com/post/are-we-really-engineers/
As "coders" or "programmers", some of us should answer the question "are you an engineer?" with a proud "of course not!" (That's me.) And some of us should answer, equally proudly, "of course I am!"
Hillel Wayne's series is great.
Just don't capitalize it in Oregon.
Instead of just vibing something out, pushing it to prod and seeing the problems. Or not even checking...
For instance, I recently had to replace a hard-coded parameter with something specifiable on the command line, in an unfamiliar behemoth of a Java project. The hard-coded value was literally 20 function calls deep in a heavily dependency-injected stack, and the argument parser was of course bespoke.
Claude Code oneshotted this in about 30 seconds. It took me all of 5 minutes to read through its implementation and verify that it correctly called the custom argument parser and percolated its value down all 20 layers of the stack. The hour of my time I got back by not having to trace through all those layers myself was spent on the sort of programming I love, the kind that LLMs are bad at: things like novel algorithm development, low-level optimizations, and designing elegant, maintainable code architecture.
For more complex modifications, I would have taken the time to better internalize the code architecture myself. But for a no-brainer case like this, an LLM oneshot is perfect.
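The shape of that change is purely mechanical, which is exactly why it reviews quickly. The anecdote was a Java codebase; a minimal sketch in Python with hypothetical names:

```python
import argparse

# Before: the value was buried at the bottom of the call stack.
#   def fetch_records():
#       batch_size = 500   # hard-coded, many layers deep

# After: the value enters once at the top and is threaded through
# every intermediate signature down to where it is used.
def fetch_records(batch_size: int) -> None:
    print(f"fetching in batches of {batch_size}")

def load_stage(batch_size: int) -> None:
    fetch_records(batch_size)

def run_pipeline(batch_size: int) -> None:
    load_stage(batch_size)

if __name__ == "__main__":
    parser = argparse.ArgumentParser()  # stand-in for the bespoke parser
    parser.add_argument("--batch-size", type=int, default=500)
    run_pipeline(parser.parse_args().batch_size)
```

Reviewing such a change is mostly confirming that each layer passes the value through unchanged, which is why five minutes suffices.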
It's not so trivial to verify that the change doesn't cause problems elsewhere, where it also should have been propagated.
You raise a good point: an important skill in using LLMs effectively for coding is both being able to recognize ahead of time that a case like this is indeed simple, and recognizing after the fact that the code is more complex than you initially realized, so that you can't easily internalize the (side) effects of what the LLM wrote and a closer look is warranted.
It is very easy to notice at work who actually likes building software and wants to make the best product, and who is there for the money, wants to move on, and will hard-code something to get away with the minimal amount of work, usually because they don't care much. The latter kind of people love vibe coding.
Agentic coding is just doing for development what cloud computing did for systems administration. Sure, I could spend all day building and configuring Linux boxes to deploy backend infrastructure on if the time and budget existed for me to do that, and I'd have fun doing it, but what's more fun for me is actually launching a product.
Not much to base that on other than vibes, though :)
One of the most underrated skills in effectively using gen-AI for coding is knowing ahead of time whether it will take longer to carefully review the code it produces, versus writing it from scratch yourself.
[0] https://www.joelonsoftware.com/2000/04/06/things-you-should-...
[1] https://mattrickard.com/its-hard-to-read-code-than-write-it
[2] https://trishagee.com/presentations/reading_code/
[3] https://idiallo.com/blog/writing-code-is-easy-reading-is-har...
[...] https://www.google.com/search?q=it%27s+harder+to+read+code+t...
Today: 'code is cheap, show me the talk'
For the experienced lot of us, I've heard many call it "hyper engineering"
The AI agent in Cursor with Gemini (I'm semi-new to all of this) is legit.
I can try things out, see for myself, and get new ideas. Mostly I just ask it to do things and it does them; for specific things I highlight code in the editor and say "Do it this way instead" or "for every entry in the loop, add to variable global_var only if the string matches ./cfg/strings.json". I _KNOW_ I can code that.
But I like my little clippy.
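For what it's worth, the edit that quoted prompt describes is exactly the kind that is easy to verify at a glance. A minimal sketch, assuming `./cfg/strings.json` holds a JSON array of strings (variable names taken from the quoted prompt; everything else is hypothetical):

```python
import json

# Load the match list the prompt refers to (assumed to be a JSON array of strings).
with open("./cfg/strings.json") as f:
    allowed = set(json.load(f))

global_var = []
entries = ["alpha", "beta", "gamma"]  # stand-in for the real loop source

# For every entry in the loop, add to global_var only if the string matches.
for entry in entries:
    if entry in allowed:
        global_var.append(entry)
```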
We didn't stop calling them Framers or Finish Carpenters when they got electric saws and nail guns.
Tooling does not change the job requirements.
If using LLMs makes you slower or reduces the quality of your output, your professional obligation is to notice that and change how you use them.
If you can't figure out how to have them increase both the speed and the quality of your work, you should either drop them or try and figure out why they aren't working by talking to people who are getting better results.
My largest project is a year old, it's full-stack JavaScript, and I consciously use patterns, structures, and diligently added documentation right from the beginning for the code base to be as LLM-friendly as possible.
I see great results on refactoring with limited scope, scaffolding test cases (I still choose to write my own tests but LLMs can also generate very good tests if I explicitly point to existing tests of highly related code, such as some repository methods), documenting functions, etc. but I'm just not seeing the kind of quality that people claim that LLMs can do for them on complex tasks.
I want to believe that LLMs are actually capable of doing what at least a good junior engineer can do, but I'm not seeing that in my own experience. Whenever we point out the issues we are encountering, we basically get the "git gud" response, with no practical details on what we can actually do to get the results that people claim to be getting. Then, when we complain that the "git gud" response is too vague, people start blaming our lack of structure, our patterns, problems with our prompts, the language, our stack, and so on. Nobody claiming to see great results seems to want to do a comprehensive write-up, or better still, stream their entire workflow to teach others how to do actual, good engineering with LLMs on real-world problems; they all just give high-level details and assert success.
On top of that, the fact that none of the people I know in engineering working in both large organizations and respectable startups that are pushing AI are seeing that kind of results naturally makes me even more skeptical of claims of success. What I'm often hearing from them are mediocre engineers thinking that they are being productive but actually just offloading the work to their colleagues through review, and nobody seems to be seeing tangible returns from using AI in their workflow but people in C-suites are pushing AI anyway.
If just about anything can be "your fault", how can anyone claiming that LLMs are great for real engineering, without showing evidence, be so confident that what they're claiming but not showing is actually the case?
I feel like every time I comment on anything related to your blog posts I come across as belligerent and get downvoted, but I really don't intend to.
You want something to inspire engineers to do their best work.
When you can expand your capabilities using the power of AI, then yeah, you can do your best work; hence augmented engineering.
But vibing? Not so much.
I guess AE could also stand for Advanced Engineering, after all the AI gives you the power to access and understand the latest in engineering knowledge, on demand, which you can then apply to your work.
> gives you the power to access and understand the latest in engineering knowledge, on demand, which you can then apply to your work.
Gives you access to the power to access and {mis}understand the {most repeated over the last 1-10 years} engineering {errors, myths, misunderstandings, design flaws}, on demand, which you then can apply to your work {to further bias the dataset for future models to perpetuate the slop}.
Do NOT trust AI agents. Check their work at every level, find any source they claim to use, and check that source to ensure it isn't AI too. They lie beyond their datasets, and their datasets are lying more with every minute that passes.
Now, seriously though, no tool is perfect, and I agree we should not trust them blindly. But leaving AI agents aside, LLMs are very helpful in illuminating one's path, by consulting a large body of knowledge on demand, particularly when dealing with problems that might be new to you but have already been tackled one way or another by other people in the industry (provided they're in the training set, of course).
Yes, there's always the risk of perpetuating existing slop. But that is the risk in any human endeavor. The majority of people mostly follow practices and knowledge established by the few. How many invent new things?
To be honest, I haven't yet used AI agents, I'm mostly just using LLMs as a dialogue partner to further my own understanding and to deepen my knowledge. I think we're all still trying to figure it out how to best use it.
I don't think it necessarily deserves a special name. It is just engineering. You don't say book assisted engineering when you use a book as a reference. It is just engineering.
> But vibing? Not so much.
Just call it yolo engineering. Or machine outsourced irresponsible lmao engineering.
> I guess AE could also stand for Advanced Engineering, after all the AI gives you the power to access and understand the latest in engineering knowledge, on demand, which you can then apply to your work.
Oh god.
Being good at vibe coding is just being good at coding, the best practices still apply. I don't feel we need another term for it. It'll just be how almost everyone writes code in the future. Just like using an IDE.
> The developer does not review or edit the code, but solely uses tools and execution results to evaluate it and asks the LLM for improvements. Unlike traditional AI-assisted coding or pair programming, the human developer avoids examination of the code, accepts AI-suggested completions without human review, and focuses more on iterative experimentation than code correctness or structure.
https://en.wikipedia.org/wiki/Vibe_coding
Having someone cook my dinner's ingredients is just (me) being a good cook ...
How does "vibe coding" embody "best practices" as the industry generally defines the latter term?
As I understand the phrase "vibe coding", it implies focusing solely on LLM prompt formulation and not the specifics of the generated source.
> It'll just be how almost everyone writes code in the future. Just like using an IDE.
The flaw with this analogy is that a qualified developer does not require an IDE in order to be able to do their job.
The part about past management experience being a key skill surprised me but now it makes sense.
I really do usually have 3 different projects in flight for at least 6 hours a day. I'd write a blog post, but I keep expecting someone else will write essentially the same post tomorrow. :)
p.s. The first 2 months was exhausting but now it's slightly less exhausting. Make no mistake, it is an extreme transition to make.
It is imperative that you do not kill me when delivering my breakfast!
You must not make your own doors by punching holes in the wall!
It is critical that you remember that humans cannot regrow limbs!
“One bird was conditioned to turn counter-clockwise about the cage, making two or three turns between reinforcements. Another repeatedly thrust its head into one of the upper corners of the cage. A third developed a 'tossing' response, as if placing its head beneath an invisible bar and lifting it repeatedly. Two birds developed a pendulum motion of the head and body, in which the head was extended forward and swung from right to left with a sharp movement followed by a somewhat slower return.”
“The experiment might be said to demonstrate a sort of superstition. The bird behaves as if there were a causal relation between its behavior and the presentation of food, although such a relation is lacking.”
https://en.wikipedia.org/wiki/B._F._Skinner
Thus what you are doing is closer to pigeons bobbing their heads to attempt to influence the random reward machine than it is to engineering.
For example saying things like “It is critical that you don’t delete working code” might actually be a valid general technique, or it might have just been something that appeared to work because of randomness, or it might be something that is needed for current models but won’t be necessary in a few months.
The nature of LLMs makes correctly identifying superstition nearly impossible. And the speed with which new models are released makes trying to do so akin to doing physics in a universe where the laws of nature are constantly changing.
You’re an alchemist mixing gunpowder and sacrificing chickens to fire spirits, not an engineer, and for the foreseeable future you have no hope of becoming an engineer.
I’m also highlighting the insanely addictive nature of random rewards.
Also I’ve been a software engineer for 15 years so I think I don’t “have no hope of becoming a software engineer”, no personal attacks please.
I see how if you can't really code, or you're new to a domain, then it can make a huge difference getting you started, but if you know what you're doing I find you hit a wall pretty quickly trying to get it to actually do stuff. Sometimes things can go smoothly for a while, but you end up having to micromanage the output of the agent too much to bother. Or sacrifice code quality.
Prior to vibe-coding, it would've been an arduous enough task that I would've done one implementation, looked at the time it took me and the output, and decided it was probably good enough. With vibe-coding, I was able to prototype three different approaches which required some heavy lifting that I really didn't want to logic out myself and get a feel for if any of the results were more compelling than others. Then I felt fine throwing away a couple of approaches because I only spent a handful of minutes getting them working rather than a couple of hours.
But if I give it a code example that was written by humans and ask it to explain the code, it gives pretty good explanations.
It's also good for questions like "I'm trying to accomplish complicated task XYZ that I've never done before, what should I do?", and it will give code samples that get me on the right path.
Or it'll help me debug my code and point out things I've missed.
It's like a pair programmer that's good for bouncing ideas, but I wouldn't trust it to write code unsupervised.
Have you isolated whether you're properly homing in on the right breadth of context for the planned implementation?
> […]
> Or it'll help me debug my code and point out things I've missed.
I made both of these statements myself and later wondered why I had never connected them.
In the beginning, I used AI a lot to help me debug my own code, mostly through ChatGPT.
Later, I started using an AI agent that generated code, but it often didn’t work perfectly. I spent a lot of time trying to steer the AI to improve the output. Sometimes it worked, but other times it was just frustrating and felt like a waste of time.
At some point, I combined these two approaches: I cleared the context, told the AI that there was some code that wasn’t working as expected, and asked it to perform a root cause analysis, starting by trying to reproduce the issue. I was very surprised by how much better the agent became at finding and eventually fixing problems when I framed the task from this different perspective.
Now, I have commands in Claude Code for this and other due diligence tasks, and it’s been a long time since I last felt like I was wasting my time.
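For readers who haven't set this up: a Claude Code command is just a markdown prompt file checked into the repo. A minimal sketch of what a root-cause-analysis command like the one described might look like, assuming Claude Code's `.claude/commands/` convention (the file name and wording are illustrative, not the commenter's actual command):

```
<!-- .claude/commands/rca.md -- invoked as: /rca <description of the problem> -->
Some code is not working as expected: $ARGUMENTS

Perform a root cause analysis:
1. First try to reproduce the issue with a minimal failing script or test.
2. Only after reproducing it, trace the failure to the responsible code.
3. Propose a fix, apply it, and re-run the reproduction to confirm.
Do not touch unrelated code.
```

The reproduce-first framing is doing the real work here: it forces the agent to establish ground truth before it starts editing.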
We need more empirical evidence. And historically we’re really bad at running such studies and they’re usually incredibly expensive. And the people with the money aren’t interested in engineering. They generally have other motives for allowing FUD and hype about productivity to spread.
Personally I don’t see these tools going much further than where they are now. They choke on anything that isn’t a greenfield project and consistently produce unwanted results. I don’t know what magic incantations and combinations of agents people have got set up but if that’s what they call “engineering,” these days I’m not sure that word has any meaning anymore.
Maybe these tools will get there one day but don’t go holding your breath.
That was true 8 months ago. It's not true today, because of the one-two punch of modern longer-context "reasoning" models (Claude 4+, GPT-5+) and terminal-based coding agents (Claude Code, Codex CLI).
Setting those loose on an existing large project is a very different experience from previous LLM tools.
I've watched Claude Code use grep to find potential candidates for a change I want to make, then read the related code, follow back the chain of function calls, track down the relevant tests, make a quick detour to fetch the source code of a dependency directly from GitHub (by guessing the URL to the raw file) in order to confirm a detail, make the change, test the change with an ad-hoc "python -c ..." script, add a new automated test, run the tests and declare victory.
That's a different class entirely from what GPT-4o was able to do.
The skill ceiling is high, it turns out. It's just deceptive, because it's so easy to get going. Ultra-accessible foot gun; lots of work to point it in the right direction reliably and repeatedly. Significant benefits if you manage it, though.
I've gotten more relaxed about it now though. People will either get it or they don't.
Here is an example of mostly automated work: it's a small feature, but it was done perfectly.
I have a disclosures section on my blog here: https://simonwillison.net/about/#disclosures
I was decommissioning some code and I made the mistake of asking for an "exhaustive" analysis of the areas I needed to remove. Sonnet 4.5 took 30 minutes looking around and compiling a detailed report on exactly what needed to be removed from this very, very brownfield project, and after I reviewed the report, it one-shot the decommissioning of the code (in this case I was using Claude in the Cursor tooling at work). It was overkill, but impressive how well it mapped all the ramifications in the code base by grepping around.
For stuff that I’m bad at? Probably more than 1000%. I’ve used it to make a web app, write some shader code, and set up some rtc streaming from unreal engine to the browser. I doubt I would have done them at all otherwise tbh. I just don’t have the energy and interest to conclude that those particular ventures were good uses of my time.
And you can do this for anything
Anything that's been done before. Otherwise we'd probably start with making nuclear fusion work, then head off into the stars...
You've always been able to read books. What you're talking about is skipping the slow learning step and instead generating a mashup of tons of prior art. I don't think it helps you learn. It sounds like it's for things you specifically don't want to learn.
Congrats, you now have a job similar to a factory worker turning a handle every day. Gone is that feeling of growth, that feeling of "getting it" and seeing new realms of possibility in front of you. Now all you can do is beg for more grease on your handle.
Learning by getting something to work and tweaking it is massively more effective than grinding against a wall of impassable errors while you’re just trying to get started. You don’t become a good programmer by reading a book.
> The narrative synthesis presented negative associations between GPS use and performance in environmental knowledge and self-reported sense of direction measures and a positive association with wayfinding. When considering quantitative data, results revealed a negative effect of GPS use on environmental knowledge (r = −.18 [95% CI: −.28, −.08]) and sense of direction (r = −.25 [95% CI: −.39, −.12]) and a positive yet not significant effect on wayfinding (r = .07 [95% CI: −.28, .41]).
https://www.sciencedirect.com/science/article/pii/S027249442...
Keeping the analogy going: I'm worried we will soon have a world of developers who need GPS to drive literally anywhere.
No, I just put in less effort to arrive at the same point and do no more.
Vibe coding is different because it's the "dictated but not read" of coding. Yes, I was around when the LLM was writing the code, and I vaguely instructed it on what to write, but I make no assurances on the quality of the output.
In Claude Code for example I define a research sub-agent and let it do the majority of "research" type tasks. Especially when the research is tangential to what ever my objective is. Even if it is critical, I'll usually ask to have it do a first pass.
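A minimal sketch of one way such a sub-agent can be defined, assuming Claude Code's `.claude/agents/` convention of markdown files with YAML frontmatter (the name, tool list, and wording are illustrative, not the commenter's actual setup):

```
---
name: researcher
description: Read-only research assistant. Use for "how does X work" and
  "where is Y implemented" questions before planning any change.
tools: Read, Grep, Glob, WebFetch
---
Investigate the question you are given, citing file paths and line numbers.
Summarize your findings concisely for the main agent. Never edit files.
```

Keeping the research pass read-only means the exploratory churn never touches the working tree, and its findings come back as a compact summary rather than filling the main context.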
Once tooling becomes reliable and predictable enough, and the quality of the output consistent enough, using it is not a leap. Early compilers had skeptics, and GCC still has some bugs [1]
1. https://bugs.launchpad.net/ubuntu/+source/gcc-8/+bug/2101084
I was hoping that "vibe engineering" was going to be designing bridges the same way people think they can build apps with vibe coding
That would be alarming
Seriously now, I think the whole industry suffers from too many buzzwords and whacky terminology.
*The job hasn't changed*. As mentioned, all those things from the past are still the most important thing (version control, being good at testing, knowing when to outsource, etc).
It's just coding (which is something that was never about typing characters, ever).
Don’t mind me, I’m just vibing.
[1] https://simonwillison.net/2025/Jun/27/context-engineering/
The only issue with "Handled Programming" is that I don't like how it works as a name.
Vibe is much too unserious to pass my check for a way of professionally doing something and it also does not reflect the level of engagement I have with the AI and code since I'm putting together specs and otherwise deeply engaging with the model to produce the output I want.
I'd offer a new term: curmudgeon coding. This pre-dates LLMs and is the act of engineers endlessly clutching pearls over new technology and its branding. It's a reflexive reaction to marketing hype mixed with a conservative by default attitude. Think hating on "NoSQL". Validity of said hate aside, it's definitely "a type" of developer who habitually engages in whinging.
559 more comments available on Hacker News