My Article on Why AI Is Great (or Terrible) or How to Use It
Key topics
The debate rages on: is using AI to create something truly "creating" or just commissioning a task? Commenters passionately argue that designing a system or prompt for an AI is distinct from the actual work being done, likening it to hiring someone to paint a masterpiece or construct a building. While some find joy in architecting systems or leveraging AI to deliver value quickly, others counter that it's not the same as hands-on craftsmanship, with one commenter quipping that a CNC lathe's automated cuts aren't equivalent to an artist's original work. As perspectives clash, a surprising consensus emerges: many people enjoy the high-level thinking and problem-solving aspects of tech, but not necessarily the nitty-gritty details of coding.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 2h after posting
- Peak period: 69 comments in 0-3h
- Avg / period: 13.3
- Based on 160 loaded comments
Key moments
- Story posted: Jan 9, 2026 at 1:17 PM EST (2d ago)
- First comment: Jan 9, 2026 at 2:55 PM EST (2h after posting)
- Peak activity: 69 comments in 0-3h (the hottest window of the conversation)
- Latest activity: Jan 11, 2026 at 12:02 PM EST (3h ago)
But saying that AI development is more fun because you don’t have to “wrestle the computer” is, to me, the same as saying you’re really into painting but you’re not really into the brush aspect so you pay someone to paint what you describe. That’s not doing, it’s commissioning.
What if I have a block of marble and a vision of the statue struggling to get out from inside it, and I use an industrial CNC lathe to do my marble carving for me. Have I sculpted something? Am I an artist?
What if I'm an architect? Brunelleschi didn't personally lay all the bricks for his famous dome in Florence --- is it not architecture? Is it not art?
I would also call designing a system to be fed into an LLM designing. But I wouldn’t call it programming.
If people are more into the design and system architecture side of development, I of course have no problem with that.
What I do find baffling, as per my original comment, is all the people saying basically “programming is way more fun now I don’t have to do it”. Did you even actually like programming to begin with then?
Of course not everyone who programs AI-style hates programming, but I do think your take explains a large chunk of the zealotry: it has become Us v. Them for both sides, and each is staking out its territory. Telling the vibe coder they are not programming hurts their feelings, much like telling a senior developer that all their accumulated experience and knowledge is useless, if not today then surely some day soon!
I think it's legitimate that someone might enjoy the act of creation, broadly construed, but not the brick-by-brick mechanics of programming.
Some people find software architecture and systems thinking more fun than coding. Some people find conducting more fun than playing an instrument. It's not too mysterious.
I don't mind ops code though. I dislike building software as in products or user-facing apps, but I don't mind glue code and scripting/automation.
Don't ask me to do leetcode though, I'll fail and hate the experience the entire time.
I understand where comedians get their source material now.
Indeed, of all the possible things to say!
AI "development" /is/ wrestling the computer. It is the opposite of the old-fashioned kind of development where the computer does exactly what you told it to. To get an AI to actually do what I want and nothing else is an incredibly painful, repetitive, confrontational process.
You very likely have some of these toil problems in your own corner of software engineering, and it can absolutely be liberating to stop having to think about the ape and the jungle when all you care about is the banana.
Using English, with all its inherent ambiguity, to attempt to communicate with an alien (charitably) mind very much does /not/ make this task any easier if the thing you need to accomplish is of any complexity at all.
Claude will understand and carry out this fairly complex task just fine, so I doubt you have actually worked with it yet.
This just isn't the case.
English can communicate very simply a set of "if.. then.." statements, and an LLM can convert them to whatever stupid config language I'm dealing with today.
I just don't care if Cloudflare's wrangler.toml uses emojis to express cases or AWS's CloudFormation requires some Shakespearean sonnet to express the dependencies in whatever the format of the day is.
And don't get me started on trying to work out which Pulumi Google module I'm supposed to use for this service. Ergh.
I can express very clearly what I want, let an LLM translate it, then inspect the config and go "oh, that's how you do that".
It's great, and it's radically easier than working through some docs written by a person who knows what they are doing and assumes you do too.
Sanchez's Law of Abstraction applies. You haven't abstracted anything away, just added more shit to the pile.
This is not your average abstraction layer.
No, it is not. What you are doing is not too different from asking a remote dev hired off [insert freelance platform here] to make an app, then entering a cycle of testing the generated app and giving feedback; it is not wrestling the computer.
For people like me, anything that makes the computer more human-like is a step in the wrong direction, and feels much more like wrestling.
I don't care if you use AI but leave me alone. I'm plenty fast without it and enjoy the process this author callously calls "wrestling with computers."
Of course this isn't going to help with the whole "making me fast at things I don't know" but that's another can of worms.
At the same time, one of the best developers I worked with was a two-finger typist who had to look at the keyboard. But again, I don't care if you're going to use AI (well, that's not entirely true, but I'm not going to get into it); it's this article's "you should learn it" tone that I take issue with.
I think it’s a bit like a gambling addiction. I’m riding high the few times it pays off, but most of the time it feels like it’s just on the edge of paying off (working) and surely the next prompt will push it over the edge.
I feel this exactly. I’ve been one of the biggest champions of the tech in my org in spite of the frequent pain I feel from it.
just.. uninstall it? i've removed all ai tooling from both personal+work devices and highly recommend it. there's no temptation to 'quickly pull up $app just to see' if it doesn't exist
The OP is right and I feel this a lot: when Claude pulls me into a rabbit hole, convinces me it knows where to go, and then just constantly falls flat on its face, and we waste several hours together, with a lot of all-caps prompts from me towards the end. These sessions drag on in exactly the way he mentions: "maybe it's just a prompt away from working".
But I would never delete CC, because there are plenty of other instances where it works excellently and accelerates things quite a lot. Additionally, I know we see a lot of "coding agents are getting worse!" and "the METR study proves all you AI sycophants are deluding yourselves!", and I understand where these come from and agree with some of the points they raise. But honestly, my own personal perception (which I argue is pretty well backed up by benchmarks, and by Claude's own product data which we don't see -- I doubt they would roll out a launch without at least one A/B test) is that coding agents are getting much better, and that because coding is a verifiable domain, these "we're running out of data!" problems just aren't relevant here. The same way AlphaGo got superhuman, so will these coding agents; it's just a matter of when, and I use them today because they are already useful to me.
It does _feel_ like the value and happiness will come some versions down the road when I can actually focus on orchestration, and not just bang my head on the table. That’s the main thing that keeps me from just removing it all in personal projects.
I do this a lot and it’s super helpful.
https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gam...
I am also now experimenting with my own version of opencode and I change models a lot, and it helps me learn how each model fails at different tasks, and it also helps me figure out the most cost effective model for each task. I may have spent too much time on this.
In both cases, it works because I can mostly detect when the output is bullshit. I'm just a little bit scared, though, that it will stop working if I rely too much on it, because I might lose the brain muscles I need to detect said bullshit.
I love this job but I can absolutely get people saying that AI helps them not "fight" the computer.
Once you've done it, you'll hopefully never have to do it again (or at worst, only derivatives of it). Over time you'll have a collection of 'how to do stuff'.
I think this is the path to growth. Letting a LLM do it for you is equivalent to it solving a hard leetcode problem. You're not really taxing your brain.
But things like "hey this array of objects I have, I need sorted by this property" are not hard leetcode problems
They're precisely the kind of tedious, but not taxing, problems that we prefer to farm out to someone else. Like asking a junior to do it.
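For what it's worth, that sort of chore is a one-liner in most languages; a minimal TypeScript sketch (the Track shape here is purely illustrative):

```typescript
// Sorting an array of objects by one property (ascending by bpm here).
interface Track { title: string; bpm: number; }

const tracks: Track[] = [
  { title: "Inner City Life", bpm: 168 },
  { title: "Midtempo Thing", bpm: 120 },
];

// Copy first so the original array is left untouched.
const byBpm = [...tracks].sort((a, b) => a.bpm - b.bpm);
// For string properties, use a.title.localeCompare(b.title) instead.
```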
Then we're just puppetmasters pulling the strings (which some think is the way the industry is going).
And for me (and other ops folks here I'd presume), that is the fun part. Sad, from my career perspective, that it's getting farmed out to AI, but I am glad it helps you with your side projects.
It's like the article's point: we don't do assembly anymore, and no one considers gcc to be controversial, and no one today says "if you think gcc is fun I will never understand you, real programming is assembly, that's the fun part"
You are doing different things and exercising different skillsets when you use agents. People enjoy different aspects of programming, of building. My job is easier, I'm not sad about that I am very grateful.
Do you resent folks like us that do find it fun? Do you consider us "lesser" because we use coding agents? ("the same as saying you’re really into painting but you’re not really into the brush aspect so you pay someone to paint what you describe. That’s not doing, it’s commissioning.") <- I don't really care if you consider this "true" painting or not, I wanted a painting and now I have a painting. Call me whatever you want!
The compiler reliably and deterministically produces code that does exactly what you specified in the source code. In most cases, the code it produces is also as fast as or faster than hand-written assembly. The same can't be said for LLMs, for the simple reason that English (and other natural languages) is not a programming language. You can't compile English (and shouldn't want to, as Dijkstra correctly pointed out) because it's ambiguous. All you can do is "commission" another mind to interpret it.
> Do you resent folks like us that do find it fun?
For enjoying it on your own time? No. But for hyping up the technology well beyond its actual merits, antagonizing people who point out its shortcomings, and subjecting the rest of us to worse code? Yeah, I hold that against the LLM fans.
> But for hyping up the technology well beyond its actual merits, antagonizing people who point out its shortcomings, and subjecting the rest of us to worse code? Yeah, I hold that against the LLM fans.
Is that what I'm doing? I understand your frustration. But I hope you understand that this is a straw man: I could straw-man the antagonists and AI-hostile folks too, but the point is that the factions and tribes are complex and unreasonable opinions abound. My stance is that people dismiss coding agents at their peril, but it's not really a problem: taking the gcc analogy, in the early compiler days there was a period where compilers were weak enough that assembly by hand was reasonable. Now it would be just highly inefficient and underperformant to do that. But all the folks who lamented compilers didn't crumble away; they eventually adapted. I see that analogy as applicable here. It may be hard to see how insane coding agents' progress has been, because we're not time travelers from 2020, or even 2022 or 2023: this used to be an absurd idea and is now very serious and highly adopted. But still quite weak!! We're still missing key reliability, functionality, and capabilities. But if we got this far this fast, and if you realize that coding-agent training, being in a verifiable domain, is not limited in the same way that e.g. vanilla LLM training is, we seem to be careening forward. But by nature of their current weakness, absolutely it is reasonable not to use them and absolutely it is reasonable to point out all of their flaws.
Lots of unreasonable people out there, my argument is simply: be reasonable.
Novelty isn't necessarily better as a replacement of what exists. Example: blockchain as fancy database, NFTs, Internet Explorer, Silverlight, etc.
> Is that what I’m doing?
Initially I'd have been reluctant to say yes, but this very comment is laced with assertions that we'd better all start adopting LLMs for coding or we're going to get left behind [0]
> taking the gcc analogy, in the early compiler days there was a period where compilers were weak enough that assembly by hand was reasonable. Now it would be just highly inefficient and underperformant to do that
No matter how good LLMs get at translating english into programs, they will still be limited by the fact that their input (natural language) isn't a programming language. This doesn't mean it can't get way better, but it's always going to have some of the same downsides of collaborating with another programmer.
[0] This is another red flag I would hope programmers would have learned to recognize. Good technology doesn't need to try to threaten people into adopting it.
> No matter how good LLMs get at translating english into programs, they will still be limited by the fact that their input (natural language) isn't a programming language.
Right, but engineers routinely convert natural language + business context into formal programs, which is arguably an enormously important part of creating a software product. What's any different here? As with a programmer, the creation process is two-way. The agent iteratively retrieves additional information, asks questions, checks its approach, etc.
> [0] This is another red flag I would hope programmers would have learned to recognize. Good technology doesn't need to try to threaten people into adopting it.
I think I was either not clear or you misread my comment: you're not going to get left behind any more than you want to. Jump in when you feel good about where the technology is and use it where you feel it should be used. Again: if you don't see value in your own personal situation with coding agents, that is objectively a reasonable stance to hold today.
You're never really wrestling the computer. You're typically wrestling with the design choices and technological debt of decisions that were, in hindsight, bad ones. And it's always in hindsight; at the time those decisions always seem smart.
With the rise of frameworks and abstractions, who is actually doing anything with actual computation?
Most of the time it's wasting time learning some bs framework or implementing some other poorly designed system that some engineer that no longer works at the company created. In fact the entire industry is basically just one poorly designed system with technological debt that grows increasingly burdensome year by year.
It's very rarely about actual programming or actual computation or even "engineering". But usually just one giant kludge pile.
Well, I'll have to take their word for it that they're passionate about maximizing shareholder value by improving key performance indicators, I know I personally didn't sign up for being in meetings all day to leverage cross functional synergies with the goal of increasing user retention in sales funnels, or something along those lines.
I'm not passionate about either that or mandatory HR training videos.
Creating software has a similar number of steps. AI tools now make some of them much (much) easier/optional.
You've got a good analogy there though, because many great and/or famous painters have used teams of apprentices to produce the work that bears their names.
I'm reminded also of chefs and sous-chefs, and of Harlan Mills's famous "chief surgeon plus assistants" model of software development (https://en.wikipedia.org/wiki/Chief_programmer_team). The difference, of course, being that the "assistants" in the current moment are mechanical ones.
(As for how fun this is or isn't - personally I can't tell yet. I don't enjoy the writing part as much - I'd rather write code than write prompts - but then again, I don't enjoy writing grunt code, boilerplate, etc., and there's less of that now; and I don't enjoy having to learn tedious details of some tech I'm not actually interested in just to get an auxiliary feature that I want, and there's orders of magnitude less of that now; and then there are the projects and programs that simply would never exist at all if not for this new mechanical help in the earliest stages, and that's fun. It's a lot of variables to add up, and it's all in flux. Like the French Revolution, it's too soon to tell! - https://quoteinvestigator.com/2025/04/02/early-tell/)
i like what software can do, i don't like writing it
i can try to give the benefit of the doubt to people saying they don't see improvements (and assume there's just a communication breakdown)
i've personally built three poc tools that proved my ideas didn't work and then tossed the poc tools. i've had those ideas since i first knew how to program, i just didn't have the time and energy to see them through.
The “lone genius” image is largely a modern romantic invention.
I have found in my software-writing experience that the majority of what I want to write is boilerplate with small modifications, but most of the problems are insanely-hard-to-diagnose edge cases, and I have absolutely no desire (nor is it a good use of time, in my opinion) to deal with structural issues in things that I do not control.
The vast majority of code you do not control, because you aren't the owner of the framework or library or your language or whatever, and so the vast majority of software engineering is coming up with solutions to foundational problems in the tools you're using.
The idea that this is the only true type of software engineering is absurd
True software engineering is systems, control and integration engineering.
What I find absolutely annoying is that there's this rejection of the highest, Hofstadter level of software architecture and engineering.
This is basically sneered at in favor of "I'm gonna go and try to figure out some memory-management module because AMD didn't invest in additional SoC for the problems that I have, because they're optimized for some business goals."
It’s frankly junior level thinking
Programming a system at a low level from scratch is fun. Getting CSS to look right under a bunch of edge cases - I won't judge that programmer too harshly for consulting the text machine.
This is especially true considering it's these shallow but trivia-dominated tasks which are the least fun and also which LLMs are the most effective at accomplishing.
AI is more fun for programmers that should've gone into management instead, and prefer having to explain things in painstaking detail in text, rather than use code. In other words, AI is for people that don't like programming that much.
Why would you even automate the most fun part of this job? As a freelance consultant, I'd rather have a machine to automate the whole boring business side so I could just sit in front of my computer and write stuff with my own hands.
I am happy to accept that some people still prefer to write out their code by hand… that’s ok? Keep doing it if you want! But I would ask yourself why you are so offended by people that would prefer to automate much of that, because you seem to be offended. Or am I misreading your intention?
And hey, I still enjoy solving interesting problems with code. I did advent of code this year with no LLM assistance and it was great fun. But most professional software development doesn’t have that novelty value where you get to think about algorithms and combinatorical puzzles and graphs and so on.
Before anyone says it, sure, there is a discussion to be had about AI code quality and the negative effects of all this. A bad engineer can use it to ship slop to production. Nobody is denying that. But I think that’s a separate set of questions.
Finally, I’m not sure painting is the best analogy. Most of us are not creating works of high art here. It’s a job, to make things for people to use, more akin to building houses than painting the Sistine Chapel.
I've coded professionally for 30 years (ergh!). I'm ok at it.
But I love building things with AI. I haven't had this much fun since the early 2000s.
I like this. I'm going to see if my boss will go for me changing my title from Solutions Architect to Solutions Commissioner. I'll insist people refer to me as "Commissioner ajcp"
At home, I never had the time/will to be as thorough. Too many other things to do in life. Pre-LLMs, most of my personal scripts are just - messy.
One of the nice things with LLM assisted coding is that it almost always:
1. Gives my program a nice interface/UI
2. Puts good print/log statements
3. Writes tests (although this is hit or miss).
Most of the time it does it without being asked.
And it turns out, these are motivation multipliers. When developing something, if it gives me good logs, and has a good UI, I'm more likely to spend time developing it further. Hence, coding is now more joyful.
Development is solely to exchange labor for money.
I haven't written a single line of code "for fun" since 1992. I did it for my degree between 1992 and 1996 while having fun in college, and after that, depending on my stage in life: dating, hanging out with friends, teaching fitness classes and doing monthly charity races with friends, spending time with my wife and (step)kids, and now enjoying traveling with my wife and friends, and still exercising.
In b4 someone mentions some famous artists had apprentices under them.
But we are, even as of Opus 4.5, so wildly far away from what the author is suggesting. FWIW my experience is working in the AI/ML space at a major tech company and as a maintainer + contributor of several OSS projects.
People are blindly trusting LLMs and generating mountains of slop. And the slop compounds.
But I still write my own code. If I'm going to be responsible for it, I'm going to be the one who writes it.
It's my belief that velocity up front always comes at a cost down the line. That's been true for abstractions, for frameworks, for all kinds of time-saving tools. Sometimes that cost is felt quickly, as we've seen with vibe coding.
So I'm more interested in using AI in the research phase and to increase the breadth of what I can work on than to save time.
Over the course of a project, all approaches, even total hand-coding with no LLMs whatever, likely regress to the mean when it comes to hours worked. So I'd rather go with an approach that keeps me fully in control.
Why not output everything in C and ASM for 500x performance? Why use high level languages meant to be easier for humans? Why not go right to the metal?
If anyone's ever tried this, it's clear why: AI is terrible at C and ASM. But that cuts to what AI is at its core: it's not actual programming, it's mechanical reproduction.
Which means its incapabilities in C and ASM don't disappear when using it for higher-level languages. They're still there, just temporarily smoothed over due to larger datasets.
I haven't tried C or ASM yet, but it has been working very well with a C++ project I've been working on, and I'm sure it would do reasonably well with bare-bones C as well.
I'd be willing to bet it would struggle more with a lower-level language initially, but give it a solid set of guardrails with a testing/eval infrastructure and it'll get its way to what you want.
Qt in your example is a part. Your application is the whole. If you replaced Qt with WxWidgets, is your application still the same application?
But to answer your question: replacing Qt with your own piecemeal code doesn't do anything more to Qt than replacing it with WxWidgets would: nothing. The Qt code is gone. The only way it would ship-of-Theseus itself into "still being Qt, despite not being the original Qt" would be if Qt required all modifications to be copyright-assigned and upstreamed. That is absurd. I don't think I've ever seen a license that did anything like that.
Even though licenses like the GPL require reciprocal FOSS release in-kind, you still retain the rights to your code. If you were ever to remove the GPL'd library dependency, then you would no longer be required to reciprocate. Of course, that would be a new version of your software and the previous versions would still be available and still be FOSS. But neither are you required to continue to offer the original version to anyone new. You are only required to provide the source to people who have received your software. And technically, you only have to do it when they ask, but that's a different story.
It's going to take the same amount of time creating a program in C as it does in Python.
Because the conciseness and readability of the code that I use is way more important than execution speed 99% of the time.
I assume that people who use AI tools still want to be able to make manual changes. There are hardly any all or nothing paradigms in the tech world, why do you assume that AI is different?
You aren't supposed to make corrections, review it, or whatever.
It wasn't even long ago that we thought developer experience and capacity for abstraction (which is easier to achieve in higher level languages) was paramount.
Those tides have shifted over the past 6 weeks. I'm increasingly seeing serious, experienced engineers who are using AI to write code and are not reviewing every line of code that they push, because they've developed a level of trust in the output of Opus 4.5 that line-by-line reviews no longer feel necessary.
(I'm hesitant to admit it but I'm starting to join their ranks.)
Do I think Opus 4.5 would always make that mistake? No. But it does indicate that the output of even SotA models needs careful review if the code actually matters.
And now, I have a tool to do a (shuffled if I want) beat-matched mix of all the tracks in my db which match a certain tag expression. "(dnb | jungle) & vocals", wait a few minutes, and play a 2 hour beat-matched mix, finally replacing mpd's "crossfade" feature. I have a lot of joy using that tool, and it was definitely fun having it made. clmix[1] is now something I almost use daily to generate club-style mixes to listen to at home.
[1] https://github.com/mlang/clmix
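For a sense of the mechanism, here is a minimal sketch of how such a tag expression could be matched against a track's tags. clmix's actual grammar and implementation may well differ; this is reconstructed only from the example expression above.

```typescript
// Minimal sketch of a tag-expression matcher: supports bare tags, "&", "|",
// and parentheses, evaluated against a track's set of tags.
function matches(expr: string, tags: Set<string>): boolean {
  let pos = 0;
  const skipWs = () => { while (expr[pos] === " ") pos++; };

  // or := and ("|" and)*
  function parseOr(): boolean {
    let v = parseAnd();
    skipWs();
    while (expr[pos] === "|") { pos++; const r = parseAnd(); v = v || r; skipWs(); }
    return v;
  }
  // and := atom ("&" atom)*
  function parseAnd(): boolean {
    let v = parseAtom();
    skipWs();
    while (expr[pos] === "&") { pos++; const r = parseAtom(); v = v && r; skipWs(); }
    return v;
  }
  // atom := "(" or ")" | tag
  function parseAtom(): boolean {
    skipWs();
    if (expr[pos] === "(") {
      pos++;                 // consume "("
      const v = parseOr();
      skipWs();
      pos++;                 // consume ")"
      return v;
    }
    const start = pos;
    while (pos < expr.length && !"()&| ".includes(expr[pos])) pos++;
    return tags.has(expr.slice(start, pos));
  }

  return parseOr();
}

// e.g. true for a jungle track with vocals:
// matches("(dnb | jungle) & vocals", new Set(["jungle", "vocals"]))
```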
Here's a C session that I found quite eye-opening the other day: https://gisthost.github.io/?1bf98596a83ff29b15a2f4790d71c41d...
Drop Python: Use Rust and Typescript
https://matthewrocklin.com/ai-zealotry/#big-idea-drop-python...
It did ok at that.
Well - Doom runs, so it's OK enough for what I wanted anyway.
No, it's not a copy of other WASM stdlib implementations.
At work the projects are huge (200+ large projects in various languages, C#, TypeScript front-end libs, Python, Redis, AWS, Azure, SQL, all sorts of things).
AI can go into huge codebases perfectly fine and get a root cause + fix in minutes - you just need to know how to use it properly.
Personally I do "recon" before I send it off into the field by creating a markdown document explaining the issue, the files involved, and any "gotchas" it may encounter.
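As a sketch, such a recon document might look something like this; the issue, file names, and details here are invented for illustration, not a prescribed format:

```markdown
# Issue: intermittent 504s on /api/reports export

## What's happening
Large exports time out after ~30s, but only in the EU region.

## Files involved
- src/reports/exporter.ts      (builds the CSV stream)
- infra/gateway/timeouts.yaml  (per-route timeout config)

## Gotchas
- The exporter is also used by the nightly batch job; don't change its signature.
- Local dev bypasses the gateway, so the timeout won't reproduce locally.
```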
It's exactly the same as I would do with another senior software engineer. They need that information to figure out what is going on.
And with that? They will hand you back a markdown document with a root cause analysis, identifying potential fixes and explaining why.
It works amazingly well if you work with it as a peer.
Apparently, so do humans.
Recent manager quote from a CRUD shop: "I don't understand why you are so negative on AI. As far as I can tell, AI is better than our Hyderabad team, I don't have to manage it, and it's in my time zone to boot."
I was agog.
"Our ability to zoom in and implement code is now obsolete Even with SOTA LLMs like Opus 4.5 this is downright untrue. Many, many logical, strategic, architectural, and low level code mistakes are still happening. And given context window limitations of LLMs (even with hacks like subagents to work around this) big picture long-term thinking about code design, structure, extensibility, etc. is very tricky to do right."
If you can't see this, I have to seriously question your competence as an engineer in the first place tbh.
"We already do this today with human-written code. I review some code very closely, and other code less-so. Sometimes I rely on a combination of tests, familiarity of a well-known author, and a quick glance at the code to before saying "sure, seems fine" and pressing the green button. I might also ask 'Have you thought of X' and see what they say.
Trusting code without reading all of it isn't new, we're just now in a state where we need to review 10x more code, and so we need to get much better at establishing confidence that something works without paying human attention all the time.
We can augment our ability to write code with AI. We can augment our ability to review code with AI too."
Later he goes on to suggest that confidence is built via TDD. The problem is... if the AI is generating both the code and the tests, I've seen time and time again, both in internal projects and OSS projects, how major assumptions are incorrect, mistakes compound, etc.
> If you can't see this, I have to seriously question your competence as an engineer in the first place tbh.
I can't agree more strongly. I work with a number of folks who say concerning things along the lines of what you describe above (or just slightly less strong). The trust in a system that is not fully trustworthy is really shocking, but it only seems to come from a particular kind of person. It's hard to describe, but I'd describe it as: people that are less concerned with the contents of the code versus the behaviour of the program. It's a strange dichotomy, and surprising every time.
I mean, if you don't get the economics of a reasonably factored codebase vs one that's full of hacks and architecturally terrible compromises - you're in for a VERY bad time. Perhaps even a company-ending bad time. I've seen that happen in the old days, and I expect we're in the midst of seeing a giant wave of failures due to unsustainably maintained codebases. But we probably won't be able to tell, startups have been mostly failing the entire time.
These are exactly the types of people who LOVE AI, because it produces code of similar quality and functionality to what they would produce by hand.
And that's what it feels like now. We have the "old school" developers who consider CS to be equivalent to math, and we have these other people like you mention who are happy if the code seems to work 'enough'. "Hackers" have been around for decades but in order to get anything real done, they generally had to be smart enough to understand the code themselves. Now we're seeing the rise of the unskilled hacker, thanks to AI...is this creating the next generation of script kiddies?
So no, we haven't gone from "making software as good and efficient as possible", that was always a niche.
And I asked Codex to fix them for me; its first attempt was to add comments disabling the rules for the whole file and just mark everything as any.
Second attempt was to disable the rules in the eslint config.
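For illustration, the two "fixes" described above look something like this; a hypothetical reconstruction, not the actual diff:

```typescript
// Attempt 1: silence the linter file-wide and widen everything to `any`,
// hiding the type errors instead of fixing them.
/* eslint-disable @typescript-eslint/no-explicit-any */
export function parseResponse(raw: string): any {
  const data: any = JSON.parse(raw);
  return data;
}

// Attempt 2 was the config-level equivalent, e.g. in the eslint config:
//   rules: { "@typescript-eslint/no-explicit-any": "off" }
```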
It does the same with tests: it will happily create a workaround to avoid the issue rather than fix the issue.
"The skillset you've spend decades developing and expected to continue having a career selling? The parts of it that aren't high level product management and systems architecture are quickly becoming irrelevant, and it's your job to speed that process along" isn't an easy pill to swallow.
Embedded in this is the assumption that many SWEs can actually do those roles better than existing specialists.
If they can't - end of the line
This simply is a mediocre take; sometimes I feel like people never actually coded at all, to hold such opinions.
Please don't do this here. Thoughtful criticism is fine on this site but snark and name-calling are not.
https://news.ycombinator.com/newsguidelines.html
AI Horseless Carriages - https://news.ycombinator.com/item?id=43773813 - April 2025 (478 comments)
Or maybe it's analogous to the skeuomorphic phase of desktop software. Clumsy application of previous paradigm to new one; new wine in old bottles; etc.
You're what, 250 years behind at this point?
Since the dawn of the industrial revolution there has been a general trend that fewer people can make more with less. And really, even bigger than AI were fast fuel-based transportation and then global networks. Long before we started worrying about genAI, businesses had been consolidating down to a few corporations that make enough to supply the world from a handful of large factories.
We fought the war against companies. Companies won.
Now you're just at the point where the fabric makers were, where the man with the pick axe was, where the telephone switch operator was, where the punch card operator was.
Maybe don't speak for all of us.
Or you do, but you believe it's worth it because your software helped more patients, or improved the overall efficiency and therefore created more demand and jobs - a belief many pro-AI people hold as well.
Patient outcomes are significantly better with modern technology.
> You just don't care about them.
Really, letting people live nearly full lives instead of dying in their 40s. I must be heartless.
Much of the software written historically is to automate stuff people used to do manually.
I'd wager you use email, editors, search engines, navigation tools and much more. All of these involved replacing real jobs that existed. When was the last time you consulted a city map?
Experienced engineers can successfully vibe code? By definition it means not reading the output.
If you’re not reading your output, then why does skill level even matter?
Few thoughts here.
Experience helps you "check" faster that what you asked for is actually what was delivered. You "know" what to check for. You know what a happy path is, and where it might fail. You're more likely to test outside the happy path. You've seen dozens of failure modes already; you know what to look for.
Experience also allows you to better define stuff. If you see that the output is mangled, you can make an educated guess that it's from css. And you can tell the model to check the css integration.
Experience gives you faster/better error parsing. You've seen thousands of them already. You probably know what the error means. You can c/p the error but you can also "guide" the model with something like "check that x is done before y". And so on.
Last, but not least, the "experience" in actually using the tools gives you a better understanding of their capabilities and failure modes. You learn where you can let it vibe away, or where you need to specify more stuff. You get a feeling for what it did from a quick glance. You learn when to prompt more and where to go with generic stuff like "fix this".
Do we want everyone to operate at PM level? The space for that is limited. It's easy to say you enjoy vibe coding when you are high up the chain, but most devs are not experienced or lucky enough to feel stable when workflows change every day.
But I don't feel I have enough data to decide whether vibe coding or hand coding is better; I am personally doing tedious tasks with AI, and still writing code by hand all the time.