90%
lucumr.pocoo.org · Tech · story
Posted 3 months ago · Active 3 months ago
Sentiment: skeptical, mixed · Debate: 80/100
Key topics: AI Coding, Software Development, Productivity
The author claims that 90% of their code is now generated by AI, sparking a discussion about the role of AI in software development and its potential impact on productivity and code quality.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement · First comment: 31m · Peak period: 9 comments in 18-24h · Avg per period: 3.9
Based on 35 loaded comments
Key moments
- Story posted: Sep 29, 2025 at 6:57 AM EDT (3 months ago)
- First comment: Sep 29, 2025 at 7:28 AM EDT (31m after posting)
- Peak activity: 9 comments in 18-24h (the hottest window of the conversation)
- Latest activity: Oct 2, 2025 at 6:46 AM EDT (3 months ago)
ID: 45412263 · Type: story · Last synced: 11/20/2025, 5:42:25 PM
> That said, none of this removes the need to actually be a good engineer. If you let the AI take over without judgment, you’ll end up with brittle systems and painful surprises (data loss, security holes, unscalable software). The tools are powerful, but they don’t absolve you of responsibility.
I feel the same. AI is "the code monkey": like a very inexperienced developer who works hard and fast, and has learned a lot but can't put it into practice. They need constant supervision and review.
This will be very challenging for inexperienced programmers. Normally you learn by coding: you write code for fun or for money, get reviews from more experienced developers, ask questions, and improve. Now a new programmer is expected to review AI-generated code while learning both programming and how to manage the AI.
I'm feeling this for art. AI-generated art still has obvious issues, but it produces work much better than a beginner's. You also can't meaningfully reprompt or beat the model into fixing the issues. So seemingly the only way to get actually good at art is to grind through hundreds of hours of producing worse results than you could generate in seconds, until you beat the models.
How many people are going to push through that painful, unsatisfying work to eventually become experienced now?
Even more frustrating: if those hundreds of hours of practice happen at 8 hours per week, that's the better part of a year, and by the time you've gotten OK the SOTA GenAI will have become noticeably better.
It doesn't look like the undergrad-to-industry pipeline works anymore for programmers.
Those who teach themselves out of passion will always have a place. But for most coders, a graduate degree or PhD may become necessary: with AI, there just may not be a profitable niche in the private sector for a CS grad straight out of undergrad.
I was already reading similar stories something like a decade ago in law, with the claim then being that the first thing most law graduates did to learn the ropes and get practical experience was being automated away by simple file search and the digitisation of almost everything (the term might be "digital discovery", but I might be conflating terms).
Last I heard (this may be out of date already), robotics and computer vision are currently not good enough and/or not fast enough to be a "junior gardener" or a "junior hairdresser", so this (probably) isn't yet true for all roles, but I suspect it may be true for most* desk jobs.
* not all, most: if you need a human face somewhere, the GenAI real-time conversation agents do still sometimes mess up and TTS out the | and < that come from the LLM, but other than that…
If you can make a "production grade … application" (regardless of language details or anything else) in 50 hours, by hand and without AI assistance, you're an unusual person. Most "agile" places I've been, even a single sprint is longer than that.
None of these blog posts hyping up vibe coding would be needed if people actually built something.
If you’re actually curious (instead of just trolling, or pretending that everyone who’s using AI to code is having mass hallucinations), you can join my talk at Google Brussels next week. Sign up here: https://gdg.community.dev/events/details/google-gdg-cloud-be...
40000 lines of code for sending and receiving emails? Hmmh...
If a solo coder wrote a half-million-line application in the span of a few months by themselves, they would be either a savant or a complete lunatic.
Lots of the work after the initial code generation is cajoling it into writing something shorter, then going back and just doing that.
Food remains valuable even though production is increasingly automated; there's not much difference in value between hand-picked {insert name of soft fruit whose harvesting has yet to be automated here} and fruit from a machine that sticks a net around the base of a tree and shakes it until the fruit falls off, but there sure is a difference in price.
For e.g. software: when I'm buying, I don't care in the slightest if it was made by one single human, a team of humans of my nationality, contracted and outsourced to the lowest bidder in a developing nation, a sentient typewriter, or an evolutionary algorithm — I care that it solves the problem that I had and for which I entered the market looking to buy.
If the only thing you can imagine doing with a thing is selling it, as in you cannot imagine using it, price and value become the same.
> Price of goods must come down to the cost of making it plus margins
The difference is "profit" or "loss", depending on which is higher.
Buyers will (in general) only buy when they value a thing more than the price charged.
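To make that arithmetic concrete, here's a toy sketch in Python (all numbers are made up for illustration):

    # Hypothetical numbers: the seller compares price to cost,
    # the buyer compares their private valuation to price.
    cost = 70    # what it costs the seller to make the thing
    price = 100  # what the seller charges
    value = 130  # what the thing is worth to the buyer

    seller_margin = price - cost   # +30 here: a profit (negative would be a loss)
    buyer_surplus = value - price  # +30 here: the buyer only buys when this is positive

    print(f"seller margin: {seller_margin}, buyer surplus: {buyer_surplus}")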
The advantage of spending a day or two figuring something out is that it is (mostly) a one-time process: after that you've learnt something you can apply again and again. Taken to its extreme, you spend a few years learning programming when your agent can do it for you in far less time; but as this post states, this kind of work wouldn't have been possible (or at least reliable) had the author not taken the time earlier to learn programming, systems architecture, etc. themselves.
I'm not saying that AI can't help you learn something, but I think when you measure its success in time saving, learning gets unknowingly pushed to the back as a waste of time.
Most of these things that would take me a day or two to figure out take that time because of the process, not because of any learning happening. I have enough experience that learning can happen in a very, very small amount of time with a few explanations.
AI increases the amount of things I learn because it decreases the time I need to spend to get to the signal. But this doesn’t apply to everyone; less experienced developers need the practice, and for them your comment does apply, I think.
> Is 90% of code going to be written by AI? I don’t know. What I do know is, that for me, on this project, the answer is already yes. [...] At the same time, for me, AI doesn’t own the code. I still review every line, shape the architecture, and carry the responsibility for how it runs in production. But the sheer volume of what I now let an agent generate would have been unthinkable even six months ago.
Written by Armin Ronacher of Flask, Jinja, and general Python fame.
An LLM is essentially a statistical text engine that can produce convincing code for any problem for which there are already similar solutions. Most projects have many such problems, and some projects involve 100% solved problems that just need to be packaged into a new solution.
However, there is a certain class of problems that are technically innovative and novel; it is often difficult to even describe these problems in human language. AI will mostly hallucinate on such problems, which will actually slow down a competent programmer, because the necessary training data is missing.
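As a toy illustration of that "statistical text engine" point, here is a bigram counter standing in for a real model (LLMs are of course vastly more sophisticated, but the failure mode is analogous):

    from collections import Counter, defaultdict

    # A tiny "statistical text engine": continuations can only come
    # from patterns already seen in the training data.
    corpus = "open the file read the file close the file open the socket".split()

    model = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        model[prev][nxt] += 1

    def continue_from(token):
        counts = model.get(token)
        if not counts:
            return "<nothing to draw on: this is where confabulation begins>"
        return counts.most_common(1)[0][0]

    print(continue_from("the"))       # -> "file": a well-represented pattern
    print(continue_from("teleport"))  # unseen token: no training data to lean on

A real LLM generalises far beyond literal bigrams, but the point stands: where the training distribution is thin, the model still produces something, and that something is a guess.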