The Learning Loop and LLMs
Posted 2 months ago · Active 2 months ago
martinfowler.com · Tech story · High profile
calm / mixed · Debate: 80/100
Key topics
LLMs
Programming
Software Development
Automation
The article discusses the role of Large Language Models (LLMs) in software development, sparking a discussion on the nature of programming, learning, and automation.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 51m after posting
Peak period: 44 comments in 0-2h
Avg per period: 9.3
Comment distribution: 74 data points (based on 74 loaded comments)
Key moments
- 01 Story posted: Nov 6, 2025 at 5:05 PM EST (2 months ago)
- 02 First comment: Nov 6, 2025 at 5:56 PM EST (51m after posting)
- 03 Peak activity: 44 comments in 0-2h (hottest window of the conversation)
- 04 Latest activity: Nov 7, 2025 at 3:08 PM EST (2 months ago)
ID: 45841056 · Type: story · Last synced: 11/20/2025, 3:44:06 PM
I guess there is stuff like SquareSpace. No idea how good it is though. And FrontPage back in the day but that sucked.
The ability to drop components in and then move the window around and have them respond as they will in the live program was peak WYSIWYG UI editing. I have not experienced better and I doubt I will.
But what alternatives are really left behind here that you view as superior?
To me, it is obvious the entire world sees a very high value in how much power can be delivered in a tiny payload via networked JS powering an HTML/CSS app. Are there really other things that can be viewed as equally powerful to HTML which are also able to pack such an information dense punch?
Er, no. Go watch some major site load.
I think you and I both know a 200kB gzipped web app can be a very powerful tool, so I don't understand what angle you're approaching this from.
This at first blush smells like "Don't write code that writes code," which... Some of the most useful tools I've ever written are macros to automate patterns in code, and I know that's not what he means.
Perhaps a better way to say it is "Automating writing software doesn't remove the need to understand it?"
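The kind of code-writing tool this comment alludes to can be sketched in a few lines. This is a toy illustration, not anything from the article: a Python class decorator (all names hypothetical) that generates repetitive accessor methods from a field list instead of writing each one by hand.

```python
# A tiny "macro": a class decorator that generates boilerplate
# get_<field>/set_<field> methods for each named field.
def with_accessors(*fields):
    def decorate(cls):
        for name in fields:
            # Bind the field name via a default arg so each closure
            # captures its own name rather than the loop variable.
            def getter(self, _n=name):
                return getattr(self, "_" + _n)

            def setter(self, value, _n=name):
                setattr(self, "_" + _n, value)

            setattr(cls, "get_" + name, getter)
            setattr(cls, "set_" + name, setter)
        return cls
    return decorate


@with_accessors("host", "port")
class Config:
    def __init__(self):
        self._host = "localhost"
        self._port = 8080
```

The pattern (a function that rewrites or augments code at definition time) is the spirit of "code that writes code": the repetitive part is automated, but you still have to understand what the generated methods do.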
Martin Fowler isn't the author, though. The author is Unmesh Joshi.
LLMs only really help to automate the production of the least important bit. That's fine.
Before the pearl-clutching starts about my lack of coding ability: I started coding in 1986 in assembly on an Apple //e and spent the first dozen years of my career doing C and C++ bit twiddling.
You don't learn these things by writing code? This is genuinely interesting to me, because it seems that different groups of people have dramatically different ways of approaching software development.
For me, the act of writing code reveals places where the requirements were underspecified or the architecture runs into a serious snag. I can understand a problem space at a high level based on problem statements and UML diagrams, but I can only truly grok it by writing code.
If you approach AI as an iterative process where you're architecting an application just as you would without AI, but using the tool to speed up parts of the process like writing one method or writing just the tests for the part you're building right now, then AI becomes a genuinely useful tool.
For example, I've been using AI today to build some metrics tooling, and most of what I did with it was using it to assist in writing code to access an ancient version of a build tool that I can't find the documentation for because it's about 30 versions out of date. The API is wildly different to the modern one. Claude knows it though, so I just prompt it for methods to access data from the API that I need and only that. The rest of the app is my terrible Python code. Without AI this would take me 4 or 5 times longer if I could even do it at all.
This is absolutely not the case. My first startup was an attempt to build requirements management software for small teams. I am acutely aware that there is a step between "an idea" and "some code" where you have to turn the idea into something cohesive and structured that you can then turn into language a computer can understand. The bit in the middle where you write down what the software needs to do in human language is the important part of the process - you will throw the code away by deleting it, refactoring it, improving it, etc. What the code needs to do doesn't change anywhere near as fast.
Any sufficiently experienced developer who's been through the fun of working on an application that's been in production for more than a decade where the only way to know what it does is by reading the code will attest to the fact that the code is not the important part of software development. What the code is supposed to do is more important, and the code can't tell you that.
The first mostly requires experienced humans; the latter is boring and good to automate.
The problem is with everything in between, and in getting people to be able to do the first. There, AI can be both a tool and a distraction.
The art of knowing what work to keep, what work to toss to the bot, and how to verify it has actually completed the task to a satisfactory level.
It'll be different than delegating to a human; as the technology currently sits, there is no point giving out "learning tasks". I also imagine it'll be a good idea to keep enough tasks to keep your own skills sharp, so if anything kinda the reverse.
I feel like maybe I'm preaching to the choir by saying this on HN, but this is what Paul Graham means when he says that languages should be as concise as possible, in terms of number of elements required. He means that the only thing the language should require you to write is what's strictly necessary to describe what you want.
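The contrast this comment gestures at can be shown with a toy example of my own (not from the article or from Graham): the same computation written with every mechanical step spelled out, and then with only the intent remaining.

```python
# Verbose: the language forces you through accumulator plumbing.
def sum_even_squares_verbose(xs):
    total = 0
    for x in xs:
        if x % 2 == 0:
            total += x * x
    return total


# Concise: only what's strictly necessary to describe what you want.
def sum_even_squares(xs):
    return sum(x * x for x in xs if x % 2 == 0)
```

The two are equivalent; the second simply requires fewer elements that exist only to satisfy the machine rather than to express the idea.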
For some reason johnwheeler editorialized it, and most of the comments are responding to the title and not the contents of the article (though that's normal regardless of whether the correct title or a different one is used, it's HN tradition).
[The title has been changed, presumably by a mod. For anyone coming later it was originally incorrect and included statements not present in the article.]
Similarly there are some engineering departments that absolutely do design everything first and only then code it up, and if there are problems in the coding stage they go back to design. I'm not saying they're efficient or that this is best practice, but it suits some organisations.
[0] https://en.wikipedia.org/wiki/Learning_styles. There are a ton of different approaches to this, and a lot of it is now discredited. But the core concept, that people learn differently, isn't disputed.
That sounds like a slow-motion experiment, not a lack of experimentation.
Maybe there's a broader critique of LLMs in here: if you outsource most of your intellectual activity to an LLM, what else is left? But I don't think this is the argument the author is making.
Also, fun to see the literal section separator glyphs from "A Pattern Language" turn up.
To be fair I have this with my own code, about 3 days after writing it.
Why is the current level of language abstraction the ideal one for learning, which must be preserved? Why not C? Why not COBOL? Why not assembly? Why not binary?
My hypothesis is that we can and will adapt to experience the same kind of learning OP describes at a higher level of abstraction, specs implemented by agents.
It will take time to adapt to that reality, and the patterns and practices we have today will have to evolve. But I think OP's view is too short-sighted, rooted in what they know and are most comfortable with. We're going to need to be uncomfortable for a bit.
Well i do this but i force it to make my code modular and i replace whole parts quite often, but it's tactical moves in an overall strategy. The LLM generates crap, however, it can replace crap quite efficiently with the right guidance.
Once you can show, without doubt, what you should do, software engineers have very little value. The reason they are still essential is that product choices are generally made under very ambiguous conditions. John Carmack said, "If you aren't sure which way to do something, do it both ways and see which works better."[1] This might seem like it goes against what I am saying, but actually narrowing "everything possible" down to two options is huge value! That is a lot of what you provide as an engineer, and the only way you are going to hone that sense is by working on your company's product in production.
[1] https://aeflash.com/2013-01/john-carmack.html
This is... only true in a very very narrow sense. Broadly, it's our job to create assembly lines. We name them and package them up, and even share them around. Sometimes we even delve into FactoryFactoryFactory.
> The people writing code aren't just 'implementers'; they are central to discovering the right design.
I often remember the title of a paper from 40 years ago "Programming as Theory Building". (And comparatively-recently discussed here [0].)
This framing also helps highlight the strengths and dangers of LLMs. The same aspects that lead internet-philosophers into crackpot theories can affect programmers creating their not-so-philosophical ones. (Sycophancy, false appearance of authoritative data, etc.)
[0] https://news.ycombinator.com/item?id=42592543