Understanding Spec-Driven-Development: Kiro, Spec-Kit, and Tessl
Posted 3 months ago · Active 3 months ago
martinfowler.com · Tech · story
calm / mixed · Debate · 60/100
Key topics
- Spec-Driven Development
- AI-Assisted Development
- Software Development Methodologies
The article explores Spec-Driven Development (SDD) and its associated tools (Kiro, Spec-Kit, and Tessl), sparking a discussion of the benefits and challenges of SDD in AI-assisted development.
Snapshot generated from the HN discussion
Discussion Activity
- Status: active discussion
- First comment: 2h after posting
- Peak period: 12 comments in 15-18h
- Average per period: 3.6 comments
- Comment distribution: 32 data points (based on 32 loaded comments)
Key moments
- 01 Story posted: Oct 16, 2025 at 5:36 PM EDT (3 months ago)
- 02 First comment: Oct 16, 2025 at 7:46 PM EDT (2h after posting)
- 03 Peak activity: 12 comments in 15-18h (hottest window of the conversation)
- 04 Latest activity: Oct 18, 2025 at 8:15 AM EDT (3 months ago)
ID: 45610996 · Type: story · Last synced: 11/20/2025, 5:33:13 PM
All the tutorials I've found are little more than "here's how to install it - now let's make a todo list app from scratch!!"
Would be great to see how others are handling real world use cases like making incremental improvements or refactorings to a huge legacy code base that didn't start out as a spec driven development hello world project.
Following a BDD approach with a coding CLI works a lot better, as it documents the features as code rather than as verbose markdown files no one will read.
Having a checklist for an AI to follow makes sense, but that's why agents.md exists. Once the coding patterns and NFRs are documented in it, the agent follows them as well as they would follow a separate markdown spec.
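As an illustration of the "features as code" idea, here is a minimal sketch of one behaviour documented as an executable pytest test instead of a markdown spec; the module, the discount rule, and all names are hypothetical:

```python
# bdd_style_feature.py -- a sketch of documenting a feature as an executable test
# rather than a markdown spec. The names and the discount rule are hypothetical.
from dataclasses import dataclass, field

import pytest


@dataclass
class Cart:
    returning: bool
    items: list = field(default_factory=list)

    def add(self, name: str, price: float) -> None:
        self.items.append((name, price))


def apply_discount(cart: Cart) -> float:
    # Hypothetical rule: returning customers get 10% off the cart total.
    total = sum(price for _, price in cart.items)
    return total * 0.9 if cart.returning else total


def test_returning_customer_gets_ten_percent_off():
    # Given a returning customer with two items in their cart
    cart = Cart(returning=True)
    cart.add("book", price=20.0)
    cart.add("mug", price=10.0)

    # When the discount policy is applied at checkout
    total = apply_discount(cart)

    # Then the total reflects the loyalty discount
    assert total == pytest.approx(27.0)
```

The Given/When/Then comments carry the behaviour description, and the green/red test result replaces a status field in a document.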
Now I'm left trying to define/design what a "spec" for communication between humans and coding agents would look like, to power what Birgitta called spec anchored.
I feel that now, with AI, this is something we finally have to do: define how we write out a spec and record an architecture semi-formally, in a way that is human-readable and human-manageable. And in a way that can 1) be consumed partially by an LLM context, rather than entirely (because it may be too big), and 2) have that partial ingestion be enough for it to do real work, either on the spec itself or on the code, without deviating from the core intentions and architecture.
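As a rough sketch of what "partially consumable" could mean in practice (the structure and names here are my own illustration, not from the article): the spec is broken into addressable sections that each carry intent and constraints, and only the sections relevant to the task at hand are loaded into the agent's context.

```python
# spec_sections.py -- an illustrative sketch of a spec split into addressable,
# intent-carrying sections so an agent can load only the parts relevant to a task.
from dataclasses import dataclass


@dataclass
class SpecSection:
    id: str                  # stable handle, e.g. "billing/invoicing"
    intent: str              # why this part of the system exists
    constraints: list[str]   # architectural rules the agent must not violate
    details: str             # the longer human-readable description


SPEC = [
    SpecSection(
        id="billing/invoicing",
        intent="Customers receive one invoice per billing period.",
        constraints=["Invoices are immutable once issued."],
        details="...",
    ),
    SpecSection(
        id="auth/sessions",
        intent="Sessions expire; re-authentication is cheap for the user.",
        constraints=["No session state stored in the client beyond a token."],
        details="...",
    ),
]


def context_for(task_keywords: set[str], budget_sections: int = 3) -> str:
    """Naive keyword retrieval: return only the sections that mention the task,
    so the agent sees the relevant intents and constraints, not the whole spec."""
    scored = sorted(
        SPEC,
        key=lambda s: -len(task_keywords & set(s.intent.lower().split() + s.id.split("/"))),
    )
    picked = scored[:budget_sections]
    return "\n\n".join(
        f"[{s.id}]\nIntent: {s.intent}\nConstraints: {s.constraints}" for s in picked
    )


if __name__ == "__main__":
    print(context_for({"invoice", "billing"}))
```

A real system would use better retrieval than keyword overlap, but the point is that each section is small enough, and carries enough intent, to be useful on its own.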
We tried and failed with the UML and Rational Rose type stuff, I think because it didn't record intentions well enough, was mostly pictures and not words, and seemed to be something you would create after you finished a project rather than something that filled in the details and guided you while you were building it. Hence, the whole idea fell away because it wasn't useful for anything but documentation, maintenance, or refactoring; you were already selling the product before the spec became at all useful.
I'm left looking at vague leftfield ideas like https://c4model.com/.
Guess what: as memory banks grow or accumulate, the AI gets confused and doesn't quite deliver.
So far, a human who actually knows their product still prevails and is necessary to actually guide any AI effort. AIs have been trying to bullshit me so much it's not even funny any longer. Of course they all apologize and figure out reality when I guide them, but that doesn't change the facts. And I simply can't read all the documents the AIs write for themselves in order to correct them all; even if I did, I wouldn't be confident they'd improve enough to justify spending that mind-bogglingly boring amount of time helping the thing that's supposed to take my job ...
The right way to do "memory" is to feed it to a "metacognition/default mode" network that builds a theory-of-mind / task-ideation structure asynchronously from the main agent, then injects context-relevant steering into the agent for each prompt based on this metamodel. So, "agentic memory" basically.
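A minimal sketch of that split, assuming a background worker that distils events into a small task model and a short per-prompt steering note (everything here, including the names, is illustrative):

```python
# agentic_memory.py -- a sketch of the "metacognition alongside the main agent" idea:
# a background worker folds raw events into a small task model, and each new prompt
# receives a short steering note derived from that model instead of the full history.
import asyncio
from dataclasses import dataclass, field


@dataclass
class TaskModel:
    goal: str = ""
    decisions: list[str] = field(default_factory=list)       # settled choices to respect
    open_questions: list[str] = field(default_factory=list)

    def steering_note(self) -> str:
        return (
            f"Goal: {self.goal}\n"
            f"Settled decisions (do not revisit): {'; '.join(self.decisions) or 'none'}\n"
            f"Open questions: {'; '.join(self.open_questions) or 'none'}"
        )


class Metacognition:
    """Runs asynchronously from the main agent loop and keeps the model small."""

    def __init__(self) -> None:
        self.model = TaskModel()
        self.events: asyncio.Queue[str] = asyncio.Queue()

    async def run(self) -> None:
        while True:
            event = await self.events.get()
            # In a real system this would be an LLM call that summarises the event
            # into the task model; here we simply record it as a settled decision.
            self.model.decisions.append(event)


async def main() -> None:
    meta = Metacognition()
    meta.model.goal = "Add invoicing to the billing service"
    asyncio.create_task(meta.run())

    await meta.events.put("Chose PostgreSQL for invoice storage")
    await asyncio.sleep(0.01)  # let the background worker consume the event

    user_prompt = "Implement the invoice PDF export."
    # The main agent sees only the distilled steering note, not the raw memory bank.
    print(meta.model.steering_note() + "\n\n" + user_prompt)


if __name__ == "__main__":
    asyncio.run(main())
```

The main agent never sees the raw memory bank; it only receives the distilled note, which is what keeps a growing history from confusing it.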
Kiro, your new corporate project manager.
Waterfall anyone?!
[0] https://en.wikipedia.org/wiki/Big_design_up_front
[1] https://agilemanifesto.org/
Whereas agile doesn't care what language you build your software in. It's about taking managers out of the picture and encouraging developers to get involved with what are normally considered "managerial" tasks. The 12 Principles go into more detail about the things developers might need to do if there are no managers.
But I don't trust LLMs to program anything critical, and only do sandboxes/tests/demos. Things where code quality is less important.
This gives you a verifiable set of spec documents (BDD reports for integration tests, acceptance tests, domain requirements, etc., with green/red status) to iterate and collaborate on, without requiring undue upfront work separated from the actual product. 'Agile', JIT, YAGNI-aware specifications, no waterfall necessary.
The question of how much detail to include in a spec is really hard. We actually split it into two levels - an input prompt describing details the user cares about in that component and an output spec describing what was built to allow verification.
Also in case somebody wants to try a spec-as-source tool, we'd love feedback: https://specific.dev
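For a sense of what that two-level split might look like, here is a rough sketch as plain data plus a crude verification pass; the field names are mine, not taken from specific.dev:

```python
# two_level_spec.py -- a sketch of splitting a component spec into the user-facing
# input prompt and a machine-checkable output spec. All names are illustrative.
from dataclasses import dataclass


@dataclass
class InputPrompt:
    """What the user actually cares about for this component."""
    component: str
    intent: str
    must_haves: list[str]


@dataclass
class OutputSpec:
    """What was built, in a form that can be verified after generation."""
    component: str
    public_api: list[str]    # functions/endpoints the component exposes
    behaviours: list[str]    # statements a reviewer or test suite can check
    out_of_scope: list[str]


def verify(output: OutputSpec, required: InputPrompt) -> list[str]:
    """Return the must-haves that no recorded behaviour mentions: a crude check
    that the output spec still covers the user's intent."""
    gaps = []
    for need in required.must_haves:
        if not any(need.lower() in b.lower() for b in output.behaviours):
            gaps.append(need)
    return gaps


if __name__ == "__main__":
    prompt = InputPrompt(
        component="rate-limiter",
        intent="Protect the API from abusive clients.",
        must_haves=["per-client limits", "429 responses"],
    )
    built = OutputSpec(
        component="rate-limiter",
        public_api=["allow(client_id) -> bool"],
        behaviours=[
            "Enforces per-client limits of 100 req/min",
            "Exceeding the limit returns 429 responses",
        ],
        out_of_scope=["distributed rate limiting"],
    )
    print(verify(built, prompt))  # -> [] when every must-have is covered
```

The useful property is that the output spec is something a reviewer or a test generator can check against the input prompt, which gives the "how much detail" question a concrete anchor.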
Anything short of that and the spec is the spec, the source is the source.
Now you get to learn about what good code looks like, like the rest of us!
And I think that opens up a very interesting question about quality. If the human never touches the code, then "good code" gets replaced with "good specs" instead, and I don't think anybody knows what constitutes good specs in that context right now!
A design document can add color, but only the code tells you what the application does
Always made it too complex, and at some point it wasn't worth correcting it anymore.
I don't like the part that tries to leave no knot untied, which creates that sledgehammer for cracking a nut, as mentioned in the article. But I am sure it's easy to add another custom slash command like "/experiment" or "/stub" that would bring those context-management benefits without the bloat, in situations where you don't yet know what you want to build or how.
And then maybe "/wrap-up" to tie all the untied knots once you're sufficiently happy. Kinda like a surgeon stepping aside after the core part of the operation.
So much simpler to just iterate without the puzzle box of tasks. "a sledgehammer to crack a nut"
I've been using SpecKit for the last two weeks with Claude Code, on two different projects. Both are new code bases. It's just me coding on these projects, so I don't mind experimenting.
The first one was just SpecKit doing its thing. It took about 10 days to complete all the tasks and call the job done. When it finished, there was still a huge gap: most tests were failing, and the build was not successful. I had to spend an equally long, excruciating time guiding it on how to fix the tests. This was a terrible experience, and my confidence in the code is low because Claude kept rewriting and patching it, with fixes to one thing breaking another.
For the second project, I wanted to iterate in smaller chunks. So after SpecKit finished its planning, I added a few slash commands of my own. 1) generate a backlog.md file based on tasks.md so that I don't mess with SpecKit internals. 2) plan-sprint to generate a sprint file with a sprint goal and selected tasks with more detail. 3) implement-sprint broadly based on the implement command.
This setup failed as the implement-sprint command did not follow the process despite several revisions. After implementing some tasks, it would forget to create or run tests, or even implement a task.
I then modified the setup and created a subagent to handle task-specific coding. This is easy, as all the context is stored in SpecKit files. The implement-sprint command functions as an orchestrator. This is much more manageable because I get to review each sprint rather than the whole project. There are still many cases where it declares the sprint done even though tests still fail. But it's much easier to fix, and my level of trust in the code is significantly higher.
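For illustration, a minimal sketch of that orchestrator/subagent arrangement, with a hard gate on the test run so a sprint can't be declared done while tests fail (the subagent call is a placeholder, not SpecKit's or Claude Code's actual API):

```python
# sprint_orchestrator.py -- a sketch of the orchestrator/subagent split described
# above: the orchestrator walks the sprint's tasks, hands each one to a coding
# subagent, and refuses to mark the sprint done until the test suite is green.
import subprocess
from dataclasses import dataclass


@dataclass
class Task:
    id: str
    description: str
    done: bool = False


def run_coding_subagent(task: Task) -> None:
    """Placeholder for dispatching one task to a coding subagent with the relevant
    SpecKit context. A real version would invoke the agent CLI/API."""
    print(f"[subagent] implementing {task.id}: {task.description}")


def tests_pass() -> bool:
    # The orchestrator trusts the test runner's exit code, not the agent's claim.
    result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return result.returncode == 0


def run_sprint(tasks: list[Task], max_fix_rounds: int = 3) -> bool:
    for task in tasks:
        run_coding_subagent(task)
        task.done = True

    # Gate: the sprint is only done when tests actually pass.
    for attempt in range(max_fix_rounds):
        if tests_pass():
            print("Sprint complete: all tasks implemented and tests green.")
            return True
        print(f"Tests failing (attempt {attempt + 1}); sending failures back to the subagent.")
        run_coding_subagent(Task(id="fix", description="Address failing tests"))

    print("Sprint NOT complete: tests still failing after fix rounds.")
    return False
```

Trusting the test runner's exit code rather than the agent's own "done" claim is the part that addresses declaring a sprint finished while tests still fail.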
My hypothesis now is that Claude is bad at TDD. It almost always has to go back and fix the tests, not the implementation. My next experiment is going to be creating the tests after the implementation. This is not ideal, but at this point I'd rather gain velocity; as it stands, it would be faster for me to just code it myself.