Turning Claude Code Into My Best Design Partner
Posted 5 months ago · Active 4 months ago
betweentheprompts.com · Tech · Story · High profile
Tone: calm, positive · Debate: 40/100
Key topics
AI-Assisted Development
Claude Code
Software Engineering
The author shares their experience using Claude Code as a design partner, and the discussion revolves around the benefits and challenges of using AI-assisted development tools.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 29m · Peak period: 71 comments in 0-12h · Avg / period: 16.8
Comment distribution based on 84 loaded comments
Key moments
- Story posted: Aug 24, 2025 at 4:06 AM EDT (5 months ago)
- First comment: Aug 24, 2025 at 4:35 AM EDT (29m after posting)
- Peak activity: 71 comments in the 0-12h window (hottest window of the conversation)
- Latest activity: Aug 30, 2025 at 6:22 AM EDT (4 months ago)
ID: 45002315 · Type: story · Last synced: 11/20/2025, 4:53:34 PM
Want the full context?
Read the primary article or dive into the live Hacker News thread when you're ready.
My usage is nearly the same as OP's: plan, plan, plan, save it as a file, then start a new context and let it rip.
That's the one thing I'd love: a good CLI (currently using Charm and CC) that lets me have an implementation model, a plan model and (possibly) a model per sub-agent, mainly so I can save money by using local models for implementation and online models for planning or generation, or even swapping back. Charm has been the closest I've used so far, letting me swap back and forth without losing context. But the parallel sub-agent feature is probably one of the best things Claude Code has.
(Yes I'm aware of CCR, but could never get it to use more than the default model so :shrug:)
They are well documented because you need context for the LLM to perform well. And they are well tested because the cost of producing tests got lower, since they can be half generated, while the benefit of having tests got higher, since they are guard rails for the machine.
People constantly say code quality is going to plummet because of those tools, but I think the exact opposite is going to happen.
IOW there’s very clear ROI on docs today; that wasn’t so earlier.
This is the downside of living in a world of tweets, hot takes and content generation for the sake of views. Prompt engineering was always important, because GIGO has always been a ground truth in any ML project.
This is also why I encourage all my colleagues and friends to try these tools out from time to time. New capabilities become apparent only when you try them out. What didn't work 6mo ago has a very good chance of working today. But you need a "feel" for what works and what doesn't.
I also value much more examples, blogs, gists that show a positive instead of a negative. Yes, they can't count the r's in strawberry, but I don't need that! I don't need the models to do simple arithmetic wrong. I need them to follow tasks, improve workflows and help me.
Prompt engineering was always about getting the "google-fu" of 10-15 years ago rolling, and then keeping up with what's changed, what works and what doesn't.
But I’m negatively surprised by how much money CC costs. A simple refactoring cost me about 5 min plus 15 min of review and $4; had I done it myself it might have taken 15-20 min as well.
How much money do you typically spend on features using CC? Nobody seems to mention this
That being said, Claude Code produces the best code I've seen from an AI coding agent, and the subscriptions are a steal.
Could you spend that 15-20min on some other task while this one works in the background?
I treat it as pair programming with a junior programmer on speed!
Due to oversupply. First you needed humans who can code. But now you need scalable compute.
The equivalent would be hiring people to wave a flag in front of a car. They were replaced by modern cars, but you don't get to capture the flag waver's wage as value for long, if at all.
https://support.anthropic.com/en/articles/11145838-using-cla...
It is a project for model based governance of Databricks Unity Catalog, with which I do have quite a bit of experience, but none of the tooling feels flexible enough.
Eventually I ended up with three different subagents that supported the development of the actual planning files: a Databricks expert, a Pydantic expert, and a prompt expert.
With their aid, the improvement in the markdown files was rather significant, ranging from old Pydantic versions and inconsistencies to misconceptions I had about Unity Catalog.
Yesterday eve I gave it a run and it ran for about 2 hours with me only approving some tool usage, and after that most of the tools + tests were done.
This approach is so different from how I used to do it, but I really do see a future in detailed technical writing and ensuring we're all on the same page. In a way I found it more productive than going into the code itself. A downside I found is that when reading and working on code I really zone in; with a bunch of markdown docs I find it harder to stay focused.
Curious times!
Waterfall ~1970, Agile ~2001, Continuous (DevOps) ~2015, Autonomous Dev ~2030, Self-Evolving Systems ~2040, Goal-Directed Ecosystems ~2050+
The idea that waterfall was what everyone did before agile is a myth. AI reproduces the recent popular viewpoint as truth whether it is true or not.
I plan to do a more detailed write-up sometime next week or the week after, when I've "finished" my 100% vibe-coded website.
This kind of AI-driven development feels very similar to that. By forcing you to sit down and map the territory you're planning to build in, the coding itself becomes secondary, just boilerplate to implement the design decision you've made. And AI is great at boilerplate!
To me it's always felt like waterfall in disguise and just didn't fit how I make programs. I feel it's just not a good way to build a complex system with unknown unknowns.
That the AI design process seems to rely on this same pattern feels off to me, and shows a weakness of developing this way.
It might not matter, admittedly. It could be that the flexibility of having the AI rearchitect a significant chunk of code on the fly works as a replacement to the flexibility of designing as you go.
It’s a total waste of time to do TDD only to find out you made a bad design choice or discovered a conflicting problem.
TDD ought to let you make a bad design decision and then refactor it while keeping the tests as they are.
TDD states the opposite.
TDD is very hard to do right and takes a lot of discipline. If I hadn’t worked for a company that did 100% XP, I wouldn’t have believed it could be so effective either. (Best enterprise software I’ve ever seen and written.)
In a way, it is funny. You can practise XP with your AI as pair.
Maybe it should be done that way with AI: experiment with AI if you need to, then write a plan with AI, then let the AI do the implementation.
I'm deep into the crank/lone genius territory with my BitGrid project, doing things with code that nobody else would be silly enough to waste time on. If it's just copy/pasting code from some latent space, I have no idea where it's getting it.
When's the last time you wrote code to directly handle all the bits to multiply two floating point numbers? I've never done it.
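(For the curious, here is a minimal sketch of what that bit-level work involves, written in Rust for illustration. It handles only normal, finite inputs and truncates instead of rounding, so it is not a conforming IEEE 754 multiply, just the shape of one.)

```rust
// Minimal sketch of multiplying two f32 values via their bit patterns.
// Assumes normal, finite inputs; no rounding, no overflow/NaN/subnormal handling.
fn mul_f32_bits(a: f32, b: f32) -> f32 {
    let (ba, bb) = (a.to_bits(), b.to_bits());

    // Split into sign (1 bit), biased exponent (8 bits), mantissa (23 bits).
    let sign = (ba ^ bb) & 0x8000_0000;
    let ea = ((ba >> 23) & 0xFF) as i32;
    let eb = ((bb >> 23) & 0xFF) as i32;
    // Restore the implicit leading 1 of normal numbers (24-bit significands).
    let ma = ((ba & 0x007F_FFFF) | 0x0080_0000) as u64;
    let mb = ((bb & 0x007F_FFFF) | 0x0080_0000) as u64;

    // 24-bit x 24-bit product has 46 fractional bits and lies in [1.0, 4.0).
    let mut product = ma * mb;
    let mut exp = ea + eb - 127; // un-bias one of the two exponents

    // Normalize back to a 24-bit significand; a carry bumps the exponent.
    if product & (1u64 << 47) != 0 {
        product >>= 24;
        exp += 1;
    } else {
        product >>= 23;
    }

    let mantissa = (product as u32) & 0x007F_FFFF; // drop the implicit 1 again
    f32::from_bits(sign | ((exp as u32) << 23) | mantissa)
}

fn main() {
    let (a, b) = (3.5_f32, -1.25_f32);
    println!("{} vs {}", mul_f32_bits(a, b), a * b); // both print -4.375
}
```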
It's interesting because I remember having discussions with a colleague who was a fervent proponent of TDD where he said that with that approach you "just let the tests drive you" and "don't need to sit down and design your system first" (which I found a terrible idea).
The idea is that you let the code drive the system and do not optimize prematurely. Sometimes developers design parts that are not needed, and often not in the first phase.
It is a way to evolve a system unbiased. Of course there is a trade-off: if the refactoring/change is very expensive, like a database schema change in production, it is good to spend a lot on upfront design. It takes experience to see where you can relax and where you need to be careful.
One of the goals I remember was to think from the outside in: you first create a test which asserts your outermost API as a unit test, with inputs as you want to use it.
Now you keep adding functionality until it passes, creating new tests whenever you make a new boundary/function/API
This supposedly makes it easier to design it well, because you don't have to keep everything in scope and instead only tackle one layer at a time - like an onion from the outside to the core. And you always design the APIs as they make sense, not as is technically easiest because you can just define it however you want, and then think about implementing it.
E.g. https://www.infoq.com/presentations/tdd-ten-years-later/
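As a hedged sketch of that outside-in flow in Rust (the function and its behaviour are invented purely for illustration): the test against the outermost API is written first, then the implementation is filled in until it passes, with inner helpers getting their own tests as new boundaries appear.

```rust
/// Hypothetical outer API, pinned down by the test below before it was implemented:
/// parse strings like "90s" or "5m" into a number of seconds.
fn parse_duration(input: &str) -> Option<u64> {
    let (value, unit) = input.split_at(input.len().checked_sub(1)?);
    let value: u64 = value.parse().ok()?;
    match unit {
        "s" => Some(value),
        "m" => Some(value * 60),
        _ => None,
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    // Written first: asserts the outermost API with inputs as we want to use it.
    #[test]
    fn parses_minutes_and_seconds() {
        assert_eq!(parse_duration("90s"), Some(90));
        assert_eq!(parse_duration("5m"), Some(300));
        assert_eq!(parse_duration("oops"), None);
    }
}
```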
(I've certainly seen it done though, with predictable results.)
On the contrary, in my experience it's much more important to "play" with a concept and see it working. Too many engineers think they're going to architect a perfect solution without ever getting code on the page.
A slapdash prototype is worth a hundred tests and arch diagrams.
Note: I'm not saying the latter is not important. My comment is, it's ok (and encouraged) to do potentially throwaway work to understand the domain better.
They sure do in my experience.
> On the contrary, in my experience it's much more important to "play" with a concept and see it working...
I agree with all that. That's the point: figure out what you're trying to do before building it. Of course you will not know everything up-front, and of course you would try things out to learn and progress, and, for anything that it's tiny, of course it makes sense to do this iteratively, working from the most pressing/important/risky points earlier.
Or, at least, it seems obvious to me.
In my anecdotal case: I behave like the former in some cases (crafting) and the latter in others (travel planning)
I wouldn't say one way is always better than the other.
https://pages.cs.wisc.edu/~remzi/Naur.pdf
The act of programming is building the theory of what the program does, so that you acquire new knowledge of doing things. It's not just text production.
>"[...] not any particular knowledge of facts, but the ability to do certain things, such as to make and appreciate jokes, to talk grammatically, or to fish."
Which is why re-building a program from scratch is so tempting: you've figured out the theory as you went along, now you can build the real thing.
Here is my playbook: https://nocodo.com/playbook/
But since then I have come to have it always write ARCHITECTURE.md and IMPLEMENTATION.md when doing a new feature and CLAUDE-CONTINUE.md. All three live in the resp. folder of the feature (in my case, it's often a new crate or a new module, as I write Rust).
The architecture one is usually the result of some back and forth with CC much like the author describes it. Once that is nailed it writes that out and also the implementation. These are not static ofc, they may get updated during the process but the longer you spend discussing with CC and thinking about what you're doing the less likely this is necessary. Really no surprise there -- this works the same way in meat space. :]
I have an instruction in CLAUDE.md that it should update CLAUDE-CONTINUE.md with the current status, referencing both the other documents, when the context is 2% away from getting compacted.
After the compact it reads the resp. CLAUDE-CONTINUE.md (automatically, since it's referenced in CLAUDE.md) and then usually continues as if nothing happened. Without this my mileage varied, as it often needs to read a lot of code (again) first and recalibrate to which parts of the architecture and implementation it had already done before the compact.
I often also have it write out stuff that is needed in dependencies that I maintain or that are part of the project so then it creates ARCHITECTURE-<feature>-<crate>.md and I just copy that over to the resp. repo and tell another CC instance there to write the implementation document and send it off.
A lot of stuff I do is done via Terry [1] and this approach has worked a treat for me. Shout out to these guys, they rock.
Edit: P.S. I have 30+ years of R&D experience in my field, so I have a deep understanding of what I do (computer graphics and systems programming, mostly). I have quite a few friends with a decade or less of R&D experience and they struggle to get the same amount of shit done with CC or AI.
The models are not there yet; you need the experience. I also mainly formulate concisely what I want and what the API should look like, and then go back and forth with CC, rather than starting with a few fuzzy sentences and crossing my fingers that what it comes up with is something I may like and can then mold a tad.
I also found that not getting weird bugs that the model may chase for several "loops" seems correlated with the amount of statically-typed code. I.e. I've been recently working on a Python code base that interfaces with Rust, and the number of times CC shot itself in the foot because it assumed a foo was a [foo], and stuff like that, is just astounding. This obviously doesn't happen in Rust: the language/compiler catches it and the model 'knows' it can't get away with it, so it seems to exercise more rigor (but I may be 'hallucinating' that).
TLDR; I came to the conclusion that statically-typed languages get you higher returns with these models for this reason.
[1] https://www.terragonlabs.com/
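To make the foo-vs-[foo] point concrete, here is a small hedged illustration in Rust (the names are invented): the signature alone rules out the confusion at compile time, which is exactly the feedback a coding agent never gets from untyped Python until runtime.

```rust
// Invented example: a function that expects a slice of records, not a single one.
struct Record {
    id: u32,
}

fn total_ids(records: &[Record]) -> u32 {
    records.iter().map(|r| r.id).sum()
}

fn main() {
    let one = Record { id: 7 };
    let many = vec![Record { id: 1 }, Record { id: 2 }];

    println!("{}", total_ids(&many));                      // 3: &Vec<Record> coerces to &[Record]
    println!("{}", total_ids(std::slice::from_ref(&one))); // 7: a single Record must be wrapped explicitly
    // total_ids(&one); // rejected by the compiler: expected `&[Record]`, found `&Record`
}
```

In the equivalent untyped Python, passing the single record would only blow up (or silently misbehave) at runtime, which is the loop the model ends up chasing.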
It details what algorithms/approaches to use, etc. The reason is that often a single context is not enough, and when CC continues, CLAUDE-CONTINUE.md tells the model what it should do next, not the why (architecture) or the how (implementation).
The architecture file is usually more abstract/high level and may also contain info about how the stuff integrates with other parts of the codebase etc.
I have something similar to that where all I do is list out the key types, structs, enums and traits, accompanied by comments describing what they are. I broke it down into four sections corresponding to different layers of abstraction.
But I noticed that over time the LLM will puff up the size and start putting implementations into it, so some prompting discipline is required to keep things terse and inline.
Is your ARCHITECTURE.md similar to mine or is it more like a UML diagram or perhaps an architectural spec in a DSL?
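For comparison, a hedged sketch of the kind of type skeleton described above, with invented names, only two of the layers shown, and the implementations deliberately left out:

```rust
// Layer 1: domain model.

/// A unit of work tracked by the tool.
pub struct Job {
    pub id: u64,
    pub state: JobState,
}

/// The states a job can move through.
pub enum JobState {
    Queued,
    Running,
    Done,
    Failed(String),
}

// Layer 2: behaviour boundaries.

/// Anything that can persist and look up jobs (database, in-memory store, ...).
pub trait JobStore {
    fn put(&mut self, job: Job) -> Result<(), StoreError>;
    fn get(&self, id: u64) -> Result<Option<Job>, StoreError>;
}

/// Error type shared by all store implementations.
pub struct StoreError {
    pub message: String,
}
```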
For anything bigger than small size features, I always think about what I do and why I do things. Sometimes in my head, sometimes on paper, a Confluence page or a white board.
I don't really get it. 80% of software engineering is figuring out what you need and how to achieve it. You check with the stakeholders, write down the idea of what you want to do and WHY you want to do it. You do some research.
The last 20% of the process is coding.
This was always the process. You don't need AI for proper planning and defining your goals.
In almost all these cases, the development process is a mix of coding and discovering, updating the mental model of the code as you go. It almost never starts with docs, specs or tests. Some projects are good for TDD, but some don't even need it.
And even for these use cases, using AI coding agents changes the game. Now it really does matter to first describe the idea, put it into a spec, and verbalize everything in your head that you think will matter for the project.
Nowadays, the hottest programming language is English, indeed.
And of course for most things, there's a pretty obvious way it's probably going to work, no need to spend much time on that.
You can get an extra 15-20% out of it if you also document the parts of the codebase you expect to change first. Let the plan model document how it works, architecture and patterns. Then plan your feature with this in the context. You'll get better code out of it.
Also, make sure you review, revise and/or hand edit the docs and plans too. That pays significant dividends down the line.
So; I have Gemini write up plans for something, having it go deep and be as explicit as possible in its explanations.
I feed this into CC and have it implement the change in my code base. This has been really strong for me in making new features or expanding upon others where I feel something should be considerably improved.
The product I’ve built from the ground up over the last 8 weeks is now in production and being actively demoed to clients. I am beyond thrilled with my experience and its output. As I’ve mentioned before on HN, we could have done much of this work ourselves with our existing staff, but we could not have done the front-end work. What I feel might have taken well over a year and way more engineering and data science effort was mostly done in 2 months. Features are added in seconds rather than hours.
I’m amazed by CC and I love reading these articles which help me to realize my own journey is being mirrored by others.
Assumptions without evaluation are not trustworthy.
With OpenAI I find ChatGPT just slows to a crawl and the chat becomes unresponsive. Asking it to make a document, to import into a new chat, helps with that.
On a human level, it makes me think that we should do the same ourselves. Reflect, document and dump our ‘memory’ into a working design doc. To free up ourselves, as well as our LLMs.
Need sales forecasting? This used to be an enterprise feature that 10 years ago would have needed a large team to implement correctly. Claude implements a docker container in one afternoon.
It really changes how I see software now. Before, there were NDAs and intellectual property, and companies took great care not to leak their source code.
Now things have changed. Have a complex ERP system that took 20 years to develop? Well, Claude can re-implement it in a flash. And write documentation and tests for it. Maybe it doesn't work quite that well yet, but things are moving fast.
Doing it for the LLM really highlights that limitation. They aren't trained statefully, not at the foundation-model level, where it matters. That state gets reproduced on top of the model in the form of "reasoning" and "chain of thought", but that level of scaffolding is a classic example of the bitter lesson, like the semantic trees of old.
The representation-learning + transformer model needs to evolve to handle state; then it should be able to do these things itself.
Having tried non-agent LLMs for code, things tend to break and quickly devolve. The agent mode of working with LLMs to build code is a step change improvement for me. I'm not a python programmer, but have been working on a pile of new code that it's built for me, and I'm fairly impressed at what's been achieved in the past week.
Once I get done, and can run a small LLM in my emulated BitGrid, then I'll back off and try to grok the code. It's been a series of small exploratory steps, with a few corrections on my part to keep the overall design going where I want. I'm much more hopeful about the future of "LLM as programming buddy", now that I've actually used an agent like this.
Does anyone else here use the Visual Studio Code/ChatGPT5 combo?
It becomes a collaborative design partner.”
You’re completely right!
You can't.
So I tried to reach support. There's no email, no phone number, just a THIRD-PARTY AI chatbot.
Well guess what, the Send Message button in the text field is disabled.
This is infuriating and puts me off the whole product and maybe I'll just file a chargeback.