The Highest Quality Codebase
Key topics
As developers weigh in on what makes a high-quality codebase, a lively debate unfolds around the capabilities and limitations of AI coding assistants like Claude. Some users praise Claude's ability to save time, while others note that it struggles with context overload and becomes less effective as tasks grow more complex. A consensus emerges that Claude excels with small, focused tasks, but can become a hindrance when asked to handle larger, more nuanced projects, prompting users to spend more time specifying requirements than writing code themselves. The discussion highlights the trade-offs of relying on AI-powered coding tools and the importance of understanding their strengths and weaknesses.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 53m after posting
Peak period: 105 comments in the 66-72h window
Avg / period: 20 comments
Based on 160 loaded comments
Key moments
- Story posted: Dec 8, 2025 at 4:33 PM EST (about 1 month ago)
- First comment: Dec 8, 2025 at 5:25 PM EST (53m after posting)
- Peak activity: 105 comments in 66-72h, the hottest window of the conversation
- Latest activity: Dec 12, 2025 at 7:45 PM EST (29 days ago)
I disagree, it's very useful even in languages that have exception-throwing conventions. It's good enough to be the return type of the Promise.allSettled API.
The problem is that when I don't have the result type I end up approximating it anyway in other ways. For a quick project I'd stick with exceptions, but depending on my codebase I usually use the Go-style (ok, err) tuple (it's usually clunkier in TS though) or a Rust-style Result ok/err enum.
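A minimal sketch of the Rust-style result type described above, in TypeScript (the `Result` and `tryCatch` names are illustrative, not from the thread); the discriminated union is roughly the same shape Promise.allSettled already returns:

```ts
// Rust-style Result as a discriminated union.
type Result<T, E = Error> =
  | { ok: true; value: T }
  | { ok: false; error: E };

// Wrap a throwing function at a library boundary so callers never need try/catch.
function tryCatch<T>(fn: () => T): Result<T> {
  try {
    return { ok: true, value: fn() };
  } catch (e) {
    return { ok: false, error: e instanceof Error ? e : new Error(String(e)) };
  }
}

// Usage: the type system forces the caller to handle both branches.
const parsed = tryCatch(() => JSON.parse('{"a": 1}') as { a: number });
if (parsed.ok) {
  console.log(parsed.value.a);
} else {
  console.error(parsed.error.message);
}
```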
Embracing a functional style in TypeScript is probably the most productive I've felt in any mainstream programming language. It's a shame that the language was defiled with try/catch, classes and other unnecessary cruft so third party libraries are still an annoying boundary you have to worry about, but oh well.
The language is so well-suited for this that you can even model side effects as values, do away with try/catch, if/else and mutation a la Haskell, if you want[1].
[1] https://effect.website/
I think it suffers from performance anxiety...
----
The only solution I have found is to - rewrite the prompt from scratch, change the context myself, and then clear any "history or memories" and then try again.
I have even gone so far as to open nested folders in separate windows to "lock in" scope better.
As soon as I see the agent say "Wait, that doesn't make sense, let me review the code again" it's cooked.
It's REAL FUCKING TEMPTING to say "hey Claude, go do this thing that would take me hours and you seconds" because he happily will, and it'll kinda work. But one way or another you are going to put those hours in.
It’s like programming… is proof of work.
Vibe coding though, super deceptive!
All LLMs degrade in quality as soon as you go beyond one user message and one assistant response. If you're looking for accuracy and highest possible quality, you need to constantly redo the conversations from scratch, never go beyond one user message.
If the LLM gets it wrong in its first response, instead of saying "No, what I meant was...", you need to edit your first message and re-generate, otherwise the conversation becomes "poisoned" almost immediately, and every token generated after that will suffer.
I'm not sure there's much to learn here, besides it's kinda fun, since no real human was forced to suffer through this exercise on the implementor side.
Which describes a lot of outsourced development. And we all know how well that works
You can improve processes and teach the humans. The junior will become a senior, in time. If the processes and the company are bad, what's the point of using such a context to compare human and AI outputs? The context is too random and unpredictable. Even if you find out AI or some humans are better in such a bad context, what of it? The priority would be to improve the process first for best gains.
A human with no training will perform worse.
Yes.
Take away everything else: there's a product that is really good at small tasks, but that doesn't mean that chaining those small tasks together to make a big task should work.
I don't mean the code producers, I mean the enterprise itself is not intelligent yet it (the enterprise) is described as developing the software. And it behaves exactly like this, right down to deeply enjoying inflicting bad development/software metrics (aka BD/SM) on itself, inevitably resulting in:
https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
Your point stands uncontested by me, but I just wanted to mention that humans have that bias too.
Random link (has the Nature study link): https://blog.benchsci.com/this-newly-proven-human-bias-cause...
https://en.wikipedia.org/wiki/Additive_bias
"Hey claude, I get this error message: <X>", and it'll often find the root cause quicker than I could.
"Hey claude, anything I could do to improve Y?", and it'll struggle beyond the basics that a linter might suggest.
It enthusiastically suggested a library for <work domain> and was all "Recommended" about it, but when I pointed out that the library had been considered and rejected because <issue>, it understood and wrote up why that library suffered from that issue and why it was therefore unsuitable.
There's a significant blind-spot in current LLMs related to blue-sky thinking and creative problem solving. It can do structured problems very well, and it can transform unstructured data very well, but it can't deal with unstructured problems very well.
That may well change, so I don't want to embed that thought too deeply into my own priors, because the LLM space seems to evolve rapidly. I wouldn't want to find myself blind to the progress because I write it off from a class of problems.
But right now, the best way to help an LLM is have a deep understanding of the problem domain yourself, and just leverage it to do the grunt-work that you'd find boring.
I follow WET principles (write everything twice at least) because the abstraction penalty is huge, both in terms of performance and design, a bad abstraction causes all subsequent content to be made much slower. Which I can't afford as a small developer.
Same with most other "clean code" principles. My codebase is ~70K LoC right now, and I can keep most of it in my head. I used to try to make more functional, more isolated and encapsulated code, but it was hard to work with and most importantly, hard to modify. I replaced most of it with global variables, shit works so much better.
I do use partial classes pretty heavily though - helps LLMs not go batshit insane from context overload whenever they try to read "the entire file".
Models sometimes try to institute these clean code practices but it almost always just makes things worse.
I think, if you're writing code where you know the entire code base, a lot of the clean principles seem less important, but once you get someone who doesn't, and that can be you coming back to the project in three months, suddenly they have value.
what the person you replied to had claude do is relatively simple and structured, but to that person what claude did is "automagic".
People already vastly overestimate AI's capabilities. This contributes to that.
Very easy to write it off when it spins out on the open-ended problems, without seeing just how effective it can be once you zoom in.
Of course, zooming in that far gives back some of the promised gains.
Edit: typo
The love/hate flame war continues because the LLM companies aren't selling you on this. The hype is all about "this tech will enable non-experts to do things they couldn't do before" not "this tech will help already existing experts with their specific niche," hence the disconnect between the sales hype and reality.
If OpenAI, Anthropic, Google, etc. were all honest and tempered their own hype and misleading marketing, I doubt there would even be a flame war. The marketing hype is "this will replace employees" without the required fine print of "this tool still needs to be operated by an expert in the field and not your average non technical manager."
As we speak, my macOS menubar has an iStat Menus replacement, a Wispr Flow replacement (global hotkey for speech-to-text), and a logs visualizer for the `blocky` dns filtering program -- all of which I built without reading code aside from where I was curious.
It was so vibe-coded that there was no reason to use SwiftUI nor open them in Xcode -- just Swift files compiled into macOS apps when I nix rebuild.
The only effort it required was the energy to QA the LLM's progress and tell it where to improve.
Where do my 20 years of software dev experience fit into this except for aesthetic preferences?
Isn't that the point they are making?
I would hazard a guess that your knowledge lead to better prompts, better approach... heck even understanding how to build a status bar menu on Mac OS is slightly expert knowledge.
You are illustrating the GP's point, not negating it.
You're imagining that I'm giving Claude technical advice, but that is the point I'm trying to make: I am not.
This is what "vibe-coding" tries to specify.
I am only giving Claude UX feedback from using the app it makes. "Add a dropdown that lets me change the girth".
Now, I do have a natural taste for UX as a software user, and through that I can drive Claude to make a pretty good app. But my software engineering skills are not utilized... except for that one time I told Claude to use an AGDT because I fancy them.
Your 20 years is assisting you in ways you don't know; you're so experienced you don't know what it means to be inexperienced anymore. Now, it's true you probably don't need 20 years to do what you did, but you need some experience. It's not that the task you posed to the LLM is trivial for everyone due to the LLM, it's that it's trivial for you because you have 20 years' experience. For people with experience, the LLM makes moderate tasks trivial, hard tasks moderate, and impossible tasks technically doable but still hard.
For example, my MS students can vibe code a UI, but they can't vibe code a complete bytecode compiler. They can use AI to assist them, but it's not a trivial task at all, they will have to spend a lot of time on it, and if they don't have the background knowledge they will end up mired.
Your mom wouldn't vibe-code software that she wants, not because she's not a software engineer, but because she doesn't engage with software as a user at the level where she cares to do that.
Consider these two vibe-coded examples of waybar apps in r/omarchy where the OP admits he has zero software experience:
- Weather app: https://www.reddit.com/r/waybar/comments/1p6rv12/an_update_t...
- Activity monitor app: https://www.reddit.com/r/omarchy/comments/1p3hpfq/another_on...
I'm curious what domain you think someone must be an expert in to come up with these prompts:
- "I want a waybar app that shows me the current weather"
- "Now make it show weather in my current location"
- "Color the temperatures based on hot vs cold"
- "It's broken please find out why"
Which is a prompt that someone with experience would write. Your average, non-technical person isn't going to prompt something like that, they are going to say "make it so I can change the settings" or something else super vague and struggle. We all know how difficult it is to define software requirements.
Just because an LLM wrote the actual code doesn't mean your prompts weren't more effective because of your experience and expertise in building software.
Sit someone down in front of an LLM with zero development or UI experience at all and they will get very different results. Chances are they won't even specify "macOS menu bar app" in the prompt and the LLM will end up trying to make them a webapp.
Your vibe coding experience just proves my initial point, that these tools are useful for those who already have experience and can lean on that to craft effective prompts. Someone non-technical isn't going to make effective use of an LLM to make software.
I'm wondering if you've tried vibe coding something, because you seem to think that technical prompts are necessary to get results?
Also, you specified "non-experts". Are you saying that prompts like "make a macOS weather app for me" and "make an options menu that lets me set my location" are in the expert's domain?
The LLM prompt space is an ND space where you can start at any point, and then the LLM carves a path through the space for so many tokens using the instructions you provided, until it stops and asks for another direction. This frames LLM prompt coding as a sort of navigation task.
The problem is difficult because at every decision point, there's an infinite number of things you could say that could lead to better or worse results in the future.
Think of a robot going down the sidewalk. It controls itself autonomously, but it stops at every intersection and asks "where to next boss?" You can tell it either to cross the street, or drive directly into traffic, or do any number of other things that could cause it to get closer to its destination, further away, or even to obliterate itself.
In the concrete world, it's easy to direct this robot, and to direct it such that it avoids bad outcomes, and to see that it's achieving good outcomes -- it's physically getting closer to the destination.
But when prompting in an abstract sense, its hard to see where the robot is going unless you're an expert in that abstract field. As an expert, you know the right way to go is across the street. As a novice, you might tell the LLM to just drive into traffic, and it will happily oblige.
The other problem is feedback. When you direct the physical robot to drive into traffic, you witness its demise, its fate is catastrophic, and if you didn't realize it before, you'd see the danger then.
But the LLM will tell you anything, and as a non-expert you can't tell it has driven right into traffic. The whole output chain is now completely and thoroughly off the rails, but you can't see the smoldering ruins of your navigation instructions because it's told you "Exactly, you're absolutely right!"
Else Visual Basic and Dreamweaver would have killed software engineering in the 90s.
Also, I didn't make them. A clanker did.
I'm not sure you're interacting with a single claim I've made so far.
One under-discussed lever that senior / principal engineers can pull is the ability to write linters & analyzers that will stop junior engineers ( or LLMs ) from doing something stupid that's specific to your domain.
Let's say you don't want people to make async calls while owning a particular global resource: it only takes a few minutes to write an analyzer that will prevent anyone from doing so.
Avoid hours of back-and-forth over code review by encoding your preferences and taste into your build pipeline and stop it at source.
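For the async-call-while-owning-a-resource example, a rule along these lines is usually enough. This is a minimal sketch against the standard ESLint rule API, assuming a hypothetical `withGlobalLock(...)` helper that marks the critical section (both the helper and the rule are illustrative, not from the thread):

```ts
import type { Rule } from "eslint";

// Flags any `await` that appears inside a call to the hypothetical
// withGlobalLock(...) helper, i.e. while the global resource is held.
const noAwaitWhileLocked: Rule.RuleModule = {
  meta: {
    type: "problem",
    docs: { description: "disallow await while holding the global lock" },
    messages: {
      noAwait: "Do not await while holding the global lock; release it first.",
    },
    schema: [],
  },
  create(context) {
    return {
      AwaitExpression(node) {
        // Classic rule API: all ancestors of this `await`, outermost first.
        const ancestors = context.getAncestors();
        const insideLock = ancestors.some(
          (a) =>
            a.type === "CallExpression" &&
            a.callee.type === "Identifier" &&
            a.callee.name === "withGlobalLock"
        );
        if (insideLock) {
          context.report({ node, messageId: "noAwait" });
        }
      },
    };
  },
};

export default noAwaitWhileLocked;
```

Wire it into a local plugin and add it to the shared config, and the constraint is enforced at build time rather than in review comments.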
I am phenomenally productive this way, I am happier at my job, and its quality of work is extremely high as long as I occasionally have it stop and self-review its progress against the style principles articulated in its AGENTS.md file. (As it tends to forget a lot of rules like DRY.)
There is enough work for all of us to be handsomely paid while having fun doing it :) Just find what you like, and work with others who like other stuff, and you'll get through even the worst of problems.
For me the fun comes not from the action of typing stuff with my sausage fingers and seeing characters end up on the screen, but from basically everything before and after that. So if I can make "translate what's in my head into source on disk that something can run" faster, that's a win in my book, but not if the quality degrades too much; so: tight control over it, while still not having to use my fingers to actually type.
Having said that, I used to be deep into coding, and back then I am quite sure that I would have hated AI coding for me. I think for me it comes down to this: when I was learning about coding and stretching my personal knowledge in the area, the coding part was the fun part because I was learning. Now that I am past that part I really just want to solve problems, and coding is the means to that end. AI is now freeing because where I would have been reluctant to start a project, I am more likely to give it a go.
I think it is similar to when I used to play games a lot. When I would play a game where you would discover new items regularly, I would go at it hard and heavy up until the point where I determined there was either no new items to be found or it was just "more of the same". When I got to that point it was like a switch would flip and I would lose interest in the game almost immediately.
Most are not paid for results, they're paid for time at desk and regular responsibilities such as making commits, delivering status updates, code reviews, etc. - the daily activities of work are monitored more closely than the output. Most ESOPs grant so little equity that working harder could never observably drive an increase in its value. Getting a project done faster just means another project begins sooner.
Naturally workers will begin to prefer the motions of the work they find satisfying more than the result it has for the business's bottom line, from which they're alienated.
Wow. I've read a lot of hacker news this past decade, but I've never seen this articulated so well before. You really lifted the veil for me here. I see this everywhere - people thinking the work is the point.
https://en.wikipedia.org/wiki/Marx%27s_theory_of_alienation
This gets us to the rule number one of being successful at a job: Make sure your manager likes you. Get 8 layers of people whose priority is just to be sure their manager likes them, and what is getting done is very unlikely to have much to do with shareholder value, customer happiness, or anything like that.
I'm on the side of only enjoying coding to solve problems, and I skipped software engineering and coding for work explicitly because I did not want to participate in that dynamic of being removed from the problems. Instead I went into business analytics, and now that AI is gaining traction I am able to do more of what I love - improving processes and automation - without ever really needing to "pay dues" doing grunt work I never cared to be skilled at in the first place unless it was necessary.
Sometimes you can, sometimes you have to break the problem apart and get the LLM to do each bit separately, sometimes the LLM goes funny and you need to solve it yourself.
If enough people can make the product faster, then competition will drive the price down. But the ability to charge less is not at all an obligation to charge less.
Ultimately I wonder how long people will need devs at all if everyone can just prompt their wishes.
Some will be kept to fix the occasional hallucination, and that's it.
It's typically been productive to care about the how, because it leads to better maintainability and a better ability to adapt or pivot to new problems. I suppose that's getting less true by the minute, though.
Sometimes, you strike gold, so there's that.
And I do think there's more to it than preference. Like there's actual bugs in the code, it's confusing and because it's confusing there's more bugs. It's solving a simple problem but doing so in an unnecessarily convoluted way. I can solve the same problem in a much simpler way. But because everything is like this I can't just fix it, there's layers and layers of this convolution that can't just be fixed and of course there's no proper decoupling etc so a refactor is kind of all or nothing. If you start it's like pulling on a thread and everything just unravels.
This is going to sound pompous and terrible but honestly some times I feel like I'm too much better than other developers. I have a hard time collaborating because the only thing I really want to do with other people's code is delete it and rewrite it. I can't fix it because it isn't fixable, it's just trash. I wish they would have talked to me before writing it, I could have helped then.
Obviously in order to function in a professional environment I have to suppress this stuff and just let the code be ass, but it really irks me. Especially if I need to build on something someone else made - it's almost always ass, and I don't want to build on a crooked foundation. I want to fix the foundation so the rest of the building can be good too. But there's no time and it's exhausting fixing everyone else's messes all the time.
I'm talking about simple stuff that people just can't do right. Not complex stuff. Like imagine some perfect little example code on the react docs or whatever, good code. Exemplary code. Trivial code that does a simple little thing. Now imagine some idiot wrote code to do exactly the same thing but made it 8 times longer and incredibly convoluted for absolutely no reason, and that's basically what most "developers" do. Everyone's a bunch of stupid amateurs who can't do simple stuff right, that's my problem. It's not understandable, it's not justifiable, it's not trading off quality for speed. It's stupidity, ignorance and laziness.
That's why we have coding interviews that are basically "write fizzbuzz while we watch" and when I solve their trivial task easily everyone acts like I'm Jesus because most of my peers can't fucking code. Like literally I have colleagues with years of experience who are barely at a first year CS level. They don't know the basics of the language they've been working with for years. They're amateurs.
And most importantly I just design it well from the start, it's not that hard to do. At least for me.
Of course we all make mistakes, there are bugs in my code too. I have made choices I regret. But not on the level that I'm talking about.
I usually attribute it to people being lazy, not caring, or not using their brain.
It's quite frustrating when something is *so obviously* wrong, to the point that anyone with a modicum of experience should be able to realize that what was implemented is totally whack. Please, spend at least a few minutes reviewing your work so that I don't have to waste my time on nonsense.
> You've really hit the crux of the problem and why so many people have differing opinions about AI coding.
Part of it perhaps, but there's also a huge variation in model output. I've been getting some surprisingly bad generations from ChatGPT recently, though I'm not sure if that's ChatGPT getting worse or me getting used to a much higher quality of code from Claude Code which seems to test itself before saying "done". I have no idea if my opinion will flip again now 5.2 is out.
And some people are bad communicators, and communication is an important skill for working with LLMs, though few will recognise it because everyone knows what they themselves meant by whatever words they use.
And some people are bad planners, likewise an important skill for breaking apart big tasks that LLMs can't do into small ones they can do.
Many engineers walk a path where they start out very focussed on programming details, language choice, and elegant or clever solutions. But if you're in the game long enough, and especially if you're working in medium-to-large engineering orgs on big customer-facing projects, you usually kind of move on from it. Early in my career I learned half a dozen programming languages and prided myself on various arcane arts like metaprogramming tricks. But after a while you learn that one person's clever solution is another person's maintainability nightmare, and maybe being as boring and predictable and direct as possible in the code (if slightly more verbose) would have been better. I've maintained some systems written by very brilliant programmers who were just being too clever by half.
You also come to realize that coding skills and language choice don't matter as much as you thought, and the big issues in engineering are 1) are you solving the right problem to begin with 2) people/communication/team dynamics 3) systems architecture, in that order of importance.
And also, programming just gets a little repetitive after a while. Like you say, after a decade or so, it feels a bit like "more of the same." That goes especially for most of the programming most of us are doing most of the time in our day jobs. We don't write a lot of fancy algorithms, maybe once in a blue moon and even then you're usually better off with a library. We do CRUD apps and cookie-cutter React pages and so on and so on.
If AI coding agents fall into your lap once you've reached that particular variation of a mature stage in your engineering career, you probably welcome them as a huge time saver and a means to solve problems you care about faster. After a decade, I still love engineering, but there aren't many coding tasks I particularly relish diving into. I can usually vaguely picture the shape of the solution in my head out of the gate, and actually sitting down and doing it feels rather a bore and just a lot of typing and details. Which is why it's so nice when I can send Claude to do it instead, and review the results to see if they match what I had in mind.
Don't get me wrong. I still love programming if there's just the right kind of compelling puzzle to solve (rarer and rarer these days), and I still pride myself on being able to do it well. Come the holidays I will be working through Advent of Code with no AI assistance whatsoever, just me and vim. But when January rolls around and the day job returns I'll be having Claude do all the heavy lifting once again.
1) When I already have a rough picture of the solution to some programming task in my head up front, I do not particularly look forward to actually going and doing it. I've done enough programming that many things feel like a variation on something I've done before. Sometimes the task is its own reward because there is a sufficiently hard puzzle to solve. Mostly it is not and it's just a matter of putting in the time. Having Claude do most of the work is perfect in those cases. I don't think this is particularly anything to do with working on a ball of mud: it applies to most kinds of work on clean well-architected projects as well.
2) I have a restless mind and I just don't find doing something that interesting anymore once I have more or less mastered it. I'd prefer to be learning some new field (currently, LLMs) rather than spending a lot of time doing something I already know how to do. This is a matter of temperament: there is nothing wrong with being content in doing a job you've mastered. It's just not me.
Every time I think I have a rough picture of some solution, there's always something in the implementation that proves me wrong. Then it's reading docs and figuring out whatever gotchas I've stepped into. Or where I erred in understanding the specifications. If something is that repetitive, I refactor and try to make it simple.
> I have a restless mind and I just don't find doing something that interesting anymore once I have more or less mastered it.
If I've mastered something (And I don't believe I've done so for pretty much anything), the next step is always about eliminating the tedium of interacting with that thing. Like a code generator for some framework or adding special commands to your editor for faster interaction with a project.
1. If you don't care about code and only care about the "thing that it does when it's done", how do you solve problems in a way that is satisfying? Because you are not really solving any problem but just using the AI to do it. Is prompting more satisfying than actually solving?
2. You claim you're done "learning about coding and stretching my personal knowledge in the area" but don't you think that's super dangerous? Like how can you just be done with learning when tech is constantly changing and new things come up everyday. In that sense, don't you think AI use is actually making you learn less and you're just justifying it with the whole "I love solving problems, not code" thing?
3. If you don't care about the code, do the people who hire you for it do? And if they do, then how can you claim you don't care about the code when you'll have to go through a review process and at least check the code meaning you have to care about the code itself, right?
1. The problem solving is in figuring out what to prompt, which includes correctly defining the problem, identifying a potential solution, designing an architecture, decomposing it into smaller tasks, and so on.
Giving it a generic prompt like "build a fitness tracker" will result in a fully working product but it will be bland as it would be the average of everything in its training data, and won't provide any new value. Instead, you probably want to build something that nobody else has, because that's where the value is. This will require you to get pretty deep into the problem domain, even if the code itself is abstracted away from you.
Personally, once the shape of the solution and the code is crystallized in my head typing it out is a chore. I'd rather get it out ASAP, get the dopamine hit from seeing it work, and move on to the next task. These days I spend most of my time exploring the problem domain rather than writing code.
2. Learning still exists but at a different level; in fact it will be the only thing we will eventually be doing. E.g. I'm doing stuff today that I had negligible prior background in when I began. Without AI, I would probably require an advanced course to just get up to speed. But now I'm learning by doing while solving new problems, which is a brand new way of learning! Only I'm learning the problem domain rather than the intricacies of code.
3. Statistically speaking, the people who hire us don't really care about the code, they just want business results. (See: the difficulty of funding tech debt cleanup projects!)
Personally, I still care about the code and review everything, whether written by me or the AI. But I can see how even that is rapidly becoming optional.
I will say this: AI is rapidly revolutionizing our field and we need to adapt just as quickly.
My comment was based on you saying you don't care about the code and only what it does. But now you're saying you care about the code and review everything so I'm not sure what to make out of it. And again, I fundamentally disagree that reviewing code will become optional or rather should become optional. But that's my personal take.
I'm not the person you originally replied to, so my take is different, which explains your confusion :-)
However I do increasingly get the niggling sense I'm reviewing code out of habit rather than any specific benefit because I so rarely find something to change...
> And if you're really going too deep into the problem domain, what is the point of having the code abstracted?
Let's take my current work as an example: I'm doing stuff with computer vision (good old-fashioned OpenCV, because ML would be overkill for my case.) So the problem domain is now images and perception and retrieval, which is what I am learning and exploring. The actual code itself does not matter as much the high-level approach and the component algorithms and data structures -- none of which are individually novel BTW, but I believe I'm the only one combining them this way.
As an example, I give a high-level prompt like "Write a method that accepts a list of bounding boxes, find all overlapping ones, choose the ones with substantial overlap and consolidate them into a single box, and return all consolidated boxes. Write tests for this method." The AI runs off and generates dozens of lines of code -- including a tunable parameter to control "substantial overlap", set to a reasonable default -- the tests pass, and when I plug in the method, 99.9% of the times the code works as expected. And because this is vision-based I can immediately verify by sight if the approach works!
To me, the valuable part was coming up with that whole approach based on bounding boxes, which led to that prompt. The actual code in itself is not interesting because it is not a difficult problem, just a cumbersome one to handcode.
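Roughly what that prompt asks for, sketched here in TypeScript rather than the commenter's actual OpenCV-based code, with an illustrative `overlapThreshold` parameter standing in for the "substantial overlap" tunable:

```ts
// Axis-aligned bounding box.
interface Box { x: number; y: number; w: number; h: number; }

// Intersection-over-union of two boxes; 0 when they don't overlap.
function iou(a: Box, b: Box): number {
  const ix = Math.max(0, Math.min(a.x + a.w, b.x + b.w) - Math.max(a.x, b.x));
  const iy = Math.max(0, Math.min(a.y + a.h, b.y + b.h) - Math.max(a.y, b.y));
  const inter = ix * iy;
  const union = a.w * a.h + b.w * b.h - inter;
  return union > 0 ? inter / union : 0;
}

// Smallest box covering both inputs.
function merge(a: Box, b: Box): Box {
  const x = Math.min(a.x, b.x);
  const y = Math.min(a.y, b.y);
  return {
    x,
    y,
    w: Math.max(a.x + a.w, b.x + b.w) - x,
    h: Math.max(a.y + a.h, b.y + b.h) - y,
  };
}

// Repeatedly merge any pair with "substantial" overlap until no pair qualifies.
function consolidate(boxes: Box[], overlapThreshold = 0.5): Box[] {
  const result = [...boxes];
  let merged = true;
  while (merged) {
    merged = false;
    outer: for (let i = 0; i < result.length; i++) {
      for (let j = i + 1; j < result.length; j++) {
        if (iou(result[i], result[j]) >= overlapThreshold) {
          result[i] = merge(result[i], result[j]);
          result.splice(j, 1);
          merged = true;
          break outer;
        }
      }
    }
  }
  return result;
}
```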
To solve the overall problem I have to combine a large number of such sub-problems, so the leverage that AI gives me is enormous.
But as I said, it's getting rare that I need to change anything the AI generates. That's partly because I decompose the problem into small, self-contained tasks that are largely orthogonal and easily tested -- mostly a functional programming style. There's very little that can go wrong because there is little ambiguity in the requirements, which is why a 3 line prompt can reliably turn into dozens of lines of working, tested code.
The main code I deal with manually is the glue that composes these units to solve the larger computer vision problem. Ironically, THAT is where the tech debt is, primarily because I'm experimenting with combinations of dozens of different techniques and tweaks to see what works best. If I knew what was going to work, I'd just prompt the AI to write it for me! ;-)
This just sounds like "no true scotsman" to me. You have a problem and a toolkit. If you successfully solve the problem, and the solution is good enough, then you are a problem solver by any definition worth a damn.
The magic and the satisfaction of good prompting is getting to that "good enough", especially architecturally. But when you get good at it - boy, you can code rings around other people or even entire teams. Tell me how that wouldn't be satisfying!
Coding is just a formal specification, one that is suited to be automatically executed by a dumb machine. The nice trick is that the basic semantic units of a programming language are versatile enough to give you very powerful abstractions that can fit nicely with the solution you are designing.
> Personally, once the shape of the solution and the code is crystallized in my head typing it out is a chore
I truly believe that everyone who says that typing is a chore once they've got the shape of a solution gets frustrated by the amount of bad assumptions they've made. That ranges from not having a good design in place to not learning the tools they're using and fighting them during the implementation (like using React in an imperative manner). You may have something as extensive as a network protocol RFC, and still get hit by conflicts between the specs and what works.
Look at the length of my prompt and the length of the code. And that's not even including the tests I had it generate. It made all the right assumptions, including specifying tunable optional parameters set to reasonable defaults and (redacted) integrating with some proprietary functions at the right places. It's like it read my mind!
Would you really think writing all that code by hand would have been comparable to writing the prompt?
But the point is, there were no assumptions or tooling or bad designs that had to be fought. Just an informal, high-level prompt that generated the exact code I wanted in a fraction of the time. At least to me that was pretty surprising -- even if it'd become routine for a while by then -- because I'd expect that level of wavelength-match between colleagues who had been working on the same team for a while.
If you really believe this, I'd never want to hire you. I mean, it's not wrong, it's just ... well, it's not even wrong.
Your response and depth of reasoning about why you wouldn't hire them is a red flag though. Not for a manager role and certainly not as an IC.
Coding is as much a method of investigating and learning about a problem as it is any sort of specification. It is as much play as it is description. Somebody who views code as nothing more than a formal specification that tells a computer what to do is inhibiting their ability to play imaginatively with the problem space, and in the work that I do, that is absolutely critical.
To a lot of people (clearly not yourself included), the most interesting part of software development is the problem solving part; the puzzle. Once you know _how_ to solve the puzzle, it's not all that interesting actually doing it.
That being said, you may be using the word "shape" in a much more vague sense than I am. When I know the shape of the solution, I know pretty much everything it takes to actually implement it. That also means I'm very bad at generating LOEs, because I need to dig into the code and try things out to know what works... before I can be sure I have a viable solution plan.
That being said, we can say
- Given the implementation options we've found, this solution/direction is what we think is the best
- We have enough information now that it is unlikely anything we find out is going to change the solution
- We know enough about the solution that it is extremely unlikely that there are any more real "problems/puzzles" to be solved
At that point, we can consider the solution "found" and actually implementing it is no more a part of solving it. Could the implemented solution wind up having to deal with an off-by-one error that we need to fix? Sure... but that's not "puzzle solving". And, for a lot of people, it's just not the interesting part.
But I still love getting my hands dirty and writing code as a mental puzzle. And the best puzzles tend to happen outside of a work environment anyways. So I continue to work through advent of code problems (for example) as a way of exercising that muscle.
Getting things solved entirely feels very, very numbing to me.
Even when Gemini or ChatGPT solves it well, and even beyond what I'd imagine... I feel a sense of loss.
I use writing the code as a way to investigate the options and find new ones. By the time I'm sure of the correct way to implement something, half the code is written [1]. At that point, now that I know what and how I'm going to do, it starts to get boring. I think what would work best for me would be able to say "ok, now finish this" to the AI and have it do that boring part.
[1] This also makes my LOEs horrible, because I don't know what I'm going to build until I've completed half of it. And figuring out how long it will take to do something that isn't defined is... inaccurate.
Some tasks I do enjoy coding. Once in the flow it can be quite relaxing.
But mostly I enjoy the problem solving part: coming up with the right algorithm, a nice architecture , the proper set of metrics to analyze etc
Claude writing code gets the same output if not better in about 1/10 of the time.
That's where you realize that the writing code bits are just one small part of the overall picture. One that I realize I could do without.
Same for SQL, do you really context switch between SQL and other code that frequently?
Everyone should stop using bash, especially if you have a scripting language you can use already.
For example, I often find Python has very mature and comprehensive packages for a specific need I have, but it is a poor language for the larger project (I also just hate writing Python). So I'll often put the component behind an HTTP server and communicate that way. Or in other cases I've used Rust for working with WASAPI and win32, which has some good crates for it, but the ecosystem is a lot less mature elsewhere.
I used to prefer reinventing the wheel in the primary project language, but I wasted so much time doing that. The tradeoff is the project structure gets a lot more complicated, but it's also a lot faster to iterate.
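A minimal sketch of calling such a component from the main project's side, assuming the Python piece exposes a hypothetical JSON endpoint on localhost (the port, path, and payload shape are made up for illustration):

```ts
// Talk to a Python component running as a local HTTP service instead of
// reimplementing its functionality in the main language.
interface SummarizeResponse { summary: string; }

async function summarize(text: string): Promise<SummarizeResponse> {
  const res = await fetch("http://127.0.0.1:8000/summarize", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text }),
  });
  if (!res.ok) {
    throw new Error(`python service returned ${res.status}`);
  }
  return (await res.json()) as SummarizeResponse;
}
```

The trade-off is exactly the one described above: an extra process and a slightly more complicated project structure in exchange for each piece living in the ecosystem that suits it best.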
Plus your usual html/css/js on the frontend and something else on the backend, plus SQL.
telling it to do better without any feedback obviously is going to go nowhere fast.
Rather than converging on optimal code (Occam's Razor for both maintainability and performance) they are just spewing code all over the scene. I've noticed that myself, of course, but this technique helps to magnify and highlight the problem areas.
It makes you wonder how much training material was/is available for code optimization relative to training material for just coding to meet functional requirements. And therefore, what's the relative weight of optimizing code baked into the LLMs.
224 more comments available on Hacker News