Perfect Software – Software for an Audience of One
Key topics
The notion of "perfect software" has sparked a lively debate, with many developers chiming in to share their own experiences crafting code for personal use. While some commenters, like tidderjail2, enthusiastically endorse building side projects with AI assistance, others, like prof-dr-ir, dismiss the original post as potential "AI slop," prompting a backlash from m_w_ and stronglikedan, who see this as a knee-jerk, edgy reaction. Meanwhile, jesse__ and rolfus offer a more nuanced perspective, highlighting the joy of creating something tailored to one's own needs, whether through coding or other hobbies like woodworking. The discussion reveals a rough consensus that building "perfect software" is about personal satisfaction, with or without AI.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 4d after posting
- Peak period: 54 comments in 84-96h
- Avg / period: 19.8 comments
Based on 99 loaded comments
Key moments
- 01 Story posted: Dec 20, 2025 at 2:14 AM EST (18 days ago)
- 02 First comment: Dec 23, 2025 at 3:14 PM EST (4d after posting)
- 03 Peak activity: 54 comments in 84-96h (hottest window of the conversation)
- 04 Latest activity: Dec 26, 2025 at 9:33 AM EST (11 days ago)
Sure, the projects mentioned aren't the most impressive pieces of software ever written, but isn't that kind of the point of the article?
It's the new way of attempting to be an edgelord, so we'll see quite a bit of it for a while, unfortunately. It doesn't have to be accurate or relevant.
It's never required LLMs. In fact, I think the idea that "LLMs allow us to write software for ourselves" borders on missing the point, for me at least. I write software for myself because I like the exploratory process .. figuring out how to do something such that it works with as little friction as possible from the side of the user; who is of course myself, in the future.
I like nitpicking the details, getting totally side-tracked on seemingly frivolous minutiae. Frequently enough, coming to the end of a month long yak-shave actually contributes meaningful insight to the problem at hand.
I guess what I'm trying to say is "you're allowed to just program .. for no other reason than the fun of it".
As evidence for my claims: a few of my 'perfect' projects
https://github.com/scallyw4g/bonsai
https://github.com/scallyw4g/poof
https://scallywag.software
Some people enjoy cooking. Some people enjoy eating great food. Some people enjoy both. Some people enjoy cooking certain things, and also like eating things they would never bother cooking themselves.
There is nothing wrong with any of these perspectives.
But programming doesn't give me that same feeling, and honestly, the scope of doing and learning everything needed to make my projects without LLMs is just way out of reach. Learning these things would not be relevant to my career or my other hobbies. So, for me, I use LLMs the way a person who's not into carpentry might buy the services of a carpenter, despite the possibility of doing the project themselves after investing tons of time into learning how.
https://play.tirreno.com
I’ve had really good luck with Claude helping me write shortcut scripts. For example, I wore a CGM this year for a bit and I couldn’t find an easy way to get the raw data. It did log everything to Apple Health and a shortcut was able to extract it and append it to a spreadsheet where each row was a reading.
Which is ironic considering the subject matter. “Perfect”, but artificially constructed. “Just for me”, but algorithmic slop.
I agree that you can do so much more custom tailoring of bespoke software with the speed an LLM brings. But something inside of me still revolts at calling this anything other than “convenient”.
“Perfect” I will reserve for things I’ve made myself. However “imperfect” they may really be.
It’s nothing like being a carver. It’s like being a director; “Once more! With feeling!”. “Perfect, brilliant, just one more take, and I want you to consider…”
A sculptor shaped with their hands, and there is pleasure in that. A director shapes with someone else’s hands.
There is so much software out there, written by people who wanted to solve their particular problem, just like you. Chances are that some of it will fit your needs, and, if the software is flexible enough, will allow you to customize it to make that fit even better.
This is why the Unix philosophy is so powerful. Small independent programs that do one thing well, which can be configured and composed in practically infinite number of ways. I don't need to write a file search or sorting program, nor does the file search program need to implement sorting. But as a user, I can compose both programs in a way to get a sorted list of files by any criteria I need. This is possible without either program being aware of the other, and I can compose programs written decades ago with ones written today.
You can extend this to other parts of your system as well. Instead of using a desktop environment like GNOME, try using a program that just manages windows. Then pick another program for launching applications. And so on.
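As a minimal sketch of that kind of composition (assuming GNU findutils and coreutils; the filenames are illustrative), here is a sorted-by-size listing where neither program knows about the other:

```shell
# List regular files in the current directory, largest first.
# `find` only finds and prints sizes; `sort` only sorts.
find . -maxdepth 1 -type f -printf '%s %p\n' | sort -rn
```

Swapping `sort -rn` for `sort -k2` sorts by name instead, without `find` needing to change, which is the whole point.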
With LLMs, you can ask for what you want and it can assess approaches. And try to solve. And adjust rapidly.
Trying to understand what affordances are available has been a nightmare of computing. Yes, people do incredible things and take ownership of systems in amazing ways! But it has required such deep esoteric knowledge just to get started, just to have an idea of where you have leverage. I really think there's a chance for much more agency here, without being totally lost and confused, with no idea where to begin and so little help pointing the way.
And programming isn't? It requires even deeper esoteric knowledge about language intricacies, build systems, architectures, etc.
LLMs might help with getting a prototype up and running, but fixing issues or adding new features to an existing codebase still has a low success rate, and high chances of introducing other issues. These chances are directly proportional to the size and complexity of the software. Often the only way to address these issues is for the user to dig into the codebase themselves, which becomes a gargantuan task if they don't have an understanding of it to begin with.
So, sure, these new tools are useful for writing small and dirty scripts as the author needed, but then again, they can also be used to write configuration, help with integration issues, and to gain understanding of existing software. Asking them to write software is a riskier proposition, IME. Especially if that software is released into the wild and is used by more than one person.
Berating the LLMs and trying to scare folks into believing code is impossible no matter what may be partly accurate, and yes, much wrong and much ignorance will spring forth.
But there's going to be so much built that wouldn't and couldn't have happened otherwise, and I think on balance this is incredibly empowering: amazing onboarding, an unbelievable resource. Yes, many will squander it, will vibe their way into trouble, but any tool, if miswielded, is a danger.
I think of all the pissed off people finally trying to switch to Linux. And with LLMs there to help understand and explain systems, to partner on the work, I'm just so much more excited than the hard rugged road that used to be there. Sure more than half are going to coast along without seeing what their partner peer is up to, without soaking it in. But there's many many thousands who will get into it, will be eager, and will have so much better chance of success for this patient but not flawless peer.
What really gets on my nerves is the justified text...
If you need to mix other things with it, then the coffee isn't good.
For me, part of creating "perfect" software is that I am very much the one crafting the software. I'm learning while creating, but I find such learning is greatly diminished when I outsource building to AI. It's certainly harder and perhaps my software is worse, but for me the sense of achievement is also much greater.
A lot of the time, the LLM outputs the code, I test my idea, and realize I really don't care or the idea wasn't that great, and now I can move on to something else.
I did vibe code the first version. It runs, but it is utterly unmaintainable. I'm now rewriting it using the LLM as if it were a junior or outsourced programmer (not a developer, that remains my job) and I go over every line of application code. I love it, I'm pushing out decent quality code and very focused git commits. I write every commit message myself, no LLM there. But I don't even bother checking the LLM's unit and integration tests.
I would have never gotten to this stage of my dream project without AI tooling.
Why not? People have been writing successful personal projects without LLMs for years.
By now it's grown to 100k lines of code. I've not read all of them, but I do have a high level overview of the app, I've done several refactorings to keep it maintainable.
This would not have happened without AI agents. I don't have the time, period. With AI agents, I can kickoff a task while I'm going to the park with my kids. Instead of scrolling HN, I look every now and then to what the agent is doing.
Did you add an extra zero there? A journal with 100k lines of code, presumably not counting the framework it is built on?
That doesn't sound correct.
So yes, it's around 100k lines of code (Python, HTML, JS and CSS).
Then there are things that work but aren't polished enough or should really have documentation.
Over time I’ve learned to not even start such projects, but LLMs have made it easier to complete them: the work is faster (shrinking the time variable in the time-versus-importance tradeoff) and the refamiliarization problem is eased, which adds to the set of such projects I’m willing to tackle.
I really do
Github is full of half forgotten saved games waiting for money to be thrown at them.
Why is this, idunno a better way to say it, good?
So ok you don't get into the weeds and you're proud of that, but also nothing you can think of wanting to do turns out to be worth doing.
Those things are wholly related. Opportunity never comes at exactly the time or in the way you expect. You have to be open to it, you have to be seeking out new experiences and new ideas. You have to get into the weeds and try things without being entirely sure what the outcome might be, what insight you might gain, or when in the future that insight might actually become useful.
The author is saying that “perfect software” is like a perfect cup of coffee. It’s highly subjective to the end user. The perfect software for me perfectly matches how I want to interact with software. It has options just for me. It’s fine tuned to my taste and my workflows, showing me information I want to see. You might never find a tool that’s perfect for you because someone else wrote it for their own taste.
LLMs come in because it wildly increases the amount of stuff you can play around with on a personal level. It means someone finally has time to put together the perfect workflow and advanced tools. I personally have about 0 time outside of work that I can invest in that, so I totally buy the idea that LLMs can really give people the space to develop personal tools and workflows that work perfectly for them. The barrier to entry and experimentation is incredibly low, and since it’s just for you, you don’t need to worry about scale and operations and all the hard stuff.
There is still plenty of room for someone to do it by hand, but I certainly don’t have time to do that. So I’ll never find perfect software for some of my workflows unless I get an assist from LLMs.
I agree with you about learning and achievement and fun — but that’s completely unrelated to the topic!
You hit on the key constraint: time. The point isn't that the use of LLMs specifically provides agency, but that it lowers the barrier, allowing us to build things that bring it. "Perfect software" is perfect not just because of what it does, but because of what it lacks (fluff, tracking, features we don't need).
I remember some of the early phases of home computing. The whole point of owning a home computer was that in addition to using other people's software, you could write your own and put the machine to whatever use you could think of. And it was a machine you owned, not time on some big company's machine which, ultimately, was controlled, and uses approved, by that company. The whole point of the home computing market was to create an environment where people managed the machines, not the other way around. (Wozniak has said that this was one of his motivations for creating the Apple I and II.)
Now we have people like this guy who say we finally have autonomy in computing—by purchasing time on some big company's machine doing numberwang to write the software for you. Ultimately the big company, not you, controls the machine and the uses to which it may be put. What's worse is these companies are buying up all the manufacturing capacity, starving the consumer market and making it more difficult to acquire computing hardware! No, this is not the autonomy envisioned by Wozniak, Jobs, or even a young shithead Bill Gates.
Large language models, given the resources and the exploitative means it took to create them, are not "free"; they have serious social costs and a loss of personal freedom. I still use them, particularly local models, but even that is questionable. At least when the AI bubble bursts and the inevitable enshittification begins, I will be able to continue running them without further vendor lock-in or erosion of privacy.
In terms of bootstrappability and supply chain risk, LLMs fail because we the people are not able to re-create them from scratch.
However, I don’t think using LLMs has to be an all-or-none proposition. You can still choose to build the parts you most care about yourself (where the learning happens) and delegate the other aspects to AI.
In the case of the text justifier, it was a small nuisance I wanted solved with very little effort. I didn't care about the browser APIs, just the visual outcome, so I let the LLM do it all.
If I were building something more complex, I would use LLMs much more mindfully. The value is in having the choice to delegate the chores so you can focus on the craft where it matters to you.
While we might value the process differently, the broader point remains that these tools enable people to build things they otherwise wouldn't have the time or specific resources to create, and still feel a sense of agency and ownership.
The first time I saw a computer, I saw a machine for making things. I once read a quote from actor Noel Coward who said that television was "for appearing on, not watching", and I immediately connected it to my own relationship with computers.
I don't want an LLM to write software or blog posts for me, for the same reason I don't want to hire an intern to do that for me: I enjoy the process.
Everything else, I'm in agreement on. Writing software for yourself - and only for yourself - is a wonderful superpower. You can define the ergonomics for yourself. There's lots of things that make writing software a little painful when you're the only customer: UX learning curves flatten, security concerns diminish a little, subscription costs evaporate...
I actually consider the ability to write software for yourself a more profound and important right over anything the open source movement offers. Of course, I want an environment which makes that easier, so it's this that makes me more concerned about closed ecosystems.
That being said, calling it "perfect" is on the nose, at least for my own: it does a thing, it does it well enough, and that's all. It could be better, but it won't be, because it's not worth it; it's good enough.
But as for today, have we all just collectively decided to pretend that the LLMs we have are capable of writing good software?
I use LLMs a lot in my workflow, I probably spend a whole day per week learning and fiddling with new tools, techniques, etc. and trying to integrate them in all sorts of ways. Been at it for about a year and a half, mainly because I’m intrigued.
I’m sorry but it still very much sucks.
There are things it’s pretty good at, but writing software, especially in large brownfield projects, is not one of them. Not yet, at least.
I’m starting to believe many are just faking it.
The article isn't talking about "large brownfield projects" or people wanting "success [to] come rolling in". It's about people making little apps for themselves, for personal enjoyment, not profit.
Otherwise, there is too much you have to do right before you have a suitable software base to start building your extra personalized features on. Building on existing open-source software (not designed to be extended on) isn't great either because you would need to merge any changes from the original software into your fork, as opposed to a purpose-built SDK that would better tolerate plugins on different base software versions.
I'm working on this for gaming but the idea is really applicable to any kind of software.
In my case, I am not letting mods extend game code directly because I have additional constraints that require mods to run on a restricted runtime. In some other games, mods DO extend game code directly and I am most familiar with Minecraft. The problem with Minecraft is that the modloaders are community-made; the game itself isn't designed to support mods so there isn't a stable modloader API between versions. If that one issue is fixed it would be possible to effectively vibecode your own version of the game and not have to reinvent the wheel by coding up the base game first. You'd also have your changes persist indefinitely and benefit from base game updates. And like I said, I think this is a powerful pattern that can be applied to plugin systems in general non-game software as well.
How do we work together when we all have our own unilateral views of software?
Overall both are net positives: I have some nice wood furniture and also a $7 Lack bedside table; likewise, I rely on some industrial long-term software (Linux, e.g.) but almost every day vibe-code some throwaway thing for a specific task.
I even generated a LinkedIn-style post off the back of it:
https://richardcocks.github.io/chum/the-flatpack-fallacy
NB: I don't actually post these to LinkedIn, or anywhere in fact, but I like to generate them to get a sense of the form: to get a feel for what AI-generated blogs are actually like, to better sense what is and isn't genAI, and just as an amusing hobby. (The actual posts on my blog are all my own writing, hence I've stuck this in a /chum/ folder to share.)
This one didn't come out well at all. Sometimes if I get a feel for the hook I refine it, but this time the concept didn't come through at all so I binned it after the first take.
We know, your blog is the first static webpage in existence.
Third, it brings back autonomy.
This is the new talking point. Musk claims that cars that are always connected provide "autonomy", vibe coders claim that the stolen code distributed by Anthropic provides "autonomy".
War is peace, freedom is slavery.
You could approach these topics with more nuance, and your posts would be stronger.
The current legal status is murky, and evolving!
https://ipwatchdog.com/2025/12/23/copyright-ai-collide-three...
There are a lot of contexts where I can understand arguing against LLM use (even in cases where I might not entirely agree, I can understand the objection), but this is not one of them.
Don't think this will improve your life? Great, don't do it. But this is also the most classic case of "don't yuck other people's yum". If someone tells you they used an LLM to make some piece of software for themselves and they like it or that it improved some aspect of their life or workflow, what on earth is gained by trying to convince them that no, actually, it's not good and isn't improving their lives and is in fact garbage?
While I enjoy the challenge of writing software, I more enjoy having the thing which does exactly what I want.
LLMs are amazing for this.
I have years of experience, but I never had the time (or will) to take on some _very minor nuisances_ or different areas of dev far from my day job expertise.
LLMs solved this. I produced about 12 different things that "I needed" to improve aspects of my life.
Each one took between a few hours and 3 days, and most of them I use daily; they are the most-used applications (mobile, desktop and web) for my family.
It is a game changer.
Personalized custom software could never really reach critical mass before; LLMs enabled it. This is the age of personalized software: egosoftware, llmware.
How have you done distribution and auth? I'm interested in doing something similar but not sure of the best way to approach that part, especially with the family/multiplatform angle as well.
I do think it is a bit scary to imagine software devolving into an unconnected, or less connected, state. Lots of apps but fewer protocols. Everyone kind of frontiering their own stuff has its enticement, and I love the can-do-itiveness. But I am scared of a major regression, of disconnection.
I do think atproto is an enormously positive hope here. I used to be super thrilled by platform as a service stuff, works like Apache UserGrid that gave a wide range of basic compute platform stuff, accounts, data stores, etc. Letting devs focus on the core competencies, on their value add rather than building boring account systems and what not felt like a huge win back then & expected to see soar.
I think atproto is a very interesting resurgence of something similar-ish, where users have their own Personal Data Stores that your app can read and write records from. And which serves as an auth system for your app! It makes social app building incredibly simple, and there have been so many neat simple apps people have cooked up with so much less ceremony than what doing connected stuff used to require. Atproto creating a platform as a service (PaaS) is super cool and may really enable awesome new ways of building.
When I saw "Perfect Software" in the title, I thought it referred to Perfect Software, the developer who produced the Perfect Writer word processor, Perfect Calc spreadsheet, and Perfect Filer database. These were a suite of office software products developed in the early '80s for CP/M and MS-DOS computers.
I find myself scratching real itches that would otherwise have been left un-scratched, because the hurdle to just getting started on something was too damn high.
For example I had need for an image contact-sheet. I'm sure there exist a lot of contact sheet generators out there, but I was able to just as quickly get claude to write me a set of scripts that took a bunch of raw images, resized them down to thumbnails, extracted their meta-data, and wrote a PDF (via Typst) with filenames and meta-data, in date order.
I got lost perfecting it, hand-picking fonts etc, but it still only took an hour or so from start to finish.
It's perfect for my need, I can customise it any time I want by simply asking claude to modify it.
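The glue step of such a pipeline can be sketched like this (a hypothetical sketch, assuming the thumbnails already exist in a `thumbs/` folder; the ImageMagick resize and metadata extraction are omitted, and the Typst layout is illustrative):

```shell
# Emit Typst source for a contact sheet: one captioned figure per
# thumbnail, three per row. Compile with `typst compile sheet.typ`.
{
  echo '#set page(margin: 1cm)'
  echo '#grid(columns: 3, gutter: 1em,'
  for t in thumbs/*.jpg; do
    printf '  figure(image("%s", width: 100%%), caption: [%s]),\n' \
      "$t" "$(basename "$t" .jpg)"
  done
  echo ')'
} > sheet.typ
```

Regenerating the sheet after adding photos is just re-running the script, which is the kind of small, disposable tooling the comment describes.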
Did I need to be a developer to do that? Arguably yes, to know the capabilities of the system, to know to ask it to leverage image-magick and typst, to understand what the failure-modes looked like, etc.
But I didn't need to be a programmer, and over time people like the OP will learn the development side of software development without learning the programming side.
And that's okay.
And as a learning tool, it’s extraordinary. Not because it replaces understanding, but because it accelerates it: you can explore unfamiliar domains, compare approaches, and iterate with feedback that used to take days or weeks.
The responsibility to think, judge, and decide still sits entirely with the developer.
I never did it because I imagined the pain of supporting every device or screen size, or dealing with someone who wants to know why their gift stopped working 6 months later.
The gains I’ve seen from LLM code - making me personally more productive in languages I’ve never used before - don’t erase the support burden so I think I’d still avoid this.
Still, I wonder if soon people will be trying to make software for their own amusement, and will my job ever morph into being a software “mechanic“ who is paid to fix what someone else built? Not just “someone else working at the company who owns the software”, but a different company entirely?
Will software maintenance become the job that big industry stops wanting to take because it’s so cheap to write something new that they’ll never fix this year’s model?
Or is software maintenance being democratised by LLMs such that a corner software shop could realistically offer maintenance for this one copy of a piece of software on this one device that the customer brings in?
I think we’ve never discussed a “software right to repair” because changing software is expensive, but we might see that change.
We see LLMs as a huge opportunity here, to self define.
And existing software as too lumbering and weighty.
But there are so many other dimensions and levels of how and why the past hasn't let us situate our software and us together effectively. The architecture of software doesn't have this grow-in-ability to it.
I love the Home-cooked Software and Barefoot Developers material. But neither of those ideas, nor perfect software, nor an audience of one, actually appeals to me that strongly. They are all very positive, enormous breaks from bad software and bad times when we didn't have basic liberty over our systems. But they all strike me as valorizing a particularly isolated, rejectionist view of software, one that is ultimately dismissive of the protocol- and affordance-building that a good, healthy, connected form of software (one we might and perhaps SHOULD aspire to) would require.
But anything unjamming us from the inflexible, unarticulated, illegible mess of systems we can at best endure today is doing great work. Many positive steps into greater beyonds, out of bad tar pits. 2025 has amazing hope amid all this.
We started with Google Sheets (way too cumbersome), then Docs (cumbersome in a different way), then a simple app using Firebase + the Google Maps embedded API built over a weekend, and then ended up building a full blown planning app and eventually a Chrome extension[0] that hooks directly into Google Maps (our preferred tool for exploring).
We are meticulous planners so I totally get the author's sentiment here. Many people see the app the first time and feel overwhelmed, but for us, it's hard to imagine using other tools now because this one fits "just right" having been built specifically for our planning process.
[0] For anyone interested: https://chromewebstore.google.com/detail/turasapp/lpfijfdbgo...