Free Software Scares Normal People
Posted 2 months ago · Active about 2 months ago
danieldelaney.net · Tech story · High profile
Sentiment: supportive / mixed
Debate intensity: 70/100
Key topics: User Experience (UX) Design · Free Software · Software Complexity
The article discusses simplifying free software interfaces for non-technical users, sparking a debate on balancing simplicity with feature richness and catering to diverse user needs.
Snapshot generated from the HN discussion
Discussion activity: very active
First comment: 10m after posting
Peak period: 75 comments in 0-12h
Avg / period: 17.8
Comment distribution: based on 160 loaded comments
Key moments
- Story posted: Oct 30, 2025 at 11:07 AM EDT (2 months ago)
- First comment: Oct 30, 2025 at 11:16 AM EDT (10m after posting)
- Peak activity: 75 comments in 0-12h (hottest window of the conversation)
- Latest activity: Nov 7, 2025 at 12:14 PM EST (about 2 months ago)
ID: 45760878 · Type: story · Last synced: 11/23/2025, 1:00:33 AM
1. Free software is developed for the developer's own needs and developers are going to be power users.
2. The cost of exposing options is low, so from the developer's perspective it's low effort for high value (they perceive the options as valuable).
3. The developer doesn't know who the customer is and rather than research/refine just tries to hit all the boxes.
4. The distribution of the software itself means anyone who successfully installs it themselves really is a power user and does like the options. Installing it for family and friends doesn't work.
Probably many other factors!
I think it’s essentially survivorship bias. The simple applications don’t get traction and later get abandoned.
i have seen many comments, by lay people, about Sonobus [0] being superb at what it does and impressive for being 100% free. that's a niche case that, if it were implemented in Ardour, could hit the same problem OP describes
[0] https://sonobus.net/
however i can't see where the problem of FOSS UX scaring normal people is. someone getting a .h264 and a .wav file out of a video recording isn't normal, after all. there are plenty of converters on the web; i dunno if they run ffmpeg on their servers, but i wouldn't be surprised. the real problem lies in the whole digital infrastructure running on FOSS without giving anything back. power-user software shouldn't simplify stuff. tech literacy can hopefully be a thing, and quickly learning how to import and export a file in a complex program feels better than installing 5 different limited programs over the years as your demands grow
* Free software which gains popularity is developed for the needs of many people - the users who make requests and complaints and the developers.
* Developers who write for a larger audience naturally think of more users' needs. It's true that they typically cater more to making features available than to simplicity of the UI and ease of UX.
> 2. The cost etc.
Agreed!
> 3. The developer doesn't know who the customer is and rather than research/refine just tries to hit all the boxes.
The developer typically knows what the popular use cases would be, like with the HandBrake example. They also pretty much know how newbie users like simplified workflows and hand-holding, but it's often a lot of hassle to create the simplified-with-semi-hidden-advanced-mode interface.
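A minimal sketch of that semi-hidden-advanced-mode shape (every name here is invented for illustration, not taken from HandBrake or any real tool): the newbie path is one knob with sane defaults, while the full option surface stays reachable underneath.

```python
# Hypothetical progressive-disclosure API: a simple entry point on top,
# the complete option surface underneath for power users.
PRESETS = {
    "fast-1080p": {"codec": "h264", "crf": 23, "audio": "aac"},
    "small-file": {"codec": "hevc", "crf": 28, "audio": "opus"},
}

def transcode_advanced(src, dst, codec="h264", crf=23, audio="aac"):
    """The power-user path: every option exposed."""
    return f"{src} -> {dst} [{codec}, crf={crf}, {audio}]"

def transcode(src, dst, preset="fast-1080p"):
    """The newbie path: one obvious knob, sane defaults."""
    return transcode_advanced(src, dst, **PRESETS[preset])

print(transcode("in.mov", "out.mp4"))
# in.mov -> out.mp4 [h264, crf=23, aac]
```

The hassle the comment mentions is real: once both layers exist, every new advanced option has to be threaded through the presets too, or the simple path slowly drifts out of sync with the full one.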
> 4. The distribution of the software itself means anyone who successfully installs it themselves really is a power user
Are people who install, say, the Chrome browser on their PC to be considered power users? They downloaded and installed it themselves, after all... no, I believe you're creating a false dichotomy. Some users will never install anything; some users might install common software they've heard about from friends; and some might actively look for software to install, even though they don't know much about it or about how to operate the apps and OS facilities they already have. ...and all of these are mostly non-power-users.
Implementing the UI for one exact use case is not much trouble, but figuring out what that use case is, is difficult. And defending that use case from the line of people who want "that + this little extra thing", or "I just need ...", is difficult. It takes a single strong-willed defender, or some sort of onerous management structure, to prevent the interface from quickly devolving back into the million options or schisming into other projects.
Simply put, it is a desirable state, but an unstable one.
or, simply put, nerds
it takes a different background, approach, and skillset to design UX and interfaces
if anything, FOSS should figure out how to attract skilled artists so the majority of designs and logos don't look so blatantly amateurish.
UI and UX are, for all intents and purposes, lost arts. No one is sitting on the other side of a two-way mirror any more and watching people use their app...
This is how we get UIs that work but suck to use. This is how we allow dark patterns to flourish. You can and will happily do things your users/customers hate if it makes a dent in the bottom line and you don't have to face their criticisms directly.
Which is also why UI/UX on open source projects are generally going to suck.
There's certainly no money to pay for that kind of experiment.
And if you include telemetry, people lose their goddamn minds, assuming the open source author isn't morally against it to begin with.
The result is you're just getting the author's intuitive guesswork about UI/UX design, by someone who is likely more of a coder than a design person.
> You can and will happily do things your users/customers hate if ... you dont have to face their criticisms directly.
A lot of software developers can't take criticism well when it comes to their pet projects. The entire FreeCAD community, for instance, is based entirely around the idea that FreeCAD is fine and the people criticising it are wrong and have an axe to grind, when that is exactly backwards.
It's difficult to get those kinds of creatives to donate their time (trust me on this, I'm always trying).
I'm an ex-artist, and I'm a nerd. I can definitively say that creating good designs is at least as difficult as creating good software, but it seldom makes the kind of margin that you can from software, so misappropriation hurts artists a lot more than programmers.
I don't, as a rule, ever ask artists to contribute for free, but I still occasionally get gifted art from kind folks. (I'm more than happy to commission them for one-off work.)
Artists tragically undercharge for their labor, so I don't think the goal should be "coax them into contributing for $0" so much as "coax them into becoming an available and reliable talent pool for your community at an agreeable rate". If they're enthusiastic enough, some might do free work from time to time, but that shouldn't be the expectation.
There’s a very good reason for me to be asking for gratis work. I regularly do tens of thousands of dollars’ worth of work for free.
It’s a matter of Respect. It’s really amazing, how treating folks with simple Respect can change everything.
I like working in teams, but I also participate in an organization, where we’re all expected to roll up our sleeves, and pitch in; often in an ad hoc fashion.
If it is your job, then go do it as a job. But we all have jobs. Free software is what we do in our free time. Artists don't seem to have this distinction. They expect to be paid to do a hobby.
Because it's a different job!
Your post is like asking, "Why is breathing free but food costs money?"
Yeah it's a different job but they're both jobs. Why should one be free and one not be free?
It usually involves developing a design language for the app, or sometimes, for the whole organization (if, like the one I do a lot of work for, it's really all about one app). That's a big deal.
Logo design is also a much more difficult task than people think. A good logo can be insanely valuable. The one we use for the app I've done a lot of work on, was a quick "one-off," by a guy who ended up running design for a major software house. It was a princely gift.
Are you quoting someone? Yeah it's a real job, and so is programming. I don't think anyone in this conversation is being dismissive about either job.
As a programmer, working with a good graphic designer can be very frustrating, as they can demand that I make changes that seem ridiculous to me but, after the product ships, make all the difference. I've never actually gotten used to it.
That's also why it's so difficult to get a "full monty" treatment, from a designer, donating their time.
> Logo design is also a much more difficult task than people think. A good logo can be insanely valuable. The one we use for the app I've done a lot of work on, was a quick "one-off," by a guy who ended up running design for a major software house. It was a princely gift.
A lot of developers also tend to invest quite an insane amount of work into their preferred open-source project and they do know how complicated their work is, and also how insane the value is that they provide for free.
So, where is the difference?
That's my point.
It’s not like graphic design is harder than programming.
I’d rather have crappy graphics than pay designers instead of programmers for free oss.
Software people love writing software to a degree where they’ll just give it away. You just won’t find artists doing the same at the same scale. Or architects, or structural engineers. Maybe the closest are some boat designs but even those are accidental.
It might just be that we were lucky to have some Stallmans in this field early.
Not sure how that happens with a painting, even a digital one.
But professional graphic designers train to work in product-focused teams. They also are able to create collaborative suites of deliverables.
Most developers will find utility in the work of graphic designers, as opposed to fine artists.
Ego is likely involved. I love my babies, but what others think of my work isn't that important (which is good, because others aren't very impressed).
I make tools that I use, mostly.
But more importantly, most of them don't really care beyond "oh copyright's the thing that lets me sue big company man[0]".
The real impediment to CC-licensed creative works is that creativity resists standardization. The reason why we have https://xkcd.com/2347/ is because software wants to be standardized; it's not really a creative work no matter what CONTU says. You can have an OS kernel project's development funded entirely off the back of people who need "this thing but a little different". You can't do the same for creativity, because the vast majority of creative works are one-and-done. You make it, you sell it, and it's done. Maybe you make sequels, or prequels, or spinoffs, but all of those are going to be entirely new stories maybe using some of the same characters or settings.
[0] Which itself is legally ignorant because the cost of maintaining a lawsuit against a legal behemoth is huge even if you're entirely in the right
Another thing is that the vast amount of fan fiction out there has a hub-and-spoke model forming an S_n graph around the solitary 'original work' and there are community norms around not 'appropriating' characters and so on, but you're right that community works like the SCP Foundation definitely show that software-like property of remixing of open work.
Anyway, all to say I liked your comment very much but couldn't reply because you seem to have been accidentally hellbanned some short while ago. All of your comments are pretty good, so I reached out to the HN guys and they fixed it up (and confirmed it was a false positive). If you haven't seen people engage with what you're saying, it was a technical issue not a quality issue, so I hope you'll keep posting because this is stuff I like reading on HN. And if you have a blog with an RSS feed or something, it would be cool to see it on your profile.
OH MY FUCKING GOD THANK YOU. As far as I remember, I got 'disappeared' off HN right after some major upgrade in August, but it never got fixed, so I just assumed someone at YC was pissed off about something I said about (insert authoritarian dictator here).
Graphic artists are creating graphics editors (Gimp, Krita, Blender, ComfyUI, etc.) with tons of options.
I think this is because there are plenty of software nerds with an interest in typography who want to see more free fonts available.
I don't know if that qualifies as "getting ripped off", but it's not exactly paying me either.
Developers seem to have a product that people can actually attach a value to, but art and music; not so much. They seem to be in different Venn circles.
In all of it, we do stuff because of the love of the craft. One of the deeper satisfactions, for me, is when folks appreciate my work (payment is almost irrelevant; except for "keeping score"). It's pretty infuriating, to have someone treat my work as if it is a cheap commodity. There's a famous Star Trek scene, where Scotty and his crew are being disciplined for a bar fight with some Klingons[0], and Scotty throws the first punch. I can relate.
[0] https://www.youtube.com/watch?v=5rsZfcz3h1s
This says more of your perception I think. Many people attach value to art and music. Many people do not attach value to software.
[0] https://news.ycombinator.com/item?id=40917886
Pretty much everyone is a power user of SOME software. That might be Excel, that might be their payroll processor, that might be their employee data platform. Because you have to be if you work a normal desk job.
If Excel was simpler and had an intuitive UI, it would be worthless. Because simple UI works for the first 100 hours, maybe. Then it's actively an obstacle because you need to do eccentric shit as fast as possible and you can't.
Then, that's where the keyboard shortcuts and 100 buttons shoved on a page somewhere come in. That's where the lack of whitespace comes in. Those aren't downsides anymore.
Excel has a simple, intuitive UI.
I use 10% of Excel. I don't even know 90% of what it's capable of.
It hides away its complexity.
People who need the complex stuff can access it via menus/formulas.
For the rest of us, we don't even know it's there.
Whereas HandBrake shoves all the complexity in your face. It's overwhelming for first-time users.
Yes, this is an obstacle. This makes your software worse for power users. Because now they have to jump through hoops.
If they just took all those options and dumped them somewhere, that would be better.
Okay, another example: a datagrid or table. In naive apps targeting consumers, they're filled with whitespace and they're simple to look at. Great, right?
Oh... you need to see more information than the absolute bare bones? It's okay, you can click 'show more'. The problem is that, now, it takes too much time.
What if I want to see 50 results at the same time? Gulp. If I have to click show more 10 times to do that, I'm taking my computer and throwing it out the window. I don't give a rats ass about your whitespace or visual hierarchy. I want the software to do the thing for me so I can move on with my life.
This is why people will SWEAR by old software. There are many people who refuse to use modern versions of Excel. Because it's too annoying to use, and they use it all day long, so that's not acceptable.
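The "show more" cost in that rant can be made concrete with a little arithmetic (the step sizes here are assumptions, not from any particular app): reaching N rows costs one click per page step, while a page-size setting is a single action regardless of N.

```python
# Clicks on a 'show more' button needed before n_rows are visible,
# given how many rows each click reveals (hypothetical numbers).
import math

def clicks_to_see(n_rows, step=5, initially_visible=0):
    """Count 'show more' clicks required to reveal n_rows rows."""
    remaining = max(0, n_rows - initially_visible)
    return math.ceil(remaining / step)

print(clicks_to_see(50, step=5))   # 10 clicks to see 50 results
print(clicks_to_see(500, step=5))  # 100 clicks; a page-size knob is 1 action
```

The cost is linear in how much the user wants to see, which is exactly why heavy daily users hit the wall long before casual users do.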
This means they want to add features they couldn't get anywhere else, and already know how to use the existing UI. Onboarding new users is just not their problem or something they care about - They are interested in their own utility, because they aren't getting paid to care about someone else's.
It's not a "nerd" thing.
i think the bigger issue is that the power users' use cases are different from the non-power users'. not a skillset problem, but an incentive one
Yeah, no, that isn't it.
Most people just keep the default. When the default is Linux (say, the Steam Deck), most people just keep Linux.
And then you need to implement that, which is never an easy task, and maintain the eternal vigilance to both adhere to the vision but also fit future changes into that vision (or vice versa).
All of that is already hard to do when you're trying to build something. Only harder in a highly collaborative voluntary project where it's difficult or maybe even impossible to take that sort of ownership.
Not at all. Talented human artists still impress me as doing the same level of deep "wizardry" that programmers are stereotyped with.
Other engineering disciplines are simpler because you can only have complexity in three dimensions, while in software complexity can be everywhere.
Crazy to believe that
Cost, safety, interaction between subsystems (developed by different engineering disciplines), tolerances, supply chain, manufacturing, reliability, the laws of physics, possibly chemistry and environmental interactions, regulatory, investor forgiveness, etc.
Traditional engineering also doesn't have the option of throwing arbitrary levels of complexity at a problem, which means working within tight constraints.
I'm not an engineer myself, but a scientist working for a company that makes measurement equipment. It wouldn't be fair for me to say that any engineering discipline is more challenging, since I'm in none of them. I've observed engineering projects for roughly 3 decades.
I don't think that's entirely true, what I usually see is people that think AI art is just as good as many artists.
You can be impressed by something and still think a machine can do it just as well. People that can do complex mental arithmetic are impressive, even if that skill is mostly obsolete by calculators.
I also remember the hostility of my university's informal IT chat groups. Newbies were insulted for not knowing basic stuff instead of being helped. A truly confident person does not feel the need to do that. (and it was amazing having a couple of those people writing very helpful responses in the middle of all the insulting garbage)
Is this an inherently bad thing if the software architecture is closely aligned with the problem it solves?
Maybe it's the architecture that was bad. Of course there are implementation details the user shouldn't care about and it's only sane to hide those. I'm curious how/why a user workflow would not be obviously composed of architectural features to even a casual user. Is it that the user interface was too granular or something else?
I find that just naming things according to the behavior a layperson would expect can make all the difference. I say all this because it's equally confusing when the developer hides way too much. Those developers seem to lack experience outside their own domain and overcomplicate what could have just been named better.
Nor can the design world, for that matter. They think that making slightly darker gray text on gray background using a tiny font and leaving loads of empty space is peak design. Meanwhile my father cannot use most websites because of this.
It's like dark patterns are the ONLY pattern these days.. WTF did we go wrong?
Win95 was peak UI design.
I don’t understand modern trends.
Then the world threw away the menus, adopted an idiotic “ribbon” that uses more screen real estate. Unsatisfied, we dumbed down desktop apps to look like mobile apps, even though input technology remains different.
Websites also decided to avoid blue underlined text for links and be as nonstandard as possible.
Frankly, developers did UI better before UI designers went off the deep end.
A few days ago I had trouble charging an electric rental car. When plugging it in, it kept saying "charging scheduled" on the dash, but I couldn't find out how to disable that and make it charge right away. The manual seemed to indicate it could only be done with an app (ugh, disgusting). Went back to the rental company, they made it charge and showed me a video of the screen where to do that. I asked "but how on earth do you get to that screen?". Turned out you could fucking swipe the tablet display to get to a different screen! There was absolutely no indication that this was possible, and the screen even implied that it was modal because there were icons at the bottom which changed the display of the screen.
So you had: zero affordances, modal design on a specific tab, and the different modes showed different tabs at the top, further leading me to believe that this was all there was.
99% of the users are not using the mobile version.
That's part of the problem, they'll defend their poorly visible choice by lawyering "but this meets the minimal recommended guideline of 2.7.9"
This is the part where people get excited about AI. I personally think they're dead wrong on the process, but strongly empathize with that end goal.
Giving people the power to make the interfaces they need is the most enduring solution to this issue. We had attempts like HyperCard or Delphi, or Access forms. We still get Excel forms, Google forms etc.
Having tools to incrementally try stuff without having to ask the IT department is IMHO the best way forward, and we could look at those as prototypes for more robust applications to create from there.
Now, if we could find a way to aggregate these ad hoc apps in an OSS way...
The usual situation is that the business department hires someone with a modicum of talent or interest in tech, who then uses Access to build an application that automates or helps with some aspect of the department's work. They then leave (in a couple of cases these people were just interns) and the IT department is then called in to fix everything when it inevitably goes wrong. We're faced with a bunch of beginner spaghetti code [0], utterly terrible schema, no documentation, no spec, no structure, and tasked with fixing it urgently. This monster is now business-critical because in the three months it's been running the rest of the department has forgotten how to do the process the old way, and that process is time-critical.
Spinning up a proper project to replace this application isn't feasible in the short term, because there are processes around creating software in the organisation, for very good reasons learned painfully from old mistakes, and there just isn't time to go through that. We have to fix what we can and get it working immediately. And, of course, these fixes cause havoc with the project planning of all our other projects because they're unpredictable, urgent, and high priority. This delays all the other projects and helps to give IT a reputation as taking too long and not delivering on our promised schedules.
So yeah, what appears to be the best solution from a non-IT perspective is a long, long way from the best solution from an IT perspective.
[0] and other messes; in one case the code refused to work unless a field in the application had the author's name in it, for no other reason than vanity, and they'd obfuscated the code that checked for that. Took me a couple of hours to work out wtf they'd done and pull it all out.
Part of the problem is that the novices that create these applications don't consider all the edge cases and gnarly non-golden-path situations, but the experienced devs do. So the novice slaps together something that does 95% of the job with 5% of the effort, but when it goes wrong the department calls in IT to fix it, and that means doing the rest of the 95% of the effort. The result is that IT is seen as being slow and bureaucratic, when in fact they're just doing the fecking job properly.
If you want a developer to write good code quickly, put them in an isolated silo and don't disturb them.
If you want a developer to engage with the business units more, be prepared for their productivity to drop sharply.
As with all things in tech, it's a trade-off.
IT should not be focusing on the theoretical, platonic Business Process. It never exists in practice anyway. They should focus on streamlining actual workflow of actual people. I.e. the opposite advice to the usual. Instead of understanding what users want and doing it, just do what they tell you they want. The problem with standard advice is that the thing you seek to understand is emergent, no one has a good definition, and will change three times before you finish your design doc.
To help company get rid of YOLOed hacks in Excel and such made by interns, IT should YOLO better hacks. Rapid delivery and responsiveness, but much more robust and reliable because of actual developer expertise behind it.
If you streamline a shitty process, you will have diarrhea...
Unfortunately, most processes suck and need improvement. It isn't actually IT's job to improve processes. But almost always, IT is the only department that is able to change those processes nowadays since they are usually tied to some combination of lore, traditions, spreadsheets and misused third-party software.
If you just streamline what is there, you are cementing those broken processes.
http://lpd2.com/
I disagree: it's a business prioritisation issue (not necessarily a problem). Ultimately, a lot of the processes are there because the wider business (rightly) wants IT to work on the highest impact issues. A random process that 3 people suffer from probably isn't the highest impact for the business as a whole.
Also, because it's not high impact, it makes sense that an intern is co-opted to make life easier (also as a learning experience), however it also causes the issues OP highlighted.
The problem is solvable, I think, but it's not easily solvable!
My best example was a conversation I had with one of the scientists at my job when she mentioned that she had people spending hours every day generating reports from data our instruments produced. I pointed out that with the code we had it would be simple to generate the reports automatically.
Her response that she had asked repeatedly for a developer to be assigned to the task, but she kept being pushed away because it was low priority.
I couldn't just change the codebase on my own (it was for a medical device), but it was easy enough to spend a lazy afternoon writing a tool to consume the output logs from the device and generate the reports that she needed. That's it: about 4 hours of work and produced something this person had asked for a year prior, and that people were already spending hours each day doing!
The people in charge of vetting requests never even bothered to ask a developer to estimate the task. They just heard that there was a work around, so it immediately became "low priority."
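A sketch of what such a lazy-afternoon tool might look like (the log format, field names, and functions are all invented here, not the actual device's): consume measurement lines, aggregate per sample, and emit the summary that people had been compiling by hand.

```python
# Hypothetical report generator: parse "sample,value" log lines and
# average the measurements per sample.
from collections import defaultdict

def summarize(log_text):
    """Group values by sample name and compute each sample's mean."""
    totals = defaultdict(list)
    for line in log_text.strip().splitlines():
        sample, value = line.split(",")
        totals[sample].append(float(value))
    return {s: sum(v) / len(v) for s, v in totals.items()}

def render_report(log_text):
    """Format the per-sample means as plain-text report lines."""
    means = summarize(log_text)
    return "\n".join(f"{s}: mean={m:.2f}" for s, m in sorted(means.items()))

print(render_report("A,1.0\nA,3.0\nB,2.0"))
# A: mean=2.00
# B: mean=2.00
```

The point of the anecdote survives the simplification: because the tool only reads the logs the device already writes, it needed no change to the regulated codebase at all.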
This leads to the exact problem OP brings up: who fixes it if it breaks? If it becomes critical, now other priorities go unhandled as a person or team are dragged into resolution.
I’ll bring a couple more counter arguments:
- When I was running an internal enablement team, I reckon I had 12 months worth of work of these simple requests at any given point in time.
- Even that time saving might not be worth it, if it means those 4 hours could have been spent building something the company can sell (which for a SaaS product, is literally millions over its lifetime).
Just to be clear, I’m not saying you did the wrong thing at all. Hell, I’ve done this stuff myself. I’m just pointing out it’s not as easy as “just spend 4 hours on it and it’s done!” I’d go on but I think I’d just end up regurgitating the article.
Which is a huge reason that learning a RAD (rapid application development - emphasis on rapid) tool is a pretty useful skill.
I'm not "prioritizing" anything. The scenario we're discussing is when an intern or low-level employee is able to successfully automate, enhance, or simplify a manual, inefficient business process that management has not seen fit to improve - so the worker does it themselves.
Access and similar platforms aren't "rapid" because of shortcuts, they are rapid because they are visual-based, drag-and-drop, object-oriented and often make a component's properties and methods customizable also via a visual interface. It's a different way of programming, yes, accessible to the masses (which is likely the reason you have so much disdain), but not "shortcuts".
You need structure if you have an org of 100+ employees. If it's smaller than that, I don't believe you get a dev department.
most of these teams only want a straightforward spec, shutting themselves off from distractions, just to emerge weeks or months later with something that completely misses the business case. and yet they will find ways to point fingers at the product owner, project manager, or client for the disaster.
The huge majority of devs want to understand the business and develop high quality software for it.
In one business I worked for, the devs knew more about the actual working of the business than most of the non-IT staff. One of the devs I worked with was routinely pulled into high-level strategy meetings because of his encyclopaedic knowledge of the details of the business.
I.e. done right, it should be not just possible but completely natural for a random team lead in the mail room to call IT and ask, "hey, we need a yellow highlighter in the sheet for packages that Steve from ACME Shipping needs to pick on extra evening run, can you add it?", and the answer should be "sure!" and they should have the new feature within an hour.
Yes, YOLO development straight on prod is acceptable. It's what everyone else is doing all the time, in every aspect of the business. It's time for developers to stop insisting they're special and normal rules don't apply to them.
The main reason you want a computer is cheap emulation (cad, daw, …) or fast (and reliable) automation. Both require a great deal of specification to get right.
We should not be thinking about architecture at the business process level. This is just repeating the mistake that needs to be avoided here. This is, and will ever be, a pile of ad-hoc hacks. They're not meant to add up to a well-designed system in a bottom-up fashion, because there is no system to design. The structure we naturally seek, is constantly in flux.
The right architectural/design decisions to make here is to make it possible to assemble quick hacks out of robust parts that fulfill additional needs the people on the ground may not consider - logs/audit trail/telemetry, consistency for ergonomic reasons, error handling, efficient compute usage, tracking provenance, information access restrictions dictated by legal obligations, etc.
The most important change needed is in the mindset. Internal dev needs to stop thinking of itself as the most important part of the company, or as a well-defined team that should own products. To be useful, the opposite is needed: such devs need to basically become ChatGPT that works, always there to rapidly respond to requests from people on the ground to tweak some software, and then to retweak it as needed. They need to do this work rapidly, without judgement, and never assume they know better.
Only then people will stop weaving ad-hoc Excel sheets into business-critical processes.
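One way to read "quick hacks out of robust parts" in code (a sketch under invented names, not a prescription): wrap each ad-hoc business function in shared machinery so the hack stays quick but the audit trail and error capture come for free.

```python
import functools
import time

AUDIT_LOG = []  # in practice: a durable, queryable store

def audited(fn):
    """Give any ad-hoc business function an audit trail for free."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        entry = {"fn": fn.__name__, "args": args, "ts": time.time()}
        try:
            result = fn(*args, **kwargs)
            entry["ok"] = True
            return result
        except Exception as exc:
            entry["ok"], entry["error"] = False, repr(exc)
            raise
        finally:
            AUDIT_LOG.append(entry)  # recorded on success and failure alike
    return wrapper

@audited
def flag_for_evening_run(package_id):
    # the "yellow highlighter" feature from the mail-room example
    return f"package {package_id}: flagged for evening pickup"

print(flag_for_evening_run("PKG-7"))
print(AUDIT_LOG[-1]["fn"], AUDIT_LOG[-1]["ok"])
```

The decorator is the "robust part"; the decorated function is the YOLO hack that can be written, shipped, and retweaked within the hour the comment asks for.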
The single most valuable tool is user testing. However it really takes quite a few rounds of actually creating a design and seeing how wrong you saw the other person’s capabilities, to grok how powerful user testing is in revealing your own biases.
And it’s not hard at all at core. The most important lesson really is a bit of humility. Actually shutting up and observing what real users do when not intervened.
Shameless plug, my intro to user testing: https://savolai.net/ux/the-why-and-the-how-usability-testing...
I assume those processes weren't applied when deciding to use this application, why? Was there a loophole because it was done by an intern?
Then think again of those managers getting paid manager salaries who couldn't figure this out themselves - or worse, the ones who want to shut it all down because the intern didn't "follow the procedure" (the procedure of not doing anything useful???)
The loophole is that if you have Office or similar you have a variety of development environment, IT/compliance/finance aren't caring what files you produce with the applications you have, and no one else is paying attention initially either, but would have a say (and a procedure for you to follow) if you wanted to bring in or create a new application. The usual process is bypassed.
This is more commonly associated with Excel, but it applies to Access too (less so than it used to, but there are still plenty people out there who rely on it daily).
Once the demo/prototype/PoC is there it is a lot easier to “fix up” that than spin up a project in anything else, or get something else in that is already available, for the same reasons as why it was done in Excel/Access in the first place plus the added momentum: the job is already at least part way done, using something else would be a more complete restart so you need to justify that time as well as any other costs and risks.
[Note: other office suites exist and have spreadsheets & simple DBs with similar capabilities, or at least a useful subset of them, of course, but MS Office's Excel & Access are, for better or worse, fairly ubiquitous]
This reminds me of the "just walk confidently into their office and ask for a job to get one!" advice. It sounded like bullshit to me until I stayed on with some parts of a previous company, where the hiring process wasn't far from that, really.
That's also the kind of company where contracts and vendor choices get negotiated on golf courses, and the CEO's buddies could as well be running the company; it would be the same.
I feel for you.
Love the assumption "when it inevitably goes wrong." In real life, many of these applications work perfectly for years and assist employees tremendously. The program doesn't fail, but the business changes: new products, locations, marketing, payment types, inventory systems, tons of potential things.
And yes, after the original author is gone, nobody is left to update the program. Of course, a lot of programmers or IT folks probably could update it, but ew, why learn and write Access when we can create a new React app with microservices-based backend including Postgres in the cloud and spin up a Kubernetes cluster to run it.
One of the truest things I've read on HN. I've also tried to visit this concept with a small free image app I made (https://gerry7.itch.io/cool-banana). Did it for myself really, but thought others might find it useful too. Fed up with too many options.
Therefore: If you want lots of users, design for the median user; if you don't, this doesn't apply to you
For those of you thinking "which 20%?" after that article from the other day: this is where good product sense comes in, knowing which 80% of people you want to use it first. You could either tack on more stuff from there to appeal to the remaining 20%, or you could launch another app/product/brand that appeals to a different 80% of people. (e.g. shampoo for men, pens for women /s)
463 more comments available on Hacker News