You Want Technology with Warts
Posted 3 months ago · Active 3 months ago
entropicthoughts.com · Tech story · High profile
Key topics
Software Longevity
Technology Choice
Backwards Compatibility
The article argues that 'technology with warts' is desirable for longevity, sparking a discussion on the trade-offs between warts, backwards compatibility, and maintainability.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2h after posting
Peak period: 56 comments in Day 1
Avg / period: 14.8
Comment distribution: 59 data points (based on 59 loaded comments)
Key moments
1. Story posted: Oct 2, 2025 at 11:13 PM EDT (3 months ago)
2. First comment: Oct 3, 2025 at 1:08 AM EDT (2h after posting)
3. Peak activity: 56 comments in Day 1 (the hottest window of the conversation)
4. Latest activity: Oct 13, 2025 at 9:10 PM EDT (3 months ago)
ID: 45458550 · Type: story · Last synced: 11/20/2025, 2:46:44 PM
I'm sure it's an unpopular opinion, but sh/bash scripts suck. There are magic incantations all over, and if you get one wrong then you've got code injection. We can't go back and fix it, but we could either replace it or update it in some way so that it's easy to be safe, with only one way to do things that "does the right and safe thing" always.
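For readers who haven't hit this, here is a minimal sketch of the failure mode being described; the `filename` variable and its contents are hypothetical stand-ins for untrusted input.

```sh
# Hypothetical untrusted input (e.g. from a form field or an API response).
filename='report.txt; touch /tmp/pwned'

# Passing it through eval (or sh -c) turns data into code:
# everything after the semicolon runs as a second command.
eval "ls -l $filename"

# The safer incantation is to quote every expansion and skip eval entirely,
# so the whole string is treated as one (probably nonexistent) filename.
ls -l -- "$filename"
```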
I don't think keeping unsafe-by-default is a good model, and I think all of the daily headlines of people/companies/hospitals/airports/governments being hacked are in large part because we keep the warts.
Long lived shipping code is typically not aesthetically pleasing.
It's a difference of opinion more than it is a conflation with something else.
My _personal_ preference is that software does the "correct" thing by default even if it breaks my build or my tests or even running software; I would rather it break visibly than work nefariously.
You don’t need the shiniest & newest framework to tell a computer to generate some HTML & CSS with a database and some logic. And many don’t have the lived experience of building & shipping to realise that only 1 in 1,000,000 projects probably ever gets more than 100 sets of eyeballs, so they end up using much more complicated tools than necessary.
But all the news & socials will sing the praises of the latest shiny tool's x.x.9 release, so those needs easily get confused.
I think some companies oversubscribe to reliability technology too. You should assess if you really need 99.9999% uptime before building out a complex cloud infrastructure setup. It's very likely you can get away with one or two VMs.
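For context, a rough back-of-the-envelope on what each extra nine actually buys (a sketch; the numbers assume a 365.25-day year):

```sh
awk 'BEGIN {
  secs = 365.25 * 24 * 3600              # seconds in a year
  printf "99.9%%    -> %.1f hours of downtime/year\n",   secs * 0.001    / 3600
  printf "99.99%%   -> %.1f minutes of downtime/year\n", secs * 0.0001   / 60
  printf "99.9999%% -> %.1f seconds of downtime/year\n", secs * 0.000001
}'
```

Roughly 8.8 hours a year versus about 31 seconds; whether closing that gap justifies a complex cloud setup is exactly the assessment being suggested.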
As I understand, HN runs on a single server with a backup server for failover.
The point of the article is that warts are subjective. What one person considers unwanted behavior, another might see it as a feature. They are a testament to the software's flexibility, and to maintainers caring about backwards compatibility, which, in turn, is a sign that users can rely on it for a long time.
Nobody wants bugs. But I agree with the article's premise.
I don't disagree that things probably won't stay static though, but you can do a lot to minimise the surface area of change you are exposed to.
Isn't it the opposite? Just take a look at any web app and tell me it wasn't rewritten at some point due to the rapid shift in web frameworks and hype cycles. Heck, even HN was rewritten, afaik.
Meanwhile, I have Matlab, which has remained literally the same for over a decade.
But that was more than a decade ago, I guess?
Really? How come half of the apps I've used in the past are listed in the app store as "not compatible with this device, but they can be installed on $device_gone_to_rest_in_drawer_14_years_ago"?
I'm genuinely curious! I thought it was aggressive deprecation of mobile OS APIs that made old apps no longer work on new phones.
Not to mention things like the whole 32->64bit transition that dropped all previous iOS apps (and on the MacOS side of things, we've had 4x of those in the past 25 years - classic->OSX, powerpc->intel, 32->64bit, intel->M series)
The major platform transitions are harder breaks, but they are pretty rare. We’re not going to see another architecture shift or bit-width increase for a long time.
I feel like there is a category error here where people think a website is the HTML that lands in your browser. When actually it is also the technology that received your request and sent the HTML. Even if it’s the same “flat” HTML sitting on the server, a web application is still sending it. And yes, those require maintenance.
It's not a dichotomy though. If you build with cutting edge platforms, you'll suffer churn as things change underneath you. It's a choice you should make with awareness, but there's no right answer.
I prefer stable mature platforms for professional settings, because you can focus on the business problems and not the technology problems. It's the more fiscally responsible choice as well, given lower platform churn means less work that has no direct business value.
I mean, I guess the death of Flash / Java Applets / ActiveX / etc. counts, but in the JavaScript world it doesn't feel like we've actually had that many breaking changes.
But the point is not whether webapps are rewritten, but whether they have to be rewritten. I know some old enterprise webapps made with PHP about 10 years ago that are still working fine.
You do have to worry about security issues, and the occasional deprecation of an API, but there is no reason why a web-based service should need to be rewritten just to keep working. Is that true for mobile and desktop apps?
I mean most webapps of any size are built on underlying libraries, and sometimes those libraries disappear requiring a significant amount of effort to port to a new library.
Desktop apps can in theory keep running too, but it depends on what they link against and whether the OS still provides it.
It's not like any modern OS, or popular libraries/frameworks could not provide an equally stable (subset of an) API for apps, but sadly they don't.
[1]: I am young enough that the last time I used DOS was before I had started programming, so I never learned it beyond command-line interaction.
Basically, if there are no obvious warts, then either:
1. The designers magically hit upon exactly the right needed solution the very first time, every time, or
2. The designers regularly throw away warty interfaces in favor of cleaner ones.
#1 is possible, but #2 is far more likely.
a domain-specific language for walking into nested lists, based on an implementation detail of how linked-list pointers and values were packed into a single 36-bit memory location on an IBM 704 computer from the 1950s.
[1] https://franz.com/support/documentation/ansicl/dictentr/carc...
[2] https://en.wikipedia.org/wiki/CAR_and_CDR
No, everything today is about endless frenzied stacking of features as high as possible, totally ignoring how askew the tower is sitting as a result. Just prop it up again for the 50th time and above all, ship faster!
How? All nontrivial software has warts; you're never choosing between warts and no warts, you're choosing between more warts and fewer warts. And JSON has few warts compared to the alternatives, especially YAML.
This is why my first instinct when automating a task is to write a shell script. Yes, shells like Bash are chock-full of warts, but if you familiarize yourself with them and are able to navigate around them, a shell script can be the most reliable long-term solution. Even cross-platform if you stick to POSIX. I wouldn't rely on it for anything sophisticated, of course, but I have shell scripts written years ago that work just as well today, and I reckon will work for many years to come. I can't say that about most programming languages.
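As an illustration of that style, here is a minimal sketch of a POSIX-only script; the task and the `LOG_DIR`/`ARCHIVE_DIR` arguments are hypothetical. No bashisms, every expansion quoted, and nothing that wasn't already standard decades ago.

```sh
#!/bin/sh
set -eu   # fail fast on errors and on unset variables

log_dir="${1:?usage: archive-logs.sh LOG_DIR ARCHIVE_DIR}"
archive_dir="${2:?usage: archive-logs.sh LOG_DIR ARCHIVE_DIR}"

mkdir -p -- "$archive_dir"

# Move logs more than a week old into the archive directory.
find "$log_dir" -name '*.log' -mtime +7 -exec mv -- {} "$archive_dir/" \;
```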
Literally the first time you use uniq and find duplicates in the output, you've hit a wart. You have to realise it's not doing "unique values", it's doing "drop consecutive duplicates", and the reason is that 1970s computers didn't have enough memory to dedupe a whole dataset and could only do it streaming.
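A quick demonstration of the surprise; this is standard `uniq` behavior on any POSIX system:

```sh
printf 'a\nb\na\n' | uniq          # prints a, b, a -- the non-adjacent duplicate survives
printf 'a\nb\na\n' | sort | uniq   # prints a, b   -- sorting first makes duplicates adjacent
```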
cat, as in "catenate", was intended to catenate multiple files together into one pipeline stream, but it's constantly used to read a single file into a pager instead. "cat words.txt | less" is a wart seen all over the place.
cut being needed to split out fields, because we couldn't all come together and agree to use the ASCII Field Separator character, is arguably warty.
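Both habits are easy to reproduce; the colon delimiter below is just an illustrative stand-in for whatever ad-hoc separator a given dataset happens to use:

```sh
# The "useless use of cat": feeding one file into a pager via cat...
cat words.txt | less
# ...when the pager reads files directly.
less words.txt

# cut improvising a field separator because the data carries no standard one:
printf 'alice:1001\nbob:1002\n' | cut -d: -f1    # prints alice, bob
```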
Or the find command where you can find files by different sizes and you can specify +100G for finding files with sizes 100 Gibibytes or larger, or M for Mebibytes, K for Kibibytes, and b for bytes. No just kidding b is 512-byte blocks, it's nothing at all for bytes just +100. No just kidding again that's also 512-byte blocks. It's c for bytes. Because the first version of Unix happened to use 512-byte blocks on its filesystem[1]. And the c stands for characters so you have to know that it's all pre-Unicode and using 8-bit characters.
[1] https://unix.stackexchange.com/questions/259208/purpose-of-f...
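The suffixes in question, as documented for GNU findutils (a sketch; BSD find may differ):

```sh
find . -size +100G    # larger than 100 gibibytes
find . -size +100M    # larger than 100 mebibytes
find . -size +100k    # larger than 100 kibibytes
find . -size +100c    # larger than 100 bytes ("c" for characters)
find . -size +100b    # larger than 100 512-byte blocks
find . -size +100     # no suffix: also 512-byte blocks, the historical default
```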
A wart is not surprising or unintuitive behavior. It is a design quirk—a feature that sticks out from the program's overall design, usually meant to handle edge cases, or to preserve backwards compatibility. For example, some GNU programs like `grep` and `find` support a `POSIXLY_CORRECT` env var, which changes their interface and behavior.
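For a concrete instance of that switch: GNU grep documents that `POSIXLY_CORRECT` disables its usual permutation of options that appear after file operands (the file name `notes.txt` is hypothetical):

```sh
grep foo notes.txt -i                     # GNU default: -i is permuted and treated as the -i option
POSIXLY_CORRECT=1 grep foo notes.txt -i   # POSIX mode: "-i" is treated as a file named -i
```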
My point is that the complexity of the software and its interface is directly proportional to the likelihood of it developing "warts". `find` is a much more complex program than `cut` or `uniq`, therefore it has developed warts.
Next time, please address the point of the comment you're replying to, instead of forcing some counterargument that happens to be wrong.
> "For example, some GNU programs like `grep` and `find` support a `POSIXLY_CORRECT` env var, which changes their interface and behavior."
By your own logic that's not a wart; that's the way the programs work, and they are documented to work like that. Twisting yourself into knots to pretend Linux/Unix utilities are perfectly designed isn't good logic. There's no exemption whereby "it was designed that way" or "it's documented to work like this" makes something not warty. Something might have been a good design originally and become a bad design as the world changed around it.
For example, unserialised byte streams where you have to fight over whether \n or \0 (the null byte) is data or a record separator are a wart. How do we know? Because when people design newer tools they don't keep that, they design it away: serialised JSON and jq, .NET objects in PowerShell, structured data in nushell[1] and oilshell YSH[2].
[1] https://www.nushell.sh/
[2] https://oils.pub/ysh.html
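A small side-by-side of what "designing it away" looks like in practice, using jq as the structured example (the record contents are made up for illustration):

```sh
# Delimited text: the field boundary is a convention the data itself can break.
printf 'Smith, Jane,42\n' | cut -d, -f2                    # prints " Jane" -- wrong split
# Serialised JSON: the field boundary travels with the data.
printf '{"name":"Smith, Jane","age":42}\n' | jq -r .name   # prints Smith, Jane
```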
Look, here's a program:
Do you see any warts in it? No? Well, my point is that the likelihood of a program developing warts is partly related to its complexity. That's it. I mentioned Unix tools as an example of this, since they're by far the most reliable tools I use, particularly the simpler ones. I never said that they were "perfectly designed". Don't put words in my mouth.
> By your own logic that's not a wart, that's the way the programs work and they are documented to work like that.
A wart can be documented. I mentioned the case of `POSIXLY_CORRECT` since GNU tools predate the POSIX standard by a few years, so presumably that env var and the change in behavior were introduced when POSIX was first established. I'm not familiar with the detailed history, but that example is beside my point.
> Something might have been a good design originally and become a bad design as the world changed around it.
Again, you're misunderstanding what a wart is. It has nothing to do with bad design, and it doesn't appear on its own over time. It is a deliberate quirk that deviates from the program's original design—whatever that may be—in order to support some functionality without breaking backwards compatibility.
Just because you don't like the design doesn't mean that the software has warts.
Read that again.
Those fancy shells you mention have at least one major drawback: interop between tools only works if structured I/O is supported by each tool, whether that is because they were written from scratch, or because it was tacked on via shims. In practice, this can only be done if the entire ecosystem is maintained by a single organization. And if that strict interface needs to be changed, good luck updating every single tool while maintaining backwards compatibility. Talk about warts...
In contrast, using unstructured text streams as the universal interface means that programs written 40 years ago can interoperate with programs written today, without either author having to add explicit support for it, and without requiring centralized maintenance. Most well-designed programs do support flags like `-0` to indicate a null separator, which has become somewhat of a standard, but otherwise the user is in control of how that interop happens. This freedom is a good thing, and is precisely what makes programs infinitely composable.
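The `-0`/`-print0` convention mentioned above is worth spelling out, since it is the escape hatch for exactly the \n-versus-data fight discussed earlier (the file names are hypothetical):

```sh
# A newline/whitespace-delimited pipeline splits a name like "old logs.log" into two arguments:
find . -name '*.log' | xargs rm

# With NUL as the record separator, any legal filename passes through intact:
find . -name '*.log' -print0 | xargs -0 rm
```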
In any case, this discussion has veered off from my original point and the article, because you don't like how Unix programs are designed, and misunderstand what warts are. So for that reason, I'm out.
Yes. It's Python 2's print statement, on the Python wiki's PythonWarts[1] page. And it counters a lot of your claims:
- It was right there from the beginning, it wasn't designed in later.
- It may have been a good design at the time, when Python was a teaching language, and its wartiness did appear on its own over time as the world and Python's use cases changed around it.
- It wasn't added as a workaround to keep backward compatibility.
- It is a single line of code, as small as you could reasonably get, and still warty.
> "A wart can be documented. [A wart] is a deliberate quirk that deviates from the program's original design"
It was your point that "x is not a wart because it's documented to work that way". How does the article's "Haskell .. built in String type is bad data structure for storing text" 'deliberately deviate' from Haskell's original design? How does SQLite's "tables being flexibly typed by default" deliberately deviate from SQLite's original design? On that PythonWarts page, "Explicit self in methods" is Python's original design. It's you who has come up with some personal definition of wart that nobody else is using.
> "I mentioned Unix tools as an example of this, since they're by far the most reliable tools I use"
That's orthogonal to their wartiness. And incidentally, that should make you suspicious. By the blog post's reasoning, tools are far more likely to be (warty + reliable) than (wart-free + reliable), since the latter relies on them being designed perfectly, never changing, sitting in an environment which isn't changing, or having a design that lets them be changed very flexibly. All of those are unlikely and difficult, whereas the former (warty + reliable) can apply to anything, and probably does.
> "Just because you don't like the design doesn't mean that the software has warts."
And just because you like the design doesn't mean it's wartless.
> "In contrast, using unstructured text streams as the universal interface means that programs written 40 years ago can interoperate with programs written today, without either author having to add explicit support for it"
Unix/Linux shell streams are not unstructured text streams. That's why tools like `cut` exist, because the text has structure. And null bytes can be used as record separators because the stream is not text. And it's why "interop between tools only works if structured I/O is supported by each tool" applies - each tool can only read the byte streams that are within the ad-hoc informally specified half of a common serialization standard. It's not that I don't like this, it's that I don't like people spreading Linux propaganda about how this is wartless and brilliant when it's barely good enough even for tasks of the 1970s and should be in a museum.
[1] https://wiki.python.org/moin/PythonWarts
But seriously I don’t think warts are the key to long-serving web applications without maintenance. The key is to pick technologies with no security vulnerabilities. That way you won’t ever have to install patches (which risk breaking functionality, which requires maintenance to fix), and you won’t ever get hacked (which also requires maintenance to fix).
Do such technologies exist? I don’t think so. Which is why I think the concept of constant maintenance is inherent to web applications and everyone should just accept that.
I think we’re far enough along in this journey to recognize that the dream of “immortal frictionless machinery” is not what software actually is. We create the friction (via usage, attacks, and patches) and we should plan to address it.
Bridges do receive regular maintenance BTW, even very well-engineered ones…
These are exceedingly rare, and even more rarely do those stable technologies cover your actual needs.
Operationally it is much better to accept the need for software maintenance and plan for it.
Some years ago, I gave a talk on functional-style shell programming which began with this:
Ref: Dr. Strangepipes Or: How I Learned To Stop Worrying && Function In Shell (Functional Conf 2019)
org-mode slideware: https://gist.github.com/adityaathalye/93aba2352a5e24d31ecbca...
live demo: https://www.youtube.com/watch?v=kQNATXxWXsA&list=PLG4-zNACPC...
I see this attitude, especially with juniors, and often with project managers, that we need perfection and are building things that are meant to last for decades. Almost nothing does though. Many/most business applications are obsolete within 5 years and it costs more to cling to the fiction that what we're doing is important and lasting.