Surface Tension of Software
Key topics
The concept of "surface tension of software" sparked a lively debate about the trade-offs between rigid and flexible software design. Commenters weighed in on the importance of robust type definitions and making invalid states unrepresentable, with some pointing out that overly restrictive models can be limiting. A fascinating discussion ensued about the role of object-oriented programming (OOP) versus functional programming, with some declaring OOP "dead" while others cited its continued relevance in projects like the Linux kernel. As commenters dug deeper, they uncovered nuanced insights into the evolution of programming paradigms and the impact of design choices on software complexity.
Snapshot generated from the HN discussion
Discussion Activity
- Activity level: very active discussion
- First comment: 18m after posting
- Peak period: 45 comments in 0-12h
- Avg / period: 12.3
- Based on 49 loaded comments
Key moments
- 01 Story posted: Dec 14, 2025 at 3:54 AM EST (20 days ago)
- 02 First comment: Dec 14, 2025 at 4:12 AM EST (18m after posting)
- 03 Peak activity: 45 comments in 0-12h (hottest window of the conversation)
- 04 Latest activity: Dec 18, 2025 at 12:29 PM EST (16 days ago)
There's a folk story - I don't remember where I read it - about a genealogy database that made it impossible to e.g. have someone be both the father and the grandfather of the same person. Which worked well until they had to put in details about a person who had fathered a child with his own daughter - and was thus both the father and the grandfather of that child. (Sad as it might be, it is something that can, in fact, happen in reality, and unfortunately does).
While that was probably just database constraints of some sort which could easily be relaxed, and not strictly "unrepresentable" like in the example in the article - it is easy to paint yourself into a corner by making unrepresentable a possible state of the world that your mental model deems impossible.
The critical thing with state and constraints is knowing at what level the constraint should be. This is what trips up most people, especially when designing relational database schemas.
I think the solution to that is to continuously refactor, and to spell out very clearly what your assumptions are when you are writing the code (which is an excellent use for comments).
The trick is to make the schema represent what you need - right now - and no more.
The real world and user experience requirements have a way of intruding on these underspecified models of how the world "should" be.
Hi, can you give an example? Not sure I understand what you're getting at there.
(My tuppence: "the map is not the territory", "untruths programmers believe about...", "Those drawn with a very fine camel's hair brush", etc etc. All models are wrong, and that's inevitable/fine, as long as the model can be modified as necessary. Focus on ease of improving the model (eg can we do rollbacks?) is more valuable than getting the model "right").
An utterly trivial example is constraining the day-field in a date structure. If your constraint is at the level of the field then it can’t make a decision as to whether 31 is a good day-value or not, but if the constraint is at the record-structure level then it can use the month-value in its predicate and that allows us to constrain the data correctly.
When it comes to schema design it always helps to think about how to ‘step up’ to see if there’s a way of representing a constraint that seems impossible at ‘smaller’ schema units.
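A rough sketch of that record-level approach (Kotlin, with a hand-rolled date type rather than any real library's API):

```kotlin
// A minimal sketch: the day field alone can't know whether 31 is valid,
// but a check at the record level can see the month and year as well.
data class SimpleDate(val year: Int, val month: Int, val day: Int) {
    init {
        val daysInMonth = when (month) {
            1, 3, 5, 7, 8, 10, 12 -> 31
            4, 6, 9, 11 -> 30
            2 -> if (year % 4 == 0 && (year % 100 != 0 || year % 400 == 0)) 29 else 28
            else -> throw IllegalArgumentException("month out of range: $month")
        }
        require(day in 1..daysInMonth) { "invalid day $day for month $month/$year" }
    }
}

fun main() {
    SimpleDate(2024, 2, 29) // fine: 2024 is a leap year
    SimpleDate(2025, 4, 31) // throws: April has 30 days
}
```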
- that's why OOP failed - side effects, software too liquid for its complexity
- that's why functional and generic programming are on the rise - good FP implementations are natively immutable, and generic programming makes FP practical.
- that's why Kotlin and Rust are in a position to purge Java and C, philosophically speaking - the only things that remain are technical concerns, such as JetBrains' IDEA lock-in (that's basically the only place where you can do proper Kotlin work) as well as Rust's "hostility" to other bare-metal languages, embedded performance, and compiler speed.
I don't know how strong the lock-in is.
also, hot take: Kotlin simply does not need this many tools for refactoring, thanks in part to the first-class FP support. in fact, almost every non-Android Kotlin dev I have ever met would be totally happy with analysis and refactoring levels on par with Rust Analyzer.
but even with LSP, I would still need IDEA (at least Community) to perform Java -> Kotlin migration and smooth Java interoperability.
inheritance is what has changed in scope: it's now discouraged to base your inheritance structure on your design domain.
[0] https://paulgraham.com/reesoo.html
trivia: Kotlin interfaces were initially called "traits", but with the Kotlin M12 release (2015), they were renamed to interfaces because Kotlin traits are basically Java interfaces. [0]
[0]: https://blog.jetbrains.com/kotlin/2015/05/kotlin-m12-is-out/...
2 indeed never made sense to me, since once everything is ASM, "protected" means nothing, and if you can get a pointer to the right offset you can read "passwords". This claim that enforcing what can and cannot be reached from a subclass helps security never made sense to me.
3 i never liked function overloading; prefer optional arguments with default values. if you need a function to work with multiple types for one parameter, make it a template and constrain what types can be passed (see the sketch after this list)
7 interfaces are a must have for when you want to add tests to a bunch of code that has no tests.
8 rust macros do this, and it's a great way to add functionality to your types without much hassle
9 idk what this is
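The commenter on point 3 is speaking in C++ terms; a rough Kotlin analogue of "defaults instead of overloads, and a constrained generic instead of per-type overloads" might look like this (function names are made up for illustration):

```kotlin
// A generic with a constraint replaces per-type overloads: anything
// Comparable can be clamped, nothing else can be passed.
fun <T : Comparable<T>> clampToRange(value: T, min: T, max: T): T = when {
    value < min -> min
    value > max -> max
    else -> value
}

// Optional arguments with default values replace a pile of overloads.
fun describe(name: String, value: Int, unit: String = "units", verbose: Boolean = false): String =
    if (verbose) "verbose: $name = $value $unit" else "$name = $value $unit"

fun main() {
    println(clampToRange(42, 0, 10))      // Int -> 10
    println(clampToRange("m", "a", "k"))  // String -> "k"
    println(describe("count", 7))         // defaults fill in the rest
    println(describe("count", 7, unit = "items", verbose = true))
}
```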
Actually, I have some similar concerns about powerful type systems in general -- not just OOP. Obsessing about expression and enforcement of invariants on the small scale can make it hard to make changes, and to make improvements on the large scale.
Could you please expand upon your idea, particularly the idea that creating (from what I understood) a hierarchical structure of "blackboxes" (abstractions) is bad, and perhaps provide some examples? As far as I understand, the idea that you compose lower level bricks (e.g. classes or functions that encapsulate some lower level logic and data, whether it's technical details or business stuff) into higher level bricks, was what I was taught to be a fundamental idea in software development that helps manage complexity.
> structure things as a loosely coupled set of smaller components
Mind elaborating upon this as well, pretty please?
You'll notice yourself when you try to actually apply this idea in practice. But the closest analogy is: How many tall buildings are around your place, what was their cost, how groundbreaking are they? Chances are, most buildings around you are quite low. Low buildings have a higher overhead in space cost, but high buildings have a lot of overhead too. They are harder to construct. In denser cities, space is rare, so you may have comparatively a lot of buildings of 4 to say, 12 levels, but those are quite boring: how to construct them is well-understood, there isn't much novelty.
With software components it's similar. There are a couple of ideas that work well enough that you can stack them on top of each other (say, CPU code on top of CPUs on top of silicon, userspace I/O on top of filesystems on top of hard drives, TCP sockets on top of network adapters...) which allows you to make things that are well enough understood and robust enough. But also, there isn't much novelty in there. And it's not like you can just come up with new rock-solid abstractions and combine 5 of those to create something new that magically solves all your business needs and is understandable and maintainable. The opposite is the case. The general, pre-made things don't quite fit your specific problem. They were not designed with your specific goal in mind. The more of them you combine, the less the solution fits, the less understandable it is, and the more junk it contains. Also, combining is not free. You have to add a _lot_ of glue to even make it barely work. The glue itself is a liability.
> structure things as a loosely coupled set of smaller components
Don't build on top of shoddy abstractions. Understand what you _have_ to depend on, and understand the limitations of that. Build as "flat" as possible i.e. don't depend on things you don't understand.
I also think it's about how many people you can get to buy-in on an abstraction. There probably are better ways of doing things than the unix-y way of having an OS, but so much stuff is built with the assumption of a unix-y interface that we just stick with it.
Like why can't I just write a string of text at offset 0x4100000 on my SSD? You could, but a file abstraction is a more manageable way of doing it. But there are other manageable ways of doing it, right? Why can't I just access my SSD contents like it's one big database? That would work too, right? Yeah, but we already have the file abstraction.
>But OOP, as I take it, is exactly that idea. That you're creating lots of perfect objects with a clear and defined purpose, and a perfect implementation. And you combine them to implement the functional requirements, even though each individual component knows only a small part of them, and is ideally reusable in your next project!
I think OOP makes sense when you constrain it to a single software component with well defined inputs and outputs. I'm sure many GoF-type patterns were used in implementing many STL components in C++, but you don't need to care about what patterns were used to implement anything in <algorithm> or <vector>; you just use these as components to build a larger component. When you don't have well defined components that just plug and play over the same software bus, no matter how good you are with design patterns it's eventually going to turn into an incomprehensible spaghetti mess.
A database would not work as mostly unstructured storage for uncoordinated processes. Databases are quite opinionated and require global maintenance and control, while filesystems are less obtrusive; they implement the idea of resource multiplexing using a hierarchy of names/paths. The hierarchy lets unrelated processes mostly coexist peacefully, while also allowing cooperation very easily. It's not perfect, it has some semantically awkward corner cases, but if all you need is multiplexing a set of byte ranges onto a physical disk, then filesystems are a quite minimal and successful abstraction.
Regarding STL containers, I think they're useful and usable after a little bit of practice. They allow you to get something up and running quickly. But they're not without drawbacks, and at some point it can definitely be worthwhile to implement custom versions that are more straightforward, more performant (avoiding allocation for example), have better debug performance, have less line noise in their error messages, and so on. It's quite easy to implement custom versions of the most important STL containers with fewer bells and whistles - maybe with the exception of map/red-black tree, which is not that easy to implement and is sometimes the right thing to use.
Thank you! I don't get to hear that often. I have to say I was almost going to delete that above comment because it's too long, the structure and build up is less than clear, there are a lot of "just" words in it and I couldn't edit anymore. I do invest a lot of time trying to write comments that make sense, but honestly have never seen myself as a clear thinker or a good writer. To actually answer your question, initial attempts to start a blog didn't go anywhere really... Your comment is encouraging though, so thanks again!
It reminds me of huge enterprise-y tools, which in the long run often are more trouble than they're worth (and reimplementing just the subset you need would be better), and (the way you speak about OOP) bloated "enterprise" codebases with huge classes and tons of patterns, where I agree making things leaner and less generic would do a lot of good.
At first however I thought that you're against the idea of managing complexity by hierarchically splitting things into components (i.e. basically encapsulation), which is why I asked for clarification, because this idea seems fundamental to me, and seeing that someone is against it got me interested. I think now though that you're not against this idea, and you're against having overly generic abstractions (components? I'm not sure if I'm using the word "abstractions" correctly here) in your stack, because they're harder to understand, which I understand. I assume this is what blackbox means here.
Does it sound correct?
Neither Kotlin nor Rust cares about effects.
Switching to Kotlin/Rust for FP reasons (and then relying on programmer discipline to track effects) is like switching to C++ for RAII reasons.
Kotlin and Rust are just a lot more practical than, say, Clojure or Haskell, but they both take lessons from those languages.
Right. Just like Strings aren't inherently bad. But languages where you can't tell whether data is a String or not are bad.
No language forbids effects, but some languages track effects.
When you actually design interfaces you discover that there are way more states to keep in mind when implementing asynchronous loading.
1. There’s an initial state, where fetching has not happened yet
2. There may be initial cached (stale or not) data
3. Once loaded the data could be revalidated / refreshed
So the assumption that you either are loading XOR have data XOR have an error does not hold. You could have data, an error from the last revalidation, and be loading (revalidating).
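One way to sketch that richer state (illustrative names, not any particular library's API) — data, error, and loading are independent, so any combination can occur at once:

```kotlin
// Cached data, an error from the last revalidation, and an in-flight
// refresh can all be present at the same time.
data class AsyncResource<T>(
    val data: T? = null,            // last successfully loaded value, possibly stale
    val error: Throwable? = null,   // error from the most recent (re)load attempt
    val isLoading: Boolean = false  // a fetch or revalidation is in flight
)

fun main() {
    // Showing stale data while a refresh runs after a failed revalidation:
    val state = AsyncResource(
        data = listOf("cached user"),
        error = RuntimeException("timeout during last refresh"),
        isLoading = true
    )
    println("show ${state.data}, stale=${state.error != null}, refreshing=${state.isLoading}")
}
```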
I tend to avoid fancy type system things in general because those create dependency chains throughout the codebase. Data, and consequently data types (including function signatures) is visible at the intersection of modules, so that's why it's so easy to create unmaintainable messes by relying on type systems too much. When it's possible to make a usable API with simple types (almost always), that's what you should do.
This will be discovered at compile time if you use immutability & types like the article suggests.
> So the assumption that you either are loading XOR have data XOR have an error does not hold.
If you design render() to take actual Userdata, not LoadingUserdata, then you simply cannot call render if it's not loaded.
The way to produce unspecified behaviour is to enable nulls and mutability. Userdata{..} changing from unloaded to loaded means your render() is now dynamically typed and does indeed hit way more states than anticipated.
Indeed, and tagged unions (enums in Rust) explicitly allow you to avoid creating invalid state.
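A minimal Kotlin sketch of that reply (a sealed interface standing in for the tagged union; UserData and render are illustrative names): render only ever receives loaded data, so calling it in the wrong state simply doesn't compile.

```kotlin
data class UserData(val name: String)

// The possible states form a closed set; only Loaded carries UserData.
sealed interface UserState {
    object NotLoaded : UserState
    object Loading : UserState
    data class Loaded(val data: UserData) : UserState
    data class Failed(val error: Throwable) : UserState
}

// Takes actual UserData, not a maybe-loaded wrapper, so it can't be called too early.
fun render(data: UserData): String = "Hello, ${data.name}"

fun view(state: UserState): String = when (state) {
    UserState.NotLoaded, UserState.Loading -> "spinner"
    is UserState.Failed -> "error: ${state.error.message}"
    is UserState.Loaded -> render(state.data) // the only path on which render is reachable
}
```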
> Stability isn’t declared; it emerges from the sum of small, consistent forces.
> These aren’t academic exercises — they’re physics that prevent the impossible.
> You don’t defend against the impossible. You design a world where the impossible has no syntax.
> They don’t restrain motion; they guide it.
I don't just ignore this article. I flag it.
(oh no am i one of them?)
Now, is that gonna help design better software in any particular way?
Yet in practice, I've found that it's easy to over-model things that just don't matter. The number of potential states can balloon until you have very complex types to handle states that you know won't really occur.
And besides, even a well modelled set of types isn't going to save you from a number that's out of the expected range, for example.
I think Golang is a good counter example here -- it's highly productive, makes it easy to write very reliable code, yet makes it near impossible to do what the author is suggesting.
Properly considered error handling (which Go encourages) is much more important.