Type Checking Is a Symptom, Not a Solution
Posted 4 months ago · Active 4 months ago
programmingsimplicity.substack.com · Tech story
Tone: heated, mixed · Debate score: 85/100
Key topics: Type Systems, Programming Languages, Software Complexity
The article argues that type checking is a symptom of underlying complexity in software design, sparking a heated debate among commenters about the role of type systems in managing complexity.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 53m after posting · Peak period: 105 comments (12-18h) · Avg per period: 21.5 · Comment distribution: 129 data points (based on 129 loaded comments)
Key moments
- Story posted: Sep 5, 2025 at 1:56 AM EDT (4 months ago)
- First comment: Sep 5, 2025 at 2:49 AM EDT (53m after posting)
- Peak activity: 105 comments in the 12-18h window (hottest window of the conversation)
- Latest activity: Sep 7, 2025 at 5:06 PM EDT (4 months ago)
ID: 45135391 · Type: story · Last synced: 11/20/2025, 3:35:02 PM
The author seems to like black boxes. Unix coreutils is a collection of black boxes[1]. So are docker containers[2]. Do you know what else are black boxes? TYPES. Specifically interfaces. Types let you compartmentalize your software into as many black boxes as you want, then compose them.
———
[1] Try building any non-trivial service by composing Unix coreutils, docker containers, or REST APIs. You’re in for a world of hurt.
[2] Docker containers aren’t even really isolated. The JupyterHub DockerSpawner image spins up and tears down other docker containers at will. The whole thing is a mess of state. There’s no composition—just the container equivalent of `goto`
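To make the composition point concrete, here is a minimal Rust sketch (hypothetical `Stage` trait, not from the comment): the composing code sees only the interface, and each implementation stays a black box.

```rust
// Minimal sketch: a trait as a "black box" interface, composed without
// knowing anything about the implementations behind it.
trait Stage {
    fn run(&self, input: String) -> String;
}

struct Upcase;
impl Stage for Upcase {
    fn run(&self, input: String) -> String {
        input.to_uppercase()
    }
}

struct Trim;
impl Stage for Trim {
    fn run(&self, input: String) -> String {
        input.trim().to_string()
    }
}

// The composer only sees the interface; each stage stays a black box.
fn compose(stages: &[Box<dyn Stage>], input: String) -> String {
    stages.iter().fold(input, |acc, s| s.run(acc))
}

fn main() {
    let pipeline: Vec<Box<dyn Stage>> = vec![Box::new(Trim), Box::new(Upcase)];
    println!("{}", compose(&pipeline, "  hello  ".to_string()));
}
```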
Indeed. This person doesn't even know what types or functions are.
I agree in general with the principle that there is a 'human-readable' size, and that it's around some 10k-30k LOC, but I'm curious how you came up with that specific number. Has it been researched and measured? Does it vary significantly from person to person? I tend to think of 80kLOC as the upper limit, because that's the size of the DOOM engine, written by a single highly-capable programmer, who presumably had it all in his head at one point. But that is empirically way beyond what I'm capable of.
This post is written by someone rather naive of computer science. Their description of a "function" betrays that. The failure to recognize UNIX pipelines as function composition also betrays that and so does the whole "black boxes" paragraph.
Yes, the article is deeply unserious and perhaps even in "not even wrong" territory. The poor blogger tried to put up a strawman about type checking being a cognitive-load issue, completely ignoring that interfaces matter and that complying with them is a necessary condition for robust, bug-free code.
Over the last year I’ve seen people on the internet using “unserious” as an adjective to criticize people. Is it just an expression that became common, a frequent literal translation from another native language…?
Not at all judging, I am not a native speaker and I never saw the word used that way before, but this is the kind of question that’s pretty much impossible to get an answer for without a direct question.
When we are grading a person’s argument or their approach to a topic for seriousness, none of the obvious antonyms seem to capture the sort of opposite we need. And that’s where ‘unserious’ comes up. How seriously should we take this? Is it serious or unserious?
I’m not sure why there would be a relatively recent surge in its use. Etymonline cites ‘unserious’ as having 17th century origins. Ngram shows increasing usage in the modern era but starting from the 1950s onwards.
If anything it probably reflects an increased use of ‘serious’ in the context of discussing intellectual opinions, as opposed to ‘consequences’ or ‘demeanor’ or ‘injury’ – which if anything is a reflection of increased public engagement with intellectual pursuits.
That’s what weirded me out: without context it sounds too literary for random internet comments, but my gut feeling in a second language isn’t really worth much.
For example, Rust’s borrow checking isn’t there to add complexity; it exists to make it possible to explicitly define lifetime bounds in interfaces.
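A small sketch of what that looks like in practice, using the classic `longest` example rather than anything from the thread: the lifetime bound is part of the signature, so callers see the constraint without reading the body.

```rust
// Sketch: the lifetime parameter is part of the interface. The signature
// states that the returned reference lives no longer than both inputs.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let s1 = String::from("borrow");
    let result;
    {
        let s2 = String::from("checker");
        result = longest(&s1, &s2);
        println!("{result}"); // fine: both inputs are still alive here
    }
    // println!("{result}"); // rejected: `s2` does not live long enough
}
```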
I agree with the overall thrust of your argument, but on this specifically: what's noticeable in an assembly language is that data doesn't have types; the operations are typed. That's usually not really a thing in programming languages. It does happen, but it's not common.
For example, Rust's Wrapping<i8> is a distinct type from Saturating<i8>, and if we've got variables named foo and bar with the value 100 in them, then if foo and bar have the type Wrapping<i8> the addition operation means the answer is -56, but for Saturating<i8> the exact same addition operation gets you 127.
In assembly or machine code there'd be Wrapping addition and Saturating addition, and they'd both work on integers, which don't themselves inherently care.
The 32-bit integer 0x3f000000 is literally the same thing as the 32-bit floating point value 0.5
If we write that Quake 3 "fast" inverse square root, the machine code does not have any casting or type changes, but the C does and the Rust does, because in machine code the operations were typed but in these high level languages the values were typed.
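For anyone who wants to check those numbers, a quick Rust sketch (this assumes a toolchain recent enough to have `std::num::Saturating`, stabilized in Rust 1.74):

```rust
use std::num::{Saturating, Wrapping};

fn main() {
    // Same values, same `+` operator, different types, different results.
    let w = Wrapping(100i8) + Wrapping(100i8);
    let s = Saturating(100i8) + Saturating(100i8);
    println!("Wrapping:   {}", w.0); // -56 (200 wraps modulo 256)
    println!("Saturating: {}", s.0); // 127 (clamped to i8::MAX)

    // The bit pattern 0x3f000000 is the integer 1_056_964_608 and the
    // float 0.5, depending only on how you choose to interpret it.
    let bits: u32 = 0x3f00_0000;
    println!("as f32: {}", f32::from_bits(bits)); // 0.5
}
```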
One major limitation of almost all type systems: they fail to distinguish logical types from representation.
Disclaimer: not a hardware engineer.
However, as a junior in college we had several classes using VHDL and later Verilog. They have types. They are used to simulate a design before committing it to a board.
I would be surprised if people brute-force it often.
Since software isn’t subject to physics, we have to introduce our own laws to constrain reality. Type checkers are exactly that.
This is not accepted wisdom at all.
I'm of the mind that the modern resurgence of typed programming languages is mainly the revelation that you can build your contracts directly into the structure of the code without necessarily needing additional (imperative, runtime!) assertions or physically separated unit tests to validate those assumptions as the software evolves. It's just inlining a certain kind of assertion or test directly into the declarations of the code.
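As a rough illustration of that inlining idea (hypothetical function names, not from the comment): the same non-zero contract written once as a runtime assertion and once as a declaration.

```rust
use std::num::NonZeroU32;

// Runtime flavour: the contract lives in an assertion that only fires
// when this path is exercised.
fn chunk_runtime(total: u32, size: u32) -> u32 {
    assert!(size != 0, "size must be non-zero");
    total / size
}

// Declaration flavour: the same contract is inlined into the signature,
// so every caller is checked at compile time.
fn chunk_typed(total: u32, size: NonZeroU32) -> u32 {
    total / size.get()
}

fn main() {
    println!("{}", chunk_runtime(10, 2));
    println!("{}", chunk_typed(10, NonZeroU32::new(2).unwrap()));
}
```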
"How do you specify the shape of the input and output ports?" shouts the goose as it chases me away.
If I felt snarky, I'd say "This is your brain on Javascript".
Maybe the truth is that our job as developers is to manage the inherent complexity of the world we live in.
No it's not.
Type checking is a first line of defense against bugs. That's all.
It's just making sure you're not accidentally passing invalid data or performing an invalid operation.
And since the article gets the premise wrong, its conclusion that reducing complexity will make type checking "largely irrelevant" is therefore also wrong.
Exactly, and it goes way beyond that point. Static type checking turns expensive, hard-to-catch bugs that manifest only at runtime when exercising specific code paths into cheap, trivial to spot bugs that you spot at compile time.
On top of that, nowadays it greatly improves the developer experience: IDEs use type information to detect and avoid bugs even before you compile the code, and even to autocomplete the code you are typing.
We've been through this before. Haven't we learned this yet?
To understand that this is a big deal, one sometimes needs to face the relaxed rules of their static language of choice. Delphi (and its ancestor Turbo Pascal) allows passing untyped pointers for parameters defined as typed. This behavior is actually controlled by a setting that is probably relaxed (allowed) by default. It seems it was done for convenience when using pointer arithmetic. At some point in the past, I also started logging categories of bugs (an ad-hoc taxonomy) and, if some kind of bug appeared more than once, to do something about it (like encouraging a new habit, rule, or something similar). So, after passing wrong pointers twice in a short period of time, I made it mandatory for all sources to have only typed pointers, which required some additional typing but saved me from similar bugs further on.
Unfortunately, the examples given are not particularly representative, because they all imply complex interfaces (reading? writing? buffering?) and complex input/output formats: the shell that is doing the piping only cares about ‘bytes’, but the programs themselves have much more complex requirements on the data, which tends to be what happens once you start assigning semantics to your interfaces.
Even with spatial locality limiting the number of inputs and outputs, the ‘interfaces’ that hardware uses are usually specified in multi-hundred-page manuals, and that's mostly due to having to encode the semantics of what it means to put volts on those wires.
As such, this "we don't need types, we can have parsers" kind of argument doesn't feel very inspiring.
Remember when the world went insane and wanted JS (not TS) to run literally everywhere?
Now we live in a world where TS, Python and even PHP have admitted the mistake of unrestrained dynamic typing. People who didn’t get the memo are highly questionable, probably facing solo-dev hacker-mindset syndrome.
I am not sorry for having strong opinions.
The lax attitude that's passable for small hobby projects would be an utterly disastrous Tower of Babel if applied to real-world large-scale projects like at my job where hundreds of us are actively contributing at any given time (thousands over time) to a multi-million-line server. Developing software at that scale, in terms of developers and lines of code, would not be remotely feasible without type systems or equivalent systems that automatically check invariants statically.
To suggest that the elegance of Unix pipes extends to large-scale software development betrays a lack of experience with the complexity of real-world large-scale software systems.
It is looking more like the author genuinely thinks he understands something on a higher level and is a bit high on his own farts.
Just a program counter, a stack, a flat addressable block of memory, and some registers.
You’ll very quickly learn how hard it is to program when you have complete responsibility for making sure that the state of the system is exactly as expected before a routine is called, and that it’s restored on return.
Every language tool we’ve built on top of that is to help programmers keep track of what data they stored where, what state things should be in before particular bits of code are run, and making it harder to make dumb mistakes like running some code that will only work on data of one sort on data of a totally different structure.
Of course types are a solution. You just have no idea what the problem was.
If anyone is interested in learning assembly, this was the book and simulator I used as an undergrad. It's not exactly bytecode that will run on an actual processor but it will help you understand the bare metal things a CPU has to do in order to actually operate upon data and it's kinda eye opening when compared to what modern coding is actually like nowadays.
https://eater.net/
It was easier to get right, there was no wizard behind the typed curtain, just a bunch of arguments.
The PRNG was trivial. Managing the user's bank balance, switching between games, making the roulette wheel 'spin' by printing a loop of numbers slower and slower was painstaking, error-laden work.
State doesn't just contribute to complexity, it is complexity.
Everything from the ABI itself, which includes register saving/restoring and function call stack setup, to the linker and loader, ensuring the right sections go to the right places and are correctly sized, and then at load time ensuring relative addresses are correctly relocated and all branch target labels are resolved to these relocated addresses, was such a pain.
All of this book-keeping was genuinely the most time-consuming part of the assignments; the actual compiler ideas and implementation were relatively manageable.
Funnily enough the compiler implementation was done in OCaml.
While I don't know whether I agree with their view, I do see that, once we've used the function abstraction to build a C compiler, and used this C compiler to build a proper OS, there is the possibility for such an OS to provide entirely new abstractions, and to do away with functions (almost) completely.
It's a puzzle focused on creating electronic components (with the goal of creating a running computer). Since it's more educational than competitive, it's a rather smooth journey.
But in the end there's not only assembler; there are actual electrons in the system that take time to reach the CPU. It took me 2-3 days to finish during the holiday period and it made me much more aware of what is happening under the hood.
> We’ve created architectures so tangled, so dependent on invisible connections and implicit behaviors, that we need automated tools just to verify that our programs won’t crash in obvious ways.
> Type checking, in other words, is not a solution to complexity—it’s a confession that we’ve created unnecessary complexity in the first place.
I don't buy the idea that (1) we can always easily identify unnecessary complexity, separating it from necessary complexity; (2) reduce the system to just the necessary complexity, such that: (3) the remaining necessary complexity is always low, and easy to understand (i.e. not "incomprehensible to human reasoning", having few or no "implicit behaviors", so that we don't "need automated tools just to verify that" it "won't crash in obvious ways").
The idea that we can implement arbitrary systems to handle our complex requirements, while keeping the implementation complexity capped below a constant simply by not admitting any "unnecessary complexity" is not believable.
This is simply not a mature argument about software engineering and therefore not an effective argument against type checking.
> The problem isn’t that software is inherently more complex than hardware.
It positively is. The built electronic circuit closely mimics the schematic. The behavior of the electronic circuit consists of local states that are confined to the circuit components. For instance, in the schematic you have a capacitor and a neighboring resistor, as symbols? Then in the manufactured circuit, you have the same: a capacitor component connected to a neighboring resistor component. Now those things have a run-time behavior in response to the changing voltage applied. But that run-time behavior is locally contained in those components. All we need to imagine the behavior is to take the changing quantities like the capacitor's charge, current through the capacitor, voltage drop across the resistor and superimpose these on the components.
Software source code is a schematic not for compiled code, but for run-time execution. The run-time execution has an explosive state space. What is one function in the source code can be invoked recursively and concurrently so that there are thousands of simultaneous activations of it. That's like a schematic in which one resistor stands for a circuit with 50,000 resistors, and that circuit is constantly changing; first we have to know what the circuit looks like at any time t and then think about the values of the components and then their currents and voltages. You might understand next to nothing about what is going on at run-time from the code.
Circuits have no loops, no recursion, no dynamic set data structures with changing contents, no multiple instantiation of anything. The closest thing you see to a loop in a schematic is something like dozens of connections annotated as a 'bus' that looks like one wire. Some hardware is designed with programming-like languages like VHDL and whatnot that allow circuits with many repeating parts to be condensed. It's still nothing like software.
I mean, show me a compiler, web browser, or air traffic control system, implemented entirely in hardware.
> The evidence for this alternative view is hiding in plain sight. UNIX pipelines routinely compose dozens of programs into complex workflows, yet they require no type checking at the transport layer.
Obligatory: where is the web browser, mobile OS, video game, heart-lung machine, CNC router, ... written with shell scripts and Unix pipes?
Even within the domain where they are applicable, Unix pipes do not have a great record of scalability, performance and reliability.
In some narrowly defined, self-contained data processing application, they can shine, especially against older languages. E.g. Doug McIlroy writing a tiny shell script to solve the same thing as Knuth's verbose Pascal solution.
Someone hasn't dealt with op-amp circuits. :-)
I think the exact opposite. Having spent more than a decade in distributed systems, I became convinced functional programming (and pure functions) are essential for survival in distributed systems.
Back then I wrote down my insights here (amazed that the site still exists)
https://incubaid.wordpress.com/2012/03/28/the-game-of-distri...
The flip side of this is to "make representable states valid." If you have an enum that doesn't fill a bitfield, values of the bitfield outside the enum are representable -- and the behavior of the system must be defined in that case. (Most often, this is done by mapping the behavior of undefined states to a chosen defined state, or using it to trigger an abort -- the key is that it must be an explicit choice.)
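A minimal Rust sketch of making that choice explicit (hypothetical `Mode` enum): the bit patterns the enum doesn't define are routed to an error rather than left undefined.

```rust
// Sketch: a 2-bit field can hold 0..=3, but only three states are defined.
// The conversion makes handling of the leftover pattern an explicit choice.
#[derive(Debug)]
enum Mode {
    Idle = 0,
    Run = 1,
    Halt = 2,
}

impl TryFrom<u8> for Mode {
    type Error = u8;

    fn try_from(raw: u8) -> Result<Self, Self::Error> {
        match raw {
            0 => Ok(Mode::Idle),
            1 => Ok(Mode::Run),
            2 => Ok(Mode::Halt),
            other => Err(other), // undefined bit pattern: caller must decide
        }
    }
}

fn main() {
    println!("{:?}", Mode::try_from(1)); // Ok(Run)
    println!("{:?}", Mode::try_from(3)); // Err(3): defined behaviour, not UB
}
```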
- https://www.cs.cmu.edu/~rwh/students/okasaki.pdf
- https://en.wikibooks.org/wiki/Haskell/Zippers
I learned electronics and dataflow languages before I learned traditional languages, and learning how modern systems were architected was horrifying to me. The level of separation of components is nowhere close to what's possible, because—as the author points out—the default level of abstraction in modern programming languages is the function.
Don't read this article as a critique of types (as in: typed vs untyped), but as a finger pointing at a higher level of abstraction that isn't a first-class concept in the tooling used to architect large systems today.
All that said, I have come across many people beating this drum over the years—usually Alan Kay or Ted Nelson types—and I suspect there's something they're missing as well.
Labview was a particular nightmare even for very basic tasks.
I suspect the issue is that traditional boxes and wires type dataflow IDEs put the burden of abstraction on the programmer, and since most people encounter these tools as students, they understandably fail to set up the abstractions correctly and conclude that the entire concept of high level abstraction is pointless.
I think there's a much better way, but nobody's really built it in a way that cuts through the prejudice.
Coincidentally, I learned how Palantir Foundry works today and it's an interesting case-study.
What primitive do you feel is more suitable for a high-level abstraction of processes operating on data?
https://en.m.wikipedia.org/wiki/Substructural_type_system
They talk about how electronics engineers use "isolation, explicit interfaces, and time-aware design" to solve these problems as if that's somehow different than current software development practices. Isolation and explicit interfaces are types. There's no strict analogue to "time-aware design" because software depends on timing in very different ways than hardware does.
Electronics engineers use a raft of verification tools that rely on what in software we'd call "types". EEs would actually love it if the various HDLs they use had stronger typing.
Where they really lost me is in the idea that UNIX pipelines and Docker are examples of their concept done right. UNIX pipelines are brittle! One small change in the text output of a program will completely break any pipelines using it. And for the person developing the tool, testing that the output format hasn't changed is a tedious job that requires writing lots of string-parsing code. Typed program output would make this a lot easier.
And Docker... ugh, Docker. Docker has increasingly been used for isolation and "security", but the original purpose of Docker was reproducible environments, because developers were having trouble getting code to run the same way between their development machines and production (and between variations of environments in production). The isolation/security properties of Docker are bolted on, and it shows.
> UNIX pipelines routinely compose dozens of programs into complex
> workflows, yet they require no type checking at the transport
> layer. The individual programs trust that data flowing between
> them consists of simple, agreed-upon formats—usually lines of
> text separated by newlines.
The transport layer is not the relevant layer; it's the programs in between that matter. If you're just looking for words or characters or something, that's straightforward enough--but the equivalent in programming is passing strings from one function to another. You run into problems when program 1 in the pipeline uses tab delimiters, whereas program 2 was expecting space characters or commas or something, and program 3 is expecting tokens in the pipe to be in a different order, or program 4 doesn't know what to do when some records have three tokens and others have four or two or something.
Yes. String-parse the output of a command... hope it's not different on MacOS, or add special conditions to it... it's all very tedious. AI seems to do this stuff well due to the sheer number of examples people have produced, thank god, but I put in my decades of writing it by hand. The pipe is great, but Bash sucks as a language for many reasons, lack of types pretty low on the list but still on there somewhere.
PowerShell seems to have fixed that a bit by passing objects around in the shell. On the Unix side more and more utilities just serialize to JSON and we all expect `jq` - somehow not always installed, but still, Perl always is...
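In the spirit of the serialize-to-JSON point, a hedged sketch of what typed records over a pipe can look like in Rust (this assumes the third-party serde and serde_json crates with the derive feature):

```rust
// Sketch of "typed program output": instead of emitting whitespace-separated
// fields and hoping downstream guesses the delimiter, emit a typed record.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct Record {
    host: String,
    status: u16,
    bytes: u64,
}

fn main() -> Result<(), serde_json::Error> {
    // Producer side: one JSON object per line, like many modern CLI tools.
    let line = serde_json::to_string(&Record {
        host: "example.org".into(),
        status: 200,
        bytes: 5120,
    })?;
    println!("{line}");

    // Consumer side (e.g. the next program in the pipe): parse into the
    // same shape, or fail loudly instead of silently misreading a column.
    let parsed: Record = serde_json::from_str(&line)?;
    println!("{parsed:?}");
    Ok(())
}
```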
This author would apparently also argue that, because I rely on handwritten labels for the dozen or so storage bins in my garage, I must have created a system so intricate and interconnected that human reasoning fails, then declared the tools that help me navigate this complexity to be essential.
The author would surely make similar arguments against clocks, calendars, and calculators.
Of course, what's actually happening in all of these situations is that humans are relying on tools to handle tasks which the tools are designed to handle, so that the humans can focus on other things.
But type safety is really just defining an explicit interface, and enforcing that both sides maintain that interface. And it allows you to isolate components in functions or classes, because you don't need to know how the implementation works, just the types of the inputs and outputs, and what the function is supposed to do (sans bugs).
I don't know a lot about electrical engineering, but it would surprise me if they don't have tools to make sure you don't do something like send a 120V AC current into a component that expects 5V DC.
Probably a more apt analogy would be a keyed adapter.
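Translated into type-system terms, that check is roughly the newtype pattern; a hedged sketch with hypothetical voltage types:

```rust
// Sketch: encode the supply requirement in the interface so the "120V AC
// into a 5V DC part" mistake becomes a compile error, not a burnt board.
struct VoltsDc(f64);
struct VoltsAc(f64);

fn power_logic_board(supply: VoltsDc) {
    println!("running at {} V DC", supply.0);
}

fn main() {
    power_logic_board(VoltsDc(5.0));
    // power_logic_board(VoltsAc(120.0)); // rejected by the type checker
    let _mains = VoltsAc(120.0); // still representable, just not accepted here
}
```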
A few thoughts...
1. Some things are complicated. Of course we should eliminate needless complexity - but not all complexity is needless.
2. If your system has huge thing 1 connected to huge thing 2, then you have to understand 2 huge things and a little glue. If you instead have a system comprised of small thing 1 connected to small thing 2, ... connected to small thing 100, you now have to understand 100 small things and 99 connections. The nature of these things and connections matters, but you can't just keep dividing things to simplify them. John Carmack's email about inlining code springs to mind: http://number-none.com/blow/john_carmack_on_inlined_code.htm...
3. "A complex system that works is invariably found to have evolved from a simple system that worked. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. You have to start over, beginning with a working simple system." - https://en.wikipedia.org/wiki/John_Gall_(author)
I would love to see the author take a complex system where Typescript (for example) is "necessary" and refactor it to JavaScript.
Testing is science, type-checking is math https://adueck.github.io/blog/testing-is-science-type-checki...
Bullshit, they absolutely do. They have circuit-design software with all sorts of verification tools. Go search "circuit verification". And EEs would jump at any additional opportunity to check their circuits for common errors before having to run them, just as every other engineering discipline would jump on the opportunity for better checking of common issues.
Sure, we should be writing more "obviously correct" systems. Type-checking helps you do that. What is the problem?
> HTTP servers and clients, email systems, DNS resolvers—they all interoperate based on simple protocols
Yeah, massive disagree with this. There is absolutely nothing "simple" about these protocols. Try to compose 3/4 UNIX programs together and you necessarily need to run it to see if it even makes sense. Don't get me started on HTTP.
It boils my blood to read "simple" in the context of software engineering
In something like JavaScript, I might write a function or class or whatever with the full expectation that some parameter is a string. However, if I don't check the runtime type, and throw if it is unexpected, then it is very easy for me to write this function in a way where it currently, technically, might work with some other datatype. Publish this, and some random user will likely notice this and start using that function with an unintended datatype.
Later on I make some change that relies on the parameter being a string (which is how I always imagined it), and publish, and boom, I broke a user of the software, and my intended bugfix or minor point release was really a semver breaking change, and I should have incremented the major version.
I'd bet big money that many JavaScript libraries that are not fanatical about runtime checking all parameters end up making accidental breaking changes like that, but with something like typescript, this simply won't happen, as passing parameters incompatible with my declared types, although technically possible, is obviously unsupported, and may break at any time.
I'd be forever ashamed if I wrote that.
Maybe there are some people like that, but for many, types simply are the best way to think about and structure code. They provide a sort of interactive feedback loop that actually makes the development process more pleasant. And these skeptics seem entirely incapable of conceiving that people pick types not because they were looking for some sort of organizational "solution" but because they genuinely prefer the mode of code-writing that type systems enable.
I do agree with how the article talks about time though. It seems as though in software there has been an implicit assumption that timing isn't a consideration, because the early principles of the craft were developed on single-threaded systems and we were all taught as late as the 90s that coding is just writing a list of instructions to be followed in order.
So many errors. The recommendations from electrical design are ones we all (should/do) actually follow. Type checking is in addition. We developed it because it helps.
So many errors. This one repeated many times...
> grep | sort | uniq, you’re not doing type checking at the transport layer.
Maybe not checking, but definitely assuming! Try that on a binary file, go on!
The obvious flaw in the article: we can have both static type checking AND better architecture. In fact I also agree a lot of software architecture is unnecessarily complex and we should do better. Simultaneously I want a rock solid type system to back up that good architecture at a lower level.
Next: programming a computer is inherently complex, we write a lot of code, and humans make mistakes. In my experience we make quite a lot of mistakes. Did you remember to initialise that variable? Assign the right data type? Free that memory allocation? Pick the right CPU register? One day you'll forget, so why not have a machine check it for us? Why handicap yourself with manual checks for something the machine can solve near instantly? Some people absolutely INSIST on dodging static checks and then are surprised when their software ends up a buggy mess and on the CVE list.
Next: types are abstractions for things we shouldn't need to worry about every time. Like another commenter implied, if you hate types then feel free to code in assembly, where you'll spend most of your day choosing registers and which MOV instruction to use. In most use cases this is a waste of time because the compiler will do a great job in a millisecond. And some things are just complicated and beyond comfortable human mental capacity - who wants to model a hash table every time they need a hash table? Lucky we can put it in a type and stop thinking about it. Computers are intricate and types help us manage the difficulty at a low-medium level of abstraction.
Next: loose typing will almost always reduce readability. For any sort of structured data it quickly becomes a headache to figure out what's happening without types. Types are precise, immutable, and enforced. "Stringly-typed" or "dict-typed" data sucks. Even if you architect your design well it's still clearer to say in code what you mean for your code. Maybe you can put a detailed code comment but at that point just use types. Interesting that the article mentioned UNIX shell coding because past the given example of sorting lines - which is about as trivial and uninteresting of an example as you can possibly get - shell coding is a pain exactly because everything is a string. What is this variable? What does this function/module return? Who knows! Shall we spend an hour researching and probing until we figure it out?
Lastly: I'm not an electronic engineer but I'm pretty sure they DO use tools to statically check their designs. Because signal timing is so important and it's hard to check manually. I doubt Intel's CPU has architectural simplicity and that's the sole reason it works correctly.
In summary, types make so much sense. Please use them to make your software more efficient, more reliable, and more maintainable.
Type checking is simply built in testing for a subset of problems. At scale, the only thing you gain from dynamic typing is the need to write even more tests than usual.
Still types are the best we have.
"You don't need x, real programmers use y!"
- the explicit interfaces for modules used in hardware design _are_ types.
- hardware design requires extensive testing and verification, and formal verification (using types) is a valuable step in hardware design.
I think the author conflates the use of types with structural complexity - at least at the introduction of his article, hence the term "symptom" used to qualify type checking in the title (actually _lengthy_ type checking, this is implied, but without context the title of the article becomes clickbait-y).
Indeed, by the end, the author admits that types are useful, without conceding that the initial idea was an error of judgment.
I treat them as I would unit testing: a useful tool to catch bugs early and if you’re skipping it, it should be an intentional decision where you’ve accepted the trade offs.
Maybe I'm missing something, but those simple well defined interfaces are the types?
I've worked with large javascript codebases before typescript came on the scene, and functions with a single params object were rife, which, combined with casual mutation plus nested async callbacks, made for a mess where it was very difficult to get confidence about what was actually being passed to a given function (presumably easier if you'd written it instead of joining later)
A type system so that you can define those interfaces (and check they're adhered to) feels pretty essential to me.
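A rough Rust analogue of the contrast (hypothetical `ConnectOptions`, nothing from the codebases mentioned): the params-object style versus a declared interface.

```rust
use std::collections::HashMap;

// The "single params object" style: anything can be passed, nothing is checked.
fn connect_untyped(params: &HashMap<String, String>) {
    let host = params.get("host").cloned().unwrap_or_default();
    let port: u16 = params
        .get("port")
        .and_then(|p| p.parse().ok())
        .unwrap_or(5432);
    println!("connecting to {host}:{port}");
}

// The typed version: the interface says exactly what callers must pass.
struct ConnectOptions {
    host: String,
    port: u16,
    use_tls: bool,
}

fn connect_typed(opts: &ConnectOptions) {
    println!("connecting to {}:{} (tls: {})", opts.host, opts.port, opts.use_tls);
}

fn main() {
    let mut params = HashMap::new();
    params.insert("host".to_string(), "db.local".to_string());
    connect_untyped(&params); // silently falls back on the missing port

    connect_typed(&ConnectOptions {
        host: "db.local".to_string(),
        port: 5432,
        use_tls: true,
    });
}
```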
There's not really a JavaScript program so small that I wouldn't use TypeScript. Okay, typing a few lines into the Chrome DevTools console is an exception, if you count that.
Which is exactly what you find a ton of in electrical engineering (e.g. IEEE C37.2 and gazillion more).
> The standard answer is scale. “Small programs don’t need types,” the reasoning goes, “but large programs become unmaintainable without them.” This sounds reasonable until you realize what we’re actually admitting: that we’ve designed our systems to be inherently incomprehensible to human reasoning. We’ve created architectures so tangled, so dependent on invisible connections and implicit behaviors, that we need automated tools just to verify that our programs won’t crash in obvious ways.
>
> Type checking, in other words, is not a solution to complexity—it’s a confession that we’ve created unnecessary complexity in the first place.
Woah woah, where did unnecessary come from? Yes we have made super complex systems that need automated checking. But they are (mostly) necessarily complex. Are you going to write a 10k-line OS kernel? A 10k-line web browser? A 10k-line word processor? Yeah, good luck.
> electronics engineers routinely design systems with millions of components, intricate timing relationships, and complex interactions between subsystems. Yet they don’t rely on anything analogous to our type checkers.
I dunno if he's talking about PCBs... but actually having types for wires is a pretty great idea. Seems like Altium kind of has support for this but I don't know how much checking it does: https://www.altium.com/documentation/altium-designer/sch-pcb...
Anyway, aside from the fact that PCB design is a completely different thing, PCB designers accidentally make wrong connections all the time.
> The problem isn’t that software is inherently more complex than hardware.
It is though. Way more complex. Find me a PCB as complex as Chrome or LLVM. Hell, find me a CPU as complex. Even the latest OoO designs don't come close.
> UNIX pipelines
Ha I knew he was going to suggest this.
> yet they require no type checking at the transport layer
"require" sure, I guess. Now go and look up how many quoting options there are in `ls`. And nobody has ever realized that structured piping is better and have written two popular shells (at least) that support it.
Ugh so many misguided words remain, I'll stop there.
"Type checking is an antipattern" because electrical engineers designing CPUs have nothing like type checking to rely upon
These two statements alone seem, to be charitable, wrong. But together, I have a sneaky feeling that a running CPU is at least as cognitively overwhelming as a modern software application. Which would mean cognitively overwhelming systems are not an antipattern. Or that CPU designers are also engaging in an active antipattern. So these arguments would appear to contradict each other in any case.
Overall a post like this that makes huge grandiose claims and ends with basically nothing to show at the end rings pretty hollow. Yes complexity is inconvenient but handwaving "there should be systems that are simple ! easy to understand!" yeah thanks! show us one? Oh, the mainboard of an iPhone, that's easy to understand? It literally requires software to figure out how to lay out all the tiny components on both sides like that. It is...not simple at all.
Types are an attempt to prevent certain bugs from reappearing. They are not perfect, of course. The general problem is that we do not know how to design programs yet.
So, typing.
Yes??? I personally cannot understand the complexity of a single compiler, let alone all the libraries and systems that might contribute to a program, but maybe I'm just not trying hard enough.
> you can focus on the explicit contracts between components
> you can design systems where problematic interactions are structurally impossible
In other words, exactly what a strong type system gives you?
My impression from doing my own amateurish hardware projects and knowing some real hardware engineers is that they would KILL for something like the type systems we have in software. In fact there are plenty of hardware description language projects that try to strengthen the type system of the HDL to catch errors early.
If you think a Rails test suite is bad, look at a Verilog test suite sometime.
As an EE: no, just no. This is like asserting that the average programmer routinely writes thousands of lines of C in a highly distributed environment. And like programmers, once you're at this scale of complexity, it's far too big to fit into a single human skull, and you absolutely must have an entire team of people.
> Yet they don’t rely on anything analogous to our type checkers.
We absolutely do. Every day. When we lay out a PCB, there are dozens and dozens of safe/sane checks happening in the background at all times.
Counterpoint: we have all kinds of situations where type checking would save us. Ask anyone who has implemented an I2C driver. We're twiddling bits and we have a huge pile of magic numbers from the vendor. There is absolutely no way to verify that your driver isn't going to set the chip on fire other than to try it. It takes hours and days to validate.
When we write to CPU registers, there is absolutely no way to check in advance if you've set the bit you want, or if you've set a bit that permanently burns some efuses and bricks the chip.
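For the register case, a hedged sketch of the typed-register style in Rust (a hypothetical `clock` peripheral, loosely in the spirit of what svd2rust-style generated APIs do, not any real chip or crate):

```rust
// Hedged sketch (hypothetical register layout, not any real chip or crate):
// expose only the fields the datasheet defines, so the "bit that burns an
// efuse" can't even be expressed by driver code.
mod clock {
    #[derive(Clone, Copy)]
    pub enum Source {
        Internal = 0b00,
        External = 0b01,
        Pll = 0b10,
    }

    pub struct Control {
        raw: u32, // private: no way to poke arbitrary bits from outside
    }

    impl Control {
        pub fn new() -> Self {
            Control { raw: 0 }
        }

        // Only bits 0..=1 can be touched, and only with defined values.
        pub fn set_source(&mut self, src: Source) -> &mut Self {
            self.raw = (self.raw & !0b11) | src as u32;
            self
        }

        pub fn bits(&self) -> u32 {
            self.raw
        }
    }
}

fn main() {
    let mut reg = clock::Control::new();
    reg.set_source(clock::Source::Pll);
    println!("would write 0x{:08x}", reg.bits());
    // reg.raw = 0xffff_ffff; // rejected: field `raw` is private
}
```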
> They use different architectural principles—isolation, explicit interfaces, and time-aware design—that make their systems naturally more robust and understandable.
I'm gonna stop reading right here. Hardware systems are exactly as robust and reliable as software. Because hardware is software. There is almost nothing in the modern day that operates on straight digital/analog logic without any code. Everything is running software and the vast majority of it is written in C with Classes.
I surmise that the author has never met an EE or even seen a circuit board. They also have never written or seen a single line of C.
But really, C is the only argument you need. We invented typed languages because C is so godawful.
I think this is the same thing as vaccine denial. We've lived in a world without these problems for so long that certain people without critical thinking skills just don't believe the problem ever existed in the first place, so the remedies we've spent generations building are an undue and dangerous burden. This is a remarkably unintelligent and uninformed line of thinking that is sadly far, far too common.
That being said, clearly there is a ton of value in automatically verifying interfaces are implemented correctly. Even dynamic languages like Clojure have Spec. I think that's a good balance.
I'm sorry, but this is ridiculous. Yes, type checking is a tool for managing complexity (among other things). But to make the leap from there to "this complexity is unnecessary" has absolutely no basis. This is like saying EDA tools shouldn't have design rule checks because it means we're making our circuit boards too complex. These tools are there to enable you to be more efficient: to write code or lay down traces without having to manage all the constraints yourself.
And speaking of design rule checks:
> Consider this: electronics engineers routinely design systems with millions of components, intricate timing relationships, and complex interactions between subsystems. Yet they don't rely on anything analogous to our type checkers.
The author has clearly never done any professional electrical engineering because this could not be further from the truth. Every EDA package has a design rules system that includes a design rule checker. You cannot spin a board without passing these checks, and if you try to work without them, not only is the board you get from the assembly house not going to work, you're also going to get fired.
The suggestion in the concluding section that we should merely design every software component as a black box with explicit input and output ports is already how we build software. It's how we've been building software for decades. What do you think function arguments and return values are? What do you think APIs are? And how do engineers whose components interface with yours know how to interface with your components, if they don't have at least some type information? Even in a JavaScript application you need to know something about the layout of the objects that are expected and returned. That information is a type!
Quit trying to outsmart decades of real-world experience and accept help from the tools that make this job easier.
> We debate whether Rust’s borrow checker is better than Haskell’s type classes. We argue about whether TypeScript’s structural typing is superior to Java’s nominal typing.
Life is too short to argue about stupid programming language bullshit. If you want to use Rust, do it. If not, don't. The actual code that makes up the computer system as a whole is still 99.99% the same because you are running on an operating system written in C calling a database or other systems also written in C.
That's a great idea! I'd like to suggest that we make programming languages have this as a first class concept: as part of writing a module, you write down a static description of the explicit contract. You might even encode effects and error states into that description!
Then, you know that if you stick to that contract, nothing ripples through the dependencies, but if you change it, there's things you need to go update elsewhere.
You can even readily swap in different components that have that same external contract! And since it's static, we can check it before the program even runs.
"external contract" is kind of clunky. What if we called it a "type"?
It's not scale, it's correctness.
> Consider this: electronics engineers routinely design systems with millions of components, intricate timing relationships, and complex interactions between subsystems. Yet they don’t rely on anything analogous to our type checkers.
They do. Verilog gets its name from "verification". The electronics industry has built all sorts of verification and testing software and equipment to essentially do the same things as type-checkers; ensure that inputs yield the right outputs, and some sort of assurance this is true all the time.
> We’ve created systems so intricate and interconnected that human reasoning fails, then declared the tools that help us navigate this complexity to be essential.
That's just how complexity works. We've created cities so large we need maps to navigate them. We've created so much medical knowledge, we need specialists. This last one is a good analogy— our bodies are so complex we still don't fully understand them. That doesn't mean there's anything wrong with them.
> It’s like building a maze so convoluted that you need a GPS to walk through it, then concluding that GPSs are fundamental to architecture.
As similar as this sounds to the statement I made about cities above, this sentence is flawed in that GPSs are in no way fundamental to architecture. They're fundamental to quick navigation. You can live without a GPS, but you'll take more wrong turns if you're not familiar with a place.
CAD is fundamental to architecture, and helps both build cheaper and avoid mistakes.
> The missing piece is programming languages and development environments designed from the ground up for this isolated, message-passing world. Languages where time is a first-class concept, where components are naturally isolated, and where simple data formats enable universal composition. Tools that make it as easy to wire together distributed components as it is to pipe together UNIX commands.
This is a bad conclusion because anyone who was worked with microservices and distributed systems will know– they don't reduce complexity, they just try hide it.
We'd like to believe we can create a big system out of little systems— and that is true. That's how programming languages work, and the reason functions, modules, classes, etc exist. Logic can be encapsulated.
We'd also like to believe we can completely decouple these systems, so that when my team works in the "users" service, I can just ship code and not have to worry about any of the other teams' work. This is not true. If my system changes its output, it'll have downstream effects— whether it be in the same, type-checked program or a separate program consuming it. The only difference is that in the former case I realize my change will break something and I am told to mind the change as I write the code for it— in the latter, I don't. Independently deployable components come in two flavours— non-typechecked and error-prone due to version drift and unintended API changes, or typechecked, which is essentially a mono-system but with extra steps.
On static type checking in general my view is: programming is principally about dealing with invariants. The author is correct to point out that humans are bad at reasoning about more than 7 things at once. Those things we often reason about are invariants, and type systems help us ensure we never violate them. I think even 7 invariants is too many; in C it's common enough to mess up a single invariant (only access this memory while it's alive) that it has caused thousands of CVEs.
Ultimately, why would I not use a tool that helps me get the invariants right more of the time, thus reducing cognitive load?
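A tiny illustration of that specific invariant in Rust (a sketch, not from the comment): the "only while it's alive" rule is checked before the program ever runs.

```rust
// Sketch: the "only access this memory while it's alive" invariant, checked
// at compile time rather than discovered as a CVE.
fn main() {
    let data = vec![1, 2, 3];
    let first = &data[0]; // borrow: `data` must stay alive while `first` is used
    println!("{first}");
    drop(data); // fine: the borrow ended on the line above

    // The variant below is what C happily compiles and Rust rejects:
    // let r;
    // {
    //     let local = vec![1, 2, 3];
    //     r = &local;
    // } // error[E0597]: `local` does not live long enough
    // println!("{:?}", r);
}
```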
I distinctly remember back in 2017 or so I was using TensorFlow to train a CNN.
At the time Python really hadn't gotten its shit together with regards to typing.
Google's TensorFlow documentation was heavily filled with completely useless high-level nonsense like this.
Or something like this. Useful maybe if you wanna put "wrote a machine learning classifier" on your resume or something. But not useful. But the biggest pain of all was that I had no clue what I was supposed to put into any of these functions. The documentation was the function name and argument names. I ended up just opening Python in interactive mode, calling type on various intermediates while running through examples, and writing it down.
When you want others to actually be able to use your code you NEED to specify what your inputs actually are in a clear and unambiguous way. This is fundamentally why typing is vital and useful even in small codebases. Argument names (or, even worse, just *args, **kwargs) just don't work.
This is why if you open up any half decent Python library now they actually tell you what you need to provide and what to expect out. Without it you cannot reason about what to do.
No, they can't - they can make assumptions and assertions, but not agreements, because they have no way to communicate such agreements between each other.
Reasoning has nothing to do with it. If I have a `scale(x, y, z)` function in my code I can easily remember how to call it.
But with a large codebase, if my fuzzy memory tells me I should call it as `scale([x, y, z])` I don't want to find out I'm wrong at runtime.
Unit tests are good, but there's a tendency for dynamic languages to overuse them exactly because they don't have basic type safety.
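A minimal sketch of that exact situation in Rust (hypothetical `scale`, following the comment's example): the wrong call shape fails to compile instead of surfacing at runtime.

```rust
// Sketch: the signature is the memory aid, and a call with the wrong shape
// fails at compile time, not at runtime.
fn scale(x: f64, y: f64, z: f64) -> (f64, f64, f64) {
    (x * 2.0, y * 2.0, z * 2.0)
}

fn main() {
    let scaled = scale(1.0, 2.0, 3.0);
    println!("{:?}", scaled);

    // scale([1.0, 2.0, 3.0]); // rejected: expected 3 arguments, found 1
}
```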