Experiment: Making TypeScript Immutable-by-Default
Posted about 2 months ago · Active about 2 months ago
Source: evanhahn.com
Key topics
- TypeScript
- Immutability
- Programming Paradigms
The author experiments with making TypeScript immutable-by-default, sparking a discussion on the benefits and challenges of immutability in programming.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 36m after posting. Peak period: 25 comments in the 2-4h window. Average per period: 7.6.
Comment distribution: 91 data points (based on 91 loaded comments)
Key moments
1. Story posted: Nov 18, 2025 at 8:56 AM EST (about 2 months ago)
2. First comment: Nov 18, 2025 at 9:31 AM EST (36m after posting)
3. Peak activity: 25 comments in the 2-4h window, the hottest stretch of the conversation
4. Latest activity: Nov 19, 2025 at 1:52 PM EST (about 2 months ago)
ID: 45966018 · Type: story · Last synced: 11/20/2025, 6:42:50 PM
I think you want to use a TypeScript compiler extension / ts-patch
This is a bit difficult as it's not very well documented, but take a look at the examples in https://github.com/nonara/ts-patch
Essentially, you add a preprocessing stage to the compiler that can either enforce rules or alter the code.
It could quietly transform all object-like types into having read-only semantics. Any mutation would then error out, with a message saying you attempted to violate a read-only property.
You would need to decide what to do about Proxies, though. Maybe you just tolerate them as an escape hatch (like eval or calling plain JS).
Could be a fun project!
Property 'thing' does not exist on type 'Readonly<{ immutable: true; }>'.ts(2339)
So it would be a simple way to achieve it.
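The error above can be reproduced with the plain `Readonly<>` utility type and no compiler patches; a minimal sketch (variable names are illustrative):

```typescript
// Readonly<> rejects both unknown properties and assignment at compile time.
const obj: Readonly<{ immutable: true }> = { immutable: true };

// obj.thing = 1;         // error ts(2339): Property 'thing' does not exist
// obj.immutable = false; // error ts(2540): Cannot assign to 'immutable'
//                        // because it is a read-only property

// Note this is purely type-level; at runtime the object is still mutable
// unless you also Object.freeze it.
const frozen = Object.freeze({ immutable: true });
```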
e.g., `let i = 0; i++;`
They seem to be only worried about modifying objects, not reassignment of variables.
Of course, it doesn't help that the immutable modifier for Swift is `let`. But also, in Swift, if you assign a list via `let`, the list is also immutable.
1. This isn't an effort to make all variables `const`. It's an effort to make all objects immutable. You can still reassign any variable, just not mutate objects on the heap (by default)
2. Recursion still works ;)
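The distinction in point 1 is visible directly in TypeScript; a small sketch of reassignment versus mutation:

```typescript
// `const` only prevents reassignment; the array object stays mutable:
const arr = [1, 2];
arr.push(3);              // fine: mutation, not reassignment
// arr = [];              // compile error: cannot reassign a const

// A readonly array type prevents mutation at the type level instead:
const ro: readonly number[] = [1, 2];
// ro.push(3);            // compile error: push does not exist on readonly

// And with immutable-by-default objects, plain reassignment still works:
let i = 0;
i++;                      // fine
```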
It may be beside the point. In my experience, the best developers in corporate environments care about things like this but for the masses it’s mutable code and global state all the way down. Delivering features quickly with poor practices is often easier to reward than late but robust projects.
Would your 'readonly' annotation dictate that at compile time?
e.g.

class Test {
}

We may be going off topic though. As I understand it, objects in TypeScript/JS are explicitly mutable, as expected by the interpreter. But I will try and play with it.
The point does stand though: outside of modifying properties, I'm not sure what a "private" class itself achieves.
`in` already implies the reference cannot be mutated, which is the bit that actually passes to the function. (Also the only reason you would need `in` and not just a normal function parameter for a class.) If you want to assert the function is given only a `record` there's no type constraint for that today, but you'd mostly only need such a type constraint if you are doing Reflection and Reflection would already tell you there are no public setters on any `record` you pass it.
It's a bummer Haxe did not promote itself more for the web, as it's an amazingly good piece of tech. The language shows its age, but has an awesome type system and metaprogramming capabilities.
That said, Haxe 5 is on the horizon.
And yes, I know that is what made it popular.
Immutability helps with:
- State management
- Concurrency
- Testing
- Reasoning about code flow

Not a panacea, but calling it "overrated" usually means "I haven't felt its benefits yet" or "I'm optimizing for the wrong thing".
Also, experiencing immutability's benefits in a mutable-first language can feel like 'meh'. In immutable-first languages (Clojure, Haskell, Elixir), immutability feels like a superpower. In JavaScript, it feels like a chore.
No runtime cost in production is the goal
Clojure's persistent data structures are extremely fast and memory-efficient. Technically it's not complete zero overhead, but pragmatically the overhead is extremely tiny, and performance usually is not the bottleneck: typically you're I/O-bound or algorithm-bound, not immutability-bound. When it truly matters, you can always drop down to mutable host-language structures. Clojure is a "hosted" language; it sits atop your language stack (JVM/JS/Dart), so it all depends on the runtime. When in Java land, JVM optimizations feel like black magic: there's JIT, escape analysis (it proves objects don't escape and stack-allocates them), dead code elimination, etc. For something like 95% of use cases, using an immutable-first language like Clojure is almost never a performance problem.
Haskell is even faster because it's pure by default, so the compiler optimizes aggressively.
Elixir is a bit of a different story - it might be slower than Clojure for CPU-bound work, but only because BEAM focuses on consistent (not peak) performance.
Pragmatically, for tasks that are CPU-bound where the requirement is "absolute zero-cost immutability", Rust is a great choice today. However, the trade-off is that the development cycle is dramatically slower in Rust compared to Clojure. The REPL-driven nature of Clojure lets you prototype and build very fast.
From many utilitarian angles, Clojure is an enormously practical language; I highly recommend getting some familiarity with it, even if it feels very niche today. I think it was Stu Halloway who said something like: "when Python was the same age as Clojure, it was also a niche language".
I think immutability is good, and should be highly rated. Just not as highly rated as it is. I like immutable structures and use them frequently. However, I sometimes think the best solution is one that involves a mutable data structure, which is heresy in some circles. That's what I mean by over-rated.
Anyway, regardless of the capabilities of the language, some things work better with mutable structures. Consider a histogram function. It takes a sequence of elements, and returns tuples of (element, count). I'm not aware of an immutable algorithm that can do that in O(n) like the trivial algorithm using a key-value map.
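The "trivial algorithm using a key-value map" the commenter describes looks like this in TypeScript; a sketch with a mutable Map:

```typescript
// O(n) histogram: one pass over the input, one counter bump per element.
function histogram<T>(xs: Iterable<T>): Map<T, number> {
  const counts = new Map<T, number>();
  for (const x of xs) {
    counts.set(x, (counts.get(x) ?? 0) + 1);
  }
  return counts;
}
```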
You can absolutely do this efficiently with immutable structures in Clojure, something like
This is O(n) and uses immutable maps. The key insight: immutability in Clojure doesn't mean inefficiency. Each `update` returns a new map, but:
1. Persistent data structures share structure under the hood; they don't copy everything
2. The algorithmic complexity is the same as mutable approaches
3. You get thread-safety and easier reasoning as a bonus
In JS/TS, you'd need a mutable object - JS makes mutability efficient, so immutability feels awkward.
But Clojure's immutable structures are designed for this shit - they're not slow copies, they're efficient data structures optimized for functional programming.
You are still doing a gazillion allocations compared to:
But apart from that, the mutable code in many cases is just much clearer compared to something like your fold above. Sometimes it's genuinely easier to assemble a data structure "as you go" instead of from the "bottom up" as in FP.

I felt that way in the latest versions of Scheme, even. It's bolted on. In contrast, in Clojure, it's extremely fundamental and baked in from the start.
This is shockingly common and most developers will never ever hear of Clojure, Haskell or Elixir.
I really feel there is like two completely different developer worlds. One where these things are discussed and the one I am in where I am hoping that I don't have to make a teams call to tell a guy "please can you make sure you actually run the code before making a PR" because my superiors won't can him.
[1] https://clojure.org/reference/transients
Default immutability in Clojure is a pretty big deal. Rich Hickey spent around two years designing the language around its persistent data structures. They are not superficial runtime restrictions but an essential part of the language's data model.
I tried Immutable.js back in the day and hated it like any bolted-on solution.
Especially before TypeScript, what happened is that you'd accidentally assign foo.bar = 42 when you should have called foo.set('bar', 42), causing annoying bugs since it didn't update anything. You could never just use normal JS operations.
Really more trouble than it was worth.
And my issue with Clojure after using it five years is the immense amount of work it took to understand code without static typing. I remember following code with pencil and paper to figure out wtf was happening.
It appears a small proposal along these lines has appeared in the wake of that, called Composites[0]. It's a less ambitious version, certainly.
[0]: https://github.com/tc39/proposal-composites
It's too late and you can't dismiss it as "been tried and didn't get traction".
As we all know, TypeScript is a superset of JavaScript, so at the end of the day your code is running in V8, JavaScriptCore, or SpiderMonkey, depending on the end user's browser, as an interpreted language. It is also a loosely typed language with zero concept of immutability at the native runtime level.
And immutability in JavaScript, without native support that we could hopefully see in some hypothetical future version of EcmaScript, has the potential to impact runtime performance.
I work for a SaaS company that makes a B2B web application that has over 4 million lines of TypeScript code. It shouldn't surprise anyone to learn that we are pushing the browser to its limits and are learning a lot about scalability. One of my team-mates is a performance engineer who has code checked into Chrome and will often show us what our JavaScript code is doing in the V8 source code.
One expensive operation in JavaScript is cloning objects, which includes arrays. If you do that a lot (if, say, you're using something like Redux or NgRx, where immutability is a design goal, so you're cloning your application's runtime state object with every single state change), you are extremely de-optimized for performance, depending on how much state you are holding onto.
And, for better or worse, there is a push towards making web applications as stateful as native desktop applications. Gone are the days where your servers can own your state and your clients can just be "dumb" presentation and views. Businesses want full "offline mode." The relationship is shifting to one where your backends are becoming leaner, in some cases being reduced to storage engines, while the bulk of your application's implementation happens in the client. Not because we engineers want it, but because the business goals necessitate it.
Then consider the spread operator, and how much you might see it in TypeScript code:

// clones the original object via spread
const foo = {
  ...original,
};

// same thing, clones the original array in order to push a single item,
// because "immutability is good, because I was told it is"
const foo = [...array, newItem];
And then consider all of the "immutable" Array functions like .reduce(), .map(), and .filter().
They're nice, syntactically ... I love them from a code maintenance and readability point of view. But I'm coming across "intermediate" web developers who don't know how to write a classic for-loop and will make an O(N) operation into an O(N^3) because they're chaining these together with no consideration for the performance impact.
And of course you can write performant code or non-performant code in any language. And I am the first to preach that you should write clean, easy to maintain code and then profile to discover your bottlenecks and optimize accordingly. But that doesn't change the fact that JavaScript has no native immutability and the way to write immutable JavaScript will put you in a position where performance is going to be worse overall because the tools you are forced to reach for, as matter of course, are themselves inherently de-optimized.
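One common shape of that accidental blow-up, sketched below (the dedup example is illustrative, not from the article): each pass looks innocent, but nesting a linear scan inside a linear method makes the whole thing quadratic.

```typescript
const items = ["a", "b", "a", "c", "b"];

// Accidentally O(n^2): indexOf is itself an O(n) scan, run once per element.
const dedupSlow = items.filter((x, i) => items.indexOf(x) === i);

// O(n): a single pass with a Set doing O(1) membership checks.
const seen = new Set<string>();
const dedupFast: string[] = [];
for (const x of items) {
  if (!seen.has(x)) {
    seen.add(x);
    dedupFast.push(x);
  }
}
// both yield ["a", "b", "c"]
```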
Like @drob518 noted already - the only benefit of mutation is performance. That's all. That's the only, distinct, single, valid point for it. Everything else is nothing but problems. Mutable shared state is the root of many bugs, especially in concurrent programs.
"One of the most difficult elements of program design is reasoning about the possible states of complex objects. Reasoning about the state of immutable objects, on the other hand, is trivial." - Brian Goetz.
So, if immutable, persistent collections are so good, and the only problem is that they are slower, then we just need to make them faster, yes?
That's the only problem that needs to be solved in the runtime to gain countless benefits, almost for free, which you are acknowledging.
But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.
Yes, JS runs in a single-threaded environment for user code, but immutability still provides immense value: predictability, simpler debugging, time-travel debugging, react/framework optimizations.
Modern JS engines are optimized for short-lived objects, and creating new objects instead of mutating uses more memory only temporarily. The performance impact of immutability is absolutely negligible compared to so many other factors (large bundles, unoptimized images, excessive DOM manipulation).
You're blaming the wrong thing for overblown memory use. I don't know a single website that is bloated and slow only because the makers decided to use immutable data structures. In fact, you might have it exactly backwards: maybe web pages are getting slower and slower because we're now trying to put more logic in them, building more sophisticated programs into them, and the problem is exactly that we have reached the point where it is no longer simple to reason about them. Reasoning about code in an immutable-first PL is so much simpler, you probably have no idea, otherwise you wouldn't be saying "this is not the way".
The Purely Functional Data Structures book, which Clojure's data structures are based on, is from 1996.
This is how far back we're behind the times.
However, enforcing it is somewhat difficult, and there is still quite a bit lacking when working with plain objects or Maps/Sets.
I believe you want `=`, `push`, etc. to return a new object rather than just disallow it. Then you can make it efficient by using functional data structures.
https://www.cs.cmu.edu/~rwh/students/okasaki.pdf
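A minimal sketch of that idea in TypeScript (using a plain O(n) copy; a real persistent structure, as in the Okasaki work linked above, would share structure instead):

```typescript
// "push" that returns a new array instead of mutating the original.
function push<T>(xs: readonly T[], x: T): readonly T[] {
  return [...xs, x];
}

const a: readonly number[] = [1, 2];
const b = push(a, 3);
// a is untouched: [1, 2]; b is a new array: [1, 2, 3]
```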
Then I took a moment to tame my disappointment and realized that the author only wants immutability checking by the TypeScript compiler (delineated mutation), not to change the design of their programs. A fine choice in itself.
With persistent data structures - only the changed parts are new; unchanged parts are shared between versions; adding to a list might only create a few new nodes while reusing most of the structure; it's memory efficient, time efficient, multiple versions can coexist cheaply. And you get countless benefits - fearless concurrency, easier reasoning, elimination of whole class of bugs.
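That sharing can be sketched even in TypeScript with a hand-rolled persistent list (a toy illustration; Clojure's actual structures are trie-based, not linked lists):

```typescript
// A persistent singly linked list: prepending shares the entire tail.
type List<T> = { head: T; tail: List<T> } | null;

const cons = <T>(head: T, tail: List<T>) => ({ head, tail });

const base = cons(2, cons(3, null));
const extended = cons(1, base); // one new node; `base` is reused as-is

// extended and base literally share memory: extended.tail === base
```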
I say "read-write" or "writable" and "writability" for "mutable" and "mutability", and "read-only" and "read-only-ness" for "immutable" and "immutability". Typically, I make exceptions only when the language has multiple similar immutability-like concepts for which the precise terms are the only real option to avoid confusion.
Mutable is from Latin 'mutabilis' - (changeable), which derives from 'mutare' (to change)
You can't call them read-only/writable/etc. without confusing them with access permissions. 'Read-only' typically means something read-only to local scope, but the underlying object might still be mutable and changed elsewhere - like a const pointer in C++ or a read-only db view that prevents you from writing, but the underlying data can still be changed by others. In contrast, an immutable string (in java, c#) cannot be changed by anyone, ever.
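That distinction shows up in TypeScript itself; a sketch (names are illustrative):

```typescript
// "Read-only" restricts this reference, not the underlying object:
const nums: number[] = [1, 2, 3];
const view: readonly number[] = nums; // can't mutate through `view`...
nums.push(4);                         // ...but the data still changes under it

// Truly immutable (at runtime): nobody can change a frozen object, ever.
const frozen = Object.freeze([1, 2, 3]);
// frozen.push(4) would throw a TypeError in strict mode.
```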
Computer science is a branch of mathematics; you can't just use whatever words feel more comfortable to you. Names have implications; they are a form of theorem-stating. It's like not letting kids call multiplication "stick-piling". We don't do that for good reasons.
Generally immutability is also a programming style that comes with language constructs and efficient data structures.
Whereas 'read-only' (to me) is just a way of describing a variable or object.
https://gcanti.github.io/fp-ts/modules/
The most popular functional ecosystem, though, is effect-ts, but it does its best to _hide_ the functional part, in the same spirit as ZIO.
https://effect.website/
https://news.ycombinator.com/item?id=45771794
I made a little function to do deep copies but am still experimenting with it.
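For reference, modern runtimes (Node 17+, current browsers) ship a built-in deep copy that may cover much of what a hand-rolled function does (with known limits: it can't clone functions or preserve class prototypes):

```typescript
const original = { list: [1, 2], nested: { flag: true } };

// structuredClone performs a deep copy: mutating the copy leaves the
// original untouched.
const copy = structuredClone(original);
copy.list.push(3);
copy.nested.flag = false;
// original.list is still [1, 2]; original.nested.flag is still true
```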
37 more comments available on Hacker News