State-Based vs Signal-Based Rendering
Posted 2 months ago · Active 2 months ago
jovidecroock.com · Tech · story
Tone: calm, mixed · Debate: 70/100
Key topics
State Management
Signals
React
Frontend Development
The article compares state-based and signal-based rendering approaches, sparking a discussion on the merits and drawbacks of each, with some commenters sharing their experiences and opinions on the topic.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment after 2h · Peak period: 27 comments in the 3-6h window · Avg per period: 8.9
Comment distribution: 62 data points (based on 62 loaded comments)
Key moments
1. Story posted: Oct 20, 2025 at 5:36 AM EDT (2 months ago)
2. First comment: Oct 20, 2025 at 8:02 AM EDT (2h after posting)
3. Peak activity: 27 comments in the 3-6h window, the hottest stretch of the conversation
4. Latest activity: Oct 21, 2025 at 9:07 PM EDT (2 months ago)
ID: 45641892 · Type: story · Last synced: 11/20/2025, 3:10:53 PM
Compare: "import a specific lightweight library and wire together as needed" vs "write the whole app in terms of a bloated framework".
I've been out of the frontend game for a while, but what does React give you that Knockout and maybe some URL management logic do not?
I guess components are supposed to standardize modularity, so you can easily import some random widget?
- auto rerender by comparing trees?
- track all changes by signals?
e.g:
Where signals offer an important benefit is in localizing re-rendering. If you use context with regular, non-signal values the entire VDOM tree has to be re-rendered when the context value changes (because there's no way to know what code depends on its value). With signals you can change the value of a signal in the context without changing the context itself, meaning that the only part of the VDOM tree that gets re-rendered is the one using the signal.
With performance considerations out of the way context becomes a really interesting way to provide component composition without having to specify the same props over and over as you make your way down a chain.
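To make the localization concrete, here is a minimal sketch assuming Preact with @preact/signals (the component and signal names are invented for illustration): the context carries the signal object itself, which never changes, so only the component that reads .value re-renders.

    import { createContext } from "preact";
    import { useContext } from "preact/hooks";
    import { signal } from "@preact/signals";

    const CountContext = createContext(null);
    const count = signal(0); // the signal object is stable; only .value changes

    function App() {
      return (
        <CountContext.Provider value={count}>
          <Toolbar />
        </CountContext.Provider>
      );
    }

    function Toolbar() {
      // Never reads count.value, so it does not re-render when the count changes.
      return <CountBadge />;
    }

    function CountBadge() {
      const count = useContext(CountContext);
      // Reading .value subscribes this component; it is the only part that updates.
      return <button onClick={() => count.value++}>{count.value}</button>;
    }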
That consideration is orthogonal to what the useful data is. It could be a signal, or not. In other words, signals are not an alternative to context.
Traditional? I remember when React was the new kid on the block. I am getting old! :-D
Also, it seems that with signals you must use immutable values only. Imagine if you have, let's say, a text document model, and you need to create a new copy every time the user types a letter. That's not going to work fast. And there will be no granular updates, because the signal only tracks the value (whole document), not its components (a single paragraph).
Also, the article mentions the rarely used Preact and doesn't mention Vue. Vue can track mutable object graphs (for example, a text document model). But Vue uses JS proxies, which have a lot of issues of their own (you cannot access private fields, you have to deal with mixing proxies and real values when adding them to a set, and browser APIs break when a proxy is passed).
Also, I don't like that React requires installing Node and compilation tools; this is a waste of time when making a quick prototype. Vue can be used without Node.
Also, for nested data structures you need to either do path copying, or use "modification boxes" [1].
[1] https://en.wikipedia.org/wiki/Persistent_data_structure#Tech...
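To illustrate the path-copying concern with a tiny made-up document model (the data shape is purely illustrative): because a signal only tracks replacement of its value, editing one nested paragraph means copying every object along the path to it.

    import { signal } from "@preact/signals-core";

    const doc = signal({
      meta: { title: "Notes" },
      paragraphs: [{ text: "Hello" }, { text: "World" }],
    });

    // Edit paragraph 1: every object on the path (doc -> paragraphs -> [1])
    // gets copied; untouched branches (meta, paragraphs[0]) are shared.
    doc.value = {
      ...doc.value,
      paragraphs: doc.value.paragraphs.map((p, i) =>
        i === 1 ? { ...p, text: "World!" } : p
      ),
    };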
Too small. Imagine if you have a 2GB mutable file. Each keystroke in the middle of the file has to move the whole 2nd gigabyte backward.
Funnily enough, back when storage was slow enough that saving a text document involved a progress bar, one of the big advantages Word had over competitors was lightning fast saves, which they accomplished by switching to an immutable data structure. That is, while others would update and save the data, Word would leave it untouched and just append a patch, skipping lots of time spent rewriting things that an in-place edit would shift.
The “copy everything” mental model of immutable programming is really about as wrong as a “rewrite everything” mental model of mutable programming. If it happens it’s bad code or a degenerate case, not the way it’s supposed to happen or usually happens. Correctly anticipating performance requires getting into a lot more detail.
By default, useState causes unnecessary re-renders, which signals avoid (all automatically).
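A minimal sketch of the claim, assuming Preact with @preact/signals (component names invented for illustration): the useState version re-runs the whole component on every update, while the signal version lets Preact update just the bound text.

    import { useState } from "preact/hooks";
    import { useSignal } from "@preact/signals";

    function CounterWithState() {
      const [count, setCount] = useState(0);
      // setCount re-renders this whole component (and its children) on every click.
      return <button onClick={() => setCount(count + 1)}>{count}</button>;
    }

    function CounterWithSignal() {
      const count = useSignal(0);
      // Passing the signal itself into JSX binds it to a text node, so clicks
      // update only that text node without re-rendering the component.
      return <button onClick={() => count.value++}>{count}</button>;
    }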
If you ignore the performance aspect, React objectively has the least amount of boilerplate for reactivity.
The question, that I genuinely don't know the answer to, is a) whether the performance improvement is worth it, and b) whether that's still the case after the compiler.
I find it pointless to argue about old versions of Svelte and Vue, when their drawbacks have already been addressed by their own creators [^1].
[^1]: https://svelte.dev/blog/svelte-5-is-alive
<div id="myoutput">Enter a name</div>
<input type="text" onchange="document.getElementById('myoutput').innerText = `Hello ${this.value}!`" />
One upside of this approach is that the only subtree that needs to be re-rendered is the specific element whose state got mutated.
Another upside of this approach is that the code doing the mutation is very close to the actual UI element that triggered it. Of course, this rapidly turns into a downside as the size of the codebase grows...
It isn't, given that React requires you to provide an array of dependencies when using useMemo and useEffect. The point of React is to automatically update the DOM when a virtual DOM tree gets modified.
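For readers who have not touched React in a while, this is the manual bookkeeping being referred to (the component is a made-up example): useMemo and useEffect each take a dependency array that you maintain by hand, and omitting a dependency silently yields stale values.

    import { useEffect, useMemo } from "react";

    function Cart({ price, qty }) {
      // The [price, qty] array must list everything the callback reads.
      const total = useMemo(() => price * qty, [price, qty]);

      useEffect(() => {
        document.title = `Total: ${total}`;
      }, [total]); // forget to list `total` here and the title goes stale

      return <span>{total}</span>;
    }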
Oh boy. The youth of the author is really visible.
I'm sure some 80 year olds before us thought the same of us using that term.
I don't think the differences are that significant; JS signals are basically `latch(frp-event-stream)`, e.g. FRP events yield edge-triggered systems and JS signals yield level-triggered systems, and latches transform edge-triggered into level-triggered.
I understand why people can see JS signals as FRP behaviours though, as both have defined values at all times t, but the evaluation model is more like FRP events (push-based reactivity), so I think edge vs. level triggered is the real difference, and these are interconvertible without loss of information.
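A rough sketch of that framing in plain JavaScript (eventStream and latch are made-up helpers, not a real library): the stream only notifies on occurrences (edges), while latching it yields something that can always be read (a level), which is roughly what a JS signal gives you.

    function eventStream() {
      const listeners = [];
      return {
        subscribe: (fn) => listeners.push(fn),         // edge-triggered: fires per event
        emit: (x) => listeners.forEach((fn) => fn(x)),
      };
    }

    function latch(stream, initial) {
      let current = initial;                           // level-triggered: always readable
      stream.subscribe((x) => { current = x; });
      return { get value() { return current; } };
    }

    const keypresses = eventStream();
    const lastKey = latch(keypresses, null);
    keypresses.emit("a");
    console.log(lastKey.value); // "a"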
IIRC, the FRP literature calls both of them "signals" as a general category, just two different types.
"Traditional" hooks are 6 years old. I think it's to early to call it traditional. Given that literally everyone else looked at this "tradition" and chose differently. Namely, signals.
Signals were popularized by SolidJS, but SolidJS's Ryan Carniato will keep telling you that what everyone calls signals now has its roots in libs like KnockoutJS from 2010. And everyone has been busy using signals for the past three years.
Given the number of frameworks that implement signals today (including monsters like Angular), it's React that's not following tradition.
I guess the bigger point is that we could offer the same charity to the OP that I'm lending you here in your use of the wrong "to".
There are meaningful critiques elsewhere in the comments about the piece with some semblance of charitable interpretation. We've elected instead to manufacture a snide, pedestrian haughtiness about wording.
It's a collaboration between multiple library maintainers, attempting to standardize and unify their shared change-tracking approach.
I wonder whether, some years down the road, there will be no need for JS frameworks, since a lot of what they offer will be integrated into JS itself.
https://github.com/proposal-signals/signal-polyfill
Like, I understand why people aren't going full 3d game simulation style for applications. But I don't understand why things are as divergent as they are?
Is this just my off perspective?
I have similar questions on asset management. But I think that one makes a bit more sense, though? Game studios often have people that are explicitly owners of some of the media. And that media is not, necessarily, directly annotated by the developer. Instead, the media will be consumed as part of the build step with an edit cycle that is removed from the developer's workflow.
That is, yes, I shouldn't have mentioned the "main" method. You don't even necessarily want to own the main loop, if you will. The logic that you run on every iteration of the loop, though, is something you will almost certainly see in most games. I don't think I've seen many (any?) applications where you could identify the main loop logic. It's typically spread out over god knows how many locations in the code.
See https://github.com/tc39/proposal-signals?tab=readme-ov-file#... for more on how signals differ. Mainly: no manual bookkeeping, the signal acts as a kind of handle, and it allows lazy/computed signals that reference other signals and do the change tracking themselves.
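A minimal sketch of that, using the signal-polyfill package linked earlier in the thread (the proposed API may still change before standardization): a computed signal records which signals it reads, so there is no dependency list to maintain, and it is re-evaluated lazily when read.

    import { Signal } from "signal-polyfill";

    const count = new Signal.State(0);
    const double = new Signal.Computed(() => count.get() * 2); // deps tracked by reading

    count.set(3);
    console.log(double.get()); // 6: recomputed on read, no manual bookkeeping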
I feel like at some point we are gonna complete the cycle and have MVVM/MVC binding engines in frontend dev.
There is barely any difference left between createContext+useSignal vs. C# DataContext+ObservableProperty.
One way I frame such issues in my mind is that it's about who has control (and how much) over what is rendered and when on the screen I carry in my pocket. To some extent it is now Signal when I use the app. One day it might be my State instead.
In the time that I've been using ClojureScript, the whole React community has switched to hooks. I find it so baffling: requiring your entire data structure to be contained within your component lifecycle? The thousands of calls to individual "useState" to track the same number of state items? The fact that every function is closing over historical state unless you manually tell it to change every time the variable does, and then this last bit in combination with non-linear deep equality comparisons?
I have recently been in the process of switching phrasing.app from reagent atoms to preact/signals for performance reasons (and as part of a longer-horizon move from React to Preact), and I have to say it's been fantastic. Maybe 50 lines of code to replicate reagent atoms with preact/signals, all the benefits, and much, much faster.
Very happy that there is a React-like library so devoted to first-class support of signals.
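Out of curiosity about what such a shim might look like, here is a rough guess (not the author's actual code from phrasing.app) at wrapping a Preact signal in a reagent-style atom API with deref, reset!, and swap! equivalents.

    import { signal } from "@preact/signals";

    function atom(initial) {
      const s = signal(initial);
      return {
        deref: () => s.value,                                    // like @my-atom
        reset: (next) => { s.value = next; },                    // like (reset! my-atom next)
        swap: (f, ...args) => { s.value = f(s.value, ...args); } // like (swap! my-atom f args)
      };
    }

    const counter = atom(0);
    counter.swap((n) => n + 1);
    console.log(counter.deref()); // 1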