URLs Are State Containers
Posted 2 months ago · Active 2 months ago
alfy.blog · Tech story · High profile
calm · mixed
Debate: 70/100
Key topics
Web Development
State Management
URL Design
The article 'URLs are state containers' argues that URLs can be used to store application state, sparking a discussion on the pros and cons of this approach among developers.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h after posting
Peak period: 117 comments in 0-12h
Avg / period: 26.7
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
- 01. Story posted: Nov 2, 2025 at 6:12 AM EST (2 months ago)
- 02. First comment: Nov 2, 2025 at 7:29 AM EST (1h after posting)
- 03. Peak activity: 117 comments in 0-12h (hottest window of the conversation)
- 04. Latest activity: Nov 7, 2025 at 1:24 PM EST (2 months ago)
ID: 45789474 · Type: story · Last synced: 11/20/2025, 8:23:06 PM
If you want to argue against the use of URLs to represent state, I would concentrate on the “R” (resource) aspect.
Navigational state need not be confused with app state. Also, talking about "state" as in "state machine" etc. used to sound pretty academic, with an obscure meaning of the word "state". When someone says "state machine" they are basically saying "I'm a PhD and you are not". There are simpler and crisper ways to convey things than via obscurity.
A URL is considered a permanent string. You can break it, but that's a bad thing.
So keeping state in the URL will constrain you from evolving your system. That's a bad thing.
I think it's more appropriate to treat the URL like a protocol. You can encode some state parameters into it and decode the URL back into state on page load. You could probably even version it, if necessary.
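A minimal sketch of that "URL as protocol" idea, assuming state lives in query parameters with an explicit version field; all names and the example migration are illustrative:

```js
// Sketch: treat the URL as a versioned protocol for page state.
// A "v" parameter lets old URLs be migrated instead of silently breaking.
const STATE_VERSION = 2;

function encodeStateToUrl(state) {
  const params = new URLSearchParams({ v: String(STATE_VERSION), ...state });
  return `${location.pathname}?${params}`;
}

function decodeStateFromUrl(search = location.search) {
  const state = Object.fromEntries(new URLSearchParams(search));
  const version = Number(state.v ?? 1);
  delete state.v;
  // Hypothetical migration: v1 used "night" where v2 uses "dark".
  if (version < 2 && state.theme === 'night') state.theme = 'dark';
  return state;
}
```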
For very simple pages, storing entire state in the URL might work.
But sometimes it’s less obvious how to keep state encoded in a URL or otherwise (e.g., for the convenience of your users, do you want refreshing a feed to return the user to the marker point in the feed that they were viewing? Or do you want to return to the latest point in the feed, since users expect a refresh action to give them a fresh feed?).
I design my SSR apps so that as much state as possible lives in the server. I find the session cookie to be far more critical than the URL. I could build most of my apps to be URL agnostic if I really wanted to. The current state of the client (as the server sees it) can determine its logical location in the space of resources. The URL can be more of an optional thing for when we do need to pin down a specific resource for future reference.
Another advantage of not urlizing everything is that you can implement very complex features without a torturous taxonomy. "/workflow/18" is about as detailed as I'd like to get in the URL scheme of a complex back office banking product.
But something that can bite you with these solutions is that browsers allow you to duplicate tabs, so you also need some inter-tab mechanism (like the Broadcast Channel API or localStorage with polling) to resolve duplicate ids.
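A minimal sketch of one such mechanism, assuming the per-tab id lives in sessionStorage (which browsers copy when a tab is duplicated) and BroadcastChannel is available:

```js
// Sketch: resolve tab-id collisions caused by "Duplicate Tab".
// sessionStorage is copied into the duplicate, so both tabs start with the same id.
const channel = new BroadcastChannel('tab-ids');
let tabId = sessionStorage.getItem('tabId') ?? crypto.randomUUID();
sessionStorage.setItem('tabId', tabId);

channel.addEventListener('message', ({ data }) => {
  if (data.type === 'ping' && data.id === tabId) {
    // A newly opened duplicate starts up with our id: tell it the id is taken.
    channel.postMessage({ type: 'taken', id: tabId });
  } else if (data.type === 'taken' && data.id === tabId) {
    // Our id already belongs to an older tab: pick a fresh one.
    tabId = crypto.randomUUID();
    sessionStorage.setItem('tabId', tabId);
  }
});

// On startup, ask whether any existing tab already owns this id.
channel.postMessage({ type: 'ping', id: tabId });
```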
Basically, your approach is easier to code, and worse to use. Bookmarks, multiple tabs, the back button, sharing URLs with others, it all becomes harder for users to do with your design. I mean feel free, because with many tech stacks it is indeed easier, but don't pretend it's not a tradeoff. It's easier and worse.
It’s a losing battle when even the tools (web browsers hiding URLs by default, heck even Firefox on iOS does it now!) and companies (making posters with nothing more than QR codes or search terms) are what they’re up against….
Our company does phishing tests like most, and their checklist of suspicious behavior is 1 to 1 useless. Every item on the list is either 1: something that our company actually does with its real emails or 2: useless because outlook sucks a huge wang. So I basically never open emails and report almost everything I get. I’m sure the IT department enjoys the 80% false report rate.
Really we should be going to com.ycombinator.news/item?id=45789474 instead.
Plus it would make using autocomplete way harder, since I can write "news.y" and get already suggested this site, or "red" and get reddit. If you were to change that, you'd need to type _at least_ "com.yc" to maybe get HN, unless you create your own shortcuts.
Conveniently enough, my browser displays the URL omitting the protocol (assuming HTTPS), shows the host and port in black, and dims the path+query+fragment.
As far as autocomplete goes, what you're describing is a behavior of one particular implementation. If URLs looked differently, autocomplete would behave differently as well.
I'm also reminded of https://xkcd.com/1172/
I just used Pako.js, which accepts a `{ dictionary: string }` option. Concatenate a bunch of common URLs together, done.
The only downside (with both our approaches) is that if you later add many new fields / common values, you need to update the dictionary, and then old URLs don't work, so you'd need some sort of versioning scheme and use the right dictionary for the right version.
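For reference, a rough sketch of that approach; the dictionary contents and field names are purely illustrative:

```js
// Sketch: compress URL state with a shared preset dictionary using pako.
import pako from 'pako';

// Concatenate strings that commonly appear in the state; encoder and decoder
// must use the exact same dictionary (version it if it ever changes).
const dictionary = '"theme":"dark","lang":"en","plugins":["filters":{"sort":';

function stateToFragment(state) {
  const bytes = pako.deflate(JSON.stringify(state), { dictionary });
  // base64url so the result is safe inside a URL fragment or query value
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

function fragmentToState(fragment) {
  const b64 = fragment.replace(/-/g, '+').replace(/_/g, '/');
  const bytes = Uint8Array.from(atob(b64), c => c.charCodeAt(0));
  return JSON.parse(pako.inflate(bytes, { to: 'string', dictionary }));
}
```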
Probably because it sounds like the most poorly named breakfast cereal ever.
I think saying they are unrelated isn't correct either. In order for hypermedia to be the engine of application state, the continuations of your application must be reified as URLs, ie. they must be stateful. This state could be stored server-side or in the URL, it doesn't matter, as URLs are only meaningful to the server that generated and interprets them.
Browsers running JavaScript referenced from HTML are a perfect example of HATEOAS. Browser and web server creators agreed on the semantics of these two data formats, and now any browser in the world can talk to any web server in the world and display what was intended to be displayed to the user.
If the web design hadn't been HATEOAS, you'd need server specific code in your browser, like AOL had a long time ago, where your browser would know how to look up specific parts of the AOL site and display them. This is also how most client apps are developed, since both the client and the server are controlled by the same entity, and there is no problem in hardcoding URLs in the client.
I think of flight stick controllers.
From a machine client perspective, it's a different story. JSON-LD is more-or-less HATEOAS, and it works fine for ActivityPub. It's good when you want to talk to an endpoint that you know what data you want to get from it, but don't necessarily need to know the exact shape or URLs.
When you control both the server and client, HATEOAS is extra pain for little to no benefit, especially when it's implemented poorly (i.e. when the client still needs to know the exact shape of every endpoint anyway, and HATEOAS really just makes URLs opaque), and it interacts very badly with cases where you need to parse the URL anyway, to pull parts from it or add query parameters.
> No database. No cookies. No localStorage.
> Themes chosen. Languages selected. Plugins enabled.
Which have the pattern of rhetoric but no substance. Clearly the author put significant effort in, so why get an LLM to add noise?
First of all thank you for your words about the content.
I get why you might feel that way. English isn’t my first language, so I sometimes use GPT to help me polish phrasing or find a smoother rhythm for certain lines.
But the ideas, structure, and all the writing direction are mine. I don't ask it to write articles for me. It just helps me express things more clearly. I treat it more like an editor than a writer.
I genuinely don't understand why people don't get more upset over hitting refresh on a webpage and ending up in a significantly different place. It's mind-boggling and actually insulting as a user. Or grabbing a URL and sending to another person, only to find out it doesn't make sense.
Developing like this on small teams also tends, in my experience, to lead to better UX, because it makes you much more aware of how much state you're cramming into a view. I'll admit it makes development slower, but I'll take the hit most days.
I've seen some people in this thread comment on how having state in a URL is risky because it then becomes a sort of public API that limits you. While I agree this might be a problem in some scenarios, I think there are many others where that is not the case, as copied URLs tend to be short-lived (bookmarks and browser history are an exception), mostly used for refreshing a page (which will later be closed) or for sharing. In the remaining cases, you can always plug in some code to migrate from the old URL to the new URL when loading, which will actually solve the issue if you got there via browser history (it won't fix bookmarks, though).
Do you have advice on how to achieve this (for purely client-side stuff)?
- How do you represent the state? (a list of key=value pairs after the hash?)
- How do you make sure it stays in sync?
-- do you parse the hash part in JS to restore some stuff on page load and when the URL changes?
- How do you manage previous / next?
- How do you manage server-side stuff that can be updated client-side? (a checkbox that's checked by default and you uncheck it, for instance)
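A minimal sketch of one plausible answer to the first few questions, assuming state is kept as key=value pairs after the hash (render() is a hypothetical redraw hook, not anyone's actual code):

```js
// Sketch: keep simple client-side state as key=value pairs in the hash.
function readHashState() {
  return Object.fromEntries(new URLSearchParams(location.hash.slice(1)));
}

function writeHashState(state) {
  // replaceState avoids flooding history with every tweak; use pushState (or
  // assign location.hash) when a change should be its own back-button step.
  history.replaceState(null, '', '#' + new URLSearchParams(state));
}

// Restore on load and whenever the URL changes (back/forward, manual edits).
window.addEventListener('DOMContentLoaded', () => render(readHashState()));
window.addEventListener('hashchange', () => render(readHashState()));
```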
If you go there, that's the URL you get. However, if you do anything with the map, your URL changes to something like
https://radar.weather.gov/?settings=v1_eyJhZ2VuZGEiOnsiaWQiO...
If you take the base64-encoded string, strip off the leading "v1_" marker, and pad it out to a valid base64 string, you get
"eyJhZ2VuZGEiOnsiaWQiOm51bGwsImNlbnRlciI6Wy0xMTUuOTI1LDM2LjAwNl0sImxvY2F0aW9uIjpudWxsLCJ6b29tIjo2LjM1MzMzMzMzMzMzMzMzMzV9LCJhbmltYXRpbmciOmZhbHNlLCJiYXNlIjoic3RhbmRhcmQiLCJhcnRjYyI6ZmFsc2UsImNvdW50eSI6ZmFsc2UsImN3YSI6ZmFsc2UsInJmYyI6ZmFsc2UsInN0YXRlIjpmYWxzZSwibWVudSI6dHJ1ZSwic2hvcnRGdXNlZE9ubHkiOmZhbHNlLCJvcGFjaXR5Ijp7ImFsZXJ0cyI6MC44LCJsb2NhbCI6MC42LCJsb2NhbFN0YXRpb25zIjowLjgsIm5hdGlvbmFsIjowLjZ9fQ==", which decodes into:
{"agenda":{"id":null,"center":[-115.925,36.006],"location":null,"zoom":6.3533333333333335},"animating":false,"base":"standard","artcc":false,"county":false,"cwa":false,"rfc":false,"state":false,"menu":true,"shortFusedOnly":false,"opacity":{"alerts":0.8,"local":0.6,"localStations":0.8,"national":0.6}}
I only know this because I've spent a ton of time working with the NWS data - I'm founding a company that's working on bringing live local weather news to every community that needs it - https://www.lwnn.news/
I've almost entirely moved to Rust/WASM for browser logic, and I just use the serde crate to produce a compact representation of the record, but I've seen protobufs used as well.
Otherwise you end up with parsing monsters like ?actions[3].replay__timestamp[0]=0.444 vs {"actions": [,,,{"replay":{"timestamp":[0.444, 0.888]}}]}
https://example.com/some/path?foo=bar&baz=bat&foo=bar&baz=ba...
If the website or app has a good UX for displaying/sharing URLs, the length doesn't really matter.
Both approaches (appending/rewriting) have their uses, the tricky part is using the right thing for the right action, fuck up either and the experience is abysmal.
Then a developer gets the task to create this, and they too don't push back on what exact URIs are being used, nor how the history is being treated. Either they don't have time, don't have the power to send tasks back to product, simply don't care, or just don't think of it. They happily carry on creating whatever URIs make sense to them.
No one is responsible for URLs, no one considers that part of UX and design, so no one ends up thinking about it, people implement things as they feel is right, without having a full overview over how things are supposed to fit together.
Anyway, that's just based on my experience; I'm sure there are other holes in the process that also exacerbate the issue.
That said, I've also worked with some developers that didn't like me intruding on their turf, so to speak. Though I've also worked with others that were more than happy to collaborate and were very proactive about these sorts of things.
Furthermore, as a UX designer this is the sort of topic that we're unlikely to be able to meaningfully discuss with PMs and other stakeholders as it's completely non-visual and often trying to bring this up with them and discuss it ends up feeling like pulling teeth and them wondering why we're even spending time on it. So usually it just ended up being a discussion between me and the developers with no PM oversight.
I've had people be surprised by the request because it's something they don't usually consider, but I've never had anyone actually push back on it.
Interacting with the URL from JS within the page load cycle is inherently complex.
For what it's worth, I'd also argue that the right behavior here is to replace.
But that of course also means that now the URL on the history stack for this particular view will always have the filter in it (as opposed to an initial visit without having touched anything).
Of course the author's case is the good/special one where they already visited the site with a filter in the URL.
But when you might be interested in using the view/page with multiple queries/filters/parameters, it might also be unexpected: for example, developers not having a dedicated search results page and instead updating the query parameters of the current URL.
Also, from the history APIs perspective, path and query parameters are interchangeable as long as the origin matches, but user expectations (and server behavior) might assign them different roles.
Still, we're commenting on a site where the main view parameter (item ID, including submission pages) is a query parameter. So this distinction is pretty arbitrary.
And the most extreme case of misusing pushState (instead of replaceState) is sites where each keystroke in some typeahead filter creates a new history entry.
All of this doesn't even touch the basic requirement that is most important and addressed in the article: being able to refresh the page without losing state and being able to bookmark things.
Manually implementing stuff like this on top of basic routing functionality (which should use pushState) in an SPA gets complex very quickly.
I would have one state for when the user first entered the page, and then the first time they modify a filter, add a 2nd state. From thereon, keep updating/replacing that state.
This way if the user clicks into the page, and modifies a dozen things they can
1. Refresh and keep all their filters, or share with a friend
2. Press back to basically clear all their filters (get back to the initial state of the page)
3. Only one more press of back to get back to wherever they came from
Unless of course, you initially visited the page with a stateful URL.
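A rough sketch of the pattern described above, assuming the filters are serialized into the query string; renderFilters is a hypothetical redraw hook:

```js
// Sketch: push one history entry on the first filter change, then keep
// replacing it, so Back clears all filters in a single step.
let pushedFilterEntry = false;

function onFiltersChanged(filters) {
  const url = new URL(location.href);
  url.search = new URLSearchParams(filters).toString();

  if (!pushedFilterEntry) {
    history.pushState({ filters }, '', url);       // first change: new entry
    pushedFilterEntry = true;
  } else {
    history.replaceState({ filters }, '', url);    // later changes: same entry
  }
}

window.addEventListener('popstate', (event) => {
  // Back/forward: re-render with the restored (possibly empty) filters.
  const filters = event.state?.filters ?? {};
  pushedFilterEntry = Boolean(event.state);
  renderFilters(filters); // hypothetical re-render hook
});
```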
Same with search ahead.
But if you really just want your users to be able to hit refresh and not have their state change for non-navigational stuff like field contents or whatever, then unless you have a really clear use case where you need to maintain state while switching devices and don't want to do it server-side, local storage seems like the idiomatic choice.
On lichess.org/analysis, each move you make adds a history item, lichess.org/analysis#1, #2, and so on.
Pretty annoying.
Why not just use localStorage?
So that I can operate two windows/tabs of the same site in parallel without them stealing each other’s scroll position. In addition, the second window/tab may have originated from duplicating the first one.
https://stackoverflow.com/questions/11896160/any-way-to-iden...
I was referring to mostly everything else
I can imagine how you ran into that in your situation as a pure designer, though; sorry to hear it, and I wish other devs cared more. I've definitely been mentoring people to care about it, so I hope others do too.
We all are trying to understand a problem and trying to figure out the best solution.
How each role approaches this has some low level specializations but high level learnings can be shared.
If your page is server-rendered, you get saved scroll position on refresh for free. One of many ways using JS for everything can subtly break things.
Actually it would be amazing if desktop applications were like this too, and we had a separate way to go back to the initial screen
The trouble with leaving state restoration for applications to do as they wish is that most of the time they will get it wrong. Also, most of them don't do any of this and never will. Good defaults matter.
Good defaults definitely matter. But not overloading an app with functionality matters as well. Matching feature sets to actual user needs also matters.
The problem with state restoration is that it’s one of those features that looks simple, yet can be extremely tricky to implement correctly – the point you already made. And there’s no single solution that will fit all cases, or even 80% of them. Restoring scroll position is one thing, but restoring an unfinished video editor timeline is another. Both look deceptively simple ("I just reopened the crashed app and it opened at the exact same state"), but the internal mechanics require wildly different mechanisms and trade-offs.
I do agree, however, that frameworks and SDKs should provide properly designed mechanisms for state restoration – and they often do (like the State Restoration API on iOS/macOS).
But the argument that "state restoration should be default and provided by the environment" feels like post-rationalization of the existing mechanics.
> It’s like the Erlang approach to errors, but on steroids
The Erlang approach was intentionally designed that way. Web apps’ normalization of "restarting" is just a testament to how normal buggy software has become in the web ecosystem. Anyone who has ever tried to buy tickets online or register through a simple form on a government website knows that even for such common use cases, it’s extremely hard to create a good user experience. There are some fantastic web apps nowadays, and government-backed design systems and frameworks that sometimes match native apps’ experience – but that only proves the point. It takes an enormous amount of effort to make even simple things work reliably on the web stack.
The core reason, of course, is that the "web stack" is a typesetting engine from the ’80s that was never designed for modern UI apps’ needs in the first place. Why we still use a markup language to build sophisticated UIs and think it’s fine is beyond me. I recently saw an experiment where someone played a video in Excel, using spreadsheet cells as pixels and a lot of harness code to make it work as an output device. It’s doable, but Excel was never designed for that. No matter how many layers of abstraction we put on top – or how many ExcelReact frameworks we create – the foundation is simply not right for the task.
And yet people continue to justify the “defaults” of the web stack as if they were deliberate design choices rather than byproducts. Like, "it’s so good that everything is zoomable," or "I like that everything is selectable". Which sounds fine – until it doesn’t. Why on earth would I need to select half my widget tree with a 3-pixel mouse shift? And when I really do need to select something, it often doesn’t work properly because developers take it for granted and never verify or test it.
Or zooming – whenever I zoom a Facebook page to write a comment, the view keeps jumping around because some amazing piece of JS crapcode decides to realign the interface on a timer (to show ads?). Nobody on Facebook’s QA team probably even tests how the comment section works when zoomed in Safari. The web app experience is simply one of the worst, due to this messy feature set people call "good defaults". And as someone who also has to write web apps from time to time, I can’t stress enough how disproportionately more effort it takes to make an app with sane, good default behavior.
(P.S. There are some good things in the current state of the web stack – but they’re mostly the product of the industry’s sheer size, not the stack itself.)
There are situations where you want to link to a specific part of a page, and for that anchors and text anchors work well. But in my experience it isn't the default behaviour that I want for most pages.
This URI for example:
https://developer.mozilla.org/en-US/docs/Web/HTML/Reference/...
Links to an instance of "The Referer" narrowed down via a start prefix ("downgrade:") and end suffix ("to origins").
These are used across Google I believe so many have probably seen them.
[0] https://developer.mozilla.org/en-US/docs/Web/URI/Reference/F...
The two use cases are in slight conflict: most of the time, when I share a URL, I don't want to share a specific scroll position (which probably doesn't even make sense, if the other guy has a different screen size.)
Obviously the URL is not all state, it doesn’t save your cursor or IME input. So there is some distinction between “important” and “unimportant” state.
Youtube gives you both options, and either can be what you want. Youtube also seems to be smart enough to roughly remember where you were in the video, when you are reloading the page.
I'm in the opposite camp - I find it extremely annoying when sites clutter up the browser history with unnecessarily granular state, e.g. hitting the "back" button closes a modal instead of taking me to the previous page.
I do dislike those cases. But I also dislike being two-thirds through a video or page, thinking “I’ve got to share this with <friend>, it’s right up their alley”, then hitting my fast combination of keys to share a URL and realising the link shared my exact place, which will make the person think I’m sharing a snippet and not the whole thing, so now I need to send another message to clarify.
I like being able to have URLs reproduce a specific state, but I also want that to be a specific decision and not something I can share or save to a bookmark by mistake.
I did not find an extension that does just that but it should be trivial to create one and assign a shortcut to it.
If the state were stored in the URL, I could do it in two steps: open context menu -> Send Page to Device, and I'm done.
The web has evolved a lot; as users we're seeing an incredible number of UX behaviors, which makes any single action take on different semantics depending on context.
When on mobile in particular, there's many cases where going back to the page's initial state is just a PITA the regular way, and refreshing the page is the fastest and cleanest action.
Some implementations of infinite scroll won't get you back to the top of the content in any simple way. Some sites are a PITA regarding filtering and ordering, and you're stuck with some of the choices that are inside collapsible blocks you don't even remember where they were. And there are myriad other situations where you just want the current page in a new and blank state.
The more you keep in the URL, the more resetting the UX is a chore. Sometimes just refreshing is enough, sometimes cleaning the URL is necessary, sometimes you need to go back to the top and navigate back to the page you were on. And those are situations where the user is already frustrated over some other UX issue, so needing additional effort just to reset is adding insult to injury IMHO.
It actually worked really well, but obviously I had very little state. The only things I didn't store in the hash were form state and raw visualization data (like chart data).
The problem here is that they've implemented an application navigation feature with the same name as a browser navigation feature. As a user, you know you need to click "Back" and your brain has that wired to the browser back button.
Very annoying.
Having "Refresh" break things is (to me) a little more tolerable. I have the mental association of "refresh" as "start over" and so I'm less annoyed when that takes me back to some kind of front page in the app.
You made my day. I totally agree with you: state, state management, UX/UI.
I am extremely proud that I recently implemented exactly this: what happens if you pass a link, hit reload, or press the back button in the browser?
I have a web app that features a table with a modal preview when hitting a row - boy am I proud to have invested an hour in this feature.
I like your reasoning: it ain't a technical "because I can dump anything in a url", nope, it is a means to an end, the user experience.
Convenience, whatever. I now have a pattern for adding more conveniences like this, which should be pretty normal.
The only thing that remains and bothers me is the verbose URL - the utter mess and clutter in the browser's input field. I feel pain here, and there is a conflict inside me between URL aesthetics and giving the user this convenience.
I am working on a solution, because this messy URL string hurts my eyes and takes away a little of the magic and beauty of the state transfer. This abstract mess should be taken care of, also with regard to obfuscation. It isn't clean to have full-text strings in the URL, with content that doesn't belong there.
But I am on it. I cannot leave the URL string out of the convenience debate, especially not on mobile. It can also happen that strings get stripped, or that copy & paste accidentally cuts off parts. The shorter the better, and as we see, convenience is a brutally hard job to handle. It's delicate at so many levels - here, error handling for wrongly formatted strings, a field few people have ever entered.
My killer feature is the initial page load - it appears way faster, since there are no skeletons waiting for their fetch requests to finish. I am extremely impressed by this little feature and its impact on so many levels.
Cheers!
So what is the reality? The linked StackOverflow answer claims that, as of 2023, it is "under 2000 characters". How much state can you fit into under 2000 characters without resorting to tricks for reducing the number of characters for different parameters? And what would a rethought approach look like?
[0] https://nuqs.dev/
URL query params are not popular in the front-end developer world for some reason, probably because the fundamentals of web dev are often skipped in favor of learning leetcode and all the React hooks. The same could be said for SQL and CSS.
I also don't think it's a good look that the author is a CTO and is just discovering how useful URL query params are. That being said, it's a pretty good and well-written blog post.
Sure, in the prismjs.com case, I have one of those comments in my code too. But I expect it to break one day.
If a site is a content generator and essentially idempotent for a given set of parameters, and you think the developer has a long-term commitment to the URL parameters, then it's a reasonable strategy (and they should probably formalise it).
Perhaps you implement an explicit "save to URL" in that case.
But generally speaking, we eliminated complex variable state from URLs for good reasons to do with state leakage: logged-in or identifying state ending up in search results and forwarded emails, leaking out in referrer logs and all that stuff.
It would be wiser to assume that the complete list of possible ways that user- or session-identifying state in a URL could leak has not yet been written, and to use volatile non-URL-based state until you are sure you're talking about something non-volatile.
Search keywords: obviously. Search result filters? Yeah. Sort direction: probably. Tags? Ehh, as soon as you see [] in a URL it's probably bad code: think carefully about how you represent tags. Presentation customisation? No. A backlink? No.
It's also wiser to assume people want to hack on URLs and cut bits out, to reduce them to the bit they actually want to share.
So you should keep truly persistent, identifying aspects in the path, and at least try not to merge trivial/ephemeral state into the path when it can be left in the query string.
In a previous experiment, I created a simple webpage which renders media stored in the URL. This way, it's able to store and render images, audio, and even simple webpages and games. URLs can get quite long, so can store quite a bit of data.
https://mkaandorp.github.io/hdd-of-babel/
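Not the linked project's code, but a minimal sketch of the general idea: put the bytes base64url-encoded in the fragment and rebuild them on load (the image/png type is an assumption for the example).

```js
// Sketch: store small binary payloads (e.g. an image) entirely in the URL.
function bytesToFragment(bytes) {
  return btoa(String.fromCharCode(...bytes))
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

function fragmentToBytes() {
  const b64 = location.hash.slice(1).replace(/-/g, '+').replace(/_/g, '/');
  return Uint8Array.from(atob(b64), c => c.charCodeAt(0));
}

// Render whatever the URL carries, with no server or storage involved.
window.addEventListener('load', () => {
  const blob = new Blob([fragmentToBytes()], { type: 'image/png' }); // type assumed
  document.querySelector('img').src = URL.createObjectURL(blob);
});
```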
Maybe a solution is some kind of browser widget that displays query params in a user-friendly way that hides the ugliness, sort of like an object explorer interface.
I actually implemented a comment system where users just pick any arbitrary URL on the domain, i.e. http://exampledomain.com/, and append /@say/ to the URL along with their comment, so the URL is the UI. An example comment would be typed in the URL bar like,
http://exampledomain.com/somefolder/somepage.html/@say/Hey! Cool somepage. - Me
And then my perl script, tailing the webserver log file, sees the line and adds the comment "Hey! Cool somepage. - Me" to the .html file on disk for comments.
This is a small hobby project, I am not in IT.
You're doing two things:
1) You're moving state into an arbitrary, untrusted, easy-to-modify location.
2) You're allowing users to "deep link" into a page that is deep inside some funnel that may or may not be valid, or even exist, at some future point in time, never mind skipping the messages/whatever further up.
You probably don't want to do either of those two things.
A challenge for this is that the URL is the most visible part of an HTTP request but there are many other submerged parts that are not available as UI yet are significant to the http response composition.
Additionally, aside from very basic protocol, domain, and path, the URL is a very not human friendly UI for composing the state.
That said, I agree with the point and expose as much as possible in the URL, in the same way that I expose as much as possible as command line arguments in command line utilities.
But there are costs and trade offs with that sort of accommodation. I understand that folks can make different design decisions intentionally, rather than from ignorance/inexperience.
https://scrobburl.com/ https://github.com/Jcparkyn/scrobburl
Few years back, I built a proof-of-concept of a PDF data extraction utility, with the following characteristic - the "recipe" for extracting data from forms (think HIPAA etc) can be developed independently of confidential PDFs, signed by the server, and embedded in the URL on the client-side.
The client can work entirely offline (save the HTML to disk, airgap if you want!) off the "recipe" contained in the URL itself, process the data in WASM, all client-side. It can be trivially audited that the server does not receive any confidential information, but the software is still "web-based", "browser-based" and plays nice with the online IDE - on dummy data.
Found a working demo link - nothing gets sent to the server.
https://pdfrobots.com/robot/beta/#qNkfQYfYQOTZXShZ5J0Rw5IBgB...
53 more comments available on Hacker News