Static Sites Enable a Good Time Travel Experience
Key topics
The article discusses how static sites enable a 'good time travel experience' by allowing developers to easily view past versions of their website using version control, sparking a discussion on the benefits and limitations of this approach compared to web archiving services.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 59m after posting
- Peak period: 32 comments in the 0-6h window
- Average per period: 10.3 comments
- Based on 41 loaded comments
Key moments
1. Story posted: Sep 2, 2025 at 11:25 AM EDT (4 months ago)
2. First comment: Sep 2, 2025 at 12:24 PM EDT (59m after posting)
3. Peak activity: 32 comments in the 0-6h window, the hottest stretch of the conversation
4. Latest activity: Sep 5, 2025 at 10:39 AM EDT (4 months ago)
It may make sense to change a "S" there for "standard".
Use ci from RCS, and that's about it. It makes a single file in the same directory as the file being versioned; there's no need to check out files or keep track of, well, anything. Just check in new versions as you make them.
It's not the right answer in many cases (especially if you need multiple files), but for a single file? The simplicity can't be beat.
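The single-file workflow the commenter describes can be sketched like this (assumes GNU RCS is installed; the filename is just an illustration, and the script falls back to a message when `ci` is absent):

```shell
# Single-file versioning with RCS: check in revisions of one file,
# locked for further editing, with the history kept in notes.txt,v
# right next to it.
echo "draft 1" > notes.txt
if command -v ci >/dev/null 2>&1; then
  ci -l -m"first draft" -t-"notes" notes.txt   # creates notes.txt,v alongside the file
  echo "draft 2" >> notes.txt
  ci -l -m"second draft" notes.txt             # another revision in the same ,v file
  rlog notes.txt | head -n 5                   # show the revision log
else
  echo "RCS not installed; see 'man ci' on a system that has it"
fi
```

The `-l` flag checks the file back out locked after each `ci`, so the working copy stays editable between check-ins.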
[1] https://github.com/w3c/webauthn/issues/1255
[1]: https://whatwg.org/faq#change-at-any-time
[2]: https://www.spacejam.com/1996/
Very rarely used, so two better examples:
* http:// — now mostly unusable.
* Quirks mode — Netscape Navigator or Internet Explorer compatibility rendering, triggered by the absence of `<!doctype html>`. Still supported by browsers so that old pages render correctly; it must be annoying to maintain. https://developer.mozilla.org/en-US/docs/Web/HTML/Guides/Qui...
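For context, the switch between the two rendering modes hinges entirely on that first line. A minimal sketch of both cases (identical markup, different mode):

```html
<!-- No doctype: browsers fall back to quirks mode,
     emulating legacy Netscape/IE layout behavior. -->
<html><body><p>Rendered in quirks mode</p></body></html>

<!-- With the HTML5 doctype, the same markup gets standards mode. -->
<!doctype html>
<html><body><p>Rendered in standards mode</p></body></html>
```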
And if it couldn't, you could run these old programs in a VM, and I expect that to remain possible essentially forever, so I see no future problem viewing these browser files.
1: https://hamatti.org/posts/i-gamified-my-own-blog/
2: https://varunbarad.com/blog/blogging-achievements
Sure I won't have the actual content, but I can see the pages and designs with dummy data. But then I can also load up one of several backups of the sqlite file and most likely everything will still work.
... so it's useless to anyone except you, then?
Potential issues:
- If you have content in a database, are you able to restore the database at any point in time?
- If your code has dependencies, were all of them checked into the repository? If not, can you still find the same versions you were using?
- What about your tools, compilers, etc.? Sure, some of them, like Go, are pretty good about backward compatibility, but not all of them. Maybe you used a beta version of a tool? You might need to find the exact versions of the tools you were using. By the way, did you keep track of those versions, or do you need to guess?
Even with static websites, you can get into trouble if you referenced e.g. a JS file stored somewhere else. But the point is: going back in time is often much easier with static websites.
(Related topic: reproducible builds.)
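One low-tech mitigation for the tool-version question is to record the versions next to the source at build time, so they travel through version control with everything else. A minimal sketch (the `VERSIONS.txt` name is an invention for illustration; each tool line degrades gracefully when the tool isn't installed):

```shell
# Snapshot the build environment into a file you commit with the site.
{
  echo "built: $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  hugo version 2>/dev/null || echo "hugo: not installed"
  go version   2>/dev/null || echo "go: not installed"
} > VERSIONS.txt
cat VERSIONS.txt
```

Checking this file in alongside each deploy means a future "time travel" checkout tells you exactly what to reinstall.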
Especially since it's not limited to only sites I've created...
And in this particular case, all the creator was looking for was old badge images, and they'd generally be in an images directory somewhere no matter whether the site was static or dynamic.
https://joeldare.com/why-im-writing-pure-html-and-css-in-202...
EDIT: Well perhaps the "build steps" one, but building my Hugo site just involves me running "hugo" in my directory.
See the “Static Sites” section. And realize that CDN caching of your pages essentially makes your site “static”.
https://news.ycombinator.com/front?day=2025-08-31
(available via 'past' in the topbar)
Thanks for the share.
I almost ended up doing it twice. Old links and time are what stopped me.
Inspiration - https://ankarstrom.se/~john/articles/html/