A Super Fast Website Using Cloudflare Workers
Posted 5d ago · Active 21h ago
crazyfast.website · story
informative · neutral
Key topics
Web Development
AI Performance Analysis
Website Management
Super Website: A super fast website using Cloudflare workers
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 3d after posting
- Peak period: 75 comments (72-84h)
- Avg / period: 27 comments
- Comment distribution: 81 data points (based on 81 loaded comments)
Key moments
- 01 Story posted: Dec 28, 2025 at 7:42 AM EST (5d ago)
- 02 First comment: Dec 30, 2025 at 10:33 PM EST (3d after posting)
- 03 Peak activity: 75 comments in 72-84h (hottest window of the conversation)
- 04 Latest activity: Jan 1, 2026 at 7:26 PM EST (21h ago)
ID: 46410676 · Type: story · Last synced: 12/31/2025, 6:20:51 PM
The 30 ms and 4 ms numbers were typical for Apache from MAE East and MAE West in 1998. Twenty-five years and orders of magnitude more computing later? Same numbers.
Possibly as an extension of Quantum Computing where some probabilistic asymmetry can be taken advantage of. The QC itself might not be faster than classical computing, but the FTL comms could improve memory and cache access.
Also, MetaGoog will use it to serve up hyper-personalized ads in their Gemini-based Metaverse.
Durable Objects, R2, and Tunnel have performed particularly poorly in my experience.
The site should be faster, though. I've had a small CF Workers project that works correctly and loads quickly.
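For readers who haven't touched Workers, a minimal module-syntax Worker that returns a small HTML page looks roughly like the sketch below; the page contents and cache lifetime are made up for illustration and are not taken from the project in the thread.

```ts
// Minimal sketch of a Cloudflare Worker serving a small HTML page.
// The payload and header choices are illustrative only.

const PAGE = `<!doctype html>
<html><head><meta charset="utf-8"><title>hello</title></head>
<body><h1>Rendered at the edge</h1></body></html>`;

export default {
  async fetch(request: Request): Promise<Response> {
    return new Response(PAGE, {
      headers: {
        "content-type": "text/html; charset=utf-8",
        // Let the browser and Cloudflare's edge cache reuse the response.
        "cache-control": "public, max-age=3600",
      },
    });
  },
};
```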
Getting it closer can save you 50-150 ms, but if the whole load takes 1 s+, that's minuscule.
That seems to track. The vast majority of requests won't go halfway around the Earth, so maybe halving that time, at 0.06 s, seems like a reasonable target.
And with Workers they're accessible from hundreds of locations around the world so you can get this sort of speed from almost anywhere.
Is the site getting slower?
The end result, written in Go, took around 80-200 µs to generate a post page and 150-200 µs for an index page with a bunch of posts (on a cheap Linode VPS... probably far faster on my dev machine).
The core was basically:
* pre-compile the templates
* load blog posts into RAM, pre-compile and cache the markdown part
The cache could easily be kicked off to Redis or similar, but it's just text, so there's no need.
Fun stuff I hit along the way:
* runtime template loading takes a lot of time just for the type-casting; the template framework I used was basically a thin veneer over Go code that got compiled to Go code when run
* it was fast enough that multiple Write() calls vs. one were noticeable on the flame graph
* smart caching will get you everywhere if you get cache invalidation right, making the "slow" parts not matter; unless you're serving years of content and gigabytes of text, you probably don't want to cache it anywhere other than RAM, or at the very least should treat any cache beyond RAM as a second tier
The project itself was a rewrite of the same thing I'd tried in Perl (using Mojolicious), and even there it achieved single-digit ms.
And it feels so... weird, using a webpage that reacts with the speed of a well-written native app. The whole design process went against the ye olde "don't optimize prematurely," and it was a complete success; looking at performance in each iteration of a component paid off really quickly. We've been robbed of so much time by badly running websites.
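The comment above describes a Go implementation; as a rough illustration of the same idea (templates compiled once, rendered markdown kept in RAM), here is a hypothetical TypeScript sketch with simplistic stand-ins for the template engine and markdown renderer:

```ts
// Transposition of the approach described above: compile templates once at
// startup and pre-render markdown into a RAM cache, so serving a post is a
// single map lookup. compileTemplate and renderMarkdown are toy stand-ins.

type Post = { slug: string; title: string; markdown: string };

// naive "{{name}}" interpolation, compiled once into a closure
const compileTemplate =
  (src: string) =>
  (data: Record<string, string>): string =>
    src.replace(/\{\{(\w+)\}\}/g, (_, key: string) => data[key] ?? "");

// headings only, purely for illustration
const renderMarkdown = (md: string): string =>
  md.replace(/^# (.+)$/gm, "<h2>$1</h2>");

const postTemplate = compileTemplate("<article><h1>{{title}}</h1>{{body}}</article>");

// Pre-render every post at startup; invalidation is just rebuilding the map.
const htmlCache = new Map<string, string>();

function warmCache(posts: Post[]): void {
  for (const post of posts) {
    htmlCache.set(post.slug, postTemplate({
      title: post.title,
      body: renderMarkdown(post.markdown),
    }));
  }
}

// Serving a post page is then a cache hit in the common case.
function handlePost(slug: string): string | undefined {
  return htmlCache.get(slug);
}

warmCache([{ slug: "hello", title: "Hello", markdown: "# Hi\nFirst post." }]);
console.log(handlePost("hello"));
```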
It appears to have static content. Why does it need any JS at all?
https://github.com/ericfortis/mockaton/blob/main/www/src/_as...
uBlock Origin does it by default for instance.
On Brave, the workaround on that linked snippet bypasses their blocking.
The reason is that browser prefetching may hit URLs that were intended to be blocked.
Brotli is so 2024. Use zstd. (73.62%, I know.)
Brotli was designed for HTML compression, so while it's arguably an inferior algorithm in general, its stock dictionary is trained and optimized on HTML/CSS/JS. Chrome/Blink recently added support for content compressed with a bespoke dictionary, but that only works for massive sites that have a heavily skewed new/returning visit ratio (because of the cost of shipping both the compressed content and the dictionary).
Long story short, I could see br being better than zstd for some purposes.
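As a rough sketch of how such a choice might be wired up, the snippet below negotiates between hypothetical pre-compressed variants of an asset based on the Accept-Encoding header; the .br/.zst file-naming scheme is an assumption, not anything from the article.

```ts
// Choose a pre-compressed variant of an asset based on Accept-Encoding.
// Variants are listed in order of preference; the suffixes are hypothetical.

const VARIANTS: Array<{ encoding: string; suffix: string }> = [
  { encoding: "zstd", suffix: ".zst" },
  { encoding: "br", suffix: ".br" },
];

function pickVariant(acceptEncoding: string, basePath: string) {
  // Strip quality values like ";q=0.8" and normalize case.
  const accepted = acceptEncoding
    .split(",")
    .map((token) => token.split(";")[0].trim().toLowerCase());

  for (const { encoding, suffix } of VARIANTS) {
    if (accepted.includes(encoding)) {
      return { path: basePath + suffix, contentEncoding: encoding };
    }
  }
  return { path: basePath, contentEncoding: "identity" };
}

// pickVariant("zstd, br, gzip", "/index.html")
//   -> { path: "/index.html.zst", contentEncoding: "zstd" }
console.log(pickVariant("br, gzip", "/index.html"));
```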
Things can easily change when you start adding functionality. One site I like to visit to remind myself how fast usable websites can be is the Dlang forum. I just navigate around to get the experience.
https://forum.dlang.org
If PSA had let me use affiliate links, I was planning to do the work to SSR in a Cloudflare Worker, but they declined, and I decided to leave the project where it was.
Interestingly, for me each page load has a noticeably long delay. Once it starts loading, all of the content snaps in almost at once. It's slower to get there than the other forums I visit, though.
The hard part when it comes to site optimization is persuading various stakeholders who want GTM, Clarity, Dynatrace, DataDog, New Relic, 7 different ad retargeters, Meta, X, and probably AI as well now that a fast loading website is more important than the data they get from whichever of those things they happen to be interested in.
For any individual homepage where that stuff isn't an issue because the owner is making all the decisions, it's fair to say that if your site loads slowly it's because you chose to make it slow. For any large business, it's because 'the business' chose to make it slow.
Just fire them all. Start your own company.
Eventually you'll want to know what users are doing, and specifically why they're not doing what you expected them to do after you spent ages crafting the perfect user journeys around your app. Then you'll start wondering if installing something to record sessions is actually a great idea that could really help you optimize things for people and get them more engaged (and spending more money.)
Fast forward three years, and you'll be looking at the source of a page wondering how things got so bad.
That's putting the cart before the horse. The way it's properly done is just to invite a few users and measure and track their interaction with your software. And this way you'd have good feedback instead of frustrating your real users with slow software.
Users being weird is the fundamental root cause of all software problems. :)
Maybe add some dynamic feature for the demo so that we don't need to trust you and be surprised at a nothingburger.
Pretty much any small-payload, non-JavaScript site is going to render very quickly (and instantly from cache), making SSL time the long pole.
I'm currently working on a small e-commerce store for myself, written in SvelteKit (frontend) and Go (backend), and one of my core objectives is to make it fast. Not crazy fast, but I'm aiming for TTFB < 50-70 ms for an average Polish user. I'll definitely share it once it's public.
I decided to go check my website's PageSpeed, and I do have 100/100/100/100 with quite a lot of content on the homepage, including 6 separate thumbnails.
My site is on a straight path, no tricks: GitHub Pages, served to the Internet by Cloudflare.
I have a 5G network :)
Also, I've heard multiple times that an edge network can be worse: if you're low priority and another part of the globe isn't busy, you can get routed in the worst possible way.
Why brag about how it's not static content, if you're just going to tell the browser to cache it until the end of time anyways?
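For context, "cache it until the end of time" usually means a far-future, immutable Cache-Control header on a fingerprinted asset, roughly like the hypothetical Worker response below (filename and body invented for illustration):

```ts
// A fingerprinted asset served with a one-year, immutable cache policy.
// From the browser's point of view this is effectively static content.

export default {
  async fetch(): Promise<Response> {
    return new Response("/* app.3f2a1c.css */ body{margin:0}", {
      headers: {
        "content-type": "text/css",
        // One year + immutable: the browser won't even revalidate,
        // so the content might as well be static for that client.
        "cache-control": "public, max-age=31536000, immutable",
      },
    });
  },
};
```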
I think most sites could either be static HTML and use a CDN, or they need a database and pretty much have to be located in one place anyway.
It's quite hard to think of use cases where that isn't true.
This is a bit stochastic because of regions and dynamic allocation of resources. So, e.g., if you're the first user from a large geographic region to visit the website in the last several hours, your first load will be longer.
My other project (a blog platform) contains a lot of optimizations, so posts [3] load pretty much as fast as that example from the thread, i.e. 60-70ms.
1. https://minifeed.net/
2. https://minifeed.net/blogs
3. https://rakhim.exotext.com/but-what-if-i-really-want-a-faste...
- 3942ms
- 4281ms
Guess it depends on your region. This is from East Asia.
For a dynamic service, well... maybe implement something of interest and then we can discuss.
Add imagery and see if you get the same results.
I expect you could achieve that with Base64, but the caveat is larger file sizes.
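A minimal sketch of what that might look like, assuming a Node-style build step and a hypothetical thumb.png; the Base64 form is roughly a third larger than the raw bytes, which is the trade-off mentioned above.

```ts
// Inline an image as a Base64 data URI so it ships inside the HTML
// instead of as a separate request. The file path is hypothetical.

import { readFileSync } from "node:fs";

function toDataUri(path: string, mimeType: string): string {
  const base64 = readFileSync(path).toString("base64");
  return `data:${mimeType};base64,${base64}`;
}

// Usage: embed directly in the page markup at build time.
const imgTag = `<img alt="thumbnail" src="${toDataUri("thumb.png", "image/png")}">`;
console.log(imgTag.length);
```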