A 16.67 Millisecond Frame
Key topics
The article 'A 16.67 Millisecond Frame' discusses what it takes to render a frame within 16.67 milliseconds, the per-frame budget for smooth 60 fps experiences. The discussion revolves around the technical details of the browser's rendering pipeline and their implications for development.
Snapshot generated from the HN discussion
Discussion Activity: active discussion
- First comment: N/A
- Peak period: 17 comments (36-42h)
- Avg / period: 4.8
- Based on 29 loaded comments
Key moments
- 01 Story posted: Oct 8, 2025 at 6:01 AM EDT (3 months ago)
- 02 First comment: Oct 8, 2025 at 6:01 AM EDT (0s after posting)
- 03 Peak activity: 17 comments in the 36-42h window (hottest window of the conversation)
- 04 Latest activity: Oct 10, 2025 at 5:28 PM EDT (3 months ago)
Every time we scroll, hover, or trigger an animation, the browser goes through a whole routine. It calculates styles, figures out layout, paints pixels, and puts everything together on screen. All of that happens in just a few milliseconds.
It’s kind of wild how much is happening behind what feels instant. And the way we write code can make that process either smooth and fluid or heavy and janky.
I wrote an article that walks through this step by step, with a small demo showing exactly how these browser processes work and how a few CSS choices can make a big difference.
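For context, the 16.67 ms figure falls straight out of a 60 Hz refresh rate. Here is a minimal sketch (illustrative code, not taken from the article) of what "fitting the frame budget" means:

```javascript
// At 60 Hz the browser has 1000 / 60 ≈ 16.67 ms per frame to run script,
// style, layout, paint, and compositing. Frames whose work exceeds that
// budget miss the vsync deadline and show up as jank.
const FRAME_BUDGET_MS = 1000 / 60;

// Given per-frame work durations (in ms), count how many frames blew
// the budget and would have been dropped.
function simulateFrames(workDurationsMs) {
  let dropped = 0;
  for (const ms of workDurationsMs) {
    if (ms > FRAME_BUDGET_MS) dropped++;
  }
  return { total: workDurationsMs.length, dropped };
}
```

For example, `simulateFrames([5, 12, 30, 8])` reports one dropped frame: only the 30 ms frame overruns the budget.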
If you go back far enough, IBM graphics cards offered a text mode where text rendering was accelerated by the card: software would write an attribute byte and a data byte, and the card rendered text from a bitmapped font. VGA text mode could at least accelerate scrolling with hardware windowing; not every system used it, but you can set the start address and a line number at which the card wraps back to the start of the data area. That makes it easy to scroll without rewriting the whole screen buffer at once. (If you set the stride to something that evenly divides the screen buffer and is at least as wide as your lines, it gets even easier.)
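The start-address trick can be illustrated with a small simulation (plain JavaScript standing in for VGA registers; none of this is real hardware programming). Scrolling becomes a constant-time register update plus clearing one newly exposed line, instead of copying every cell:

```javascript
// Simulated text-mode screen with a "start address" register. The view
// wraps around the cell buffer, so scrolling only advances the register.
class TextBuffer {
  constructor(rows, cols) {
    this.rows = rows;
    this.cols = cols;
    this.cells = new Array(rows * cols).fill(" ");
    this.start = 0; // stand-in for the hardware start-address register
  }
  // Write a full line at a logical row, relative to the current start.
  writeLine(row, text) {
    const base = (this.start + row * this.cols) % this.cells.length;
    for (let i = 0; i < this.cols; i++) {
      this.cells[(base + i) % this.cells.length] = text[i] ?? " ";
    }
  }
  // Scroll up one line: bump the register, blank the freshly exposed
  // bottom line. No bulk copy of the buffer.
  scrollUp() {
    this.start = (this.start + this.cols) % this.cells.length;
    this.writeLine(this.rows - 1, "");
  }
  // Read back what the "screen" currently shows at a given row.
  visibleLine(row) {
    const base = (this.start + row * this.cols) % this.cells.length;
    let out = "";
    for (let i = 0; i < this.cols; i++) {
      out += this.cells[(base + i) % this.cells.length];
    }
    return out;
  }
}
```

After `scrollUp()`, row 0 of the view shows what used to be row 1, and the cells of the old top line are reused as the blank bottom line, mirroring the wrap-around behavior described above.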
IMO all the "user" code must run in a dedicated thread, completely decoupled from the rendering loop. This code can publish changes to a scene tree, performing mutations, starting animations, and so on, but these changes are ultimately asynchronous. You want to delete an element from a webpage, but it won't be deleted at that JS line; it'll be deleted at the next frame, or maybe the one after, if the rendering thread is busy right now.
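One way to sketch that decoupling is a mutation queue: user code enqueues changes synchronously, and the render loop applies them only at the next frame boundary. The `SceneTree` here is hypothetical, not any real framework's API:

```javascript
// User code publishes changes; the render loop applies them per frame,
// so a "remove" is not visible until the next flush.
class SceneTree {
  constructor() {
    this.nodes = new Set(); // what the renderer currently sees
    this.pending = [];      // mutations queued by user code
  }
  add(node) {
    this.pending.push({ op: "add", node });
  }
  remove(node) {
    this.pending.push({ op: "remove", node });
  }
  // Called once per frame by the rendering loop.
  flush() {
    for (const m of this.pending) {
      if (m.op === "add") this.nodes.add(m.node);
      else this.nodes.delete(m.node);
    }
    this.pending.length = 0;
  }
  has(node) {
    return this.nodes.has(node);
  }
}
```

A removed element stays visible for the rest of the current frame, exactly the asynchrony described above: the deletion takes effect at the next `flush()`, not at the JS line that requested it.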
Animations must stay fluid and the UI must react to user input instantly. FPS must not drop.
Browsers do it wrong. The Android GUI API does it wrong. World of Warcraft addons do it wrong.
16ms is not a long time when interpreted languages like Lua are used.
16ms is not a long time when GC can suddenly kick in and stop the thread for an unspecified amount of time.
16ms is not a long time when JIT can suddenly kick in and stop the thread for an unspecified amount of time.
GC and JIT are good techniques for server-style throughput workloads, when infrequent delays are not noticeable. For GUI, infrequent delays lead to skipped frames and stuttering.
I don't think it's reasonable to ask the world to develop GUI apps in C or Rust. Swift is probably a good middle ground between usability and predictable performance. But most GUI projects use slow languages and/or languages with GC.
There are plenty of games built with Lua that run very well, such as Balatro and Don't Starve. And games are far more sensitive to hitches than GUI projects because they are constantly animating. Your users will not only easily see lag; it can ruin the game for them.
If a basic GUI is performing poorly then the blame almost certainly lies on whoever developed it. Don't blame the language or runtime.
It does not help: you get smooth animations, but you feel disconnected from the program and trust it less. The UI code just needs to not take much time, offloading background work to another thread, but not the UI logic itself. It also naturally synchronizes events.
And sometimes it's better to briefly block the UI thread, as the alternatives lead to worse user experience.
A button click will automatically disable the button until its handler has executed. Creative animations could translate this state to the user in an intuitive way, so the button neither feels stuck nor appears to do nothing.
A scroll widget needs placeholder graphics that are displayed until the loading handler has executed and replaced the placeholder with real data. A lot of apps already do exactly that.
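The button pattern can be sketched without any real UI toolkit; `makeButton` and its plain-object "button" below are illustrative stand-ins for an actual widget:

```javascript
// Disable a control while its async handler runs, so repeated clicks
// are ignored and the UI can show a "working" state instead of freezing.
function makeButton(handler) {
  const button = { disabled: false };
  button.click = async () => {
    if (button.disabled) return; // ignore clicks while busy
    button.disabled = true;
    try {
      await handler();
    } finally {
      button.disabled = false; // re-enable once the handler finishes
    }
  };
  return button;
}
```

While the handler is pending, further clicks fall through the `disabled` check; the widget's animation layer is free to render that state however it likes.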
Random GUI apps aren't incentivized enough, and so garbage leaks through. I die a bit every time a random GUI app stutters drawing 2D boxes.
All animation is inherently discrete. No matter how many threads you have, there always has to be a last rendering thread, the thing that actually prepares the calls to the rendering backend. It always has to have frames, and in every frame, at timestamp T, it will want the world state at timestamp T. So the things that work on the world state have to prepare it as it was at T, not earlier, not later. You cannot completely decouple it.
In one of the game projects I worked on, the physics thread and the game thread were actually fairly decoupled: what the game thread did was extrapolate the world state from the information provided by physics, because it knew not only the positions of physics objects but also their velocities. Can we make every web developer set velocities on HTML elements explicitly? Probably not.
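The extrapolation idea fits in a few lines: the render thread predicts a position at its own timestamp from the last physics snapshot (names and snapshot shape are illustrative):

```javascript
// Physics publishes snapshots of the form { t, x, vx }: a timestamp in
// ms, a position, and a velocity in units per second. The render thread
// runs at its own timestamps and extrapolates forward from the latest
// snapshot rather than waiting for physics to catch up.
function extrapolate(snapshot, renderTimeMs) {
  const dtSeconds = (renderTimeMs - snapshot.t) / 1000;
  return snapshot.x + snapshot.vx * dtSeconds;
}
```

With a snapshot taken at t = 1000 ms of an object at x = 10 moving at 4 units/s, rendering at 1500 ms extrapolates to x = 12.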
There is a lot of fun programming to be had in that space.
This is why I got into programming to begin with. Fun first, visual second, technical challenges third, money fourth, company last.
The reality is that browsers contain large piles of heuristic optimizations which defy simple explanation. You simply have to profile and experiment, every time, separately.
1) The code that is running is not what's presented; it executes (non-transpiled) vanilla JS.* Why not just show that?
2) Removing the box shadow makes the two much closer in performance.
3) The page could just be one sentence: "Reflowing the layout of a page is slower than moving a single item." The GPU is unrelated.
---
*Code that actually is running:
```js
```

1) I thought of giving an easier-to-read example. I just moved the example to React, so the snippets now match exactly what's going on in the background.
2) It is true! Though using shadows on the optimized code doesn't slow it down. I added more toggles to test the same effects on the transform and top/left implementations.
3) I think it's still interesting to start with some expectation and then observe that in practice things really are different. In fact, thanks for all the feedback, as it made me go back and do more investigation.
If you don't mind you can give the article a second look now :)
The word you are looking for is "baloney". They are pronounced differently.
Scrolling? Animations? LOL.