Severe Performance Penalty Found in Vscode Rendering Loop
Posted 2 months ago · Active 2 months ago
Key topics: VSCode, Performance Optimization, AI-Generated Content
A GitHub issue reports a severe performance penalty in VSCode's rendering loop, but commenters are skeptical about the validity of the claim, suspecting it may be AI-generated.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 37m after posting. Peak period: 38 comments in 0-3h. Average per period: 9 comments.
Based on 45 loaded comments (45 data points).
Key moments
1. Story posted: Oct 27, 2025 at 12:00 AM EDT (2 months ago)
2. First comment: Oct 27, 2025 at 12:37 AM EDT (37m after posting)
3. Peak activity: 38 comments in the 0-3h window, the hottest period of the conversation
4. Latest activity: Oct 28, 2025 at 10:39 AM EDT (2 months ago)
ID: 45717285 · Type: story · Last synced: 11/20/2025, 5:02:38 PM
This just seems like an AI slop GitHub issue from beginning to end.
And I'd be very surprised if VS Code performance could be boosted that much by a supposedly trivial fix.
(Somewhere in the pile of VSCode dependencies you’d think there’d be a generic heap data structure though)
Also, there's no discussion of measured runtimes for the rendering code. (If it saves ~1.3ms, that sounds cool, but by how many ms was the code actually going over the supposed 16ms budget?)
Good thing to find...
If you have 50 items in the list, then the list gets sorted 50 times. If you have 200 items in the list, the list is sorted 200 times.
This is unnecessary. The obvious alternative is a binary heap… which is what the fix does. Although it would also be obvious to reuse an existing binary heap implementation, rather than inventing your own.
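For reference, a minimal binary min-heap along the lines the commenter describes could look like this in plain JavaScript (a sketch for illustration, not VS Code's actual fix; `push`/`pop` are O(log n) each, versus O(n log n) for re-sorting the array on every insertion):

```javascript
// Minimal binary min-heap: smallest element (per the comparator) pops first.
class MinHeap {
  constructor(compare) {
    this.items = [];
    this.compare = compare; // (a, b) => negative if a should pop before b
  }
  push(item) {
    this.items.push(item);
    // Sift the new item up until the heap property holds.
    let i = this.items.length - 1;
    while (i > 0) {
      const parent = (i - 1) >> 1;
      if (this.compare(this.items[i], this.items[parent]) >= 0) break;
      [this.items[i], this.items[parent]] = [this.items[parent], this.items[i]];
      i = parent;
    }
  }
  pop() {
    const top = this.items[0];
    const last = this.items.pop();
    if (this.items.length > 0) {
      this.items[0] = last;
      // Sift the moved item down to restore the heap property.
      let i = 0;
      for (;;) {
        const l = 2 * i + 1, r = 2 * i + 2;
        let smallest = i;
        if (l < this.items.length && this.compare(this.items[l], this.items[smallest]) < 0) smallest = l;
        if (r < this.items.length && this.compare(this.items[r], this.items[smallest]) < 0) smallest = r;
        if (smallest === i) break;
        [this.items[i], this.items[smallest]] = [this.items[smallest], this.items[i]];
        i = smallest;
      }
    }
    return top;
  }
  get size() { return this.items.length; }
}
```

Unlike the sort-outside-the-loop approach, a heap also handles the case mentioned elsewhere in the thread where new elements are enqueued while the queue is being drained.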
Even if that were the case, sorting a list that's already sorted is basically free. Any reasonable sort method (like the builtin one in a JS runtime) will check for that before doing anything to the list.
> The obvious alternative is a binary heap… which is what the fix does.
The overhead of creating a heap structure out of JS objects will dwarf any possible benefit of avoiding a couple of calls to Array.sort().
That’s n−1 pairs of elements that have to be compared with a JS callback each time. (JavaScript’s combined language misfeatures that allow the array to be modified in many odd/indirect ways while it’s being sorted might add unusually high overhead to all sorting too; I’m not sure how much of a fast path there is.) Anyway, the code in question does include functionality to add new elements to the priority queue while it’s being processed.
> The overhead of creating a heap structure out of JS objects will dwarf any possible benefit of avoiding a couple of calls to Array.sort().
Not true even in general and with an unoptimized heap implementation, and in this case, there’s an array of JS objects involved either way. In fact, there’s no number of elements small enough for sorting to be faster in this benchmark in my environment (I have no idea whether it reflects realistic conditions in VS Code, but it addresses the point): https://gist.github.com/minitech/7ff89dbf0c6394ce4861903a232...
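The linked gist isn't reproduced here, but the general shape of such a micro-benchmark is easy to sketch. This version compares the pattern the issue describes (re-sorting on every insert) against a single sort at the end; function names are illustrative and absolute timings depend entirely on the environment:

```javascript
// Pattern from the issue: the list is re-sorted once per inserted item,
// giving roughly O(n^2 log n) total comparison work.
function sortEveryInsert(n) {
  const arr = [];
  for (let i = 0; i < n; i++) {
    arr.push(Math.random());
    arr.sort((a, b) => a - b); // re-sort on every insert
  }
  return arr;
}

// Baseline: build the list, then sort once, O(n log n) total.
function sortOnce(n) {
  const arr = [];
  for (let i = 0; i < n; i++) arr.push(Math.random());
  return arr.sort((a, b) => a - b);
}

const t0 = performance.now();
sortEveryInsert(2000);
const t1 = performance.now();
sortOnce(2000);
const t2 = performance.now();
console.log(`per-insert sort: ${(t1 - t0).toFixed(1)}ms, single sort: ${(t2 - t1).toFixed(1)}ms`);
```

Whether the gap matters at VS Code's real queue sizes is exactly the question the thread leaves open.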
Yes, that's indeed the approach I'd take.
I see they also contributed a fix to the OnlyFans notification robot. Clearly doing the important work that the internet needs.
Like if Batman turned out to be bad at fighting criminals so had to fight null pointer exceptions instead.
Yup, they should definitely move the sort outside of the loop. Shifting is O(N), so the overall complexity would be O(N^2), but they could avoid shifting by reverse-sorting outside the loop and then iterating backwards using pop().
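A sketch of that restructuring (function and parameter names are hypothetical, not from the VS Code source): sort once in reverse priority order so the highest-priority item sits at the end, then consume with `pop()`, which is O(1), instead of `shift()`, which is O(N) because it moves every remaining element down:

```javascript
// Drain a work queue in ascending priority order without any per-item
// shifting or re-sorting. Assumes nothing is enqueued mid-loop.
function processQueue(items, priorityOf, handle) {
  // Reverse sort: lowest-priority value ends up last, so pop() yields
  // items in ascending priority order.
  items.sort((a, b) => priorityOf(b) - priorityOf(a));
  while (items.length > 0) {
    handle(items.pop());
  }
}
```

The caveat, noted elsewhere in the thread, is that the real code can add elements to the queue while it's being processed; that is the case where a heap pays off over a one-shot sort.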
I don't work in JS-land.. but are Electron apps difficult to do performance profiling on?
Just seems like the reality of things is that the number of extensions or widgets or whatever has remained low enough that this extra sorting isn't actually that punitive in most real-world use cases. As a long-time developer working mainly in VSCode, I notice no difference between performance/snappiness in VSCode compared to JetBrains Rider, which is the main other IDE I have meaningful experience with these days.
I’ve had agents find similar “performance bottlenecks” that are indeed BS.
the important question is: is this an actual performance bug?
> This feature request is now a candidate for our backlog. The community has 60 days to upvote the issue. If it receives 20 upvotes we will move it to our backlog. If not, we will close it. To learn more about how we handle feature requests, please see our documentation.
also it’s an issue, not a PR
But then again, it's probably AI slop with "performance gain" numbers taken out of thin air. Who knows if the numbers 50 and 1-2ms are based on fantasy novels or not.
Like when I used Claude to build a door video intercom system, and first asked it to create a plan. It inserted how many weeks each milestone would take, and it was an order of magnitude off. But I guess milestone documents have time estimates, so that's how it's supposed to look, information accuracy be damned.