The GitHub Website Is Slow on Safari
Original: The GitHub website is slow on Safari
Key topics
GitHub's Safari woes have sparked a lively debate, with users pointing fingers at the platform's React-based UI rewrite and bloated DOM as potential culprits behind the slowdown. Some brave souls who ventured into GitHub's new PR diff page reported buggy experiences, while others surfaced a blog post revealing that the PR view can render over 100,000 DOM nodes, many of them invisible inline SVG nodes. A resourceful developer even created a browser extension that doubles GitHub navigation speed, and the conversation took a humorous turn when one user joked that GitHub now feels like Jira, prompting another to warn of the horrors that ensue when Jira, Copilot, and Actions are combined.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 6m after posting
Peak period: 76 comments in 0-6h
Avg / period: 17.8 comments
Based on 160 loaded comments
Key moments
1. Story posted: Aug 27, 2025 at 5:43 AM EDT (4 months ago)
2. First comment: Aug 27, 2025 at 5:49 AM EDT (6m after posting)
3. Peak activity: 76 comments in 0-6h (hottest window of the conversation)
4. Latest activity: Aug 30, 2025 at 5:07 PM EDT (4 months ago)
Does anyone have concrete information?
[1]: https://yoyo-code.com/why-is-github-ui-getting-so-much-slowe...
[2]: https://news.ycombinator.com/item?id=44799861
https://chromewebstore.google.com/detail/make-github-great-a...
A lot of the time we just break the branch permissions on the repo we're using, run release branches without PRs, and ignore the entire web interface.
> publicly disseminate information regarding the performance of the Cloud Products
https://web.archive.org/web/20210624221204/https://www.atlas...
If you actually load up a ~2015 version of Jira on today’s hardware it’s basically instant.
It was being hosted on another continent. It was written in PHP. It was rendering server-side with just some light JS on my end.
That used to be the norm.
It's really hard to fight the trend especially in larger orgs.
GitHub Issues was so simple, and now they keep shoving features into it.
Why has no one learned to not become Jira? You gotta say no sometimes.
For my sins I occasionally create large PRs (> 1,000 files) in GitHub, and teammates (who mostly all use Chrome) will sometimes say "I'll approve once it loads for me..."
"Rename 'CustomerEmailAddress' to 'CustomerEmail'"
"Upgrade 3rd party API from v3 to v4"
I genuinely don't get this notion of a "max # of files in a PR". It all comes off to me as post hoc justification of really shitty technology decisions at GitHub.
A computer will be able to tell with 100% reliability that the 497th change has a misspelled `CusomerEmail`, or that change 829 is a regexp failure that trimmed the boolean "CustomerEmailAddressed" to "CustomerEmailed"; humans, not so much.
Or you could Ctrl+F "CustomerEmail" and check whether you get 1,000 matches, one per changed file, or only 999 due to some typo.
Or using the web interface to filter by file type to batch your reviews.
Or...
It's just that none of those cases comes anywhere close to fitting within our memory/attention capacity.
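A minimal sketch of that kind of mechanical check, assuming a Node environment; the names and the idea of passing changed file paths as CLI arguments (e.g. from `git diff --name-only`) are hypothetical, not from the thread:

```typescript
// check-rename.ts: flag files where a bulk rename went wrong.
import { readFileSync } from "node:fs";

const OLD = "CustomerEmailAddress";
const NEW = "CustomerEmail";

// Paths of the files the PR touched, passed as CLI arguments.
const changedFiles = process.argv.slice(2);

for (const path of changedFiles) {
  const text = readFileSync(path, "utf8");
  // Any survivor of the old name means the rename missed a spot.
  if (text.includes(OLD)) {
    console.error(`${path}: still contains ${OLD}`);
  }
  // A lowercase letter right after the new name hints at a botched
  // regexp, e.g. "CustomerEmailed" produced from "CustomerEmailAddressed".
  const mangled = text.match(new RegExp(`${NEW}[a-z]`, "g"));
  if (mangled !== null) {
    console.error(`${path}: suspicious identifiers: ${mangled.join(", ")}`);
  }
}
```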
I work in a large C++ codebase and a rename like that will actually just crash my vscode instance straight-up.
(There are good automated tools that make it straightforward to script up a repository-wide mutation like this however. But they still generate PRs that require human review; in the case of the one I used, it'd break the PR up into tranches of 50-ish files per tranche and then hunt down individuals with authority to review the root directory of the tranche and assign it to them. Quite useful!)
Of course some languages... PHP... aren't so lucky. $customer->cusomerEmail? Good luck dealing with that critical issue in production, fuckheads!
The point is moreso that PHP won't stop you from doing that. It will run, and it will continue running, and then it will throw an error at some point. Maybe.
If the code is actually executed. If it's in a branch that's only executed like 1/1000 times... not good.
Sure 1000+ changes kills the soul, we're not good at that, but sometimes there's just no other decent choice.
The usual response is something like "if you're correct, wouldn't that mean there are hundreds of cases where this needs to be fixed to resolve this bug?". The answer obviously being yes. Incoming 100+ file PR to resolve this issue. I have no other ideas for how someone is supposed to resolve an issue in this scenario
I would rather just see the steps you ran to generate the diff and review that instead.
A very simple example: migrating from JavaEE to JakartaEE. Every single Java source file has to have the imports changed from "javax." to "jakarta.", which can easily be thousands of files. It's also easy to review (and any file which missed that change will fail when compiling on the CI).
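In that spirit, the reviewable artifact can be the script rather than the diff. A minimal sketch, assuming Node; the directory walk is simplified and the package list is illustrative rather than the complete set of migrated namespaces:

```typescript
// migrate-jakarta.ts: rewrite javax.* imports to their jakarta.* homes.
import { readdirSync, readFileSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// Recursively yield every .java file under a directory.
function* javaFiles(dir: string): Generator<string> {
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    const path = join(dir, entry.name);
    if (entry.isDirectory()) yield* javaFiles(path);
    else if (path.endsWith(".java")) yield path;
  }
}

for (const path of javaFiles("src")) {
  const text = readFileSync(path, "utf8");
  // Only the namespaces that actually moved; javax.swing et al. stayed.
  const migrated = text.replace(
    /^import javax\.(servlet|persistence|annotation|ws)\./gm,
    "import jakarta.$1.",
  );
  if (migrated !== text) writeFileSync(path, migrated);
}
```

Reviewing these ~25 lines, plus letting the CI compiler catch any file that missed the change, is far more tractable than eyeballing thousands of mechanical one-line diffs.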
But there is also the Safari Technology Preview, which installs as a separate app but is a bit less stable. Similar to Chrome Canary.
I actually have been trying to figure out how to get my React application (unreleased) to perform less laggy in Safari than it does in Firefox/Chrome, and it seems like it is related to all the damn DOM elements. This sucks. Virtualizing viewports adds loads of complexity and breaks some built-in browser features, so I generally prefer not to do it. But, at least in my case, Safari seems to struggle with doing certain layout operations with a shit load of elements more than Chrome and Firefox do.
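For what it's worth, the core of viewport virtualization is small; it's the edge cases that add the complexity. A hand-rolled, illustrative sketch (fixed row height, no overscan tuning), with all names made up:

```tsx
import { useState } from "react";

const ROW_HEIGHT = 28; // fixed-height rows keep the math trivial

function VirtualList({ rows, height }: { rows: string[]; height: number }) {
  const [scrollTop, setScrollTop] = useState(0);
  const first = Math.floor(scrollTop / ROW_HEIGHT);
  const visible = Math.ceil(height / ROW_HEIGHT) + 1;
  const slice = rows.slice(first, first + visible);

  return (
    <div
      style={{ height, overflowY: "auto" }}
      onScroll={(e) => setScrollTop(e.currentTarget.scrollTop)}
    >
      {/* The spacer keeps the scrollbar sized for the full list. */}
      <div style={{ height: rows.length * ROW_HEIGHT, position: "relative" }}>
        {slice.map((row, i) => (
          <div
            key={first + i}
            style={{
              position: "absolute",
              top: (first + i) * ROW_HEIGHT,
              height: ROW_HEIGHT,
            }}
          >
            {row}
          </div>
        ))}
      </div>
    </div>
  );
}
```

Only the visible slice exists in the DOM at any moment, which is exactly why find-in-page and similar built-in browser features stop working on the unrendered rest.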
By all means. It sometimes feels like React is more the symptom than the actual issue, though.
Personally, I just like having less code; it generally makes for fewer footguns. But that's an incredibly hard sell in general (and of course not the entire story).
But CSS has bitten me with heavy pages (causing a few seconds of lag that even devtools debugging/logging didn't point towards). We know wildcard selectors can impact performance, but in my case there were many open-ended selectors like `:not(.what) .ever`, where the `:not()` not being anchored to anything made it act like a wildcard with conditions. Using `:has()` will do the same with additional overhead. Safari was the worst at handling large pages and these types of selectors, and I noticed more sluggishness 2-3 years ago.
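A rough way to poke at this from the browser console on an already-heavy page; the harness only times stylesheet insertion plus the forced recalculation, so treat the numbers as indicative, and `#app` is a stand-in for whatever container the page actually has:

```typescript
function timeSelector(css: string): number {
  const style = document.createElement("style");
  style.textContent = css;
  const t0 = performance.now();
  document.head.appendChild(style);
  // Reading layout forces the pending style recalculation to happen now.
  void document.body.offsetHeight;
  const t1 = performance.now();
  style.remove();
  return t1 - t0;
}

// The open-ended selector described above:
console.log(timeSelector(":not(.what) .ever { color: red; }"));
// A more anchored variant, for comparison:
console.log(timeSelector("#app :not(.what) .ever { color: red; }"));
```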
Normally, you should be able to debug selector-matching performance (and in general, see how much style computation costs you), so it's a bit weird if you have phantom multi-second delays.
It's just easier to blame the tools (or companies!) you already hate.
it's Microsoft, so the answer is: buy a new computer
(which comes with a bundled Windows license)
It's a product of many cooks and their brilliant ideas and KPIs, a social network for devs and code being the most "brilliant" of them all. For day-to-day dev operations it's something so mediocre that even GitLab looks like the gold standard compared to GitHub.
And no, the problem is not "Rails" or [ insert any other tech BS to deflect the real problems ].
The problem is they abandoned Rails for React. The old SSR GitHub experience was very good. You could review massive PRs on any machine before they made the move.
A fun part of a retro at my company last year was me explaining to a team, “had all of your pods’ requests succeeded, the DB would have been pushing out well over 200 Gbps, which is generally reserved for top-of-rack switches.” Of course, someone else then had to translate that into “4K Blu-Rays per second,” because web devs aren’t typically familiar with networking, racks, data centers…
If github has a million users visiting it per day on a FRESH cache, and all of them have to download at least 10 megabytes of text data (both of these numbers are far too high), you are at ... 0.015 "4k blurays per second". Yeah I think MS's datacenters will survive.
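The napkin math behind that, using the comment's own (deliberately inflated) inputs:

$$
\frac{10^6 \ \text{users} \times 10 \ \text{MB}}{86{,}400 \ \text{s}} \approx 116 \ \text{MB/s} \approx 0.93 \ \text{Gbps}
$$

which is well under one percent of the 200 Gbps figure from the previous comment, regardless of how many gigabytes you assume a 4K Blu-ray holds.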
I’m not a frontend dev, and have next to zero experience with anything beyond jQuery, but an analogy is shell. Bash (and zsh, though I find some of its syntactic sugar nicer, albeit still inscrutable) will happily let you do extremely stupid things, but it also lets you do extremely complicated things in a very concise manner. That doesn’t mean it’s inherently bad, it means you need to know what the hell you’re doing, and use linters, write tests, etc.
Available data confirms that SPAs tend to perform worse than classic SSR.
Github's code view page has been unreasonably slow for the last several years ever since they migrated away from Rails for no apparent reason.
I’m sure you could make something work better as a SPA, but nobody does.
Their "solution" was to enable SSR for us ranters' accounts.
The fact that they have this ability / awareness and haven't completely reverted by now is shocking to me.
Which, unfortunately, cannot be measured :( so no KPIs. Darn!
It's all fun and games until you cut quality over and over so much that your customers just leave. Ask Chrysler or GE. I mean, they must have saved, what, billions across decades? And for free!
Well... um... not free actually, because those companies have been run into the ground, dragged through hell, revived, and then damned again.
Maybe it will make a significant enough cumulative impact 5 years later that it can actually be noticed and defended in a meeting against other priorities.
But I’ve never heard of anyone hiring someone on minimum wage and deferring a huge bonus to 5 years later.
Even if it does make a big impact, would anyone even take such a job?
I know everyone knows the cliche "the devil is in the details," but everyone seems to continually make these mistakes because nuance is hard. Then again, what is a cliche if not words of wisdom that everyone can recite but fails to follow?
The alternative is you develop a lemon market, which is a terrible situation for all parties involved. Short-term profits might be up, but at the cost of much higher long-term rewards.[0] You infer where the downed planes were shot from the measurements you can make on the planes that returned. But that is very different from measuring where the downed planes were actually shot. You can't just take the inverse of the returned planes and know where to add plating.
if they were forced to use slow machines, they would not be able to put out crap like that
Now you CAN do it so that is not the case, but tbh I have never seen that in the wild.
Edit: here's a good investigation on a real-enough app https://www.developerway.com/posts/tailwind-vs-linaria-perfo...
Tailwind is probably one of the best, considering you can use Vite to literally strip out all unused CSS easily.
And I think Tailwind v4 does this automatically.
This is such a tired trope.
The reality is both can be slow, it depends on your data access patterns, network usage, and architecture.
But the other reality is that SPAs and REST APIs usually have less optimal network usage and much worse data access patterns than traditional DB-connected SSR monoliths. The same goes for microservices.
Like, you could design a highly scalable and optimal SPA. Who's doing it? Almost nobody.
No, instead they're making basically one endpoint per DB table, recreating SQL queries in client-side memory, duplicating complex business logic on the front and back end, and sending 50 requests to load a dashboard.
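A hypothetical illustration of that shape of client, with made-up endpoints (multiply by a dozen more tables to reach the 50-request dashboards described above); the contrast is with a single aggregated call where the join stays in SQL:

```typescript
// The chatty version: one request per table, then the client re-joins
// what the database would have joined in one query.
async function loadDashboardChatty(userId: string) {
  const [user, orders, invoices, tickets] = await Promise.all([
    fetch(`/api/users/${userId}`).then((r) => r.json()),
    fetch(`/api/orders?user=${userId}`).then((r) => r.json()),
    fetch(`/api/invoices?user=${userId}`).then((r) => r.json()),
    fetch(`/api/tickets?user=${userId}`).then((r) => r.json()),
  ]);
  // ...followed by client-side joins and duplicated business logic.
  return { user, orders, invoices, tickets };
}

// The monolith-flavored version: one round trip, one SQL join server-side.
async function loadDashboard(userId: string) {
  return fetch(`/api/dashboard/${userId}`).then((r) => r.json());
}
```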
Even other frameworks like Vue.js, Solid, or Svelte don't really suffer from it as much. It simply happens a couple of orders of magnitude more often in React than in any other framework.
GitHub is big software, but not that big. Huge monorepos and big big diffs grind GitHub to a pulp.
> if they were forced to use slow machines, they would not be able to put out crap like that
Lots of developers are rather obsessed with writing good, performant code. The problem is that many project managers do not let them do these insane optimizations because they take time.
The only things that forcing developers to use slow machines will bring is developers quitting (and quite a lot of them would actually love to see the person responsible for this decision dead (I'm not joking) because he made the developers' job a hell on earth).
What you should rather do if you want performant software is to fire all the project managers who don't give the developers the necessary time (or don't encourage the developers) to write highly optimized code (i.e. those idiot project managers who argue with "pragmatism" concerning this point).
No they don't. It's literally just a skill issue.
To give just one simple example: to get the textbook complexity bound for Dijkstra's algorithm, you need some fancy mergeable-heap data structures, which are much more complicated, and thus more time-intensive to implement, than the naive implementation.
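For reference, the bounds at stake (standard results, not from the thread), with $V$ vertices and $E$ edges:

$$
\begin{aligned}
\text{naive array scan:} &\quad O(V^2) \\
\text{binary heap:} &\quad O((V+E)\log V) \\
\text{Fibonacci heap (the textbook bound):} &\quad O(E + V\log V)
\end{aligned}
$$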
Or you can get insane low-level optimizations by using the SIMD instructions that modern processors provide. Unluckily, this takes a lot of time and leads to code that is not easy to understand (and thus not easy to write) for "classically trained" programmers.
Yes, you indeed need a lot of skill to write such very fast algorithms, but even for such ultra-smart programmers, finding and applying these optimizations takes a lot of development time, which is why it is often only done for code that is insanely computation-intense and performance-critical, such as video (and sometimes audio) codecs.
Which, it seems, was a result of the M$ acquisition: https://muan.co/posts/javascript
> Writing on the internet can be a two-way thing, a learning experience guided by iteration and feedback. I’ve learned some bad habits from Hacker News. I added Caveats sections to articles to make sure that nobody would take my points too broadly. I edited away asides and comments that were fun but would make articles less focused. I came to expect pedantic, judgmental feedback on everything I wrote, regardless of what it was.
https://macwright.com/2022/09/15/hacker-news
Which is true. Pedantism is the lowest form of pseudo-intelligence.
You can’t just lay this bear trap of an opportunity and expect me to not pedantically state that the word is either “pedantry”, the activity performed by pedants, or “pedantic”, to describe such activities.
“Pedantism” would be a philosophy or viewpoint that extols pedantry. Pedantism would be to pedantry as deontology is to rule-following, a justification of an activity. As such, pedantism would be a slightly higher form of pseudo-intelligence than mere pedantry.
But only slightly.
> indenting or quoting yourself in a way that makes it look more authoritative
network.http.referer.XOriginPolicy = 1
Meanwhile, I opened a 100K line CSV in Neovim and while it took a couple of seconds to open and render highlighting, after that, it was fine.
There are of course performant react apps out there. What Steve did with tldraw is amazing.
However, the vast majority of the apps out there are garbage since the framework itself is terribly inefficient.
GitLab is anything but light and by default tends to be slow, but it's surprisingly fast with a good server (nothing crazy, but big) and caching.
Gitea is an example I like because it stores the repository as a bare repository, the same as if I did git clone --bare. I bring it up because when I stopped running Gitea, I could easily go into the data, back up all the repositories, and reuse them somewhere else.
GitLab: https://docs.gitlab.com/administration/gitaly/praefect/
GitHub: https://github.blog/engineering/infrastructure/stretching-sp...
Too bad Phabricator is maintenance-only now https://en.m.wikipedia.org/wiki/Phabricator
https://we.phorge.it/
At the very least, I wish they set it to auto.
193 more comments available on Hacker News