I Built the Same App 10 Times: Evaluating Frameworks for Mobile Performance
Posted 2 months ago · Active 2 months ago
lorenstew.art · Tech story · High profile
Key topics
Mobile Performance
JavaScript Frameworks
Web Development
The article compares the performance of various JavaScript frameworks for building mobile web applications, sparking a discussion on the importance of performance, framework choices, and native vs. web development.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion; first comment after 5m
- Peak period: 120 comments in 0-12h
- Average per period: 19.6
- Comment distribution: 157 data points (based on 157 loaded comments)
Key moments
- Story posted: Oct 28, 2025 at 1:22 AM EDT (2 months ago)
- First comment: Oct 28, 2025 at 1:27 AM EDT (5m after posting)
- Peak activity: 120 comments in 0-12h, the hottest window of the conversation
- Latest activity: Nov 3, 2025 at 6:03 AM EST (2 months ago)
I usually make the analogy of a video game, where you can pick the difficulty. Svelte/SvelteKit is working in the "easy" difficulty level. You can achieve the same end result and keep your sanity (and your hair).
Maybe years ago. Now it's a bloated beast.
I think this is the reason why React feels normal to you. But as someone coming into it fresh, React felt like there were always 4 different ways to do the same thing and 3 of them are wrong because they built a new API/there are more idiomatic ways to accomplish the same thing now. If you have a decade of experience, then you probably do most things the right/obvious way so don't even notice all the incorrect ways/footguns that React gives you.
official documentation, otherwise you'll end up creating a NextJS app and being steered toward deploying to Vercel.
Saying React is a "bloated monster" and then not being able to provide a single example of how it is bloated is a joke. The article we're looking at shows that the bundle size can be a bit bigger, but the speed to render is equivalent to all these other frameworks.
If you really love minimal bundle sizes, go off, but bundle size is not how I would define bloat in a framework
The article seems to make the bloat self-evident by comparing the load times of identical apps and finding React magnitudes slower.
To be fair, I haven't written in React for a few years now. I reached for Svelte with the last two apps I built after using React professionally for 4 years. I was expecting there to be a learning curve and there just... wasn't? It was staggering how little I had to think about. Even something as small as not having to write in JSX (however normalized I was to writing in it) really felt meaningful once I took a step back and saw the forest for the trees.
I dunno. I just remember being on the interview circuit and asking engineers to tell me about useCallback, useEffect, useMemo, and memo and how they're used, how something like console.log would fare in relation to them, when to include/exclude arguments from memoization arrays, etc., and it was pretty easy to trip a lot of people up. I think the introduction of the compiler is an attempt to mitigate a lot of those pains, but newer frameworks were designed with those headaches in mind from the start rather than mitigating them much later, and you can feel it.
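For anyone who hasn't sat on either side of those interviews, a minimal sketch of the kind of thing that trips people up (component names made up purely for illustration):

```tsx
import { memo, useCallback, useMemo, useState } from "react";

// memo() only skips re-rendering Row if its props are referentially equal.
const Row = memo(function Row({ onSelect }: { onSelect: () => void }) {
  console.log("Row rendered"); // fires on every parent render unless onSelect is stable
  return <button onClick={onSelect}>select</button>;
});

function List({ items }: { items: string[] }) {
  const [query, setQuery] = useState("");

  // Recomputed only when `items` or `query` change; forget `query` in the
  // dependency array and you silently filter against a stale value.
  const visible = useMemo(
    () => items.filter((i) => i.includes(query)),
    [items, query]
  );

  // Without useCallback this would be a new function each render, defeating memo() above.
  const onSelect = useCallback(() => setQuery(""), []);

  return (
    <div>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      {visible.map((v) => (
        <Row key={v} onSelect={onSelect} />
      ))}
    </div>
  );
}
```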
React 19 required almost no code changes in my multiple production apps so unless I missed something, I would say the API surface was virtually unchanged by it
> The article seems to make the bloat self-evident by comparing the load times of identical apps and finding React magnitudes slower.
What are you talking about? Next.js != React, that's your own fault if you bought into their marketing. TanStack / React looks to be a slightly larger bundle size but I'm seeing FCP differences from 35ms to 43ms (React being 43ms), how is that orders of magnitude slower?
Bad faith or bad reading, I can't help you either way here
> asking engineers to tell me about useCallback, useEffect, useMemo, and memo and how they're used
What are you even trying to say? Are you implying that other web frameworks don't come with any state management, or that they are reactive, or that you don't need the concepts from React in them?
"People got confused sometimes" isn't really a defense when the alternative is a framework you only ever use on solo greenfield projects that you've never talked to another engineer about their core concepts.
Seriously, you are just peddling groupthink, there isn't a single legit criticism of React.
Next.js, on the flip side, we should all go off on those clowns, but I wouldn't touch that with a 10 foot pole so I don't see how it's even relevant.
Maybe hooks are cool but the same code written in react vs vue vs svelte or something else is always easier on the eyes and more readable. Dependency arrays and stale closures are super annoying.
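A tiny example of the stale-closure annoyance (names are illustrative, not from the article):

```tsx
import { useEffect, useState } from "react";

function Poller() {
  const [count, setCount] = useState(0);

  useEffect(() => {
    // With an empty dependency array, this closure captures count = 0 forever,
    // so the interval logs a stale value and setCount(count + 1) never moves past 1.
    const id = setInterval(() => {
      console.log("stale count:", count);
      setCount(count + 1); // fix: setCount(c => c + 1), or add `count` to the deps
    }, 1000);
    return () => clearInterval(id);
  }, []); // <- missing dependency: count

  return <p>{count}</p>;
}
```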
Sorry but I really hate React. I've dealt with way too many shit codebases. Meanwhile working in vue/svelte is a garden of roses even if written by raw juniors.
Now with Laravel, Blade and jQuery the IDE support is low, but everything is easy enough and we work as a team and do merge requests and it's a chill job even if it's full stack.
>I liked being a solo React Typescript developer.
Being a solo FE rocks. Everyone thinks you're a magician. The worst is FE-by-committee where you get 'full-stack' devs but really they're 99% postgres and 1% html.
Congrats, it's the most popular framework, no doubt there are abuses out there.
I highly doubt raw juniors are actually writing beautiful vue/svelte code, if obviously emotionally charged anecdotes are your only arguments here, I think you can just admit you see "Facebook" and crash out...
"use no memo"
react now needs you to declare what you are not using, using a language "feature" that does not exist. It is crazy how people keep denying reality wrt React.
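For context, that's the React Compiler's opt-out escape hatch; it looks roughly like this (component is made up):

```tsx
// Opting a single component out of the React Compiler's automatic memoization.
function LegacyChart({ data }: { data: number[] }) {
  "use no memo"; // directive: skip compiler optimizations for this function only
  return (
    <ul>
      {data.map((d, i) => (
        <li key={i}>{d}</li>
      ))}
    </ul>
  );
}

export default LegacyChart;
```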
Are directives horrible? Absolutely. Did I encounter any need for this across porting 4 different apps and a component library to React 19? No, it was frictionless.
Because React is the same as it has been for a long time.
Alternatives are great for those without these kinds of constraints.
In which case, I rather use traditional Java and .NET frameworks with minimal JavaScript, if at all.
I wonder how anyone gets any work done when they have to wait 10 seconds on every page load on a M3 Macbook Air
Back in 1999 - 2001, every time I wanted to do a make clean; make all in a C-based product (actually Tcl with lots of C extensions), it took at least one hour of build time.
Both compile for the same time as a page load in Next dev mode, but then everything is smooth sailing (and Angular also does hot-reloading well)
Ugh. That thinking is what gets you things like mandatory login via apps for your desktop. And not every application makes sense on a phone. And some Web Applications just require low latency high bandwidth internet to work properly.
But the vast majority do not. And this haranguing is an opportunity / defensible position to put more efforts and resources into performances. If nothing else, think of it as a Trojan horse to make software suck less.
Even a php app without decorations would be faster and better for most applications.
My experience has been that the proliferation of mobile devices has made my desktop experience consistently worse and I struggle to come up with an example where it didn't.
"the web is mobile" = strictly "apps" ?
I'm not overly familiar with it, but we use it at work. I've no idea if I should expect it to be quicker or slower than something like Next.
What a joy to read.
But the same is true for the content itself, no business is paying you to actually build the same app 10x, especially so if it's something as trivial as a kanban board.
Also, performing well in a prototype scenario is very different than performing well in production-ready scenario with a non-trivial amount of templates and complex operations. Even the slowest SSGs perform fast when you put three Markdown posts and one layout in them, but then after a few years of real-world usage you end up in a scenario where the full build takes about half an hour.
Kinda cool that you can do that in an afternoon, but absolutely useless as a benchmark of anything.
As a general challenge to people: write your article, then see if you can halve its length without losing much. If it felt too easy, repeat the process! There’s a family of well-known quotes that amount to “sorry for writing a long letter, I didn’t have time to write a short letter”. Concise expression is not the easiest, but very valuable. Many a 100-page technical book can be improved by reduction to a one-page non-prose overview/cheat sheet (perhaps using diagrams and tables, but consider going more freeform like you might on a whiteboard) plus a ten page abridged version.
As shown in the article, you can build ONCE an app that loads in milliseconds by just providing a URL to any potential customer. It works on mobile and on desktop, on any operating system.
The native alternative requires:
- Separate development for every platform you target (to be widely used you need *at least* iOS, Android, macOS and Windows). - Customers are required to download and install something before using your platform, creating additional friction.
And all of this to obtain at most 20-30ms better loading times?
There are plenty of cases where native makes sense and is necessary, but most apps have very little to gain at the cost of a massive increase in development resources.
Web deployment is easier, faster and cheaper.
*) https://github.com/ryansolid
Solid is great for raw rendering speed, but it hydrates just like react (unless you use an islands framework on top like astro which has its own limitations), while qwik and marko are resumable out of the box
Can you also tell ChatGPT to fix the layout so the table just above this message is fully visible without horizontal scrolling?
Edit: Related post on the front page: https://news.ycombinator.com/item?id=45722069
On first glance it seems very legit, and personally I would be very hesitant to judge something as GPT slop based on writing style alone.
>> Marko delivers 12.6 kB raw (6.8 kB compressed). Next.js ships 497.8 kB raw (154.5 kB compressed). That’s a 39x difference in raw size that translates to real seconds on cellular networks.
Sorry, it isn't 2006, cellular networks aren't spending "seconds" in the difference between 13kB and 500kB.
Payload size can matter, but it's complete nonsense that 500kB would translate to "real seconds".
Just spotted this section:
>> The real-world cost: A 113 kB difference at 3G speeds (750 kbps) means 1.2 seconds for download plus 500ms to 1s for parse/execution on mobile CPUs. Total: 1.5 to 2 seconds slower between frameworks.
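To be fair, the arithmetic in that quote is internally consistent; it's the premise that's the problem:

```ts
// Transfer time for the quoted payload difference at the article's assumed 3G rate.
const payloadKB = 113;           // extra compressed kilobytes
const linkKbps = 750;            // the article's assumed 3G throughput
const seconds = (payloadKB * 8) / linkKbps;
console.log(seconds.toFixed(1)); // ~1.2
```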
3G is literally being decommissioned, and 3G isn't 750kbps, it's significantly faster than that.
> On first glance it seems very legit
Yes, that's exactly the danger of AI slop. It's very plausible, very slick and very easy to digest. It also frequently contains unchecked errors without any strong signals that would traditionally go along with that.
That's why I stopped reading at your first quote, it didn't fit with the summary and there's no point reading a bunch of numbers and wondering which are made up.
The article also cites the use case: real estate agents. They also seem to struggle at times with bad connection issues. And with a bad connection, average websites do take seconds to load for me.
Websites taking seconds to load in bad mobile reception is usually down to latency and handshaking, not raw bandwidth.
Show me a real world example of a single payload 500kB taking seconds longer than 13kB. It's not realistic.
I can also show you how slow it is when I visit the countryside and the connection is not good.
Or when I take a very crowded train to another city/country and have to share the wi-fi while traveling in a non-metropolitan area.
Or when I run out of pre-paid credits and I get bumped into low speed mode and the provider's page takes several minutes to load.
I don't even know why I answer to this. Because for sure this is all my fault and I'm the one "holding it wrong".
I'm saying that the impact of dropped packets and poor latency falls much worse on sites that have multiple connections and dozens of files to download than a single bundle.
Also in those circumstances, the 13kB would also take "seconds".
The situation described, where the 13kB file takes milliseconds but the 500kB file takes seconds, is what is unrealistic. It's an invention of an LLM.
Chances are two different 13kB files would be far worse in those circumstances than a single 500kB file.
I don't know why I'm still answering this thread, because it's clear I'm not being understood, and this is all arguing over a flagged AI slop article that no-one wrote.
Yeah but a couple seconds I can wait. A few minutes not realistically unless it’s something really important.
"Show me a real world example of a single payload 500kB taking seconds longer than 13kB. It's not realistic."
And my only comment towards this is, please go out to see for yourself.
Also maybe take into account that the bloated website is not the only thing using the device connection. Messenger messages syncing in the background, etc.
Not wanting to believe something is very different from the thing being untrue. The differences between 13kB and 500kB are quite real and quite measurable.
I personally think the most important parts of the post are these two related parts: (1) the technofeudalism section (2) developing lightweight apps so a business does not feel the need to have a separate native codebase along with a specialized native team ($$$). I was hoping this post was a gift to many people by doing all this work for y'all, or at least providing a foundation from which you can fork and modify for your own use.
It is an open question for everyone whether start-up time on cellular is something you need to worry about. All the other nonsense here misses the point.
https://news.ycombinator.com/item?id=45724022
> This isn’t just an inconvenience. It’s technofeudalism.
There are so many of these in the article. It's like a spit to the face
We ended up with Vue vs. Svelte and landed on Vue/Nuxt since we agreed they have the most intuitive syntax for us, and it seemed like the one with the best trajectory, technologically speaking.
That was one year ago. It's not moving as fast as I would hope, but I still think Vue/Nuxt is a better choice than React at least. This article seems to support this somewhat.
Also, I did a review (with the help of all the big LLMs), and they seem to agree that Vue has the syntax and patterns that are best suited for agentic coding assistance.
The wins with regard to "First Contentful Paint" and "size" are not the most important. We just trust the Vue community more. React seems like a recipe for a bloated bureaucratic mess. Svelte still looks like a strong contender, but we liked the core team of Vue a lot, and most of us just enjoy Vue/Nuxt syntax/patterns better.
all apps need a js frontend, while their backend can be in Java, Rails, Python, C#, Node.js etc.
even at number 2, vuejs is likely to have more users than multiple popular backend frameworks combined
Your attitude is exactly why our supercomputers struggle to display even the simplest things with any kind of performance, and why pure text takes multiple seconds to appear
I particularly like that (JSX aside) it's just JavaScript, not a separate language with its own compiler like Svelte (and by the sounds of it Marko, which I hadn't heard of before). You can split your app into JS modules, and those can all use Solid signals, even the internal bits that don't have their own UI.
> When someone’s standing in front of a potential buyer trying to look professional, a slow-loading app isn’t just an annoyance. It’s a liability.
I liked reading that. It’s actually surprising how few developers think that way.
> Mobile is the web
That’s why.
I know many people that don’t own a computer, at all, but have large, expensive phones. This means that I can’t count on a large PC display, but I also can reasonably expect a decent-sized smaller screen.
I’ve learned to make sure that my apps and sites work well on high-quality small screens (which is different from working on really small screens).
The main caveat is the quality of the network connection. I find that I need it to work OK if the connection is dicey.
I've been there myself as a dev and later on as a manager. You have to really watch out for getting locked into local minima here. In most cases it's not bundle size that wins this, but engineering an app that can gracefully work offline, either by having the user manually pre-load data or by falling back to good caches.
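A minimal sketch of the "fall back to good caches" half of that, using a service worker (cache name and asset list are placeholders; real apps need versioning and invalidation on top):

```ts
// sw.ts — network-first with cache fallback, so the app still opens offline.
const CACHE = "app-shell-v1";                   // placeholder cache name
const ASSETS = ["/", "/index.html", "/app.js"]; // placeholder asset list

self.addEventListener("install", (event: any) => {
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener("fetch", (event: any) => {
  event.respondWith(
    fetch(event.request).catch(async () => {
      const cached = await caches.match(event.request);
      return cached ?? new Response("Offline", { status: 503 });
    })
  );
});
```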
Some of the most challenging code that I write is about caches.
Writing good cache support is hard.
But in cases as grandparent describes, you do have significant wiggle room.
It's fairly difficult, for me. The app can do a lot, but sometimes, the data needs to be fresh. Making the decision to run an update can be difficult.
Also, I write free software, for nonprofits, so the hosting can sometimes be a bit dodgy.
You’ve stopped caring because it. never. ends. Really.
At the end of the day there have been a lot of new things in web development but none of them are of such a significance that you’re missing out on anything by sticking with what works. I personally just like to go with a mature backend framework (usually Laravel or Django) and minimal JS on the frontend. I’ve tried many of the shiny new libraries but have not seen much reason to switch over.
> Here’s where this gets bigger than framework choice. When you ship a native app to the App Store or Google Play instead of building a web app, you’re not just making a technical decision. You’re accepting a deal that would’ve been unthinkable twenty years ago. Apple and Google each take up to 30% of every transaction (with exceptions depending on program and category). They set rules. They decide what you can ship. They can revoke your access tomorrow with no recourse. You have no alternative market. You can’t even compete on price because the fee is baked into many transactions.
Setting a header only works if you know exactly when you are going to update the file. Except for highly dynamic or sensitive things, this is never correct.
You can add ?v=2 to each and every instance of a URL on your website. Then you update all pages, which is preposterous and exactly what we didn't want. As a bonus, ?v=1 is not erased, which might also be just what you didn't want.
I never want to reload something until I do.
There are also other solutions if you need to preserve the url that are cleaner than appending a query string, like etags
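For example, the usual split, sketched with an Express-style static server (directory names are placeholders): content-hashed bundles get long immutable caching, while the HTML entry point keeps its stable URL and revalidates.

```ts
import express from "express";

const app = express();

// Hashed bundles (e.g. /assets/app.3f9c2a.js): safe to cache "forever",
// because any change produces a new filename.
app.use(
  "/assets",
  express.static("dist/assets", {
    setHeaders: (res) =>
      res.setHeader("Cache-Control", "public, max-age=31536000, immutable"),
  })
);

// The HTML shell keeps its stable URL, so it must revalidate (ETag / 304s)
// to pick up references to the new hashed bundles.
app.use(
  express.static("dist", {
    etag: true,
    setHeaders: (res, path) => {
      if (path.endsWith(".html")) res.setHeader("Cache-Control", "no-cache");
    },
  })
);

app.listen(3000);
```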
These are expensive hacks to work around a lack of basic functionality.
It reminds me of one kid taking something from another and refusing to give it back because they are larger.
People make websites, they think they control what goes on on the page. This isn't unreasonable to think. In fact, everything should be made to preserve that idea.
A situation where they just can't change the page shouldn't exist. Abstracting it away or otherwise working around it doesn't make it any less wrong.
Some browsers have a magic key combo to force reload. I suppose the solution is to put up a modal and ask the user to "reinstall" the web page.
I have a lot of static pages with minimal html/css that thanks to lazy loading and caching consume very little bandwidth. The technology is truly wonderful, clicking around feels like a desktop application.
Urls like /v2/yourfile.js are probably closer to that philosophy. Or /[hash]/yourfile.js.
Any query string may prevent caching in some browsers (not sure which or if they still do)
File paths in my view are to organize files hierarchically not for hash or version numbers.
Html like <img src="logo.jpg"> looks neat and sophisticated. You can teach it in 5 seconds. If more characters are needed I expect something huge in return. For example styling it individually or as a group of things is a huge benefit. lazy loading is also HUGE.
I'll give you an extra one I learnt about recently: you can use a custom compression dictionary (although this is only available in Chrome right now), which means that even when a file needs to be redownloaded, the network transfer is tiny, as it's compressed with a custom dictionary that matches a previous version of the file.
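If anyone's curious what that looks like on the wire, roughly this (based on my reading of the Compression Dictionary Transport proposal; the exact header names and encodings are worth double-checking against current docs):

```ts
import express from "express";

const app = express();

// Serve the current bundle and advertise it as a dictionary for future versions
// matching the pattern. A supporting browser later sends Available-Dictionary with
// a hash of this file, and the server can reply with a small delta
// (Content-Encoding: dcb / dcz) instead of the full new bundle.
app.get("/assets/app-current.js", (_req, res) => {
  res.setHeader("Use-As-Dictionary", 'match="/assets/app-*.js"');
  res.sendFile("app-current.js", { root: "dist/assets" });
});

app.listen(3000);
```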
Vite gives you that behaviour out of the box.
If so, you have the extra cost, effort and bureaucracy of building and deploying to all the different app stores. Apple's App Store and Google Play each have various annoyances and limitations, and depending on your market there are plenty of other stores you might need to be in.
Sometimes you do need a native or native-feeling app, in which case a native wrapper for JS probably is a good idea, other times you want something lightweight that works everywhere with no deployment headaches.
UX matters, and user does not care if the native wrapper or 500kB of js is there or not, as long as the job is done conveniently and fast.
So here some obscure Next.js issues magically become fundamental React architecture issues. What are these? Skill issues?
Can someone explain why? What precisely would make React sooo slow and big compared to other abstractions?
As to why it is slow, my knowledge isn't super up-to-date (haven't kept up that well with recent updates), but in general the idea is:
- The React runtime itself is 40 kB so before doing anything (before rendering in CSR or before hydrating in SSR) you need to download the runtime first.
- Most frameworks have moved on to use signals to manage state updates. When state changes, observers of that state will be notified and the least amount of code will be run before updating the DOM surgically. React instead re-executes the code of entire component trees, compares the result with the current DOM and then applies changes. This is a lot more work and a lot slower. Over time techniques have been developed in React to mitigate this (memoization, React Compiler, etc.), but it still does a lot more work than it needs to, and these techniques are often not needed in other frameworks because they do a lot less work by default.
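To make the second point concrete, roughly the difference looks like this (trivial counters, purely illustrative; the two files would live in separate projects since the JSX transforms differ):

```tsx
// Counter.solid.tsx — Solid: the component body runs once; updating the signal
// patches only the text node bound to count().
import { createSignal } from "solid-js";

export function SolidCounter() {
  const [count, setCount] = createSignal(0);
  console.log("runs once, at setup");
  return <button onClick={() => setCount(count() + 1)}>{count()}</button>;
}
```

```tsx
// Counter.react.tsx — React: every click re-runs the whole function (and its children),
// and the returned tree is diffed against the current DOM.
import { useState } from "react";

export function ReactCounter() {
  const [count, setCount] = useState(0);
  console.log("runs on every state update");
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}
```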
The js-framework-benchmark [1] publishes benchmarks testing hundreds of frameworks for every Chrome release if you're interested in that.
[1]: https://krausest.github.io/js-framework-benchmark/2025/table...
You're not answering my question, just adding some more feelings.
> The React runtime itself is 40 kB
React is < 10 kB compressed https://bundlephobia.com/package/react@19.2.0 (add react-dom to it). That's not really significant according to the author's figures; the header speaks about "up to 176.3 kB compressed".
> Most frameworks have moved on to use signals to manage state updates. When state change
This is not about kilobytes or initial render times, but about rendering performance in a highly interactive application. It would not impact rendering a blog post, but rendering a complex app's UI. The original blog post does not measure this, it's out of scope.
Well you seemed surprised by this fact, even though it's a given for most people working in front-end frameworks.
> React is < 10 kb compressed https://bundlephobia.com/package/react@19.2.0 (add react-dom to it).
I don't know how bundlephobia calculates package size, and let me know if you're able to reproduce them in a real app. The simplest Vite + React app with only a single "Hello, World" div and no dependencies (other than react and react-dom), no hooks used, ships 60+ kB of JS to the browser (when built for production, minified and gzipped).
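For reference, the entire app behind that measurement is essentially just this (standard Vite entry point, sketched from memory):

```tsx
// src/main.tsx — the whole "Hello, World" app used for the size comparison.
import { createRoot } from "react-dom/client";

createRoot(document.getElementById("root")!).render(<div>Hello, World</div>);
```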
Now the blog post is not just using React but Next.js which will ship even more JS because it will include a router and other things that are not a part of React itself (which is just the component framework). There are leaner and more performant React Meta-Frameworks than Next.js (Remix, TanStack Start).
> This is not kilobytes or initial render times, but performance in rendering in a highly interactive application
True, but it's another area where React is a (relative) catastrophe.
The large bundle size on the other hand will definitely impact initial render times (in client-side rendering) and time-to-interactive (in SSR), because it's so much more JS that has to be parsed and executed for the runtime before even executing your app's code.
EDIT: It also does not have to be a highly interactive application at all for this to apply. If you only change a single value, that is read in a component deep within a component tree you will definitely feel the difference, because that entire component tree is going to execute again (even though the resulting diff will show that only that deeply nested div needs to be updated, React has no way of knowing that beforehand, whereas signal-based framework do)
And finally I want to say I'm not a React hater. It's totally possible to get fast enough performance out of React. There are just more footguns to be aware of.
First, there's the separation between the generic cross-platform `react` package, and the platform-specific reconcilers like `react-dom` and `react-native`. All the actual "React" logic is built into the reconciler packages (i.e., each contains a complete copy of the actual `react-reconciler` package + all the platform-specific handling). So, bundle size has to measure both `react` and `react-dom` together.
Then, the contents of `react-dom` have changed over time. In React 18 they shifted the main entry point to be `react-dom/client`, which then ends up importing the right dev/prod artifacts (with `react-dom` still supported but deprecated):
- https://app.unpkg.com/react-dom@18.3.1/files/cjs
Then, in React 19, they restructured it further so that `react-dom` really only has a few utils, and all the logic is truly in the `react-dom/client` entry point:
- https://app.unpkg.com/react-dom@19.2.0/files/cjs/react-dom.d...
- https://app.unpkg.com/react-dom@19.2.0/files/cjs/react-dom-c...
So yes, the full prod bundle size is something like 60K min+gz, but it takes some work to see that. I don't think Bundlephobia handles it right at all - it's just automatically reading the main entry points for each package (and thus doesn't import `react-dom/client`. You can specify that with BundleJS though:
- https://bundlejs.com/?q=react%2Creact-dom%2Fclient&treeshake...
> Bundle size is 193 kB -> 60.2 kB (gzip)
so React has a performance issue in most places it is used. And probably in every project that lives long enough.
The other thing is that React is too big in terms of kBs of JavaScript you have to download and then parse (and often, thanks to great React ecosystem, you use many other libraries). But that's just another trade-off: it's the price you pay for great backwards compatibility (e.g. you can still use React Class components, you don't have to use hooks, etc.).
That being said React is slow. That is why you need useTransition, which is essentially manual scheduling (letting React know some state update isn't very important so it can prioritise other things) which you don't need to do in other frameworks.
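Roughly what that manual scheduling looks like (illustrative search box, not from the article):

```tsx
import { useState, useTransition } from "react";

function Search({ allItems }: { allItems: string[] }) {
  const [query, setQuery] = useState("");
  const [results, setResults] = useState(allItems);
  const [isPending, startTransition] = useTransition();

  function onChange(next: string) {
    setQuery(next); // urgent: keep the input responsive
    startTransition(() => {
      // non-urgent: React may interrupt this work if more keystrokes arrive
      setResults(allItems.filter((i) => i.includes(next)));
    });
  }

  return (
    <>
      <input value={query} onChange={(e) => onChange(e.target.value)} />
      {isPending ? (
        <p>updating…</p>
      ) : (
        <ul>
          {results.map((r) => (
            <li key={r}>{r}</li>
          ))}
        </ul>
      )}
    </>
  );
}
```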
useOptimistic does not improve performance, but perceived performance. It lets you show a placeholder of a value while waiting for the real computation to happen. Which is good, you want to improve perceived performance and make interactions feel instant. But it technically does not improve React's performance.
But this mostly applies to subsequent re-renders, while things mentioned in the article are more about initial render, and I'm not exactly sure why does React suffer there. I believe React can't skip VDOM on the server, while Vue or Solid use compiled templates that allow them to skip that and render directly to string, so maybe it's partially that?
however, React didn't copy from others, so it got slower than "competition"
By the way, my "horse" of choice is Quasar(based on Vue) and has been for years now.
You might want to fix your horizontal scroll on mobile. I should basically never have a full page horizontal scrollbar on a page that is mostly just text.
.. creating a maintenance issue right now.
Thanks ChatGPT for your valuable slop. Next article.
No, it is excuse not to invest money in places where users won't pay.
For questions about mobile - yeah, we get requests for showing it on mobile, but an app in the app store is a hard requirement, because of discoverability. People know how to install an app from the app store and then they have an icon. Making a PWA icon is still too much work for normal people.
I would need "add to home screen" button in my website that I could have user making icon with single click, then I could go with PWA.
In that case how can you possibly get 35ms FCP? Am I missing something?
> This isn’t a todo list with hardcoded arrays. It’s a real app with database persistence (appears twice)
this article was written by ChatGPT. I'm tired
Now, let's talk about the comments, particularly the top comment. I have to say I find the kneejerk backlash against "AI style" incredibly counter-productive. These comments are creating noise on HN that greatly degrades the reading experience, and, in my humble opinion, these comments are in direct violation of all of the "In Comments" guidelines for HN: https://news.ycombinator.com/newsguidelines.html#comments
Happy to change my mind on this if anyone can explain to me why these comments are useful or informative at all.
I write pretty lean HTML/vanilla JS apps on the front end & C#/SQL on the backend; and have had great customer success on mobiles with a focus on a lot of the metrics the author hammers home.
As ever on mobile it's latency, not bandwidth, that's the issue. You can very happily transfer a lot of data, but if that network is in your interactive hot path then you will always have a significant delay.
You should optimise to use the available bandwidth to solve the latency issues, after FCP. Preload as much data as possible such that navigations are instant.
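A small sketch of that pattern (endpoint names are placeholders): once first paint is out, use idle time and spare bandwidth to warm the data the next navigation will need.

```ts
// After first paint, prefetch likely-next data so navigation doesn't wait on the network.
const prefetched = new Map<string, Promise<unknown>>();

function prefetch(url: string) {
  if (!prefetched.has(url)) {
    prefetched.set(url, fetch(url).then((r) => r.json()));
  }
  return prefetched.get(url)!;
}

window.addEventListener("load", () => {
  const warm = () => ["/api/listings", "/api/profile"].forEach(prefetch);
  // requestIdleCallback isn't universal (e.g. Safari), so fall back to a timeout.
  "requestIdleCallback" in window
    ? (window as any).requestIdleCallback(warm)
    : setTimeout(warm, 1000);
});

// A navigation handler can then await the already-in-flight promise instead of a cold fetch.
```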
"Slowness poisons everything."
Exactly. There's nothing more revealing than seeing your users struggle to use your system, waiting for the content to load, rage clicking while waiting for buttons to react, waiting for the animations to deliver 3 frames in 5 seconds.
Engineering for P75 or P90 device takes a lot of effort, way beyond what frameworks offer you by default. I hope we'll see some more focus on this from the framework side, because I often feel like I have to fight the framework to get decent results - even for something like Vue, which looks pretty great in this comparison.