JavaScript Engines Zoo – Compare Every JavaScript Engine
Key topics
A fascinating comparison of JavaScript engines has sparked a lively discussion, with users sharing benchmark results that reveal surprising performance differences between browsers. While some commenters expressed disappointment with Firefox's relatively slower performance, others pointed out potential explanations, such as the lack of vector rasterization in Firefox's GPU-accelerated rendering backend. The conversation also touched on the varying binary sizes of different engines, with some speculating about the impact of compilation differences. As the author mused, the performance landscape may shift in the future, with V8 potentially regaining its top spot.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 2m after posting
- Peak period: 35 comments in 0-6h
- Avg / period: 9.3 comments
Based on 84 loaded comments
Key moments
- 01 Story posted: Jan 4, 2026 at 6:23 AM EST (5d ago)
- 02 First comment: Jan 4, 2026 at 6:25 AM EST (2m after posting)
- 03 Peak activity: 35 comments in 0-6h (hottest window of the conversation)
- 04 Latest activity: Jan 8, 2026 at 3:15 AM EST (1d ago)
And SpiderMonkey seems... not up there compared to the other 2
I just ran the JetStream2 benchmark and got:
- Firefox: 159 score
- Chromium: 235 score
That's on the latest Fedora Linux with a Ryzen 3600 CPU.
- Firefox: 253.584
- Safari: 377.470
- Chrome: 408.332
- Edge: 412.005
- Firefox: 298.136
- Safari: 425.762
Just the last test in this test suite:
> 3d-cube-SP > 3D cube rotation benchmark by Simon Speich. The original can be found on Simon's web page. Tests arrays and floating-point math in relatively short-running code.
gives the following results:
Firefox: 305.197 (First: 200, Worst: 338.983, Average: 419.309)
Safari: 818.449 (First: 238.095, Worst: 176.471, Average: 1957.237)
which shows that in this particular test, Safari is roughly 2.7 times faster (818 vs. 305).
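For context, here is a rough sketch (not the benchmark's actual code, which is Simon Speich's) of the kind of work 3d-cube-SP does: tight loops of floating-point math over plain arrays, where the engine's ability to keep numbers unboxed and arrays in a packed representation dominates the score.

```js
// Illustrative only: rotate an array of [x, y, z] points around the Y axis, many times.
function rotateY(points, angle) {
  const cos = Math.cos(angle), sin = Math.sin(angle);
  for (let i = 0; i < points.length; i++) {
    const p = points[i];
    const x = p[0], z = p[2];
    p[0] = x * cos + z * sin;
    p[2] = -x * sin + z * cos;
  }
}

const points = Array.from({ length: 10000 }, () =>
  [Math.random(), Math.random(), Math.random()]
);

const start = Date.now();
for (let frame = 0; frame < 1000; frame++) rotateY(points, 0.01);
console.log("elapsed ms:", Date.now() - start);
```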
I'm curious what Firefox's problem is. For example, the 3d-raytrace-SP benchmark is nearly three times faster on Edge than on Firefox on my i7 laptop. The code of that benchmark is very simple and mostly consists of basic math operations and array accesses. Maybe canvas operations are particularly slow in Firefox? This seems like an example the developers should take a look at.
That seems likely. WebRender (Firefox's GPU-accelerated rendering backend) doesn't do vector rasterization. So Firefox rasterizes vectors using the CPU-only version of Skia and then uploads them to the GPU as textures. Apparently the upload process is often the bottleneck.
In contrast, Chrome uses (GPU-accelerated) Skia for everything. And Skia can render vector graphics directly into GPU memory (at least part of the rasterization pipeline is GPU accelerated). I would expect this to be quite a bit faster under load.
It's a known problem, but I hear that almost all of the Gecko graphics team's capacity beyond general maintenance is going towards implementing WebGPU.
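A hypothetical sketch (not from the thread) of the kind of canvas workload that would hit this path: every frame re-rasterizes many vector paths, so on Firefox the CPU-side Skia raster plus texture upload becomes the cost, while Chrome's GPU-backed Skia can rasterize straight into GPU memory.

```js
// Hypothetical stress case: lots of stroked vector paths per frame.
const canvas = document.createElement("canvas");
canvas.width = canvas.height = 800;
document.body.appendChild(canvas);
const ctx = canvas.getContext("2d");

function frame(t) {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (let i = 0; i < 500; i++) {
    ctx.beginPath();
    ctx.arc(
      400 + 300 * Math.sin(t / 1000 + i),
      400 + 300 * Math.cos(t / 900 + i),
      20, 0, 2 * Math.PI
    );
    ctx.stroke();
  }
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```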
---
SpiderMonkey is also now just quite a bit slower than V8, which may contribute.
I've been hearing for a while that JSCore has a more elegant internal architecture than V8, and seeing the V8 team make big architectural changes as we speak seems to support that [1]. But like I said, hopefully those changes will pay off long-term.
[1]
— https://v8.dev/blog/leaving-the-sea-of-nodes
— https://v8.dev/blog/maglev
What are those incentives? I see no incentive for Google to make something fast.
What’s changed in 2026 that will motivate Google to overtake JSCore?
Why would Google have more incentive than Apple to make the fastest engine? Safari being the fastest mobile browser is important to Apple.
If Google had a stronger incentive than Apple, we would have seen V8 being more performant by now.
[0] https://web.archive.org/web/20220724110148/https://bun.sh/
A little faster? Like I said, that's not the *main* benefit, since it's not why Bun can be 5 to 10x faster than Node.js.
If we believe there is a limit to everything, then the only sane conclusion is that V8 and JSC will both perform nearly the same in the long term, with negligible difference. So choosing something that is fast and simple to integrate now makes a lot more sense.
Of course that is assuming memory usage, security and others being equal.
That's a staggering accomplishment.
https://github.com/Hans-Halverson/brimstone
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe... .
https://webkit.org/blog/6240/ecmascript-6-proper-tail-calls-...
The V8 team decided that it's not worth it, since proper stack traces (such as Error.stack) are essential for some libraries, such as Sentry (!). Removing stack trace info can break some code. Also, imagine having missing info in an error stack trace from production code running on Node.js; that's not good. If you need TCO, you can compile that code to WASM: V8 does TCO in WASM.
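As a minimal illustration (a sketch, not from the thread): in strict mode, ES2015 proper tail calls let JavaScriptCore run a tail-recursive call in constant stack space, while V8 and SpiderMonkey will eventually overflow.

```js
"use strict";

// Tail-recursive countdown. With proper tail calls (JavaScriptCore/Safari),
// this completes even for very large n; engines without PTC instead throw
// "RangeError: Maximum call stack size exceeded".
function countdown(n) {
  if (n === 0) return "done";
  return countdown(n - 1); // call in tail position
}

try {
  console.log(countdown(1e6));
} catch (e) {
  console.log("no proper tail calls here:", e.name);
}
```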
It'll be interesting to see how much it will affect React Native apps as it gets more and more optimized for this use case
At one point I really thought that Flutter would outclass it but typical Google project stuff has really put a damper on it from all I can see.
It’s not better than native apps, but as far as cross-platform GUIs go it’s still very, very good.
As a React Native developer for, what, 6 years, I don’t have much positivity left to offer. Bug reports to the core team that went nowhere, the Android crash on remote images without dimensions, all the work offloaded to Expo, etc.
Google couldn’t really have done better; maybe Flutter should’ve become independent after the initial release.
That it is also the only real use case for Dart doesn’t help matters.
While I agree the technology is very good, and in some cases superior, it doesn’t seem to have a path to stable funding. Google laid off a big chunk of the Dart and Flutter teams last year, and there is no Expo to pick up the slack.
While Meta could do the same for React Native, Expo has always been there to pick up the slack, and it receives broader support from the community too. For example, Shopify has a few great, well-maintained RN libs.
A few years ago I started work on a kind of abstraction layer that would let you plug Rust code into multiple different engines. Got as far as a proof of concept for JavaScriptCore and QuickJS (at the time I had iOS and Android in mind as targets). I still think there’s some value in the idea, to avoid making too heavy a bet on one single JS engine.
https://github.com/alastaircoote/esperanto
Every time I look, I find repos that look promising at first but are either unmaintained or have a team of just one or two maintainers running them as a side project.
I want my sandbox to be backed by a large, well funded security team working for a product with real money on the line if there are any holes!
(Playing with Cloudflare workers this morning, which seems like it should cover my organizational requirements at least.)
You could also look at GraalJS. It's shipped as part of the Oracle Database, there's a security team, patching process etc. It's used in production by Amazon amongst others. It's got flexible sandbox features too.
https://www.graalvm.org/latest/reference-manual/embed-langua...
The way it's written is good for security as well:
https://medium.com/graalvm/writing-truly-memory-safe-jit-com...
Disclosure: I sit next to the GraalVM team.
I looked at GraalVM but was put off by the licensing situation: https://www.graalvm.org/22.3/reference-manual/embed-language...
> GraalVM Enterprise provides the experimental Sandbox Resource Limits feature that allows for the limiting of resources used by guest applications. These resource limits are not available in the Community Edition of GraalVM.
Part of my requirements for a sandbox are strong guarantees against memory or CPU exhaustion from poorly written or malicious code.
https://www.graalvm.org/latest/introduction/#licensing-and-s...
> Oracle GraalVM is licensed under GraalVM Free Terms and Conditions (GFTC) including License for Early Adopter Versions. Subject to the conditions in the license, including the License for Early Adopter Versions, the GFTC is intended to permit use by any user including commercial and production use.
It has all the sandboxing features you might want. I don't know if the disclaimers on the other engines change much; open source software always disclaims all liability. Nobody will stand behind something security-sensitive unless it's commercial, because otherwise there's no way to pay for the security team it requires.
But generally, I think the best bet is to offload such things to e.g. Lambda per tenant.
Featured recently on HN.
You're running JS (an 'interpreted', managed language) - it's already intentionally designed to be executed in a sandbox. Unless you provide hooks out to the host system, it can't do anything bad. With mquickjs, the untrusted code can't even overflow your system heap or take too much execution time.
If you were running untrusted C or something, it would make more sense to add the WASM layer.
I have enormous respect for Fabrice but mquickjs is only a few weeks old and I'm no way near skilled enough to audit his C code!
Running it in WASM feels a lot safer to me.
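A sketch of that approach, assuming the quickjs-emscripten npm package (which wraps regular QuickJS compiled to WASM, not mquickjs); the option names below are from that package's documented API as I recall it, so treat them as an assumption:

```js
import { getQuickJS, shouldInterruptAfterDeadline } from "quickjs-emscripten";

const QuickJS = await getQuickJS();

try {
  // Untrusted code runs inside the WASM-compiled engine with a wall-clock
  // interrupt and a heap cap, and it sees no host APIs unless you expose them.
  const value = QuickJS.evalCode("let x = 0; while (true) x++;", {
    shouldInterrupt: shouldInterruptAfterDeadline(Date.now() + 1000),
    memoryLimitBytes: 16 * 1024 * 1024,
  });
  console.log("result:", value);
} catch (err) {
  console.log("sandboxed code was stopped:", String(err));
}
```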
https://github.com/fulcrumapp/v8-sandbox
But yeah, interesting, it might not exist.
workerd does not include any sandboxing layers other than V8 itself. If someone has a V8 zero-day exploit, they can break out of the sandbox.
But putting aside zero-day exploits for a moment, workerd is designed to be a sandbox. That is, applications by default have access to nothing except what you give them. There is only one default-on type of access: public internet access (covering public IPs only). You can disable this by overriding `globalOutbound` in the config (with which you can either intercept internet requests, or just block them).
This is pretty different from e.g. Node, which starts from the assumption that apps should have permission to run arbitrary native code, limited only by the permissions of the user account under which Node is running.
Some other runtimes advertise various forms of permissions, but workerd is the only one I know of where this is the core intended use case, and where all permissions (other than optionally public internet access, as mentioned) must be granted via capability-based security.
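A rough illustration of that model in standard Workers module syntax (the `MY_KV` binding name is hypothetical): the handler only sees bindings it was explicitly granted in the config, so an unbound resource simply doesn't exist for the code.

```js
export default {
  async fetch(request, env) {
    // `env` contains only bindings declared in the workerd config;
    // there is no ambient filesystem, process, or socket access.
    const greeting = await env.MY_KV.get("greeting"); // hypothetical KV binding
    return new Response(greeting ?? "hello");
  },
};
```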
Unfortunately, JavaScript engines are complicated, which means they tend to have bugs, and these bugs are often exploitable to escape the sandbox. This is not just true of V8, it's true of all of them; any that claims otherwise is naive. Cloudflare in production has a multi-layer security model to mitigate this, but our model involves a lot of, shall we say, active management which can't easily be packaged up into an open source product.
With all that said, not all threat models require you to worry about such zero-day exploits, and you need to think about risk/benefit tradeoffs. We obviously have to worry about zero-days at Cloudflare since anyone can just upload code to us and run it. But if you're not literally accepting code directly from anonymous internet users then the risk may be a lot lower, and the overall security benefit of fine-grained sandboxing may be worth the increased exposure to zero-days.
The problem I have is that I'm just one person and I don't want to be on call 24/7 ready to react to sandbox escapes, so I'm hoping I can find a solution that someone else built where they are willing to say "this is safe: you can feed in a string of untrusted JavaScript and we are confident it won't break out again".
I think I might be able to get there via WebAssembly (e.g. with QuickJS or MicroQuickJS compiled to WASM) because the whole point of WebAssembly is to solve this one problem.
> But if you're not literally accepting code directly from anonymous internet users then the risk may be a lot lower
That's the problem: this is exactly what I want to be able to do!
I want to build extension systems for my own apps such that users can run their own code or paste in code written by other people and have it execute safely. Similar to Shopify Functions: https://shopify.dev/docs/apps/build/functions
I think the value unlocked by this kind of extension mechanism is ready to skyrocket, because users can use LLMs to help write that code for them.
For Wasm to be a secure sandbox, you have to assume a bug-free compiler/interpreter, which, alas, none of them really are. It's a somewhat easier problem than building a bug-free JavaScript runtime, but not by as much as you might expect, sadly.
> I want to build extension systems for my own apps such that users can run their own code or paste in code written by other people and have it execute safely. Similar to Shopify Functions: https://shopify.dev/docs/apps/build/functions
Ah, this is exactly the Workers for Platforms use case: https://developers.cloudflare.com/cloudflare-for-platforms/w...
And indeed, Shopify uses it: https://shopify.engineering/how-we-built-oxygen
(There's also the upcoming Dynamic Worker Loader API: https://developers.cloudflare.com/workers/runtime-apis/bindi...)
But it sounds like you really do want to self-host? I don't blame you, but that does make it tough. I'm not sure there's any such thing as a secure sandbox that doesn't require some level of monitoring and daily maintenance, sadly. (But admittedly I may be biased.)
I've been picking at this problem for a few years now!
On the one hand I get why it's so hard. But it really feels like it should be possible to solve this in 2026 - executing arbitrary code in a way that constrains its memory and CPU time usage is a problem our industry solves in browsers and hosting platforms and databases and all sorts of other places, and has done for decades.
The whole LLM-assisted end-user programming thing makes solving this with the right developer affordances so valuable!
If Simon's users choose to self-host the open source version of his service, they are probably using it to run their own code, and so the sandbox security matters less, and workerd may be fine. The sandbox only matters when Simon himself offers his software as a service, which he could do using Workers for Platforms.
(But this is a self-serving argument coming from me.)
https://developers.cloudflare.com/sandbox/
Even if you go with something backed by a full time team there is still going to be a chance you have to deal with a security issue in a hurry, maybe in the run up to Christmas. That is just going to come with the territory and if you don’t want to deal with that then you probably need to think about whether you really need a sandbox that can execute untrusted code.
It hasn't been updated in some time, but it should still be working, and can probably be brought up to date with some small effort: https://github.com/facebook/hermes/tree/static_h/API/hermes_...
Benchmark numbers for request-isolated JS hello world / React page rendering:
Numbers taken from our upcoming TinyKVM paper. Benchmark setup code for JCO/wasmtime is here: https://github.com/libriscv/kvmserver/tree/main/examples/was...
(I suspect even if we are able to get TinyKVM into a state you'd feel comfortable with in the future, it would still be an awkward fit for Datasette, since nested virtualisation is not exposed on AWS EC2.)
How much are you ready to pay for a license?
A high budget is no guarantee of an absence of critical bugs in an engine; maybe even somewhat the opposite: on a big team the incentives are aligned with shipping more features (since nobody gets promoted for maintenance, especially at Google) -> increasing complexity -> increasing bug surface.
If speed is less important and you can live without a JIT, that expands your options dramatically and eliminates a large class of bugs. You could take a lightweight engine and compile it to a memory-safe runtime; that'd give you yet another security layer for peace of mind. Several projects have done such ports to Wasm/JS/Go. For example, PDF.js in your browser probably runs QuickJS (https://github.com/mozilla/pdf.js.quickjs)
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
Just keep benchmark code limited to standard ECMAScript; don't expect any browser or Node APIs from most engines besides console.log() or print().
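For instance, a tiny harness that sticks to bare ECMAScript and falls back between console.log and print() (a sketch; which of the two exists depends on the shell):

```js
// Portable across d8, SpiderMonkey's js shell, jsc, qjs, etc.
// Date is standard ECMAScript, unlike performance.now() in many shells.
const log = typeof console !== "undefined" ? console.log : print;

function fib(n) {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

const start = Date.now();
const result = fib(30);
log("fib(30) = " + result + " in " + (Date.now() - start) + " ms");
```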
My n=1 as a long time Firefox user is that performance is a non-issue (for the sites I frequent). I’m much more likely to switch browsers because of annoying bugs, like crashes due to FF installed as a snap.
It honestly is pretty surprising, given that the JS runtime runs website code single-threaded.
The gap is not so big these days. JavaScriptCore, SpiderMonkey, and V8 are all competent.
The amount of work just to aggregate and compare is admirable, let alone the effort behind the engines themselves.