Head in the Zed Cloud
Posted about 2 months ago · Active about 2 months ago
maxdeviant.com · Tech · story
calm/mixed · Debate: 60/100
Key topics: WebAssembly, Cloud Computing, Code Editors
The article discusses Zed's use of Rust, WebAssembly, and Cloudflare Workers for their code editor's backend, sparking discussion on performance, pricing, and vendor lock-in.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion. First comment: 6h after posting · Peak period: 16 comments in 0-12h · Avg per period: 5.7
Comment distribution: 40 data points, based on 40 loaded comments
Key moments
- Story posted: Nov 10, 2025 at 9:23 AM EST (about 2 months ago)
- First comment: Nov 10, 2025 at 3:40 PM EST (6h after posting)
- Peak activity: 16 comments in 0-12h, the hottest window of the conversation
- Latest activity: Nov 18, 2025 at 7:52 PM EST (about 2 months ago)
ID: 45876308 · Type: story · Last synced: 11/20/2025, 4:29:25 PM
What is the performance overhead of WASM compared to native Rust?
I also think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to only one vendor is problematic.
It may get even closer with Wasm 3.0, released 2 months ago, since it has things like 64-bit address support, more flexible vector instructions, typed references (which remove runtime safety checks), basic GC, etc. https://webassembly.org/news/2025-09-17-wasm-3.0/
https://spidermonkey.dev/blog/2025/01/15/is-memory64-actuall...
2) The bounds checking argument is a problem, I guess?
3) This article makes no mention of type checking, which is also a new feature: it moves some checks that normally only run at runtime to a one-time check at compile time, and this may include bounds-style checks.
Supabase Edge Functions runs on the same V8 isolate primitive as Cloudflare Workers and is fully open-source (https://github.com/supabase/edge-runtime). We use the Deno runtime, which supports Node built-in APIs, npm packages, and WebAssembly (WASM) modules. (disclaimer: I'm the lead for Supabase Edge Functions)
Several years ago I used MeteorJS; it uses Mongo and is somewhat comparable to Supabase. The main issue that burned me and several projects was that it was hard, or even impossible, to bring in different libraries. It was a full-stack solution that did not evolve well: great for prototyping until it became unsustainable, and even hard to onboard new devs onto because of its "separation of concerns" and the big learning curve of one big framework.
Having learned from this, I only build apps where I can bring in whatever library I want. I need tools/libraries/frameworks to be as agnostic as possible.
The thing I love about Cloudflare Workers is that you are not forced to use any other CF service. I have full control of the code; I combine it with HonoJS and I can deploy it as a server or serverless.
About the runtimes: having to choose between Node, Deno, and Bun is something that I do not want to do. I'm sticking with Node, and hopefully the runtimes will stay compatible with standard JavaScript.
It's possible for you to self-host Edge Runtime on its own. Check the repo, it has Docker files and an example setup.
> I have full control of the code; I combine it with HonoJS and I can deploy it as a server or serverless.
Even with Supabase's hosted option, you can choose to run Edge Functions and opt out of the other services. You can run Hono in Edge Functions, meaning you can easily switch between CF Workers and Supabase Edge Functions (and vice versa): https://supabase.com/docs/guides/functions/routing?queryGrou...
> Having to choose between Node, Deno, and Bun is something that I do not want to do. I'm sticking with Node, and hopefully the runtimes will stay compatible with standard JavaScript.
Deno supports most of Node's built-in APIs and npm packages. If your app uses modern Node, it can be deployed on Edge Functions without having to worry about the runtime (having said that, I agree there are quirks, and we are working on native Node support as well).
It is a terrific technology, and it is reasonably portable, but I think you would be better off using it in something like Supabase, where the whole platform is open source and portable, if those are your goals.
At that point it doesn’t really matter if it’s cold start or not.
People can and do use this to run Workers on hosting providers other than Cloudflare.
Last time I compared (about 8 years ago) WASM was closer to double the runtime. So things have definitely improved. (I had to check a handful of times that I was compiling with the correct optimizations in both cases.)
Wasm 3.0 especially (released just 2 months ago) is really gunning for a more general-purpose "assembly for everywhere" status (not just "compile to web"), and it looks like it's accomplishing that.
I hope they add some POSIXy stuff to it so I can write cross-platform command-line TUIs that do useful things without needing to be recompiled for different OS/chip combos (at the cost of a 10-20% slowdown versus native compilation, not a critical loss for all but the most demanding use cases) and that are likely to simply keep working on all future OS/chip combos (assuming you can run the wasm, of course).
Are you aware of WASI? WASI preview 1 provides a portable POSIXy interface, while WASI preview 2 is a more complex platform-abstraction beast.
(Keeping the platform separate from the assembly is normal and good - but having a common denominator platform like POSIX is also useful).
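To make that portability concrete, here is a minimal sketch (an illustration added here, not code from the thread) of an ordinary Rust CLI that builds unchanged for the host and for WASI, assuming the wasm32-wasip1 target (formerly wasm32-wasi) and a WASI runtime such as Wasmtime are installed:

    // main.rs: uses only std, so the same source builds natively and for WASI.
    use std::env;
    use std::fs;

    fn main() {
        // First CLI argument is a path to read. Note that WASI runtimes
        // sandbox the filesystem, so host directories must be granted to
        // the module explicitly (flag syntax varies by runtime).
        let path = env::args().nth(1).expect("usage: app <file>");
        let contents = fs::read_to_string(&path).expect("could not read file");
        println!("{} is {} bytes long", path, contents.len());
    }

Built once with `cargo build --release` for the host and once with `--target wasm32-wasip1`, the second artifact should run on any OS/chip combo that has a WASI runtime, which is roughly the "keep working everywhere" property described above.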
https://wasix.org/
You can already create threads in Wasm environments (we even got fork working in WASIX!). However, there is an upcoming Wasm proposal that adds threads support natively to the spec: https://github.com/WebAssembly/shared-everything-threads
is this something that is expected to "one day" be part of WASM proper in some form?
If you want to compile threaded code, things should already work (without waiting for any proposal in the Wasm space). If you want to run it, there are a few options: use wasmer-js for the browser (Wasmer using the browser's Wasm engine + WASIX) or use normal Wasmer to run it server-side.
No need to wait for the Wasm "proper" implementation. Things should already be runnable with no major issues.
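As a rough sketch of what "compile threaded code" means here (my own illustration, not from the Wasmer docs), the code in question is plain std::thread Rust; the same source is what you would build for a threads-capable Wasm target and runtime, such as WASIX under Wasmer as described above:

    use std::thread;

    fn main() {
        // Spawn a few worker threads; nothing here is Wasm-specific, which
        // is the point: the portability burden sits with the target and the
        // runtime, not the application code.
        let handles: Vec<_> = (0..4)
            .map(|i| thread::spawn(move || i * i))
            .collect();

        let sum: i32 = handles.into_iter().map(|h| h.join().unwrap()).sum();
        println!("sum of squares: {}", sum);
    }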
But the times I've used the collaboration tooling in Zed have been really excellent. It just sucks it's not getting much attention recently. In particular I'd really like to see some movement on something that works across multiple different editors on this front.
I'm glad to hear they're still thinking about these kinds of features.
The only issue is that some of the managed services are still pretty half-baked and introduce insane latency into things that should not be slow. KV lookups/DB queries through their services can hit double-to-triple-digit millisecond latencies depending on the config.
Realistically, for a low-traffic app it's fine, but it really makes you question how badly you want to be writing Rust.
As far as I can tell, the problem stems from the fact that CF Workers is still V8: it's just a web browser as a server. A Rust app in this environment has to compile the whole stdlib and include it in the payload, whereas a JS app is just the JS you wrote (and the libs you pulled in). Then the JS gets to use V8's data structures and JSON parsing, which are faster than the wasm-compiled Rust equivalents.
At least this is what I ran into when I tried a serious project on CF Workers with Rust. I tried going full Cloudflare but eventually migrated to AWS Lambda where the Rust code boots fast and runs natively.
Regardless, I'm not sure why a Rust engineer would choose this path. The whole point of writing a service in Rust is that you trade 10x the build complexity and developer overhead for a service that can run in a low-memory, low-CPU VM. This seems like the wrong tool for the job.
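For context on what Rust-on-Workers looks like in practice, a Worker written in Rust is typically a wasm module exposing a fetch handler through the workers-rs `worker` crate, roughly like the sketch below (written from memory and simplified; check the crate docs for the current API). The whole program, including whatever parts of std it pulls in, ships as the wasm payload, which is the size and startup cost being debated above.

    use worker::*;

    // Entry point the Workers runtime invokes for each incoming HTTP request.
    #[event(fetch)]
    pub async fn main(_req: Request, _env: Env, _ctx: Context) -> Result<Response> {
        Response::ok("Hello from Rust compiled to WASM")
    }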
Thanks for the confirmation. I was confused as well. I always thought that the real use of WASM is to run exotic native binaries in a browser, for example, running Tesseract (for OCR) in the browser.
For me, it's a superior experience anyway. I also prefer it in editors that support both (like VS Code).
You can run the REPL with a Jupyter kernel as well.
https://zed.dev/docs/repl#cell-mode
This implementation sounds fully dependent on a service that Zed has little say over.
[0]: https://github.com/cloudflare/workerd
It's going to be hard to compete with the scaling Cloudflare offers if they migrate to their own dedicated infra, but of course it would become much cheaper than paying per request.
But it's worth noting that WebAssembly still has some performance overhead compared to native; the article chooses convenience and portability over raw speed, which might be fine for an editor backend.