Zed for Windows: What's Taking So Long?
zed.dev · Posted Aug 20, 2025
Key topics: Code Editors, Cross-Platform Development, GPU Rendering
The Zed team discusses the challenges of porting their code editor to Windows, sparking a discussion about cross-platform development, GPU rendering, and the trade-offs involved.
Snapshot generated from the HN discussion
> but we got reports from users that Zed didn't run on their machines due to the Vulkan dependency
This single sentence is abstracting a lot of detail. Vulkan runs on Windows, and quite well. Looking at the bug reports, especially the last one[1]...
> Rejected for device extension "VK_KHR_dynamic_rendering" not supported
Aha, ambitious devs >:) The dynamic rendering extension is pretty new, released with Vulkan 1.3. I suspect targeting Vulkan 1.1 or 1.2 might've been a little more straightforward than... rewriting everything to target DX11. Large games with custom engines (RDR2, Doom, Doom Eternal) were shipped before this was main-lined into Vulkan.
But thinking about it a little more, I suspect switching out the back-end to a dynamic rendering-esque one (which is why D3D11 rather than D3D12) was easier than reverting their Rust code to pre-dynamic rendering Vulkan CPU calls; the Rust code changes are comparatively light and the biggest change is the shader.
That being said, it's a bit annoying to manually write render passes and subpasses, but it's not the worst thing, and more importantly, extremely high performance is less critical here: Zed is rendering text, not shading billions of triangles. The singular shader is also not especially complex[2]; a lot of it is window clipping, which Windows does for free.
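For readers who haven't touched this corner of Vulkan, here is a minimal sketch of what the dynamic-rendering path looks like at record time, versus the render-pass boilerplate the pre-1.3 path requires (C++ against the Vulkan C API; everything outside the Vulkan names is illustrative, and this is not Zed's actual code):

```cpp
#include <vulkan/vulkan.h>

// With VK_KHR_dynamic_rendering (core in Vulkan 1.3) there is no VkRenderPass
// or VkFramebuffer object at all: attachments are described at record time.
// (The pipeline must also be created with VkPipelineRenderingCreateInfo.)
void record_dynamic(VkCommandBuffer cmd, VkImageView target, VkExtent2D extent) {
    VkRenderingAttachmentInfo color = {};
    color.sType       = VK_STRUCTURE_TYPE_RENDERING_ATTACHMENT_INFO;
    color.imageView   = target;
    color.imageLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
    color.loadOp      = VK_ATTACHMENT_LOAD_OP_CLEAR;
    color.storeOp     = VK_ATTACHMENT_STORE_OP_STORE;

    VkRenderingInfo info = {};
    info.sType                = VK_STRUCTURE_TYPE_RENDERING_INFO;
    info.renderArea           = {{0, 0}, extent};
    info.layerCount           = 1;
    info.colorAttachmentCount = 1;
    info.pColorAttachments    = &color;

    vkCmdBeginRendering(cmd, &info);   // vkCmdBeginRenderingKHR before 1.3
    // ... bind pipeline, push constants, draw the glyph/quad batches ...
    vkCmdEndRendering(cmd);
}

// The pre-1.3 path instead builds a VkRenderPass (attachment descriptions plus
// at least one subpass) and one VkFramebuffer per swapchain image up front,
// then brackets recording with vkCmdBeginRenderPass / vkCmdEndRenderPass.
```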
> we had two implementations of our GPU shaders: one MSL implementation for macOS, and one WGSL implementation for Vulkan. To use DirectX 11, we had to create a third implementation in HLSL.
I wonder why HLSL wasn't adopted from the outset, given that roughly 99.999% of shaders (mostly shipped with video games, which mostly target Windows) are written in HLSL and then compiled to SPIR-V with dxc? HLSL is widely considered the best-specified, most feature-complete, and best-documented shader language. I'm writing a Vulkan engine on Windows and Linux, and I only use HLSL. Additionally, Vulkan runs on macOS with MoltenVK (and now 'KosmicKrisp'), but I suppose the Zed spirit is 'platform-native and nothing else'.
> symbolicating stack traces requires a .pdb file that is too large to ship to users as part of the installer.
Perhaps publishing a symbol server[3] is a good idea here, rather than users shipping dump files which may contain personally-identifiable information; users can then use WinDbg or Visual Studio to debug the release-mode Zed at their leisure.
[1]: https://github.com/zed-industries/zed/issues/35205
[2]: https://github.com/zed-industries/zed/blob/c995dd2016a3d9f8b...
[3]: https://randomascii.wordpress.com/2020/03/14/creating-a-publ...
Modern Direct3D, on the other hand, is almost indistinguishable from Vulkan, so it shouldn't be difficult for them to add.
I also agree with your HLSL comment. It sounds like these guys don’t have much prior graphics or game development experience.
You're right that we may be able to get rid of our WGSL implementation, and instead use the HLSL one via SPIR-V. But also, at some point we plan to port Zed to run in a web browser, and will likely build on WebGPU, where WGSL is the native shading language. Honestly, we don't change our graphics primitives that frequently, so the cost of having the three implementations going forward isn't that terrible. We definitely would not use MoltenVK on macOS, vs just using Metal directly.
Good point that we should publish a symbol server.
Except that everything has effectively converged on HLSL (via Slang, which is effectively HLSL++) and SPIR-V (coming to DirectX via Shader Model 7).
So, your pipelines, shader language, and IR code would all look mostly the same between Windows and Linux if you threw in with DX12 (which looks much more like Vulkan) rather than DX11. And you'd get the ability to multi-thread through the GPU subsystem via DX12/Vulkan.
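To make the multi-threading point concrete, here is a rough sketch of the D3D12 pattern (one command allocator and command list per worker thread, a single submission on the main thread); error handling is omitted and everything outside the D3D12 calls is illustrative:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <thread>
#include <vector>

using Microsoft::WRL::ComPtr;

// Each worker records into its own allocator + command list (allowed in
// D3D12/Vulkan, not with a D3D11 immediate context), then the main thread
// submits everything in one ExecuteCommandLists call.
void record_in_parallel(ID3D12Device* device, ID3D12CommandQueue* queue, int workers) {
    std::vector<ComPtr<ID3D12CommandAllocator>> allocs(workers);
    std::vector<ComPtr<ID3D12GraphicsCommandList>> lists(workers);
    std::vector<std::thread> threads;

    for (int i = 0; i < workers; ++i) {
        device->CreateCommandAllocator(D3D12_COMMAND_LIST_TYPE_DIRECT,
                                       IID_PPV_ARGS(&allocs[i]));
        device->CreateCommandList(0, D3D12_COMMAND_LIST_TYPE_DIRECT,
                                  allocs[i].Get(), nullptr, IID_PPV_ARGS(&lists[i]));
        threads.emplace_back([&, i] {
            // ... record this thread's share of the frame into lists[i] ...
            lists[i]->Close();
        });
    }
    for (auto& t : threads) t.join();

    std::vector<ID3D12CommandList*> raw;
    for (auto& l : lists) raw.push_back(l.Get());
    queue->ExecuteCommandLists(static_cast<UINT>(raw.size()), raw.data());
}
```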
And, to be fair, we've seen that MoltenVK gets you about 80-90% of native Metal performance on macOS, so you wouldn't have to maintain a Metal backend, anymore.
And you'd gain the ability to use all the standard GPU debugging tools from Microsoft, NVIDIA, and AMD rather than just RenderDoc.
You'd abandon this all for some mythical future compatibility with WebGPU--which has deployment counts you can measure with a thimble?
I suspect it's because a huge number of software engineers develop on MacBooks and consider Linux second and Windows third. Culturally, I think there's a difference in tooling between graphics developers (who would go straight for HLSL, cross-platform Vulkan, or even SDL3) and Mac users (who reach for Apple tools first).
Not everywhere. See the middle bug report, "Zed does not work in Remote Desktop session on windows" (https://github.com/zed-industries/zed/issues/26692).
Most Remote Desktop/Terminal Services environments won't have any Vulkan devices available, unless you ship your own software renderer (like SwiftShader).
Also, NVIDIA only supports Vulkan from Kepler (GTX 600 series) onward, AMD from GCN 1.0 (Radeon HD 7000 series), and most importantly, Intel only from Skylake (6th-gen Core). Especially on the Intel side, there are plenty of old but still-supported Windows 10 machines that lack Vulkan support. For many applications that's OK, but IMO not for a text editor.
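For illustration, this is the kind of startup probe an editor could use to decide whether a Vulkan backend is even viable in the current session before falling back to something like D3D11 (hypothetical helper, not Zed's actual code):

```cpp
#include <vulkan/vulkan.h>
#include <vector>

// Returns false when no ICD/device is exposed (typical for a Remote Desktop
// session) or when no device offers the API level the renderer was written
// against, so the caller can fall back to a D3D11/WARP-style backend.
bool vulkan_backend_viable(VkInstance instance) {
    uint32_t count = 0;
    if (vkEnumeratePhysicalDevices(instance, &count, nullptr) != VK_SUCCESS || count == 0)
        return false;

    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(dev, &props);
        if (props.apiVersion >= VK_API_VERSION_1_3)  // or 1.1/1.2 plus extension checks
            return true;
    }
    return false;
}
```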
If I switch to vanilla macOS, it's basically unusable.
Clean, but unusable.
So this presents a HUGE opportunity for someone to build something akin to Zed, but without the baggage that their technical strategy brings.
Not sure it’s so clean-cut. More than avoiding baggage, you’re just shifting it elsewhere. The question is if you want to own (and can handle) the baggage and benefit from the control that brings.
From a lock screen that appears ~3 seconds after my desktop does (during which time I can interact with my desktop...) to getting NVIDIA GPU passthrough in Docker being harder on native Linux than it was on WSL (...) to the absurd amount of time it takes my machine to come out of sleep.
Oh also the popping and clicking over my BT headset every time someone speaks in a meeting. That was wonderful.
Despite using an older model MB, I needed to install some kernel extensions to get system temperatures working.
Also, if I want to develop desktop software, I'm going to be writing against Windows anyway, because at least that is somewhat documented, versus the ever-changing landscape of Linux desktop software development. (Windows used to be the OS for desktop software, but Microsoft shot themselves in that foot, then removed the entire leg, long ago, by constantly changing and deprecating frameworks. Ugh, 20+ years of API stability down the drain...)
To be fair, assuming you're using WSL2, you're running Docker on a VM, so it doesn't sound that crazy that it might be more work without the hardware abstraction the VM provides. If your Linux distro shipped a built-in VM, it might likewise be easier to expose the GPU to things running inside it than to do so directly. I can't say I've ever had any need to access a GPU from a container running on a VM, though, so this is just conjecture.
I find Windows to be the outlier against a sea of embedded Linux devices.
> Heaps of games are developed on Windows
Inertia.
> Windows-based software itself is developed on Windows.
Plenty of Windows-based software is developed on Linux with Wine.
I think you're thinking of consumer devices, not industrial.
> Inertia.
I think that's a tough case to make. Windows offers legitimate technical advantages for gaming and game development. Integration with large vendors' tooling like NVIDIA and AMD is pretty huge. There are real workflow benefits.
> Windows-based software itself is developed on Windows.
You know more about this than I do. That sounds kind of wild to me, like it could be a pretty awful workflow at times for no good reason. It looks like you don't have access to native debugging tools, and Wine itself introduces potential compatibility risks. I would rather just develop on target, personally.
Maybe he's thinking of more modern devices. There was a time when Microsoft flogged WinCE as an embedded solution, and yes, a lot of people producing embedded stuff drank the Kool-Aid.
I watched one instance of this happen first-hand. They asked me what OS they should base their shiny new product on (a product I would be the first customer of). I said I would use some 'nix, but that they should choose what they were comfortable with.
It turned out to be bad advice. They were comfortable with Windows desktop of course, so they chose WinCE. WinCE is not the stable WinNT they were familiar with, despite what Microsoft's marketing said. I've used a number of WinCE based devices in the past, they were all about as reliable as Windows 95/ME, which is to say most wouldn't last the day without rebooting.
In the end they could only get it working by shipping the product to a team in Germany that had access to the WinCE source. It cost them a small fortune, and lost them over a year. The delay lost me as a customer.
Most (I hope all, but it's never all) of today's experienced software engineers wouldn't make that mistake, but these people were (pretty good) hardware engineers with a vision for a product they built the hardware for. Developing software was something you hired people to do for you, like plumbing and legal work. And they wanted those people to provide them with a familiar environment.
WinCE has long since been retired, of course. May its soul burn in hell. Yes, those same hardware engineers who insist on sticking to what they are familiar with might turn to Windows 11 instead. But that comes with costs - no ARM or other CPUs, huge resource requirements, insistence on TPMs, and so little control over the platform that you can lose control of the USS Yorktown [0]. Those costs are large. In fact, so large that they would have overwhelmed the budget of my engineering friends years ago, and they would have just gone with Linux. I haven't seen a new embedded Windows design in quite a while, so I suspect that's true for most embedded projects now.
[0] https://archive.is/aKrml
I've never encountered a robot that didn't require windows to program. I know they're out there, but they don't seem common in my experience. Building them yourself is possible, but you regularly encounter cases where common, well-supported components require Windows to program. It's a drag.
I'd love to see it — Windows is far from my preferred OS. But my original point was essentially that there are tons of reasons like this which makes Windows a very productive and useful platform for many developers. I totally agree that there are cases where Linux or macOS are better (I prefer them both when possible) and yeah, WinCE was a total mess even by consumer standards. I had a pocket pc (ha, I was so excited about it) and it was a tremendous letdown largely because of the OS.
Side note, thanks for reminding me of that era. As bad as the software was, those devices were so god damn exciting. A pocket computer! I still remember how incredibly futuristic it felt.
This seems very much like something that happened because Windows had such a large market share, not because of any inherent technical advantages.
This is a product of inertia. If Windows didn't have inertia, it wouldn't have ecosystem advantages; they're not inherent to Windows itself.
The overwhelming majority of software written against MinGW (or worse, Cygwin) consists of bad/lazy ports of Linux-first software. Case in point: Git and Perl, both of which drag along an entire coreutils ecosystem (each, so you have two copies of `ls`) alongside the main binaries.
First-class Windows programs that are used every day like Office, Chromium and its forks, the Adobe suite, and tons and tons of internal administrative programs for HR, inventory, and more are written on Windows, for Windows, using C# or C++ and 'boring', so-called enterprisey frameworks like WPF, Windows Forms, and WinUI 2.
Anyone remotely serious about taking advantage of the large (albeit shrinking) market share of Windows users should at the very least fire up a VM to test their release binaries, rather than just 'use Wine'.
I don't know how people put up with it. It feels disrespectful.
I install the Enterprise/Education versions.
Never going back.
This is so often repeated, but I genuinely don't understand why. Could you try selling me on it? I ended up going the sysadmin/devops route instead after college, but the more I learn about Linux, the less I understand why anyone would choose it for personal, active manual use.
I can understand server deployments, it works well enough. It's available at no cost, Windows Server is way out in the far other end in terms of current desired behavior, and whatever pains it has you get paid to make up for. None of which applies on a personal device level.
The most common selling points I see are more performance and less "spying". I find neither of these very persuasive, and I'm not interested in ideological rationales either (supporting free software). If you have anything else, I'm all ears.
As with the desktop, you can switch out your audio stack, alter the display of any element, and change many other things. Using Windows is borrowing someone else's shoes, while Linux can be your favorite slipper.
I have a few things here and there, but it's more a scheduled script or two than anything more elaborate, and I don't think they were difficult to make and deploy.
> you can switch out your audio stack
Why would I want that? Isn't this more for someone doing live audio production (e.g. due to latency concerns)?
In general, the customizability angle is also another that doesn't resonate with me much. It's less that I want to customize my stuff, and more that I want my stuff to be to my liking from the get-go.
There’s no broad stroke here. It’s more about the possibility to adjust something here and there. I don’t have anything against windows technically (I’ll use it with no complaints if it’s work provided).
When I notice something I don’t like (workflow mostly instead of appearance), I want to be able to fix it instead of suffering it (unless I’m being paid).
Unless you are using Visual Studio, which blows every other IDE out of the water if you consider the debugging and profiling experience.
> As with the desktop, you can switch out your audio stack, alter the display of any element, and change many other things. Using Windows is borrowing someone else's shoes, while Linux can be your favorite slipper.
Ah yes, the biggest Linux advantage - being a mud hut.
Everything it does or doesn’t do is my responsibility.
These days, WSL2 effectively eliminates a need for that for most developers.
Otherwise no.
I find this FOSS notion that developers only use Linux hilarious; I wonder who writes the software for all the other operating systems in the world.
Windows still offers other options, even if MS itself tends to ignore them.
> If you are a developer you should have switched to Linux years ago anyway.
Developers are a much broader set than web developers, and even then the advantages of Linux escape me.
While I doubt this had anything to do with the decision by the Zed team to make their own toolkit, it is something becoming more common. Hopefully it doesn't start happening in the encryption space.
There aren't too many epithets floating around that offend me specifically. And I haven't heard anyone say I shouldn't/don't exist. So it's hard for me personally to feel the need for CoC and the like. But I'm all for policy that protects everyone against that kind of abuse -- which seems to be on the rise. Are there better alternatives?
Let's recall that the Godot developers have learned this lesson regarding their backends as well.
The ICD mechanism is a kind of escape hatch left over for backwards compatibility, and even user-mode drivers build on top of DirectX runtime infrastructure.
It really depends how you define scope, but I don't think I would've taken on another GPU backend for that.
This is especially visible when buying random Asian cards that aren't the reference designs from AMD and NVIDIA. Intel was never great regardless of the API.
Additionally, we have the usual extension spaghetti, which is one of the things that has beaten them here.
Require too many of them, and coding around their absence becomes like using yet another API that is similar, but not quite the same.
Meanwhile, Windows 7 was already over two years past the end of its extra-extended support by the time this new code was written with Windows 7 support still in mind. Which is nice, but a very different scenario.
This is incongruous given Zed uses modern frameworks (which is why they moved to D3D11 from Vulkan in the first place).
If Zed really wanted to target 'old Windows', they might have used Win32 and GDI+, not D3D11. In fact, they could've stuck with D2D (which was released with Windows 7 and back-ported to Vista) and not used their own rendering at all, since D2D is already a GPU-accelerated 2D and text-rendering API, and then used Win32 windowing primitives for everything else.
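For illustration only, a minimal sketch of that D2D + DirectWrite path (HWND render target, one text format, one DrawText call); this is standard Windows boilerplate, not anything Zed ships, and error handling is omitted:

```cpp
#include <windows.h>
#include <d2d1.h>
#include <dwrite.h>
#include <cwchar>
#pragma comment(lib, "d2d1")
#pragma comment(lib, "dwrite")

// Draw a string into an existing window using Direct2D + DirectWrite.
void draw_with_d2d(HWND hwnd, const wchar_t* text) {
    ID2D1Factory* d2d = nullptr;
    IDWriteFactory* dwrite = nullptr;
    D2D1CreateFactory(D2D1_FACTORY_TYPE_SINGLE_THREADED, &d2d);
    DWriteCreateFactory(DWRITE_FACTORY_TYPE_SHARED, __uuidof(IDWriteFactory),
                        reinterpret_cast<IUnknown**>(&dwrite));

    RECT rc;
    GetClientRect(hwnd, &rc);
    ID2D1HwndRenderTarget* rt = nullptr;
    d2d->CreateHwndRenderTarget(
        D2D1::RenderTargetProperties(),
        D2D1::HwndRenderTargetProperties(
            hwnd, D2D1::SizeU(static_cast<UINT32>(rc.right), static_cast<UINT32>(rc.bottom))),
        &rt);

    IDWriteTextFormat* format = nullptr;
    dwrite->CreateTextFormat(L"Consolas", nullptr, DWRITE_FONT_WEIGHT_NORMAL,
                             DWRITE_FONT_STYLE_NORMAL, DWRITE_FONT_STRETCH_NORMAL,
                             14.0f, L"en-us", &format);

    ID2D1SolidColorBrush* brush = nullptr;
    rt->CreateSolidColorBrush(D2D1::ColorF(D2D1::ColorF::White), &brush);

    rt->BeginDraw();
    rt->Clear(D2D1::ColorF(D2D1::ColorF::Black));
    D2D1_RECT_F layout = D2D1::RectF(0, 0, (FLOAT)rc.right, (FLOAT)rc.bottom);
    rt->DrawText(text, (UINT32)wcslen(text), format, layout, brush);
    rt->EndDraw();

    brush->Release(); format->Release(); rt->Release();
    dwrite->Release(); d2d->Release();
}
```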
It isn't just text editors: nowadays, everything renders on your GPU, even your desktop and terminal (unless you're on a tty). For example, at the bottom of Chromium, Electron, and Avalonia's graphics stack is Skia, a cross-platform, GPU-accelerated 2D graphics library.
GPU compositing is what allows transparency, glass effects, shadowing, and it makes actually writing these programs much easier, as everything is the same interface and uses the same rendering pipeline as everything else.
A window in front of another, or a window partially outside the display? No big deal, just set the 3D coordinates, width, and height correctly for each window, and the GPU will do hidden-surface removal and viewing frustum clipping automatically and for free, no need for any sorting. Want a 'preview' of the live contents of each window in a task bar or during Alt-Tab, like on Windows 7? No problem, render each window to a texture and sample it in the taskbar panels' smaller viewports. Want to scale or otherwise squeeze/manipulate the contents of each window during minimise/maximise, like macOS does? Easy, write a shader.
This was a big deal in the early 2000s when GPUs finally had enough raw compute to always run everything, and basically every single OS and compositor switched to GPU rendering roughly in the same timeline—Quartz Extreme on Mac OS X, DWM.exe on Windows, and Linux's variety of compositors, including KWin, Compiz, and more.
There's a reason OSs from that time frame had so many glassy, funky effects: it was primarily to show off just how advanced their GPU-powered compositors were. This was also a big reason why Windows Vista fell so hard on its face: its compositor was especially hard on the scrawny integrated GPUs of the time, enough that two themes (Aero Basic and Aero Glass) had to be shipped for different classes of GPU.
If you're saying that Zed is built on something like Skia, then it would already be cross-platform and not have to worry about Vulkan vs. DirectX, right?
Happy to elaborate further.
Old-school text rendering began with a table mapping character codes to fixed-size bitmaps (this was the font), and rendering was straightforward: divide the framebuffer resolution by the bitmap resolution, clip/wrap the remainder, place the bitmaps into the resulting grid, and pipe the framebuffer to the display. Done.
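A minimal sketch of that fixed-cell scheme (the 8x16, one-byte-per-row font table and the byte-per-pixel framebuffer are illustrative assumptions, not any particular system's format):

```cpp
#include <cstdint>

constexpr int GLYPH_W = 8, GLYPH_H = 16;

// Blit fixed-size glyph bitmaps onto a fixed grid: the grid size simply falls
// out of framebuffer size / glyph size, and text wraps to the next row.
void draw_text(uint8_t* fb, int fb_w, int fb_h,
               const uint8_t font[256][GLYPH_H], const char* text) {
    const int cols = fb_w / GLYPH_W;
    const int rows = fb_h / GLYPH_H;
    int col = 0, row = 0;
    for (const char* p = text; *p && row < rows; ++p) {
        const uint8_t* glyph = font[static_cast<uint8_t>(*p)];  // one byte = one 8-pixel row
        for (int y = 0; y < GLYPH_H; ++y)
            for (int x = 0; x < GLYPH_W; ++x)
                fb[(row * GLYPH_H + y) * fb_w + col * GLYPH_W + x] =
                    ((glyph[y] >> (7 - x)) & 1) ? 0xFF : 0x00;
        if (++col == cols) { col = 0; ++row; }  // wrap/clip at the grid edge
    }
}
```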
Nowadays, text editors don't just have text; they have markup like highlighting and syntax colouring (with 24-bit deep-colour, rather than the ANSI 16 colour codes), go-to, version control annotations, debug breakpoints, hover annotations, and in the case of 'notebooks' like Python notebooks, may have embedded media like images, videos, and even 3D renders. Many editor features may open pop-up windows or dialogue boxes, which will probably occlude the text 'behind'.
Now, most modern text editors also expect to work with non-bitmapped, non-monospaced typefaces in OpenType or TrueType format. These are complex beasts of their own with hinting, ligatures, variable weights, and more, and may even embed entire programs. They are usually Bezier/polynomial splines that the GPU can rasterise easily in hardware (no special shader required). After this rasterisation, any reasonable text editor will apply anti-aliasing, which is also work delegated to the GPU. There is probably a different algorithm for text (which needs to account for display subpixel layouts) versus UI elements (which may not).
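For reference, the building block behind those outlines: TrueType glyphs are made of quadratic Bézier segments (OpenType CFF flavours use cubics), which a rasteriser, CPU or GPU, evaluates and fills. A tiny evaluation sketch:

```cpp
// Quadratic Bézier: B(t) = (1-t)^2 * p0 + 2(1-t)t * c + t^2 * p1, for t in [0, 1].
struct Point { float x, y; };

Point quad_bezier(Point p0, Point c, Point p1, float t) {
    const float u = 1.0f - t;
    return { u * u * p0.x + 2.0f * u * t * c.x + t * t * p1.x,
             u * u * p0.y + 2.0f * u * t * c.y + t * t * p1.y };
}
```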
The point I am driving at is that the proliferation of features expected from a modern text editor means that using a GPU for all of this is a natural evolution. As users, we may think 'it's just text', but from the perspective of the developer or the hardware, text and a ray-traced 3D game are no different: it's one 4x4 matrix multiplied by another, one after another, eventually reduced to a three-vector representing the colour of a pixel.
> If you're saying that Zed is built on something like Skia, then it would already be cross-platform and not have to worry about Vulkan vs. DirectX, right?
Absolutely, because Skia handles that for the developer. And I suspect the reason why Zed didn't use Skia in the first place is ideological (Skia is by Google, written in C++), together with wanting to write 'the whole world' in Rust.
Since it's more than quick enough to do this on the CPU, they're likely doing it for things like animations and very high quality font rendering. There's image-processing going on when you really care about quality; oversampling and filtering.
I suspect one could do most everything Zed does without a GPU, but about 10 to 20% uglier, depending on how discerning the user is on such things.
This is true until it isn't. A modern-ish CPU will be fine at 1080p 60 Hz. At 4K 120 Hz, even the fastest CPU on the market won't keep up. And then there's 8K.
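Rough numbers behind that claim, assuming a 32-bit framebuffer and a single full write per frame (blending, anti-aliasing, and overdraw only push this up, and they are where a CPU really falls behind):

```cpp
// 4K @ 120 Hz, 4 bytes per pixel: bytes that must be produced every second
// just to fill the framebuffer once, before any per-pixel shading work.
constexpr long long width = 3840, height = 2160, bytes_per_pixel = 4, hz = 120;
constexpr long long bytes_per_frame  = width * height * bytes_per_pixel;  // ~33 MB
constexpr long long bytes_per_second = bytes_per_frame * hz;              // ~4 GB/s
static_assert(bytes_per_second > 3'900'000'000LL, "roughly 4 GB/s of pixel writes");
```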
> they're likely doing it for things like animations and very high quality font rendering
Since they're using native render functions, this probably isn't the case.
What’s a native render function? Do you mean just using a graphics API as opposed to an off-the-shelf UI library?
As in using DirectWrite or GDI on Windows; or Core Text on macOS. As opposed to shipping your own glyph rasterizer.
> To work around this limitation, we decided to stop using Direct2D and switch to rasterizing glyphs using DirectWrite instead.
It doesn't really do a technical breakdown of why it'd be impossible CPU-side, but it mentions a couple of things about their design process for it.
[0]: https://zed.dev/blog/videogame
Zedless: Zed fork focused on privacy and being local-first - https://news.ycombinator.com/item?id=44964916
Sequoia backs Zed - https://news.ycombinator.com/item?id=44961172
I tried it, and the experience (mainly visual: fonts, colours, etc.) wasn't very good, so I can understand why the Zed developers are reluctant to formally release Windows binaries.