WebGPU and the Price of Compiling WGSL
Posted 3 months ago · Active 3 months ago
Source: hugodaniel.com · Tech story
Tone: calm/mixed · Debate: 40/100
Key topics
- WebGPU
- WGSL
- GPU Programming
The article discusses the author's experience with WebGPU and the performance issues encountered when compiling WGSL shaders, sparking a discussion on shader compilation, GPU programming, and potential optimizations.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 2h after posting
- Peak period: 21 comments in 0-12h
- Avg per period: 8.7
Based on 26 loaded comments
Key moments
- Story posted: Oct 6, 2025 at 5:14 PM EDT (3 months ago)
- First comment: Oct 6, 2025 at 7:14 PM EDT (2h after posting)
- Peak activity: 21 comments in 0-12h (hottest window of the conversation)
- Latest activity: Oct 13, 2025 at 8:39 PM EDT (3 months ago)
ID: 45496406 · Type: story · Last synced: 11/20/2025, 12:47:39 PM
And WGSL will still bounce through HLSL for DirectX because DXIL is an awful, undocumented mess of ancient LLVM-IR with a giant pile of bolt-on special semantics. Directly authoring DXIL is awful.
It's not the end of the world for web-only projects which can just target WGSL exclusively, but it's a pain in the ass for cross platform engines which now need to support Yet Another Shader Backend. From the old minutes:
> Eric B (Adobe): Creating a new high level language is a cardinal sin. Don’t. Do. That. Don’t want to rewrite all my shaders AGAIN.
> Jesse B (Unity): If we can transcode to HLSL to whatever you need, great. If we can’t, we may not support your platform at all.
> Eric B: Would really not like even to write another transcoder. If there’s an existing tool to get to an intermediate representation, that’s good. Would suggest SPIRV is an EXCELLENT existing intermediate representation.
WGSL vs SPIRV is really just a side issue that people want to focus on, but it doesn't matter much in the bigger picture.
>It's literally in past WebGPU meeting minutes: Apple objected to SPIR-V due to disputes with Khronos. Tint is a compromise, it doesn't matter who proposed it.
>"MS: Apple is not comfortable working under Khronos IP framework, because of dispute between Apple Legal & Khronos which is private. Can’t talk about the substance of this dispute. Can’t make any statement for Apple to agree to Khronos IP framework. So we’re discussing, what if we don’t fork? We can’t say whether we’re (Apple) happy with that. NT: nobody is forced to come into Khronos’ IP framework."
>https://docs.google.com/document/d/1F6ns6I3zs-2JL_dT9hOkX_25...
Also I wouldn't rule out that some browser vendors will accept SPIRV as WebGPU shader input as a non-standard extension one day (disclaimer: talking out of my ass here since I'm not on any of the WebGPU implementation teams). The WebGPU API is prepared for accepting different types of shader inputs (that's how the native implementations accept SPIRV instead of WGSL). I bet that this would solve exactly zero problems though ;)
> DirectX accepts SPIR-V nowadays
That doesn't mean much since SPIRV has different incompatible flavours, e.g. you can't feed a GL SPIRV blob into Vulkan, or a Vulkan SPIRV blob into D3D (does D3D actually already accept SPIRV or is this still in the 'planning stage'?)
https://themaister.net/blog/2021/09/05/my-personal-hell-of-t...
The TL;DR is that SSA erases structured control flow (like loops), which then needs to be recovered to support how branching is done on SIMD architectures (enabling/disabling lanes).
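The lane-masking model mentioned above can be sketched in a few lines. This is a hypothetical illustration (plain Python, not any real compiler or driver code): a divergent if/else doesn't branch at all; both sides run as masked passes over every lane, which is why the compiler must first recover the structured `if` from a soup of SSA basic blocks.

```python
# Illustrative sketch of SIMD lane masking: each "lane" is one shader
# invocation running in lockstep. A divergent branch becomes two masked
# passes over all lanes instead of an actual jump.

def simd_if_else(values):
    """Run `x = x * 2 if x > 0 else x - 1` across all lanes via masks."""
    n = len(values)
    # The condition is evaluated on every lane, producing a per-lane mask.
    mask = [v > 0 for v in values]

    out = list(values)
    # "Then" side: only lanes where the mask is set are enabled.
    for i in range(n):
        if mask[i]:
            out[i] = out[i] * 2
    # "Else" side: the mask is inverted and the remaining lanes run.
    for i in range(n):
        if not mask[i]:
            out[i] = out[i] - 1
    return out

print(simd_if_else([3, -1, 0, 5]))  # [6, -2, -1, 10]
```

Note that both loops always execute: divergence costs the time of both branch arms, which is exactly why the compiler needs the structured form to know where the lanes reconverge.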
The expensive part isn't parsing text into an intermediate bytecode format like SPIRV (especially for WGSL which mostly just maps SPIRV-semantics to text), but what happens after that.
And specifically in WebGPU, even when it takes SPIRV as input, that SPIRV is taken apart, validated, and then translated into the bytecode formats for D3D or Metal, or reassembled into another SPIRV blob for Vulkan.
From what I'm understanding from lurking in the WebGPU discussion group, the current main problem with shader compilation is that some innocent-looking shaders may unexpectedly 'explode' due to loop unrolling down in the backend 3D API: a few lines of input can take seconds to compile in the worst case. And for that problem, WGSL vs SPIRV as the input format is irrelevant.
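A back-of-the-envelope sketch of that blow-up (all numbers are made up for illustration; real unrolling heuristics vary by driver): when a backend fully unrolls nested loops, the emitted instruction count multiplies by the trip count at every nesting level, so a handful of source lines can turn into tens of thousands of instructions.

```python
# Hypothetical model of full loop unrolling: each nesting level
# multiplies the emitted instruction count by its trip count.

def unrolled_instruction_count(trip_counts, body_size):
    """Instructions emitted after fully unrolling nested loops.

    trip_counts: iteration counts, outermost loop first.
    body_size:   instructions in the innermost loop body.
    """
    total = body_size
    for trips in trip_counts:
        total *= trips
    return total

# A 4-instruction body inside three nested 16-iteration loops:
print(unrolled_instruction_count([16, 16, 16], 4))  # 16384
```

Under this toy model, roughly four lines of shader source become 16384 instructions for the backend compiler to chew through, which is the kind of "innocent shader, seconds of compile time" surprise the comment describes.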
See here for one 'technical pov' from somebody who was involved:
https://kvark.github.io/spirv/2021/05/01/spirv-horrors.html
https://dawn.googlesource.com/tint/+/refs/heads/chromium/466...