A Case for Learning GPU Programming with a Compute-First Mindset
Posted 3 months ago · Active 3 months ago
themaister.net · Tech · story
Sentiment: supportive, positive
Debate intensity: 20/100
Key topics
- GPU Programming
- Compute-First Mindset
- Parallel Computing
The article argues for learning GPU programming with a compute-first mindset, sparking a discussion among HN users about the effectiveness of this approach and their personal experiences with GPU programming.
Snapshot generated from the HN discussion
Discussion Activity
- Intensity: light discussion
- First comment: 3 days after posting
- Peak period: 4 comments in the 78-84h window
- Average per period: 2.8
- Comment distribution: 11 data points, based on 11 loaded comments
Key moments
1. Story posted: Oct 6, 2025 at 7:57 AM EDT (3 months ago)
2. First comment: Oct 9, 2025 at 5:32 PM EDT (3 days after posting)
3. Peak activity: 4 comments in the 78-84h window, the hottest stretch of the conversation
4. Latest activity: Oct 10, 2025 at 12:08 PM EDT (3 months ago)
ID: 45490403 · Type: story · Last synced: 11/20/2025, 1:42:01 PM
> The debug flow I propose with RenderDoc will rely on a lot of shader replacements and roundtrips via SPIRV-Cross’ GLSL backend, so Vulkan GLSL is the appropriate language to start with.
This is where, as someone who has never done GPU programming, I feel like the author is explaining GPU programming to people who already know GPU programming.
GLSL is ABSOLUTELY NOT an appropriate language to start with anymore for a huge number of reasons, and I dearly wish people would quit using it for tutorials.
Anyone trying to get into GPU programming is going to be much, much more comfortable programming in Slang, which is effectively HLSL plus some extras, and HLSL is superficially like C++. As a bonus, you are learning a shader language actually used on Windows.
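For a taste of what that looks like, here is a minimal Slang compute kernel (an illustrative sketch, not code from the article; the buffer and entry-point names are made up):

```slang
// Doubles every element of a buffer.
// RWStructuredBuffer and [numthreads] come straight from HLSL;
// the overall C++-like feel is a big part of Slang's appeal.
RWStructuredBuffer<float> data;

[shader("compute")]
[numthreads(64, 1, 1)]
void main(uint3 tid : SV_DispatchThreadID)
{
    data[tid.x] *= 2.0;
}
```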
> For example people on AMD chips seem to be gravitating to Vulkan for LLM stuff now and Slang/HLSL are not involved.
You seem to be confusing different layers of the graphics stack. The API runs on the host CPU, but the GPU needs its own compiled shader code. On Windows, that was DirectX (CPU) with HLSL compiled to DXIL (GPU). On Linux/Android, it was Vulkan (CPU) with GLSL compiled to SPIR-V (GPU).
As of now, those are DirectX with HLSL or Slang compiled to SPIR-V, and Vulkan with Slang compiled to SPIR-V.
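Concretely, the compile paths look roughly like this (a sketch; these are the commonly documented flags, so check your SDK's versions):

```sh
# HLSL -> DXIL, the classic DirectX path (dxc, the DirectX Shader Compiler)
dxc -T cs_6_0 -E main shader.hlsl -Fo shader.dxil

# HLSL -> SPIR-V, the newer SM7-era path
dxc -T cs_6_0 -E main -spirv shader.hlsl -Fo shader.spv

# Slang -> SPIR-V
slangc shader.slang -target spirv -o shader.spv

# GLSL -> SPIR-V, the older Vulkan path
glslangValidator -V shader.comp -o shader.spv

# SPIR-V -> Vulkan GLSL, the SPIRV-Cross roundtrip the article's debug flow leans on
spirv-cross --vulkan-semantics shader.spv --output shader.glsl
```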
Microsoft's announcement of SPIR-V support in Shader Model 7: https://devblogs.microsoft.com/directx/directx-adopting-spir...
Khronos' announcement of Slang as a supported shading language for Vulkan: https://www.khronos.org/news/press/khronos-group-launches-sl... You can also see it in the fact that the Vulkan examples now all ship Slang shaders.
The biggest reason to quit using GLSL is simply that its development moves at a snail's pace relative to the others, because Microsoft and NVIDIA pour so many resources into HLSL and Slang, respectively.
Excited to get into it a bit, and I'm bookmarking this guide for when I've got my basic setup going.