Gluon: a GPU Programming Language Based on the Same Compiler Stack as Triton
Posted 4 months ago · Active 4 months ago
github.com · Tech story
calm · mixed
Debate: 60/100
Key topics
GPU Programming
Triton
Gluon
CUDA
Gluon is a new GPU programming language built on the same compiler stack as Triton, sparking discussion about its design and how it might compete with NVIDIA's CUDA and similar projects.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 14m after posting
Peak period: 5 comments in 3-4h
Avg / period: 2
Comment distribution: 24 data points (based on 24 loaded comments)
Key moments
- 01 Story posted: Sep 17, 2025 at 3:50 PM EDT (4 months ago)
- 02 First comment: Sep 17, 2025 at 4:04 PM EDT (14m after posting)
- 03 Peak activity: 5 comments in 3-4h, the hottest window of the conversation
- 04 Latest activity: Sep 18, 2025 at 7:03 AM EDT (4 months ago)
ID: 45280592 · Type: story · Last synced: 11/20/2025, 1:30:03 PM
Do any of y’all have clear ideas about why it is that way? Why not have a really great bespoke language?
But they end up adding super sophisticated concepts to the familiar language. Makes me wonder if the end result is actually better than having a bespoke language.
[1] https://github.com/NVIDIA/tilus
[1]: https://docs.nvidia.com/cutlass/media/docs/pythonDSL/cute_ds...
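To make the embedded-DSL point above concrete, here is a minimal vector-add kernel in the style of Triton's own tutorials (a sketch for illustration, not taken from the linked projects): the surface syntax is ordinary Python, but the decorator, `tl.constexpr` parameters, and masked block loads layer compiler-level semantics onto the familiar language. Gluon builds on this same compiler stack at a lower level of abstraction.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one BLOCK_SIZE-wide slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements  # guard against out-of-bounds lanes
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
```

The launch syntax `add_kernel[grid](...)` is where the "familiar language" illusion bends: indexing a function with a grid is valid Python, but it means something only to Triton's JIT.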
- Most of the trillion-dollar companies have their own chips with AI features (Apple, Google, MS, Amazon, etc.). GPUs and AI training are among their biggest incentives. They are super motivated not to donate major chunks of their revenue to Nvidia.
- Mac users generally don't use Nvidia hardware anymore, and Apple's CPUs are a popular platform for doing stuff with AI.
- AMD, Intel, and other manufacturers want in on the action.
- The Chinese and others are facing export restrictions on Nvidia's GPUs.
- Platforms like Mojo (a natively compiled Python with some additional language features for AI) and others are getting traction.
- A lot of the popular AI libraries support things other than Nvidia at this point (see the PyTorch sketch after this comment).
This just adds to that. Nvidia might have to open up CUDA to stay relevant. They do have a performance advantage. But forcing people to choose inevitably leads to plenty of choice being available to users. And the more users choose differently, the less relevant CUDA becomes.
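As a rough illustration of the multi-backend point in that last list item, a minimal sketch using standard PyTorch device-selection APIs; the tensor shapes and names here are arbitrary:

```python
import torch

# Pick whichever accelerator is available: NVIDIA CUDA, Apple's Metal
# backend (MPS), or plain CPU as a fallback.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

x = torch.randn(1024, 1024, device=device)
y = x @ x.T  # the compute code is identical on every backend
print(y.shape, device)
```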
Some more info in this issue: https://github.com/triton-lang/triton/issues/7392
Is there a big reason why Triton is considered a "failure"?
autogluon is popular as well: https://github.com/autogluon/autogluon