Why Is Zig So Cool?
Posted about 2 months ago · Active about 2 months ago
Source: nilostolte.github.io · Tech story · High profile
Debate: heated, mixed (80/100)
Key topics: Zig Programming Language, Systems Programming, Language Design
The article 'Why is Zig so cool?' sparks a lively discussion on the merits and drawbacks of the Zig programming language, with commenters debating its unique features, performance, and use cases.
Snapshot generated from the HN discussion
Discussion activity: very active. First comment after 48m; peak of 76 comments in the 0-6h window; average of 16 comments per period. Comment distribution based on 160 loaded comments.
Key moments
- Story posted: Nov 7, 2025 at 6:04 PM EST (about 2 months ago)
- First comment: Nov 7, 2025 at 6:53 PM EST (48m after posting)
- Peak activity: 76 comments in the first 6 hours (the hottest window of the conversation)
- Latest activity: Nov 11, 2025 at 5:42 PM EST (about 2 months ago)
I don't understand how the things presented in this article are surprising. Zig has several nice features shared by many modern programming languages?
That the author feels the need to emphasize this means either that they haven't paid attention to modern languages for a very long time, or this article is for people who haven't paid attention to modern languages for a very long time.
Type inference left academia and proliferated into mainstream languages so many years ago that I almost forgot it's a feature worth mentioning.
> One is Zig’s robustness. In the case of the shift operation no wrong behavior is allowed and the situation is caught at execution time, as has been shown.
Panicking at runtime is better than just silently overflowing, but I don't know if it's the best example to show the 'robustness' of a language...
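For context, a minimal sketch of the safety checks being discussed, using integer addition rather than the article's shift example; the behavior shown applies to Debug and ReleaseSafe builds, and ReleaseFast/ReleaseSmall disable these checks:

```zig
const std = @import("std");

// Overflowing `+` is a checked error in Debug/ReleaseSafe builds:
// the program panics with "integer overflow" instead of wrapping.
fn bump(x: u8, y: u8) u8 {
    return x + y;
}

pub fn main() void {
    // Wrap-around must be asked for explicitly with the `%` operators.
    const wrapped: u8 = @as(u8, 250) +% 10; // 4, never panics
    std.debug.print("wrapped = {}\n", .{wrapped});

    _ = bump(250, 10); // safety check fires here at runtime in safe modes
}
```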
It’s not common in lower level languages without garbage collectors or languages focused on compilation speed.
“Low-level” languages — Rust, C++, D
> what the heck does it matter what "much of the standard library uses" to this issue?
It matters in that most people looking for a low-level, manually memory-managed language are unlikely to choose D, so for the purposes of "is this relatively novel among lower-level, manually memory-managed languages?" D doesn't fit my criteria.
> Even C now has type inference. The plain fact is that the claim is wrong.
Almost no one is using C23 yet.
I'm not even sure I'd call this type inference (other people definitely do call it type inference) given that it's only working in one direction. Even Java (var) and C23 (auto), the two languages the author calls out, have that. It's much less convenient than something like Hindley-Milner.
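For illustration, a small sketch of that one-directional inference in Zig; the type flows from the initializer to the binding, not backwards as in Hindley-Milner systems:

```zig
const std = @import("std");

pub fn main() void {
    const n = @as(u32, 42); // `n` gets its type from the right-hand side
    const xs = [_]u8{ 1, 2, 3 }; // element type and array length inferred
    // There is no inference in the other direction: an unannotated literal
    // like `const m = 42;` is a comptime_int, not "whatever the caller needs".
    std.debug.print("{} {}\n", .{ n, xs.len });
}
```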
Or it's hyperbolic.
I got that impression as well.
Xi's impressed that types are optional because they can be inferred.
That's ... hardly a novelty ...
Funny that they mention Java, which has had type inference for a few years now. Even C got a weaker version of C++'s auto in C23.
What Zig really does is make systems programming more accessible. Rust is great, but its guarantees of memory safety come with a learning curve that demands mastering lifetimes, generics, macros, and a complex trait system. Zig is in the same class of programming languages as C, C++, and Rust, unlike Golang, C#, Java, Python, JS, etc., which have built-in garbage collection.
The explicit control flow allows you as a developer to avoid some optimizations done in Rust (or common in 3rd party libraries) that can bloat binary sizes. This means there's no target too small for the language, including embedded systems. It also means it's a good choice if you want to create a system that maximizes performance by, for example, preventing heap allocations altogether.
The built-in C/C++ compiler and language features for interacting with C code easily also ensures that devs have access to a mature ecosystem despite the language being young.
My experience with Zig so far has been pleasurable. The main downside to the language has been the churn between minor versions (language is still pre-1.0 so makes perfect sense, but still). That being said, I like Zig's new approach to explicit async I/O that parallels how the language treats Allocators. It feels like the correct way to do it and allows developers again the flexibility to control how async and concurrency is handled (can choose single-threaded event loop or multi-threaded pool quite easily).
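As a rough sketch of the "no heap allocations at all" style mentioned above, where every allocation is served from a fixed buffer; API names follow roughly the 0.12/0.13-era standard library, and Zig's std still churns between pre-1.0 releases:

```zig
const std = @import("std");

// Every allocation comes out of a fixed stack buffer, so the program
// never touches the heap and fails loudly if the budget is exceeded.
pub fn main() !void {
    var buf: [1024]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&buf);
    const allocator = fba.allocator();

    const slice = try allocator.alloc(u32, 16); // served from `buf`, not the heap
    defer allocator.free(slice);

    for (slice, 0..) |*item, i| item.* = @intCast(i);
    std.debug.print("first = {}, last = {}\n", .{ slice[0], slice[slice.len - 1] });
}
```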
I don't think there is any significant difference here between Zig, C, and Rust for bare-metal code size. I can get the compiler to generate the same tiny machine code in any of these languages.
C, yes, you can compile C quite small very easily. Zig is like a simpler C, in my mind.
Zig, on the other hand, does lazy evaluation and tree shaking, so you can pull in a few features of the std library without much concern.
IIRC there's also a mutex somewhere in there used to workaround some threading issues in libc, which brings in a bespoke mutex implementation; I can't remember whether that mutex can be easily disabled, but I think there's a way to use the slower libc mutex implementation instead.
Also, std::fmt is notoriously bad for code size, due to all the dyn vtable shenanigans it does. Avoid using it if you can.
Regardless, the only way to fix many of the problems with std is rebuilding it with the annoying features compiled out. Cargo's build-std feature should make this easy to do in stable Rust soon (and it's available in nightly today).
Zig is a good language. So are Rust, D, Nim, and a bunch of others. People tend to think that the ones they know about are better than all the rest because they don't know about the rest and are implicitly or explicitly comparing their language to C.
Of course both Zig and Rust are good languages. But my experience (and I believe yours will be too if you compile programs of similar complexity using each language's standard practices) is that Zig compiles much more compactly in .ReleaseSmall mode than Rust does even with optimization flags, which makes it better suited to embedded systems, in my opinion. I learned this by implementing the same library in both languages using the standard default practices of each.
Of course, at the desktop runtime level, binary size is frequently irrelevant as a concern. I just feel that since Zig makes writing "magic" code more difficult while Rust encourages things like macros, it is much easier to be mindful of things that do impact binary size (and perhaps performance).
This is not true. Zig, D, and Nim all have full-language interpreters built into the compiler; Rust does not. Its macros (like macros generally) manipulate source tokens; they don't do arbitrary compile-time calculations (they live in separate crates that are compiled and then run on source code, which is very different from Zig/D/Nim comptime, which is intermixed with the source code and is interpreted). Zig has no macros (Andrew hates them): you cannot "generate code" in Zig (you can in D and Nim); that's not what comptime does. Zig's comptime allows functions written in Zig to execute at compile time (the same functions can also be used to run at execution time if they only use execution-time types). The Zig trick is that comptime code can operate not only on normal data like ints and structs, but also on types, which are first-class comptime objects. Comptime code has access to the TypeInfo of types, both to read the attributes of types and to create types with specified attributes, which is how Zig implements generics.
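A minimal sketch of that "types as comptime values" idea: a generic pair built from an ordinary function that takes a type and returns a type (the names are invented for the example):

```zig
const std = @import("std");

// The type parameter is an ordinary value at compile time; the compiler
// instantiates a specialized copy of the returned struct per distinct `T`.
fn Pair(comptime T: type) type {
    return struct {
        first: T,
        second: T,

        fn swapped(self: @This()) @This() {
            return .{ .first = self.second, .second = self.first };
        }
    };
}

pub fn main() void {
    const p = Pair(u32){ .first = 1, .second = 2 };
    const q = p.swapped();
    std.debug.print("{} {}\n", .{ q.first, q.second });
}
```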
Zig feels like one of the few programming languages that mostly just avoids gigantic blunders.
I have some beefs with some decisions, but none of them are an immutable failure mode that couldn't be fixed in a straightforward manner.
I just use it as a cross-compiler for my Nim[0] programs.
[0] - https://nim-lang.org
I can say the same (although my career spans only 30 years), or, more accurately, that it's one of the few languages that surprised me most.
Coming to it from a language design perspective, what surprised me is just how far partial evaluation can be taken. While strictly weaker than AST macros in expressive power (macros are "referentially opaque" and therefore more powerful than a referentially transparent partial evaluation - e.g. partial evaluation has no access to an argument's name), it turns out that it's powerful enough to replace not only most "reasonable" uses of macros, but also generics and interfaces. What gives Zig's partial evaluation (comptime) this power is its access to reflection.
Even when combined with reflection, partial evaluation is more pleasurable to work with than macros. In fact, to understand the program's semantics, partial evaluation can be ignored altogether (as it doesn't affect the meaning of computations). I.e. the semantics of a Zig program are the same as if it were interpreted by some language Zig' that is able to run all of Zig's partial-evaluation code (comptime) at runtime rather than at compile time.
Since it also removes the need for other specialised features (generics, interfaces) - even at the cost of an aesthetic that may not appeal to fans of those specialised features - it ends up creating a very expressive, yet surprisingly simple and easy-to-understand language (Lisps are also simple and expressive, but the use of macros makes understanding a Lisp program less easy).
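As a rough sketch of how comptime reflection can stand in for an interface check; the CountingSink type and writeGreeting function are invented for the example:

```zig
const std = @import("std");

// The function accepts "anything", then inspects the type at compile time
// and rejects arguments that don't expose the method it needs.
fn writeGreeting(sink: anytype) !void {
    const T = @TypeOf(sink.*);
    if (!@hasDecl(T, "writeAll"))
        @compileError(@typeName(T) ++ " has no writeAll method");
    try sink.writeAll("hello\n");
}

const CountingSink = struct {
    bytes: usize = 0,

    fn writeAll(self: *CountingSink, data: []const u8) !void {
        self.bytes += data.len;
    }
};

pub fn main() !void {
    var sink = CountingSink{};
    try writeGreeting(&sink);
    std.debug.print("wrote {} bytes\n", .{sink.bytes});
}
```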
Being simple and easy to understand makes code reviews easier, which may have a positive impact on correctness. The simplicity can also reduce compilation time, which may also have a positive impact on correctness.
Zig's insistence on explicitness - no overloading, no hidden control flow - which also assists reviews, may not be appropriate for a high-level language, but it's a great fit for an unabashedly low-level language, where being able to see every operation as explicit code "on the page" is important. While its designer may or may not admit this, I think Zig abandons C++'s belief that programs of all sizes and kinds will be written in the same language (hence its "zero-cost abstractions", made to give the illusion of a high-level language without its actual high-level abstraction). Under that belief, developers writing low-level code lose the explicitness they need for review, while those writing high-level programs don't actually gain the level of abstraction they need for smooth program evolution. That belief may have been reasonable in the eighties, but I think it has since been convincingly disproved.
Some Zig decisions surprised me in a way that made me go more "huh" than "wow", such as it having little encapsulation to speak of. In a high-level language I wouldn't have that (after years of experience with Java's wide ecosystem of libraries, we learned that we need even more and stronger encapsulation than we originally had to keep compatibility while evolving code). But perhaps this is the right choice for a low-level language where programs are expected to be smaller and with fewer dependencies (certainly shallower dependency graphs). I'm curious to see how this pans out.
Zig's terrific support for arenas also makes one of the most powerful low-level memory management techniques (that, like a tracing garbage collector, gives the developer a knob to trade off RAM usage for CPU) very accessible.
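A minimal sketch of the arena pattern being praised: allocate freely during one phase of work, then free everything with a single deinit (assuming roughly the current std.heap API):

```zig
const std = @import("std");

// Allocations made from the arena need no individual frees; the whole
// region is released at once when the arena is deinitialized.
pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit(); // frees every allocation made below at once

    const allocator = arena.allocator();
    var total: usize = 0;
    for (0..100) |i| {
        const scratch = try allocator.alloc(u8, i + 1); // no matching free needed
        total += scratch.len;
    }
    std.debug.print("allocated {} bytes across the arena\n", .{total});
}
```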
I have no idea or prediction on whether Zig will become popular, but it's certainly fascinating. And, being so remarkably easy to learn (especially if you're familiar with low-level programming), it costs little effort to give it a try.
Every language at scale needs a preprocessor (look at the "use server" and "use gpu" silliness happening in TS) - why is it not the same as the language you use?
I look forward to a future high-level language that uses something like comptime for metaprogramming/interfaces/etc, is strongly typed, but lets you write scripts as easily as python or javascript.
For me it'd be hard to go back to languages that don't have all that. Only Swift comes close.
#!/usr/bin/env rdmd
[D code]
and run it as if it were an executable. (The compilation is cached so it runs just as fast on subsequent runs.)
IMHO "clearly better" might be a matter of perspective; my impression is that this is one of those things where the different approaches buy you different tradeoffs. For example, by my understanding Rust's generics allows generic functions to be completely typechecked in isolation at the definition site, whereas Zig's comptime is more like C++ templates in that type checking can only be completed upon instantiation. I believe the capabilities of Rust's macros aren't quite the same as those for Zig's comptime - Rust's macros operate on syntax, so they can pull off transformations (e.g., #[derive], completely different syntax, etc.) that Zig's comptime can't (though that's not to say that Zig doesn't have its own solutions).
Of course, different people can and will disagree on which tradeoff is more worth it. There's certainly appeal on both sides here.
It's possible that something similar might be the right path for metaprogramming. Rust's generics are simple and weaker than Zig's comptime, while proc macros are complicated and stronger than Zig's comptime.
So I think the jury's still out on whether Rust's metaprogramming is "better" than Zig's.
What does this mean?
For example (you can pick another example if you want), how is C++'s std::vector less abstract than Java's ArrayList?
I've described this in the past as languages being "too general purpose" or too "multi-paradigm". Languages like Scala that try to be Haskell and Java in one.
> I have no idea or prediction on whether Zig will become popular
I think LLMs may be able to help move large C codebases to Zig in the next decade. Once zigc can compile the C Linux kernel, it can be ported to Zig bit by bit (with LLM assistance). This is not coming soon, but I think it will be its killer feature.
I don't mind if Linux becomes Rust+Zig codebase in, say, 10y from now. :)
I like languages that dare to try to do more with less. Zig's comptime, especially the way it supplants generics, is pretty darn awesome.
I was having a similar feeling with Elixir the other day, when I realized that I could build every single standard IPC mechanism that you might find in something like python.threading (Queue, Mutex, RecursionLock, Condition, Barrier, etc) with the Erlang/Beam/Process mailbox.
This is what I've started doing for every library I use. I go to their Github, download their docs, and drop the whole thing into my project. Then whenever the AI gets confused, I say "consult docs/somelib/"
Over the last year I have been observing how MCP, tools, and agents have reduced the amount of language-specific code we used to write.
You could go further like in this case, and use wheels + PyPi for something unrelated to Python.
Or I should say it was useful as a distribution method, because most people had Python already available. Since most distros now don't allow you to install stuff outside a venv you need uv to install things (via `uv tool install`) and we're not yet at the point where most people already have uv installed.
I know some of it has already happened with Rust, but perhaps there's a broader reckoning that needs to occur here wrt standards around how language-specific build and packaging systems handle cross-language projects… which could well point to phasing those out in favour of nix or pixi, which are designed from the get-go to support this use case.
Usually arbitrary binaries stuffed in Python wheels are mostly self contained single binaries and such, with as little dynamic linking nonsense as possible, so they don't break all the time, or have dependency conflicts.
It seems to consistently work really well for binaries, although it would be nice to have first class support for integrating npm packages.
I feel what's missing is an explanation of why each feature is so cool compared to other languages.
As a language nerd, I find Zig's syntax just so cool. It doesn't feel the need to adhere to any conventions and seems to solve problems in the most direct and simple way.
An example of this is how you declare a label versus refer to one: by moving the colon to one end or the other, you can instantly tell which form it is.
And then there is the runtime promises such as no hidden control flow. There are no magical @decorators or destructors. Instead we have explicit control flow like defer.
Finally there is comptime. No need to learn another macro syntax; it's just more Zig running at compile time.
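For readers unfamiliar with the syntax, a small sketch of the label and defer forms described above (the example itself is invented):

```zig
const std = @import("std");

// A label is declared with a trailing colon (`blk:`) and referred to
// with a leading colon (`break :blk`); cleanup is spelled out with
// `defer` rather than hidden in a destructor.
pub fn main() void {
    const first_even: ?u32 = blk: {
        const xs = [_]u32{ 3, 7, 8, 11 };
        for (xs) |x| {
            if (x % 2 == 0) break :blk x; // leave the labeled block with a value
        }
        break :blk null;
    };

    defer std.debug.print("done\n", .{}); // runs when main returns, visible in the source
    std.debug.print("first even: {any}\n", .{first_even});
}
```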
https://matklad.github.io/2025/08/09/zigs-lovely-syntax.html
Zig's big feature imo is just the relative absence of warts in the core language. I really don't know how to communicate that in an article. You kind of just have to build something in it.
Programming with it is magical, and it's a huge drag to go back to languages without it. Just so much better than common OOP that depends only on the type of one special argument (self, this etc).
Common Lisp has had it forever, and Dylan transferred that to a language with more conventional syntax -- but is very near to dead now, certainly hasn't snowballed.
On the other hand Julia does it very well and seems to be gaining a lot of traction as a very high performance but very expressive and safe language.
Julia is phenomenally great for solo/small projects, but as soon as you have complex dependencies that _you_ can't update - all the overloading makes it an absolute nightmare to debug.
What Ada (and Rust) calls generics is very different -- it is like template functions in C++.
In those languages the version of the function that is selected is based on the declared type of the arguments.
In CLOS, Dylan, Julia the version of the function that is selected is based on the runtime type of the actual arguments.
Here's an example in Dylan that you can't do in Ada / Rust / C++ / Java.
The `n == 1` is actually syntactic sugar for the type declaration `n :: singleton(1)`. The Julia version is slightly more complex.
This is perhaps a crazy way to write `fib()` instead of a conventional `if/then/else` or `?:` or switch with a default case, but kinda fun :-) This of course is just a function with a single argument, but you can do the same thing across multiple arguments.
As you can see from my comment history, I am quite aware of CLOS, Lisp variants and Dylan.
The tooling makes it easy to tell which version of a method you're using, though that's rarely an issue in practice. And the fact that methods are open to extension makes it really easy to fix occasional upstream bugs where the equivalent has to wait for a library maintainer in Python.
500kloc Julia over 4 years, so not a huge codebase, but not trivial either.
>Programming with it is magical, and it's a huge drag to go back to languages without it. Just so much better than common OOP that depends only on the type of one special argument (self, this etc).
Can you give one or two examples? And why is programming with it magical?
Because methods aren't "inside" objects, but just look like functions taking (references to) structs, you can add your own methods to someone else's types.
It's really hard to give a concise example that doesn't look artificial, because it's really a feature for large code bases.
Here's a tutorial example for Julia
https://scientificcoder.com/the-art-of-multiple-dispatch
The need for this jumped out at me during Writergate. People had a lot of trouble understanding exactly how all the pieces fit together, and there was no good place to document that. The documentation (or the code people went to to understand it) was always on an implementation. Having an interface would have given Zig a place to hang the Reader/Writer documentation and allowed a quick way for people to understand the expectations it places on implementations without further complications.
For Zig, I don't even want it to automatically handle the vtable like other languages...I'm comfortable with the way people implement different kinds of dynamic dispatch now. All I want is a type-level construct that describes what fields/functions a struct has and nothing else. No effect on runtime data or automatic upcasting or anything. Just a way to say "if this looks like this, it can be considered this type."
I expect the argument is that it's unnecessary. Technically, it is. But Zig's biggest weakness compared to other languages is that all the abstractions have to be in the programmer's head rather than encoded in the program. This greatly hampers people's ability to jump into a new codebase and help themselves. IMO this is all that's needed to remedy that without complicating everything.
You can see how much organizational power this has by looking at the docs for Go's standard library. Ignore how Go's runtime does all the work for you...think more about how it helps make the _intent_ behind the code clear.
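For reference, a rough sketch of the hand-rolled vtable dispatch the parent comment alludes to; the Shape/Circle names are invented, but std.mem.Allocator in Zig's standard library follows the same general shape:

```zig
const std = @import("std");

// An "interface" is just a struct holding a type-erased pointer plus a
// function pointer; each concrete type supplies its own implementation.
const Shape = struct {
    ptr: *anyopaque,
    areaFn: *const fn (ptr: *anyopaque) f64,

    fn area(self: Shape) f64 {
        return self.areaFn(self.ptr);
    }
};

const Circle = struct {
    radius: f64,

    fn area(ptr: *anyopaque) f64 {
        const self: *Circle = @ptrCast(@alignCast(ptr));
        return std.math.pi * self.radius * self.radius;
    }

    fn shape(self: *Circle) Shape {
        return .{ .ptr = self, .areaFn = Circle.area };
    }
};

pub fn main() void {
    var c = Circle{ .radius = 2.0 };
    const s = c.shape(); // callers only see the `Shape` interface
    std.debug.print("area = {d}\n", .{s.area()});
}
```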
Check back in on Zig after another decade.
There's a certain beauty in only having to know 1-2 loop/iteration concepts compared to 4-5 in modern multi-paradigm languages (various forms of loops, multiple shapes of LINQ, the functional stuff, etc.).
Skipping other minor changes.
However, I do agree C# is adding too much stuff; the team seems to be trying to justify its existence.
My experience with Golang so far is biased because I only recently looked at it; for the past decade I have been working mostly in Java and C#, so most of the newly added features in Golang are things I'm already deeply familiar with conceptually.
That's been my exact experience too. I was surprised how fast I felt confident in writing zig code. I only started using it a month ago, and already I've made it to 5000 lines in a custom tcl interpreter. It just gets out of the way of me expressing the code I want to write, which is an incredible feeling. Want to focus on fitting data structures on L1 cache? Go ahead. Want to automatically generate lookup tables from an enum? 20 lines of understandable comptime. Want to use tagged pointers? Using "align(128)" ensures your pointers are aligned so you can pack enough bits in.
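A sketch of the "lookup table from an enum" trick mentioned above; the Opcode/cycle mapping is invented, and the std.enums.values helper reflects roughly the current standard library:

```zig
const std = @import("std");

// The table is filled in by ordinary Zig code running at comptime and
// then indexed at runtime with no hashing or branching.
const Opcode = enum { nop, load, store, jump };

const cycle_table = blk: {
    var table: [std.enums.values(Opcode).len]u8 = undefined;
    for (std.enums.values(Opcode)) |op| {
        table[@intFromEnum(op)] = switch (op) {
            .nop => 1,
            .load, .store => 3,
            .jump => 2,
        };
    }
    break :blk table;
};

fn cycles(op: Opcode) u8 {
    return cycle_table[@intFromEnum(op)];
}

pub fn main() void {
    std.debug.print("load takes {} cycles\n", .{cycles(.load)});
}
```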
A few of those decisions seem radical, and I often disagreed with them, but quite reliably, as I learned more about the decision-making and got deeper into the language, I found myself agreeing with them after all. I had many moments of enlightenment as I dug deeper.
So anyway, if you're curious, give it an honest chance. I think it's a language and community that rewards curiosity. If you find it fits for you, awesome! Luckily, if it doesn't, there are plenty of options these days (I still would like to spend some quality time with Odin).
There might be a few pathological code paths in the core libraries or whatever for certain things that aren't what they should be, but in terms of the raw language you're in the land of C as much as with any of these languages; Odin really doesn't do much on top of C, and what it's doing is identifiable and can be opted out of; if you find that a function in a hot loop is marginally slower than it ought to be, you can make it contextless, for example, and see whether that makes a difference.
We haven't found (in a product where performance is explicitly a feature, also containing a custom 3D engine on top of that) that the context being passed automatically in Odin is of much concern performance-wise.
Out of the languages mentioned Rust is the one I've seen in benchmarks be routinely marginally slower, but it's not by a meaningful amount.
https://tigerbeetle.com/blog/2025-10-25-synadia-and-tigerbee...
Aside from the fact that Zig is still a bit immature in its std library and ecosystem, I mean. Is it a suitable systems language going forward?
Zig is actually perfect for production network services (that’s all TB is essentially, or how I see it, and what I was looking for in Zig—how to create something with explicit limits that can handle overload—it’s hard to build anything production-grade if it’s not doing NASA’s Power of Ten and getting allocation right—GC is not a good idea for a network service).
I wouldn’t say Zig’s std lib is immature. Or if anything, it has higher quality than most std libs. For example, the unmanaged hashmap interface is :chefskiss. In comparison, many std libs are yet to get non-global allocators or static allocation or I/O right.
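For the curious, a small sketch of the "unmanaged" container style being praised, where the map stores no allocator and every growing call takes one explicitly; API names as of roughly Zig 0.12/0.13, and pre-1.0 releases shuffle these details around (e.g. the empty initializer may be spelled `.empty` on newer releases):

```zig
const std = @import("std");

// The map stores no allocator, so every call that may allocate takes
// one explicitly and the allocation policy stays visible at the call site.
pub fn main() !void {
    const allocator = std.heap.page_allocator;

    var map: std.AutoHashMapUnmanaged(u32, []const u8) = .{};
    defer map.deinit(allocator); // deinit also needs the allocator

    try map.put(allocator, 1, "one");
    try map.put(allocator, 2, "two");

    std.debug.print("2 -> {s}\n", .{map.get(2).?});
}
```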
[citation needed]
If we are to trust this page [0] Rust beats Zig on most benchmarks. In the Techempower benchmarks [1] Rust submissions dominate the TOP, while Zig is... quite far.
Several posts I've seen in the past about Zig beating Rust by 3x or so all turned out to be based on low-quality Rust code with performance pitfalls, like measuring the performance of writing to stdout (which Rust locks by default and Zig does not) or iterating over ..= ranges, which are known to be problematic from a performance perspective.
[0]: https://programming-language-benchmarks.vercel.app/rust-vs-z...
[1]: https://www.techempower.com/benchmarks/
In my mind, it's an accessible systems language. Very readable. Minimal footprint.
If you are not using a GC language, you WILL be managing lifetimes. Rust just makes it explicit when the compiler can't prove it's safe, whereas Zig and C don't really care.
In Zig and C, it's always expected that you will explicitly manage your lifetimes. Zig uses the allocator interface to explicitly allocate new buffer or heap values and its keyword 'defer' to clean up allocated variables after the scope exits so that allocations and frees generally live next to each other.
C, on the other hand, is relatively unopinionated about how lifetimes are managed. Zig's defer keyword honestly takes most of the pain of managing lifetimes away.
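A minimal sketch of that allocate-then-defer pattern, using the page allocator to keep the example self-contained:

```zig
const std = @import("std");

// The free is written on the line right after the allocation, so the
// lifetime is visible at a glance even though it ends at scope exit.
pub fn main() !void {
    const allocator = std.heap.page_allocator;

    const buf = try allocator.alloc(u8, 256);
    defer allocator.free(buf); // paired with the alloc directly above

    @memset(buf, 0);
    std.debug.print("buffer of {} zeroed bytes\n", .{buf.len});
}
```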
For the new I/O, which I have not actually used yet, here are some relevant links:
https://kristoff.it/blog/zig-new-async-io/
https://github.com/ziglang/zig/issues/23367
For simple projects where you don't want to pass it around in function parameters, you can create a global object with one implementation and use it from everywhere.
Don't know. That's how people usually get rid of repeated arguments (or an OOP constructor).
Isn't cross compilation very, very ordinary? Inline C is cool, like C has inline ASM (for the target arch). But cross-compiling? If you built a phone app on your computer you did that as a matter of course, and there are many other common use cases.
Working any-to-any cross-compilation out of the box still isn't.
From helicoptering folks onto the steering committee and indoctrinating young CS majors.
Not uncommon in this space though, especially as you get closer to the metal (close as cross-compilation is relative to something like React frontends, at least)
I guess it's convenient to have support for many target architectures built in by default. I wonder how big that package is.
There's no reason to use cygwin with Rust, since Rust has native Windows support. The only reason to use x86_64-pc-cygwin is if you would need your program to use a C library that is not available for Windows, but is available for cygwin.
If you don't want to/can't use the MSVC linker, the usual alternative is Rust's `x86_64-pc-windows-gnu` toolchain.
(also expected tesseract to do a bit better than this:
Unfortunately I get the same kind of garbage around closing curly braces / closing parenthesis / dots with this magick filter... It seems to do slightly better with an extra `-resize 400%`, but still very far from as good as what you're getting (to be fair the monochrome filter is not pretty (bleeding) when inspecting the result).
I wonder what's different? ( ImageMagick-7.1.1.47-1.fc42.x86_64 and tesseract-5.5.0-5.fc42.x86_64 here, no config, langpack(s) also from the distro)
I like the simplicity and speed of Rust's egui. Something similar for Zig would be amazing.
346 more comments available on Hacker News