Zig Builds Are Getting Faster
Key topics
The Zig programming language's build times are improving. The author discusses the progress and trade-offs of LLVM versus Zig's own backends, prompting discussion of the implications for development workflows and the language's design.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 1h after posting
- Peak period: 47 comments in the 6-12h window
- Avg / period: 14.5
- Based on 160 loaded comments
Key moments
- Story posted: Oct 3, 2025 at 6:45 PM EDT (3 months ago)
- First comment: Oct 3, 2025 at 7:53 PM EDT (1h after posting)
- Peak activity: 47 comments in the 6-12h window, the hottest stretch of the conversation
- Latest activity: Oct 7, 2025 at 4:25 PM EDT (3 months ago)
But from what I remember, this still uses the LLVM backend, right? Sure, you can beat LLVM on compilation speed and number of platforms supported, but when it comes to emitting great assembly, it is almost unbeatable.
Time complexity may be O(lines), but the constant factor still makes one compiler faster or slower than another. And for incremental updates, compilers can do significantly better than O(lines).
In debug mode, Zig uses LLVM with no optimization passes. On Linux x86_64, it uses its own native backend instead. This backend can be significantly faster to compile with (2x or more) than LLVM.
Zig's own native backend is designed for incremental compilation. This means, after the initial build, there will be very little work that has to be done for the next emit. It needs to rebuild the affected function, potentially rebuild other functions which depend on it, and then directly update the one part of the output binary that changed. This will be significantly faster than O(n) for edits.
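A minimal sketch of what toggling that backend looks like from a build.zig, assuming the recent std.Build API (field names shift between releases); `use_llvm`/`use_lld` mirror the `-fno-llvm`/`-fno-lld` compiler flags:

```zig
// build.zig -- illustrative sketch, not from the article.
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "app",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });

    // Iterate on the fast self-hosted backend in Debug; keep LLVM's
    // optimizing codegen for release builds.
    exe.use_llvm = optimize != .Debug;
    exe.use_lld = optimize != .Debug;

    b.installArtifact(exe);
}
```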
Color me skeptical. I've only got 30 years of development under my belt, but even a 1-minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.
Further, using Rust as an example, even a project which takes 5 minutes to build cold only takes a second or two on a hot build thanks to caching of already-built artifacts.
Which leaves any compile time improvements to the very first time the project is cloned and built.
Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.
I think the web frontend space is a really good case for fast compile times. It's gotten to the point that you can make a change, save a file, the code recompiles and is sent to the browser and hot-reloaded (no page refresh) and your changes just show up.
The difference between this experience and my last time working with Ember, where we had long compile times and full page reloads, was incredibly stark.
As you mentioned, the hot build with caching definitely does a lot of heavy lifting here, but in some environments, such as a CI server, having minutes long builds can get annoying as well.
> Consequently, faster compile times would not alter my development practices, nor allow me to iterate any faster.
Maybe, maybe not, but there's no denying that faster feels nicer.
Given finite developer time, spending it on improved optimization and code generation would have a much larger effect on my development. Even if builds took twice as long.
I'm much more productive when I can see the results within 1 or 2 seconds.
That's my experience today with all my Rust projects. Even though people decry the language for long compile times. As I said, hot builds, which is every build while I'm hacking, are exactly that fast already. Even on the large projects. Even on my 4 year old laptop.
On a hot build, build time is dominated by linking, not compilation. And even halving a 1s hot build will not result in any noticeable change for me.
Rust has excellent support for shared libraries. Historically they have involved downcasting to C types using the C ABI, but now there are more options like:
https://lib.rs/crates/stabby
https://lib.rs/crates/abi_stable
and https://github.com/rust-lang/rfcs/pull/3470
With fast compile times, running the test suite (which implies recompiling it) is fast too.
Also, if the language itself is optimized towards making it easy to write a fast compiler, that also makes your IDE fast.
And just if you're wondering, yes, Go is my dope.
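To make the test-suite point concrete in this thread's language rather than Go: the whole edit-test loop in Zig is a single command, so it is bounded almost entirely by compile speed. A trivial, self-contained example:

```zig
// main.zig -- run with `zig test main.zig`; compile and test in one step.
const std = @import("std");

fn add(a: i32, b: i32) i32 {
    return a + b;
}

test "add combines its operands" {
    try std.testing.expectEqual(@as(i32, 3), add(1, 2));
}
```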
You are far from the embedded world if you think 1 minute here or there is long. I have been involved with many projects that take hours to build, usually caused by hardware generation (FPGA HDL builds) or poor cross-compiling support (custom/complex toolchain requirements). These days I can keep most of the custom shenanigans in the 1hr ballpark by throwing more compute at a very heavy emulator (to fully emulate the architecture), but that's still pretty painful. One day I'll find a way to use the Zig toolchain for cross compiles, but it gets thrown off by some of the C macro or custom resource embedding nonsense.
Edit: missed some context on lazy first read so ignore the snark above.
Yeah, 1 minute was the OP's number, not mine.
> fpga hdl builds
These are another thing entirely from software compilation. Placing and routing is a Hard Problem(TM) for which evolutionary algorithms only find OK solutions in reasonable time. Improvements to those algorithms carry broad benefits: not just because they could be faster, but because being faster allows you to find better solutions.
So optimizing compile times isn’t worthwhile because we already do things to optimize compile times? Interesting take.
What about projects for which hot builds take significantly longer than a few seconds? That’s what I assumed everyone was already talking about. It’s certainly the kind of case that I most care about when it comes to iteration speed.
That seems strange to you? If build times constituted a significant portion of my development time I might think differently. They don't. Seems the compiler developers have done an excellent job. No complaints. The pareto principle and law of diminishing returns apply.
> What about projects for which hot builds take significantly longer than a few seconds?
A hot build of Servo, one of the larger Rust projects I can think of off the top of my head, takes just a couple seconds, mostly linking. You're thinking of something larger? Which can't be broken up into smaller compilation units? That'd be an unusual project. I can think of lots of things which are probably more important than optimizing for rare projects. Can't you?
Just for fun, I kicked off a cold build of Bevy, the largest Rust project in my working folder at the moment, which has 830 dependencies, and that took 1m 23s. A second hot build took 0.22s. Since I only have to do the cold build once, right after cloning the repository which takes just as long, that seems pretty great to me.
Are you telling me that you need faster build times than 0.22s on projects with more than 800 dependencies?
> > The reason to care about compile time is because it affects your iteration speed. You can iterate much faster on a program that takes 1 second to compile vs 1 minute.
> Color me skeptical. I've only got 30 years of development under the belt, but even a 1 minute compile time is dwarfed by the time it takes to write and reason about code, run tests, work with version control, etc.
If your counterexample to 1-minute builds being disruptive is a 1-second hot build, I think we’re just talking past each other. Iteration implies hot builds. A 1-minute hot build is disruptive. To answer your earlier question, I don’t experience those in my current Rust projects (where I’m usually iterating on `cargo check` anyway), but I did in C++ projects (even trivial ones that used certain pathological libraries) as well as some particularly badly-written Node ones, and build times are a serious consideration when I’m making tech decisions. (The original context seemed language-agnostic to me.)
I too have encountered slow builds in C++. I can't think of a language with a worse tooling story. Certainly good C++ tooling exists, but is not the default, and the ecosystem suffers from decades of that situation. Thankfully modern langs do not.
I found I work very differently in the two cases. In Delphi I use the compiler as a spell checker. With the C++ code I spent much more time looking over the code before compiling.
Sometimes though you're forced to iterate over small changes. Might be some bug hunting where you add some debug code that allows you to narrow things a bit more, add some more code and so on. Or it might be some UI thing where you need to check to see how it looks in practice. In those cases the fast iteration really helps. I found those cases painful in C++.
For important code, where the details matter, then yeah, you're not going to iterate as fast. And sometimes forcing a slower pace might be beneficial, I found.
For some work I tend to take a pen and paper and think about a solution, before I write code. For these problems compile time isn't an issue.
For UI work, on the other hand, it's invaluable to have fast iteration cycles to nail the design, because it's such an artistic and creative activity.
Umm, this is completely wrong. Compiling involves a lot of stuff, and language design, as well as compiler design, can make or break compile times. Parsing is relatively easy to make fast and linear, but the other stuff (semantic analysis) is not. Hence the huge range of compile times across programming languages that are otherwise (mostly) similar.
"Proebsting's Law: Compiler Advances Double Computing Power Every 18 Years"
The implication is that doing the easy, obvious and fast optimizations is Good Enough(tm).
Even if LLVM is God Tier(tm) at optimizing, the cost for those optimizations swings against them very quickly.
I think we'll see cranelift take off in Rust quite soon, though I also think it wouldn't be the juggernaut of a language if they hadn't stuck with LLVM those early years.
Go seems to have made the choice long ago to eschew outsourcing of codegen and linking and done well for it.
-- https://cranelift.dev/
> The resulting code performs on average 14% better than LLVM -O0, 22% slower than LLVM -O1, 25% slower than LLVM -O2, and 24% slower than LLVM -O3
So it is more like 24% slower, not 14%. Perhaps a typo (24/14), or they got the direction mixed up (it is +14 vs -24), or I'm reading that wrong?
Regardless, those numbers are on a particular set of database benchmarks (TPC-H), and I wouldn't read too much into them.
I don’t think that means it’s not doable, though.
That's not to say Cranelift isn't a fantastic piece of tech, but I wouldn't take the "14%" or "24%" number at face value.
I can't see myself ever again working on a system with compile times that take over a minute or so (not counting prod builds).
I wish more projects would have their own "dev" compiler that would not do all the shit LLVM does, and only use LLVM for the final prod build.
Eiffel, Common Lisp, Java and .NET (C#, F#, VB) are other examples, where we can enjoy fast development loops.
By combining JIT and AOT workflows, you can get best of both worlds.
I think the main reason this isn't as common is the effort that it takes to keep everything going.
I am in the same camp regarding build times, as I keep looking for my Delphi experience when not using it.
They aren't C++ levels of bad, but they are slow enough to be distracting/flow-breaking. Something like Dart/Flutter, or even TS and frontend work with hot reload, is much leaner. Comparing to fully dynamic languages is kind of unfair in that regard.
I haven't tried Go yet, but from what I've read (and also seeing the language's design philosophy) I suspect it's faster than C#/Java.
This idea that it's all sunshine and lollipops for other languages is wrong.
Let's not mix build tools with compilers.
This however has nothing to do with Java — the Kotlin compiler is written in Kotlin, and Gradle is written in an unholy mix of Kotlin, Java and Groovy (with the latter being especially notorious for being slow).
Many of the things Google uses to sell Java (the language) over Kotlin also stem from how badly they approach the whole infrastructure.
Try using Java on Eclipse with compilation on save.
I wouldn’t put them together. C compilation is not the fastest, but it's fast enough not to be a big problem. C++ is a completely different story: not only is it orders of magnitude slower (10x slower is probably not the limit), on some codebases the compiler needs a few GB of RAM (so you have to set -j below the number of CPU cores to avoid OOM).
C++ builds can be very slow versus plain old C, yes, assuming people make all the mistakes there are to make.
Like overuse of templates, not using binary libraries across modules, not using binary caches for object files (ClearMake-style, already available back in 2000), and not using incremental compilation and incremental linking.
To this day, my toy GTK+/Gtkmm application that I used for a C/C++ Users Journal article, and have since ported to Rust, compiles faster in C++ than in Rust on a clean build, exactly because I don't need to start from world genesis for all dependencies.
Granted there are ways around it for similar capabilities, however they aren't the default, and defaults matter.
I do think that dynamic libraries are needed for better plugin support, though.
Unless a shared dependency gets updated, RUSTFLAGS changes, a different feature gets activated in a shared dependency, etc.
If Cargo had something like binary packages, they would be opaque to the rest of your project, making them less sensitive to change. It's also hard to share builds between projects because of that sensitivity to differences.
A lot of Rust packages that people use are set up more like header-only libraries. We're starting to see more large libraries that better fit the model of binary libraries, like Bevy and Gitoxide. I'm laying down a vague direction for something more binary-library-like (calling them opaque dependencies) as part of the `build-std` effort (allowing custom builds of the standard library), as that is special-cased as a binary library today.
Plenty of the code was Tcl scripting, and when re-compiling C code, only the affected set of files would be re-compiled; everything else was kept around in object files and binary libraries and, if not affected, only required re-linking.
> Fast! Compiles 34 000 lines of code per minute
This was measured on an IBM PS/2 Model 60.
So let's put this into perspective: Turbo Pascal 5.5 was released in 1989.
The IBM PS/2 Model 60 is from 1987, with an 80286 running at 10 MHz, limited to 640 KB; with luck one would expand it up to 1 MB and use the HMA, in what concerns using it with MS-DOS.
Now, projecting this to 2025, there is no reason that compiled languages, when using a limited set of optimizations like TP 5.5 did at their -O0, can't be flying in their compilation times, as seen in D and Delphi, to name two expressive languages with rich type systems.
- Turbo Pascal was compiling at -O1, at best. For example, did it ever inline function calls?
- it's harder to generate halfway decent code for modern CPUs with deep pipelines, caches, and branch predictors than it was for the CPUs of the time.
Shouldn't be the case for an O0 build.
In my computer science class (which used Turbo C++), people would try to get there early in order to get one of the two 486 machines, as the compilation times were a huge headache (and this was without STL, which was new at the time).
But I somewhat agree for an O0 the current times are not satisfactory, at all.
A Personal History of Compilation Speed (2 parts): https://prog21.dadgum.com/45.html
"Full rebuilds were about as fast as saying the name of each file in the project aloud. And zero link time. Again, this was on an 8MHz 8088."
Things That Turbo Pascal is Smaller Than: https://prog21.dadgum.com/116.html
Old versions of Turbo Pascal running in FreeDOS on the bare metal of a 21st century PC is how fast and responsive I wish all software could be, but never is. Press a key and before you have time to release it the operation you started has already completed.
That's with optimizations turned on, including automatic inlining, as well as a lot of generics and such jazz.
The C# compiler is brutally slow and the language idioms encourage enormous amounts of boilerplate garbage, which slows builds even further.
Let's apply the same rules then.
People tend to forget that LLVM was pretty much that for the C/C++ world. Clang was worlds ahead of GCC when first released (both in speed and in quality of error messages), and Clang was explicitly built from the ground up to take advantage of LLVM.
One example from the 1980s (there are others to pick from):
https://en.wikipedia.org/wiki/Amsterdam_Compiler_Kit
So naturally it was a way to quickly reach the stage where a compiler is available for a brand new language, without having to write all compiler stages by hand.
A kind of middleware for writing compilers, if you wish.
MLIR is part of the LLVM tooling; it is the evolution of LLVM IR.
Is it? I think Rust is a great showcase for why it isn't. Of course it depends somewhat on your compiler implementation approach, but actual codegen-to-LLVM tends to only be a tiny part of the compiler, and it is not particularly hard to replace it with codegen-to-something-else if you so desire. Which is why there is now codegen_cranelift, codegen_gcc, etc.
The main "vendor lock-in" LLVM has is if you are exposing the tens of thousands of vendor SIMD intrinsics, but I think that's inherent to the problem space.
Of course, whether you're going to find another codegen backend (or are willing to write one yourself) that provides similar capabilities to LLVM is another question...
> You bootstrap extra fast, you get all sorts of optimization passes and platforms for free, but you lose out on the ability to tune the final optimization passes and performance of the linking stages.
You can tune the pass pipeline when using LLVM. If your language is C/C++ "like", the default pipeline is good enough that many such users of LLVM don't bother, but languages that differ more substantially will usually use fully custom pipelines.
> I think we'll see cranelift take off in Rust quite soon, though I also think it wouldn't be the juggernaut of a language if they hadn't stuck with LLVM those early years.
I'd expect that most (compiled) languages do well to start with an LLVM backend. Having a tightly integrated custom backend can certainly be worthwhile (and Go is a great success story in that space), but it's usually not the defining feature of the language, and there is a great opportunity cost to implementing one.
Nothing about LLVM is a trap for C++ as that is what it was designed for.
How much do you think the BSDs get back from Apple and Sony?
For me, beyond the initial adoption hump, programming languages should bootstrap themselves; if nothing else, it reduces the urban myth that C or C++ always have to be part of the equation.
Not at all, in most cases it is a convenience, writing usable compilers takes time, and it is an easy way to get everything else going, especially when it comes to porting across multiple platforms.
However, that doesn't make them magical tools without which it is impossible to write compilers.
That is one area where I am fully on board with Go team's decisions.
It is not even super optimized (single thread, no fancy tricks) but it is so far unbeaten by a large margin. Of course I use Clang for releases, but the codegen of tcc is not even awful.
Go’s main objectives were fast builds and a simple language.
Typescript is tacked onto another language that doesn’t really care about TS with three decades of warts, cruft and hacks thrown together.
On the one hand, the Go type system is a joke compared to TypeScript's, so the TypeScript compiler has a much harder type-checking job. On the other hand, once type checking is done, TypeScript just needs to strip the types and it's done, while Go needs to optimize and generate assembly.
vlang is really fast, recompiling itself entirely within a couple of seconds.
And Go's compiler is pretty fast for what it does too.
No one platform has a unique monopoly on build efficiency.
And there are also build caching tools and techniques that obviate the need for recompilation altogether.
Does V still just output C and use TCC under the hood?
As a warning, you need to be sure to set --lineDir=off in your nim.cfg or config.nims (to side-step the infinite loop mentioned in https://github.com/nim-lang/Nim/pull/23488). You may also need --tlsEmulation=on if you use threads, and you may want --passL=-lm.
tcc itself is quite fast. Faster for me than a Perl 5 interpreter start-up (with both on trivial files). E.g., on an i7-1370P:
{ EDIT: /n -> /dev/null, and true.c is just "int main(int ac, char **av){return 0;}". }
https://github.com/vlang/v/blob/master/Makefile#L18
In other words, one needs to have absolute trust in such tools to be able to rely on them.
https://modules.vlang.io/orm.html
I never tried vlang but I would say this is pretty niche, while C is a standard.
AFAIK tcc is unbeaten, and if we want to be serious about compilation speed comparison, I’d say tcc is a good baseline.
I only wish it supported C23...
However there are several C++23 goodies on latest VC++, finally.
Also, let's not forget that Apple and Google are no longer that invested in clang, but rather in LLVM.
It is up to others to bring clang up to date regarding ISO.
https://en.cppreference.com/w/c/compiler_support/23.html
(not sure how much Apple/Google even cared about the C frontend before, though; at least keeping the C frontend and stdlib up to date doesn't require nearly as much effort as C++).
Most of the work going into LLVM ecosystem is directly into LLVM tooling itself, clang was started by Apple, and Google picked up on it.
Nowadays they aren't as interested, given Swift, C++ on Apple platforms is mostly for MSL (C++14 baseline) and driver frameworks (also a C++ subset), Google went their own way after the ABI drama, and they care about what fits into their C++ style guide.
I know Intel is one of the companies that picked up some of the work, yet other compiler vendors that replaced their proprietary forks with clang don't seem that eager to contribute upstream, other than LLVM backends for their platforms.
Such are the wonders of the Apache 2.0 license.
[1]: https://andrewkelley.me/post/zig-cc-powerful-drop-in-replace...
https://github.com/aherrmann/rules_zig
Real-world projects like ZML use it:
https://github.com/zml/zml
E.g., here it is kqueue-aware on FreeBSD: https://github.com/mitchellh/libxev/blob/34fa50878aec6e5fa8f...
Might not be that different to add OpenBSD. Someone would begin here: https://github.com/mitchellh/libxev/blob/main/src/backend/kq... It's about 1/3 tests and 2/3 mostly-designed-to-be-portable code. Some existing gaps for FreeBSD, but fixing those (and adding OpenBSD to some switch/enums) should get you most of the way there.
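The shape of that porting work is roughly the following comptime dispatch (illustrative names only, not libxev's actual identifiers):

```zig
// Sketch of the portability pattern, not libxev's real API: event-loop code
// in Zig typically selects a backend per OS at comptime, so adding OpenBSD is
// largely a matter of extending the kqueue arm plus the backend-specific bits.
const builtin = @import("builtin");

const Backend = enum { io_uring, epoll, kqueue };

const default_backend: Backend = switch (builtin.os.tag) {
    .linux => .io_uring, // or .epoll as a fallback
    .macos, .ios, .freebsd, .netbsd, .openbsd => .kqueue,
    else => @compileError("no event backend for this OS"),
};

test "this OS has a backend" {
    _ = default_backend;
}
```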
If you care about compilation speed b/c it's slowing down development, wouldn't it make sense to work on an interpreter? Maybe I'm naive, but it seems the simpler option.
Compiling for executable-speed seems inherently orthogonal to compilation time
With an interpreter you have the potential of lots of extra development tools as you can instrument the code easily and you control the runtime.
Sure, in some corner cases people need to debug only their full-optimization RELEASE binary, and for them working on an interpreter, or even a DEBUG build, just doesn't make sense. But that's a tiny minority. Even there, you're usually optimizing a hot loop that's going to compile instantly anyway.
That's true at the limit. As is often the case, there's a vast space in the middle, between the extremes of ultimate executable speed and ultimate compiler speed, where you can make optimizations that don't trade off against each other: where you can make the compiler faster while producing the exact same bytecode.
Even with C++ and heavy stdlib usage, it's possible to have debug builds that are only around 3..5 times slower than release builds. And you need that wiggle room anyway to account for the lower-end CPUs your game should be able to run on.
Debug builds are used sometimes, just not most of the time.
Naturally you can abstract them the way middleware does; however, that is usually where tracking down performance bugs leads.
I've never done it, but I just find it hard to believe the slowdown would be that large. Most of the computation is on the GPU, and you can set your build up such that you link to libraries built at different optimization levels, and they're likely the ones doing most of the heavy lifting. You're not rebuilding all of the underlying libs b/c you're not typically debugging them.
EDIT:
If you're targeting a console, why would you not debug using higher-end hardware? If anything, that's an argument in favor of running on an interpreter with a very high-end computer for the majority of development.
That said, there are definitely still bugs in their self-hosted compiler. For example, for SQLite I have to use LLVM - https://github.com/vrischmann/zig-sqlite/issues/195 - which kinda sucks.
49 more comments available on Hacker News