What to Do with C++ Modules?
nibblestew.blogspot.com · posted 4 months ago
Key topics: C++, Modules, Compilation, Software Development
The article discusses the challenges and potential solutions for implementing C++ modules, sparking a debate among commenters about their usefulness and feasibility.
Snapshot generated from the HN discussion
Discussion activity: very active. Story posted Aug 31, 2025 at 3:22 PM EDT; first comment 2 hours later (5:40 PM EDT); peak of 84 comments in the first 12 hours; latest activity Sep 8, 2025 at 7:03 PM EDT. 160 comments loaded.
This is exactly WHY we don't see C++ developers rushing to Rust and throwing everything away for it. Rust is trying to solve problems that already don't exist 99.9999% of the time in modern C++ code style and standards.
Also, some day C++ compilers or tooling will get their own borrow checker so we can completely forget about Rust - it will be done just for fun, just to stop arguing with Rust fans :)
No amount of fallible human vigilance will stop you from forgetting the existence of a C++ quirk in the code you're rushing out before heading out for the night. Human oversight does not scale.
Rust solves 1 category of problems in a way that is not without its costs and other consequences. That is it. There are projects where this is very important, there are other projects where its virtually useless and the consequences just get in the way. It is not magic. It doesn't make anything actually 'safe'.
That being said, Rust is really about lifetimes. That's the big-ticket selling point. My point above was that 1) it isn't a silver bullet and 2) it can be a real hindrance in many applications.
I bet there's easily tens of thousands of times more C++ code than Rust code out there.
The number of people I met at Rust conferences who were rewriting at least parts of rather big C++ codebases wasn't small either.
However, there is still a big amount of code that is purely C++. Many of the older code bases still use C++03-style code too. Or they were written in the golden era of OOP design patterns, and adapting them to functional / modern code requires huge refactors. Anything with Qt will not benefit from smart pointers. Even with Qt 6.
Rust cannot solve these problems since the challenges are not purely technical but social too.
Rust just doesn't have close to the same type of adoption/support yet, especially when considering various embedded platforms.
C++ as C with classes is a pretty good language!
And for the safe parts, the posts that I've read from people who have spent a non-trivial amount of effort with the language do not paint a clear picture of whether there was really a benefit to the language overall.
So to say that "the world has moved on" in light of all of this is pure hubris.
- GCC switched from C to C++
- CUDA switched from C to C++
But I can understand the decision, and at that time, C++ frontend features and libs were a little bit less horrible.
The C++ frontend of MSVC handles (the common) compiler-specific language extensions differently than the other compilers. Besides, its pre-processor behaves differently too. It is now good that there is a clang frontend for MSVC.
Please explain in detail how alternatives would have worked better for GCC and CUDA. Also, if you could share some historical context about how those alternatives could realistically have been implemented at the time, that would be helpful too.
I love to hear all the "would've" and "should've" scenarios.
C++ code bases are really long-lived, and a lot of other software builds upon them. Hence we cannot drop it.
Starting with C++17, I think the committee has been doing the language a disservice, piling on more and more unintended complexity by rushing "improvements" like modules.
I don't write C++ anymore since my team switched to Rust and C only (for really old stuff and ABI). I am really scared of having to return, though. Not because I am spoiled by Rust (I am, though), but because of catching up with all the silly things they added on top and how they interact with earlier features like iterators. C++ is a perfect language for unnecessary pitfalls. Newer standards just exploded this complexity.
https://en.cppreference.com/w/cpp/locale/codecvt_utf8.html
https://en.cppreference.com/w/cpp/algorithm/random_shuffle.h...
Mozilla and Dropbox did it. LLMs are good at translating between languages, and writing unit tests to make sure things still work the same.
12%. Assuming the progress is linear (not logarithmic, as in most cases), we just need 60 more years to migrate that C/C++ code.
https://www.phoronix.com/news/Google-Linux-Binder-In-Rust
https://arxiv.org/abs/2503.23791v1
https://www.darpa.mil/research/programs/translating-all-c-to...
https://link.springer.com/content/pdf/10.1007/s10664-024-105...
> Everybody would be doing it by now
Models and agents have progressed significantly in the last few months. Migrating projects to rust can definitely be a thing in the coming years if there is sufficient motivation. But oftentimes c/c++ devs have aversions to the rust language itself, so the biggest challenge can be an issue of motivation in general.
There are things you can add, but the rot still permeates the foundations, and much of the newness goes partially unused because they're just not at home in C++. Use of `std::optional` and `std::variant` is, as far as I know, still limited, even in newer C++ code, because the ergonomics just aren't there.
variant isn't, yet. We'll eventually get some kind of structural pattern matching that will make variant or its successor more idiomatic.
C++ does have quite a bit of rot, you're right. But that's the price of building technology people actually use.
Carbon doesn't seem to have any fundamentally new ideas, we'll see how well it fares in the wild.
That's the fantastic thing about C++: you can already write an easy-to-use match, but they just chose not to include that in the stdlib and instead want you to write it yourself.
Example:
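(A minimal sketch of the usual overloaded-lambdas idiom; the `overloaded` helper is the well-known one that the stdlib doesn't ship.)

    #include <iostream>
    #include <string>
    #include <variant>

    // the classic helper everyone writes themselves:
    template <class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
    template <class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

    int main() {
        std::variant<int, std::string> v = 42;
        std::visit(overloaded{
            [](int i) { std::cout << "int: " << i << '\n'; },
            [](const std::string& s) { std::cout << "string: " << s << '\n'; },
        }, v);
    }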
Also, optional and expected are ergonomic nightmares since there is no "try!" macro like Rust has. In general, C++ lacks the infrastructure needed to make these types nice to use. It also clashes with other concepts like RAII, where you kinda have to use exceptions when it comes to ctors that may fail.

For example, suppose you had to match patterns to distinguish between these three possibilities when looking at a tree: (val), (val (tree left) (tree right)), (val (tree subtree)). Here, the possibilities are not really separate types, but rather different ways of constructing a type. This sort of pattern shows up in general-purpose programming (even meat-and-potatoes imperative programming) pretty often.
There was a proposal for this championed by Stroustrup over ten years ago, but it went nowhere IIRC. https://www.stroustrup.com/pattern-matching-November-2014.pd...
I have some hope that the upcoming compile time reflection will make it easier to implement said syntactic sugar.
So I'm afraid no by definition C++ can't adopt Rust's ideas because Rust's ideas were originally impossible C++ ideas.
I agree with your point except for the 'never' qualifier. It was certainly true when Rust was born.
C++ has proven the 'never' part wrong multiple times. I think, by 2030, the only thing that C++ would lack that Rust has right now is the unified toolchain/packaging ecosystem because people are not going to settle that debate.
Everything else is well on its way to being implemented, in one of three forms - core language features (eg: concepts), language features that primarily enable writing more powerful libraries so that you do not have to come up with language features for everything (eg: reflection), and finally tooling support from the compiler as a test bed of what the language could guarantee (lifetime checks and annotations in clang).
Of course Rust is innovating pretty well too, I am very interested in seeing what async/coroutines are going to look like in a few years.
You are free to design your library in a way that your users only see one of these.
C++ didn't get that. The proposal paper at the time says it's impossible (for C++). But what they did propose was the feature you've seen in C++ today, which they call "move", but it has slightly odd (though convenient to implement, Worse Is Better after all) behaviour.
Now you can make this C++ "move" behaviour out of destructive move, that behaviour is roughly what Rust calls std::mem::take and it's sometimes useful, which is why that function is provided. But, often it's not really what you wanted, and if you actually wanted destructive move but only have this C++ imposter then you need to perform the entire take, then throw away the newly created object. You will find lots of C++ code doing exactly that.
So, no, you can't "design your library" to deliver the desirable property in C++. It's just another of the dozens of nagging pains because of design mistakes C++ won't fix.
Yes, I understand that there’s a lot of bad code out there and C++ happily enables that. But that was not my point.
template <typename T> void drop(std::unique_ptr<T> &&) {}
Also, you don’t really need this because of RAII. You can make it simpler, and here’s how you’d do this in C++26.
template <std::movable T> void drop(T &&) {}
It can get even simpler!
void drop(std::movable auto){}
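For what it's worth, the by-value form really does consume its argument; a small usage sketch:

    #include <concepts>
    #include <memory>

    void drop(std::movable auto) {}  // the parameter is destroyed when drop returns

    int main() {
        auto p = std::make_unique<int>(42);
        drop(std::move(p));  // ownership moves into the parameter; the int is
                             // freed inside drop, and p is left null
    }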
Do you see my point about C++ incorporating the good ideas at a glacial pace?
Entire APIs are designed around this: my type that tracks a background task can have a shutdown() method that moves self, so that if you call foo.shutdown(), you can't use foo any more.
This is more than just preventing a value from being used; it also facilitates making a rule that "moves are just memcpy()", and it can actually be enforced:
C++ move semantics require you to write arbitrary code that pillages the moved-from value (like setting the heap pointer of a moved-from value to nullptr) to ensure the old value is safe: rust just says “nope, there’s no constructor, moves are just a memcpy. We will keep it safe by simply not letting code use the old value.”
C++ can never have this unless they offered yet another mutually incompatible form of move semantics (lol, what would the sigil be? &*&?)
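A minimal sketch of the C++ side of that contrast, with a toy owning type (names hypothetical):

    // the move constructor must run arbitrary code to "pillage" the source,
    // because the moved-from object's destructor still executes later:
    struct Buf {
        int* data = nullptr;
        Buf() = default;
        Buf(Buf&& other) noexcept : data(other.data) { other.data = nullptr; }
        ~Buf() { delete[] data; }  // also runs for moved-from objects
    };
    // In Rust there is no such constructor to write: a move is a plain memcpy,
    // and the compiler rejects any later use of the old value.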
I agree.
Probably depends on what you mean by "works out". I don't think GP would agree that delivering a less capable alternative qualifies.
For example, one major feature C++0x concepts was supposed to have but got removed was definition-time checking - i.e., checking that your template only used capabilities promised by the concepts it uses, so if you defined a template with concepts you could be assured that if the definition compiled it'd work with all types that satisfied the concept. That feature did not make it to C++20 concepts and as far as I know there are no plans on the horizon to add that feature.
C++26 concepts has more or less everything you mention, and you can try it out with all the major compilers right now.
You're quite a bit off. Tialaramex covered this well enough.
> C++26 concepts has more or less everything you mention, and you can try it out with all the major compilers right now.
Uh, no. No, it doesn't. Here's an example I wrote up earlier that demonstrates how concepts (still) don't have definition-time checking:
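(The exact snippet is at the godbolt link below; this is a reconstruction along the lines described, reusing the names from the surrounding text.)

    #include <concepts>

    template <typename T>
    concept fooable = requires(T t) { { t.foo() } -> std::same_as<int>; };

    template <fooable T>
    int do_foo_bar(T t) {
        t.bar();  // not promised by fooable, yet no error at definition time...
        return t.foo();
    }

    struct only_foo { int foo() { return 42; } };

    // ...the error only appears once the template is instantiated:
    // int x = do_foo_bar(only_foo{});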
Here's Clang 21.1.0 compiling this in C++26 mode: https://cpp.godbolt.org/z/znPGvcTqs . Note that as-is the snippet compiles fine, but if you uncomment the last line you get an error despite only_foo satisfying fooable.

Contrast this with Rust:
trait Fooable { fn foo(self) -> i32; }
fn do_foo_bar<T: Fooable>(t: T) -> i32 {
    let _ = t.bar(); // error[E0599]: no method named `bar` found for type parameter `T` in the current scope
    t.foo()
}
Notice how do_foo_bar didn't need to be instantiated for the compiler to catch the error. That's what C++ concepts are unable to do, and as far as I know there is nothing on the horizon to change that.
C++ 0x is what people called the proposed new C++ language standard from about 2005 through 2009 or so under the belief that maybe it would ship in 2008 or 2009. Because you're here, now, you know this didn't end up happening and actually the next standard would be C++ 11. For a little while they even jokingly talked about C++ 0A where A is of course hexadecimal for ten, but by the time it was clear it wouldn't even make 2010 that wasn't funny.
So C++ 0x isn't five years ago, it's about 15-20 years ago and in this context it's about the draft revision of C++ in which for some time the Concepts feature existed, but Bjarne insisted that this feature (which remember is roughly Rust's traits) was not implementable in reasonable time, and frankly was not as much needed as people had believed.
This argument swayed enough committee members that Concepts was ripped back out of the draft document, and so C++ 11 does not have Concepts of any sort. Because this particular history is from the relatively recent past you can go read the proposal documents, there might even be Youtube videos about it.
OK, so, now you at least know what these terms mean when other people use them, that can't hurt.
As to your next claim er, no, not even close. Barry Revzin wrote a really nice paper connecting the dots on this, which probably passed into legend specifically for saying hey C++ 0x Concepts are the same thing as Rust traits. C++ proposal paper P2279 is what you're looking for if that interests you. That'll be less confusing for you now because you know what "C++ 0x" even means.
Now, Barry wrote that paper in the C++ 23 cycle, and we're now at / just past the end of the C++ 26 cycle, but I assure you that nothing relevant has changed. You can't magically have model checking in C++ that's not there. You can't provide concept maps, it's not in the language and so on.
But as it stands, I expect that whatever innovations these languages produce will be picked up by C++ in ~5 years.
https://play.rust-lang.org/?version=nightly&mode=debug&editi...
These are all highly non-parallel problems. They don't gain much from being parallel, and because Rust imposes 'restrict' semantics on even single threaded code you end up making it much harder to write code in these domains.
This has been my experience with Rust. Shared mutability is safe on a single thread without 'restrict' on all your pointers, and Rust has limited options to opt into shared mutability (with lots of ergonomic caveats).
Don't get me wrong though, I still think Rust is a great tool. It just has tradeoffs.
The idiomatic Rust equivalent of a C non-restrict pointer is arguably &Cell<T>. The biggest problem with it is library code that takes &mut T when the likes of &Cell<T> might suffice (because potential aliasing does not affect the semantics of what that Rust code is doing), but this is an acknowledged problem and the Rust project will take pull req's that fix it where it occurs.
If you're sure you're never going to need multi-threaded environment, you have an option as well: Replace std::sync with std::rc, Mutex with RefCell in the above toy example and that's about it.
If you want to use some asynchronous runtime, replace std::sync with tokio::sync (or std::rc), slap async/awaits along with a single-threaded runtime and that's about it.
Of course, the code above is just a toy example and business logic is much more complex in the real world, but compare this to what it would take to write the same logic in async C++.
I found Rust's approach massively more ergonomic compared to the C++ approach of passing closures around for asio-like IO contexts, or the coroutine compiler-magic which opens novel new avenues to shoot myself in the foot - well, to the extent I could grasp it.
It's true Rust forces you to pay all this cost ahead of time. It's also true most applications don't require this level of safety, really, so it becomes ridiculous to pay it upfront. And even for some that require such a high level of safety, you can skip a bunch of bolts on a plane door and it will still be a billion-dollar company at the end of the day, so...
You’re forced into function-call or jump-table dispatch, which tends to be slower.
On the other hand, there's the recently-added-to-nightly `become` for guaranteed tail calls, which might work better than computed gotos if CPython is a good example [0]
> When using both attributes [[[clang::musttail]] and preserve_none], Jin's new tail-call-based interpreter inherits the nice performance benefits of the computed-goto-based version, while also making it easier for the compiler to figure out optimal register allocations and other local optimizations.
To be fair I don't think Rust has a preserve_none equivalent yet, but given naked functions are a thing in Rust I'd hope it isn't too bad to support?
[0]: https://lwn.net/Articles/1010905/
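A sketch of what the clang-specific attribute looks like in C++ (the opcode handlers are hypothetical):

    // each handler dispatches to the next via a guaranteed tail call,
    // so the interpreter never grows the stack:
    using Handler = int (*)(const unsigned char* ip, int acc);
    extern Handler dispatch_table[256];

    int op_halt(const unsigned char* ip, int acc) { return acc; }

    int op_inc(const unsigned char* ip, int acc) {
        [[clang::musttail]] return dispatch_table[*ip](ip + 1, acc + 1);
    }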
The C++ solution would be to start the threads and use an MPSC queue (which, ironically, Rust also has) in order to update the UI.
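A minimal sketch, with a mutex-guarded std::queue standing in for the MPSC channel (all names hypothetical):

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>
    #include <vector>

    std::mutex m;
    std::condition_variable cv;
    std::queue<std::string> updates;  // many producer threads, one UI consumer

    void worker(int id) {
        std::lock_guard lock(m);
        updates.push("result from worker " + std::to_string(id));
        cv.notify_one();
    }

    int main() {
        std::vector<std::jthread> pool;
        for (int i = 0; i < 4; ++i) pool.emplace_back(worker, i);
        for (int received = 0; received < 4; ++received) {
            std::unique_lock lock(m);
            cv.wait(lock, [] { return !updates.empty(); });
            updates.pop();  // a real UI thread would render the update here
        }
    }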
Rust will eventually stumble upon ergonomics and allow portions of code to be specified as single threaded or embarrassingly parallel, but unfortunately the community evolved horse blinders early on and isn't letting them go any time soon.
Internet hype meets actual industry reality :-).
I'm thinking the same about your comment :D
And give Rust one or two more decades and it will be the same messy kitchen sink language as C++. If anything, Rust is speedrunning C++ history (with the notable exception of fixing static memory safety of course).
If you think that "the world seems to have moved on to Rust", I would recommend to look at the job listings. For example, devjobs.de: Rust: 61, C++: 1546. That's 1:25. Maybe in other countries there are more Rust jobs?
1. all the .h files were compiled, and emitted as a binary that could be rolled in all at once
2. each .h file created its own precompiled header. Sounds like modules, right?
Anyhow, I learned a lot, mostly that without semantic improvements to C++, while it made compilation much faster, it was too sensitive to breakage.
This experience was rolled into the design of D modules, which work like a champ. They were everything I wanted modules to be. In particular,
The semantic meaning of the module is completely independent of wherever it is imported from.
Anyhow, C++ is welcome to adopt the D design of modules. C++ would get modules that have 25 years of use, and are very satisfying.
Yes, I do understand that the C preprocessor macros are a problem. My recommendation is, find language solutions to replace the preprocessor. C++ is most of the way there, just finish the job and relegate the preprocessor to the dustbin.
It seems particularly tricky to define a template in a module and then instantiate it or specialize it somewhere else.
D also has an `alias` feature, where you can do things like:
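    alias Q = abc.T;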
where from then on, `abc.T` can be referred to simply as `Q`. This also eliminates a large chunk of the purpose behind the preprocessor.

C++ has adapted the `using` keyword now to seem fairly similar to alias, but it can't completely subsume macros, unfortunately.
It replaces the preprocessor with hygiene. Once you get used to it, it has all kinds of nice uses.

This seems incredibly wasteful, but of course it's still marginally better than just #including code, which is the alternative.
For normal functions or classes, we have forward declarations. Something similar needs to exist for templates.
D does not require names in global scope to be declared lexically before they are used. C++ only does this for class/struct scopes. For example:
compiles and runs (and runs, and runs, and runs!!!).

But how do you handle a template substitution failure? In C++:
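(A sketch of the kind of case meant, with hypothetical names:)

    void bar(int, int) {}

    template <typename T>
    void foo(T a, T b) {
        bar(a, b);  // resolved only when foo is instantiated
    }

    int main() {
        foo(1, 2);         // fine: bar(int, int) is viable
        // foo("a", "b");  // fails: no bar taking two const char*
    }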
The compiler has no idea whether bar(1, 2); will compile unless it parses the full definition. I don't understand how the compiler can avoid parsing the full definition.

The expensive bit in my experience isn't parsing the declaration, it's parsing the definition - typically redundantly, over thousands of source files, for identical types.
Herb Sutter, Andrei Alexandrescu and myself once submitted an official proposal for "static if" for C++, based on the huge success it has had in D. We received a vehement rejection. It demotivated me from submitting further proposals. ("static if" replaces the C preprocessor #if/#ifdef/#ifndef constructions.)
C++ has gone on to adopt many features of D, but usually with modifications that make them less useful.
- D had this feature long before C++ did.
- It isn't the same thing as "static if". Without "static if", conditionally compiling variables into classes is much more elaborate, basically requiring subclassing to do, which is not really semantically how subclassing should be used (the result is also way more confusing and oblique than the equivalent directly expressed construct you'd have using "static if").
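A sketch of the elaboration being described, assuming the comparison is with C++17's if constexpr (names hypothetical):

    // if constexpr handles conditional statements inside function bodies...
    template <bool kVerbose>
    int parse() {
        if constexpr (kVerbose) { /* extra diagnostics */ }
        return 0;
    }

    // ...but conditionally compiling a member variable into a class still
    // takes a specialized base class, which D expresses directly with "static if":
    template <bool kHasCache> struct CacheBase {};
    template <> struct CacheBase<true> { int cache[256]; };

    template <bool kHasCache>
    struct Table : CacheBase<kHasCache> {};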
Yup, I think this is the core of the problem with C++. The standards committee has drawn a bad line that makes encoding the modules basically impossible. Other languages with good module systems and fast incremental builds don't allow for preprocessor-style craziness without some pretty strict boundaries. Even languages that have gotten it somewhat wrong (such as Rust with its proc macros) have bounded where and how that sort of metaprogramming can take place.
Even if the preprocessor isn't dustbinned, it should be excluded from the module system. Metaprogramming should be a feature of the language with clear interfaces and interactions. For example, in Java the annotation processor is ultimately what triggers code generation. No annotation, no metaprogramming. It's not perfect, but it's a lot better than C/C++'s free-for-all macro system.
Or the other option is the Go route. Don't make the compiler generate code; instead have the build system be responsible for code generation (calling code generators). That would be miles better, as it'd allow devs to opt in to that slowdown when they need it.
The truth is 98% of the preprocessor is fine - it's ifdefs for platforms, and defines of constants and inline functions that are defined exactly once and never redefined. Because modules support none of this, we can't modularize Windows.h. Or zlib. Or gtest.
The committee should have remembered that one of the selling points of C++ is C compatibility, and figured out a way to get modules to work with the 98% of the preprocessor that's fine, forbidding only the nasty 2%.
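The benign 98% looks something like this (a sketch; the names are illustrative):

    // platform ifdefs, constants, and inline helpers,
    // each defined exactly once and never redefined:
    #ifdef _WIN32
    #  define PATH_SEP '\\'
    #else
    #  define PATH_SEP '/'
    #endif

    #define MYLIB_VERSION "1.2.3"

    static inline int clamp255(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }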
Importantly, no they did not.
They put a boundary on the preprocessor containing preprocessing within the module definition and not the code importing the module.
And that's where a lot of the loss and compatibility problems have come into play. That's why you can't, for example, share a module between builds. Because the ifdefs that built the module in the first place may have changed from one build to the next.
It was good that the committee bound the preprocessor, but they simply didn't go far enough.
C++ is making strides toward adding the language features needed to dustbin the preprocessor. A lot of the work on consteval can replace much of what the preprocessor is doing.
> Because modules supports none of this, that means we can't modulize Windows.h. Or zlib. Or gtest.
And see, that's the issue. Modules do actually support all this. We can in fact modularize windows.h, zlib, or gtest. The issue is the `windows.module` still has to be rebuilt with every application that imports it because those `ifdefs` could evaluate differently depending on what env variables the build system sets before building. The module can't be just a simple AST built once. Maybe once for a project, but that's about it. And that's the rub. Change anything that causes the module to recompile and you spend exactly the same time you'd spend on precompiled headers.
Yes, this was tedious, but we do it for each of our supported platforms.
But we can't do it for various C libraries. This created a problem for us, as it is indeed tedious for users. We created a repository where people shared their conversions, but it was still inadequate.
The solution was to build a C compiler into the D compiler. Now, you can simply "import" a C .h file. It works surprisingly well. Sure, some things don't work, as C programmers cannot resist putting some really crazy stuff in .h files. The solution to that problem turned out to be our discovery that the D compiler was able to create D modules from C code. Then, the user could tweak the nutburger bits by hand.
This is the same solution that Apple chose for Swift <-> Objective C interop. I wonder if someone at Apple was inspired by this decision in D!
See the book "Large Scale C++ Software Design" by John Lakos for more detail in this direction.
The next major advance to be completely ignored by standards committees will be the 100% memory safe C/C++ compiler, which is also implemented and works amazingly well: https://github.com/pizlonator/fil-c
EDIT: Oh, found the tradeoff:
hollerith on Feb 21, 2024:
>Fil-C is currently about 200x slower than legacy C according to my tests
But also consider that it's one guy's side project! If it were standardized and widely adopted, I'm certain the performance penalty could be reduced with more effort on the implementation. And I'm also sure that for new C/C++ code that's aware of the performance characteristics of Fil-C, we could come up with ways to mitigate performance issues.
For the high-end, performance-engineered cases that C++ is famously used for, the performance loss may be understated, since it actively interferes with standard performance-engineering techniques.
It may have a role in boring utilities and such, but those are legacy roles for C++. That might be a good use case! Still, most new C++ code is used in applications where something like Fil-C would be an unacceptable tradeoff.
Of course it was completely ignored. Did you expect the standards committee to enforce caching in compilers? That's just not its job.
> The next major advance to be completely ignored by standards committees will be the 100% memory safe C/C++ compiler, which is also implemented and works amazingly well: https://github.com/pizlonator/fil-c
Again - do you expect the standards committee to enforce usage of this compiler, or what? The standards committee doesn't "standardize" compilers...
Of course, these tools are of interest to the broader C++ community. Thanks for sharing.
>Both zapcc and Fil-C could benefit from the involvement of the standards committee.
What exactly does the standards committee do for these software projects without being involved in their development? I think there is nothing to do here that is within the scope of the language itself. Of course, if the creators of those projects come up with a cool new idea, they can submit to the standards committee for comment. They can also comment on new standards that make the tools not work anymore. But that is help going from the project to the committee, not the other way around.
That's great to hear. It sounds like you have everything set to put together a proposal. Do you have any timeline in mind to present something?
> I'm saying that the committees should acknowledge their existence, (...)
Oh does this mean any of the tools you're praising was already proposed to be included in the standard? Do you mind linking to the draft proposal? It's a mailing list, and all it takes is a single email, so it should be easy to link.
Where's the link?
https://isocpp.org/std/submit-a-proposal
I think there is a hefty deal of ignorance in your comment. A standardization process is not pull-based, it's push-based.
If you feel you have a nice idea with technical legs to stand on, you write it down, put together a proposal, and then get in touch with committee members to present it.
The process is pretty open.
> Certainly more useful than anything else the standards committees have done in the past 10 years.
Do you understand the "standards committee" is comprised of people like you and me, except they got off their rear-end and actually contribute to it? You make it sound like they are a robe-wearing secret society that is secluded from the world.
Seriously, spend a few minutes getting acquainted with the process, what it takes to become a member, and what you need to do to propose something.
There are also quite a few compiler cache systems around.
For example, anyone can onboard tools like ccache by installing it and setting an environment variable.
As I'm sure you must be aware, these compiler tools do not constitute a language innovation. I'd also imagine that both are not production-ready in any sense, and would be very difficult to debug if they were not working correctly.
ccache doesn't add anything over make for a single project build.
C++ build times are actually dominated by redundant parsing of headers included in multiple .cpp files. And also redundant template instantiations in different files. This redundancy still exists when using ccache.
By caching the individual language constructs, you eliminate the redundancy entirely.
Tools like ccache have been around for over two decades, and all you need to do to onboard them is to install the executable and set an environment flag.
What value do you think something like zapcc brings that tools like ccache haven't been providing already?
https://en.wikipedia.org/wiki/Ccache
It avoids instantiating the same templates over and over in every translation unit, instead caching the first instantiation of each. ccache doesn't do this: it only caches complete object files, but does not avoid repeated instantiation costs in each object file.
I'm afraid this feature is at best a very minor improvement that hardly justifies migrating a whole compiler. To be blunt, it's not even addressing a problem that exists or makes sense to even think about. I will explain to you why.
I've been using ccache for years and I never had any problem getting ccache to support template code. Why? Because the concept of templates is orthogonal to compiler caches. It doesn't matter at all, if you understand how compiler caches work. Think about it. You have the source file you are compiling, you have the set of build flags passed to the compiler, and you have the resulting binary.
That's the whole input, and output.
It's irrelevant if the code features templates or not.
Have you checked if the likes of zapcc is fixing a problem that actually doesn't exist?
Someone else in this thread already pasted benchmarks. The observation was, and I quote:
> Zapcc focuses on super fast compile times albeit the speed of the generated code tends to be comparable with Clang itself, at least based upon last figures.
Here are some performance numbers: https://www.phoronix.com/news/Zapcc-Quick-Benchmarks
> To be blunt, it's not even addressing a problem that exists or makes sense to even think about. I will explain to you why.
Do you talk down to people like this IRL as well?
> I've been using ccache for years and I never had any problem getting ccache to support template code.
What I said is that zapcc has a different approach that offers even more performance benefits, answering the question of what zapcc offers that ccache doesn't offer.
> if you understand how compiler caches work. Think about it.
There's no need to use "condescending asshole" as your primary mode of communication, especially when you are wrong, such as in this case.
If you look at the benchmarks you just quoted, you see cache-based compilations outperforming zapcc in quite a few tests. I wonder why you missed that.
The runs where ccache fares no better than builds that don't employ caching at all are telling. Either ccache was somehow not used, or there was a critical configuration issue that prevented ccache from caching anything. This typically happens when projects employ other optimization strategies that mess with ccache, such as pipelined builds or extensive use of precompiled headers.
The good news is that in both cases these issues are fixed either by actually configuring ccache or by disabling the conflicting optimization strategies. To tell which, it would be necessary to troubleshoot the build and take a look at the ccache logs.
> Do you talk down to people like this IRL as well?
Your need to resort to personal attacks is not cool. What do you hope to achieve, other than not sounding like an adult?
And do you believe that pointing out critical design flaws is "talking down to people"?
My point is very clear: your baseline compiler cache system, something that has existed for two decades, already supports caching template code. How? Because it was never an issue to begin with. I explained why: a compiler cache fundamentally caches the resulting binary given a cache key, which is comprised of data such as the source file provided as input (basically the state of the translation unit) and the set of compiler flags used to compile it. Which features appear in the translation unit is immaterial. It doesn't matter.
Do you understand why caching template code is a problem that effectively never existed?
> What I said is that zapcc has a different approach that offers even more performance benefits, answering the question of what zapcc offers that ccache doesn't offer.
It's perfectly fine if you personally have a desire to explore whatever idea springs to mind. There is no harm in that.
If you are presenting said pet project as any kind of solution, the very least required of you is to review the problem space, and also perform an honest review of the solution space. You might very well discover that your problem effectively does not exist, because some of your key assumptions do not hold.
I repeat: with pretty basic compiler caches such as ccache, which has existed for over two decades, the only thing you need to do to cache template code is install ccache and set a flag in your build system. Tools such as cmake already support it out of the box, so onboarding work is negligible. Benchmarks already show builds with ccache outperforming builds with the likes of zapcc. What does this tell you?
You used many words just to say "ccache is a build cache".
> it still needs to rebuild the whole translation unit from scratch if a single line in some header changes.
You are using many words to say "ccache rebuilds a translation unit when it changes".
What point were you trying to make?
Frankly your attitude in this whole thread has been very condescending. Being condescending and also not understanding what you're talking about is a really bad combination. Reconsider whether your commenting style is consistent with the HN guidelines, please.
101 more comments available on Hacker News