Brent's Encapsulated C Programming Rules (2020)
Key topics
The eternal debate around C programming practices has been reignited by Brent's Encapsulated C Programming Rules, which champions separating interface from implementation through header files. Commenters largely agree that this practice is crucial for maintaining large codebases, with some nostalgic for Ada's superior handling of specs. However, others argue that modern languages have found better ways to achieve this separation without the need for manual header maintenance, sparking a lively discussion on the trade-offs. As the conversation unfolds, it becomes clear that the quest for a C successor remains an open question, with no clear contender in sight to tackle the complexities of emerging chips, operating systems, and programs.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
- First comment: 43m after posting
- Peak period: 10 comments in 2-4h
- Avg / period: 4.9
Based on 49 loaded comments
Key moments
- Story posted: Dec 9, 2025 at 6:16 AM EST (about 1 month ago)
- First comment: Dec 9, 2025 at 6:59 AM EST (43m after posting)
- Peak activity: 10 comments in 2-4h (hottest window of the conversation)
- Latest activity: Dec 10, 2025 at 9:42 AM EST (about 1 month ago)
I hate this. If my intellisense isn't providing sufficient info (generated from doc comments), then I need to go look at the implementation. This just adds burden.
Headers are unequivocally a bad design choice, and this is why nearly every language designed after the nineties got rid of them.
Still in the honeymoon phase, granted, but I'm actually terrified that these new defense tech startups collectively have no clue about Ada.
Your startup wants to ship a SaaS MVP ASAP and iterate? Sure, grab Python or JS and whatever shitstorm of libraries you want to wrestle with.
Want to play God and write code that kills?
Total category error.
The fact that I'm sure there are at least a few of these defense tech startups yolo'ing our future away with vibe-coded commits when writing code that... let's not mince our words... takes human life... probably says something about how far we've fallen from "engineering".
Rust did the exact opposite and spread the interface language across keywords in source files, making it simultaneously more complicated, more labor-intensive, and ultimately less powerful, since it ends up lacking a cognate to higher-order modules. This effectively lobotomizes the modularity paradigm as well.
For Haskell, the type system at least compensates for the bad module system, compile times aside. For Rust, it's hard for me to say anything positive about its approach to interfaces and modules. I'd rather just have .mlis instead of either.
Languages that are technically capable of replacing C in all those applications include Ada (and in certain applications, SPARK Ada), D, Zig, Rust, and the Pascal/Modula-2/Oberon family. None of those languages uses a purely textual preprocessor like C's. They all fix many of C's other design weaknesses that were excusable in the 1970s but really aren't today.
----
This:
is better written as:
The `const` is perhaps overdoing it, but it makes it clear that "for the rest of this scope, the value of this pointer won't change", which I think is good for readability. The main point is "locking" the size to the size of the type being pointed at, rather than "freely" using `sizeof` on the type name. If the type name later changes, or a `Vec4` is added and code is copy-pasted, this lessens the risk of allocating the wrong amount and is less complicated.
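(The original before/after snippets aren't reproduced in this snapshot; the following is a generic sketch of the idiom being endorsed. The `Vec3` type and function name are assumed for illustration, not taken from the article.)

```
#include <stdlib.h>

typedef struct { float x, y, z; } Vec3;

Vec3 *make_vec3(void) {
    /* The allocation size is tied to the pointed-to object, not to the
       type name, so it stays correct if the type is later renamed or a
       similar Vec4 function is created by copy-paste. */
    Vec3 *const v = malloc(sizeof *v);
    return v;   /* caller checks for NULL and later calls free() */
}
```
----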
This is maybe language-lawyering, but you can't write a function named `strclone()` unless you are a C standard library implementor. All functions whose names begin with "str" followed by a lower-case letter are reserved [1].
----
This `for` loop header (from the "Use utf8 strings" section):
is just atrocious. If you're not going to use `i`, you don't need a `for` loop to introduce it. Either delete it (`for (; ...)` is valid) or use a `while` instead.
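(The offending loop header isn't reproduced in this snapshot; the following is a generic sketch of the suggested alternative. `utf8_len()` is a hypothetical helper returning the byte length of the UTF-8 sequence starting at `s`.)

```
#include <stddef.h>

size_t utf8_len(const char *s);  /* hypothetical helper */

void visit_codepoints(const char *s) {
    /* No dummy index variable needed: either `for (; *s != '\0'; ...)`
       or, more naturally, a while loop. */
    while (*s != '\0') {
        s += utf8_len(s);
    }
}
```
----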
In the "Zero Your Structs" section, it sounds as if the author recommends setting the bits of structures to all zero in order to make sure any pointer members are `NULL`. This is dangerous, since C does not guarantee that `NULL` is equivalent to all-bits-zero. I'm sure it's moot on modern platforms where implementations have chosen to represent `NULL` as all-bits-zero, but that should at least be made clear.
[1]: https://www.gnu.org/software/libc/manual/html_node/Reserved-...
It's not possible to know C code and think that
and somehow mean the same thing, at least not to me. I don't know whether this counts as "very few use cases".
The Memory Ownership advice is maybe good, but why are you allocating in the copy routine if the caller is responsible for freeing it, anyway? This dependency on the global allocator creates an unnecessarily inflexible program design. I also don't get how the caller is supposed to know how to free the memory. What if the data structure is more complex, such as a binary tree?
It's preferable to have the caller allocate the memory.
^- this is preferable to the variant where it takes the value as the third parameter. Of course, an intrusive variant is probably the best. If you need to allocate for your own needs, then allow the user to pass in an allocator pointer (I guessed on the function pointer syntax):
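(The snippets this comment points at aren't preserved in the snapshot; the following is a hypothetical sketch of the two shapes being described, a caller-supplied destination and a caller-supplied allocator. The type and names are assumed.)

```
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;   /* layout assumed for the sketch */

/* Caller owns the destination; the function never allocates. */
void vec3_copy(Vec3 *dst, const Vec3 *src);

/* If internal allocation is unavoidable, take the allocator as a
   function pointer instead of hard-wiring malloc. */
Vec3 *vec3_clone(const Vec3 *src, void *(*alloc)(size_t n));
```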
If your API instead accepts a size parameter, you can ignore it and still use these approaches, but it also opens up other possibilities that require less complexity and runtime space by relying on the client to provide this information.
> What if the data structure is more complex, such as a binary tree?
I think that's what the author was going with by exposing opaque structs with _new() and _free() methods.
But yeah, his good and bad versions of strclone look more or less the same to me.
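For reference, a bare-bones sketch of the opaque-struct pattern mentioned above (the names are illustrative, not taken from the article):

```
/* tree.h: only an incomplete type and functions are exposed. */
typedef struct Tree Tree;

Tree *tree_new(void);
void  tree_insert(Tree *t, int key);
void  tree_free(Tree *t);   /* releases the whole structure, however complex */

/* tree.c: the layout stays private to the implementation. */
#include <stddef.h>
struct Tree {
    struct TreeNode *root;
    size_t count;
};
```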
I believe that "Casting away the const" is UB [1]
[1]: https://en.cppreference.com/w/c/language/const.html
Only things I disagree with:
- The out-parameter of strclone. How annoying! I don't think this adds information. Just return a pointer, man. (And instead of defending against the possibility that someone is doing some weird string pooling, how about just disallow that - malloc and free are your friends.)
- Avoiding void. As mentioned in another comment, it's useful for polymorphism. You can do quite nice polymorphic code in C and then you end up using void a lot.
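Assuming the comment refers to `void *` pointers, a small example of that style of polymorphism, here using the standard qsort comparator:

```
#include <stdio.h>
#include <stdlib.h>

/* qsort knows nothing about the element type; the comparator
   recovers it from the void pointers. */
static int cmp_int(const void *a, const void *b) {
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);
}

int main(void) {
    int v[] = {3, 1, 2};
    qsort(v, 3, sizeof v[0], cmp_int);
    printf("%d %d %d\n", v[0], v[1], v[2]);   /* 1 2 3 */
    return 0;
}
```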
The solution, in my opinion, is to either document that strclone()'s return should be free()'d, or alternately add a strfree() declaration to the header (which might just be `#define strfree(x) free(x)`).
Adding a `char **out` arg does not, in my opinion, document that the pointer should be free()'d.
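A hypothetical version of that header (renamed here to avoid the reserved `str` prefix mentioned in a sibling comment):

```
#include <stdlib.h>

/* Returns a heap-allocated copy of s; release it with string_free(). */
char *string_clone(const char *s);

/* Today this is just free(), but the name leaves room to change the
   allocation strategy later without touching callers. */
#define string_free(p) free(p)
```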
- Eskil Steenberg’s “How I program C” (https://youtu.be/443UNeGrFoM). Long and definitely a bit controversial in parts, but I find myself agreeing with most of it.
- CoreFoundation’s create rule (https://stackoverflow.com/questions/5718415/corefoundation-o...). I’m definitely biased but I strongly prefer this to OP’s “you declare it you free it” rule.
The reason is floating point precision errors, sure, but that check is not going to solve the problems.
Took a difference of two numbers with large exponents, where the result should be algebraically zero but isn't quite numerically? Then this check fails to catch it. Took another difference of two numbers with very small exponents, where the result is not actually algebraically zero? This check says it's zero.
[0] https://en.wikipedia.org/wiki/Unit_in_the_last_place
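A concrete illustration of both failure modes with a fixed epsilon of 1e-9 (the values are picked only to make the effect visible):

```
#include <math.h>
#include <stdio.h>

int main(void) {
    const double eps = 1e-9;

    /* Large magnitudes: algebraically equal, but the rounding error
       (about 0.5 here) dwarfs eps, so the check reports "not zero". */
    double a = 1e16 / 3.0;
    double b = 1e16 * (1.0 / 3.0);
    printf("a - b = %g  ->  zero? %d\n", a - b, fabs(a - b) < eps);

    /* Tiny magnitudes: genuinely different values, but the difference
       sits below eps, so the check wrongly reports "zero". */
    double x = 1e-12, y = 2e-12;
    printf("x - y = %g  ->  zero? %d\n", x - y, fabs(x - y) < eps);
    return 0;
}
```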
I’m seeing this way too often. It is a good idea to never ignore a warning, and developers without discipline may need it. But for god’s sake, there is a reason why there are warnings and errors, and they are treated differently. I don’t think compiler writers and/or C standards will deprecate warnings and make them errors anytime soon, and for good reason. So IMHO it is better to treat errors as errors and warnings as warnings. I have seen plenty of times that this flag is mandatory, and to avoid the warning (error) the code is decorated with compiler pacifiers, which makes no sense!
So for some setups I understand the value, but doing it all the time shows some kind of laziness.
How is that a bad thing, exactly?
Sure, just throwing in compiler pacifiers willy-nilly to squelch the warnings is terrible.
However, making developers explicitly write in the code "Yes, this block of code triggers a warning, and yes it's what I want to do because xyz" seems not only perfectly fine, but straight up desirable.
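A sketch of what that explicit acknowledgement can look like, using GCC/Clang diagnostic pragmas (the warning and the names here are only examples):

```
void handle(void *data);   /* hypothetical helper defined elsewhere */

/* The callback must match a signature fixed elsewhere, so the unused
   parameter is intentional: suppress the warning in the narrowest scope
   and say why, instead of lowering the warning level project-wide. */
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-parameter"
static void on_event(int event_id, void *user_data) {
    handle(user_data);
}
#pragma GCC diagnostic pop
```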
Again, IMHO the big problem is people think "warnings are ok, just warnings, can be ignored".
And just as an anecdotal point on "Sure, just throwing in compiler pacifiers willy-nilly to squelch the warnings is terrible": this is exactly what I have seen in real life, 100% of the time.
My rationale: if you do set warn->error, then there are two ways around it: change the code to eliminate the warning, or pacify the compiler. Note, the point of setting it to error is to push lazy programmers to deal with it. If the lazy person is really lazy, then they will deal with it with a pacifier. You won nothing.
There is no one recipe for everything. That is why, even if I do not like to treat warnings as errors, sometimes it may be a possible solution.
I think you should deal with warnings; you should have as few as possible, if any at all. So if you have just a couple, it is not a problem to document them clearly. Developers building the project should be informed anyway of many other things.
In some projects I worked on, we saw warnings as technical debt. So hiding them with a pacifier would make us forget. But we saw them in every build, so we were constantly reminded that we should rework that code. Again, it depends on the setup you have in the project. I know people are now working with this new "CI/CD" trend and never get to see the compilation output. So depending on the setup, one thing or another may be better.
> You won nothing.
No, you won the ability to distinguish between intended and unintended warnings. Specifying in the code which warnings are expected makes every warning the compiler still outputs something you want to get fixed. When you do not do that, it is easy to miss a new warning or a change in an existing one. So are you essentially saying that you should not distinguish between intended and non-intended warnings?
Having no warnings is a worthwhile goal, but often not possible, since you want to be warned about some things, so you need that warning level, but you don't want to be warned about them on one specific line.
No, I pretty clearly said the opposite. Please read what I wrote:
"[...] is not a problem to document them clearly. Developers building the project should be informed anyway of many other things"
I also stated "warnings, you should have as few as possible, if any at all". In the projects I worked on, we hardly had any in the final delivery, but we had many in between, which I find OK. If there are only 2 warnings, I do not see a big risk of not seeing a 3rd. I expect developers to look at the compiler output carefully, as if it were a review from a coworker.
Last but not least, you ignore my last paragraph, where I say warnings are typically technical debt. In the long run there should be no "expected" warnings. My whole point is that they are just not errors, so you should allow the program to compile and keep working on other things. I do not think it is OK to have warnings. Also (especially), I think it is a bad idea to silence the compiler.
If you read my comments it should be clear. If not, I cannot help with that. If you want to disagree, as long as you don't work on my code, that is OK. This is just my 2ct opinion.
For some reason people stop thinking when it comes to warnings. Often it is the warning which gets one to rethink and refactor the code properly. If for whatever reason you want to live with the warning, comment the code appropriately, do not squelch blindly.
I use uint8_t for 8-bit integers, unsigned char for memory, and char for text. uint8_t for memory doesn't feel right.
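Spelled out, the convention the comment describes (purely a readability preference):

```
#include <stdint.h>

uint8_t       flags;         /* an 8-bit numeric value */
unsigned char buffer[512];   /* raw memory / bytes */
char          name[64];      /* human-readable text */
```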
I wonder what the author's view is on users' reasons for choosing a C API.
What I mean is that users may want exactly the same freedom and immediacy of C that the author embraces. However, this very approach to encapsulation, hiding the memory layout and going through accessor functions, limits the user's freedom and robs them of performance too.
In my view, the choice to use C in a project comes with certain responsibilities and expectations on the part of the user. Thus a higher degree of trust in the API user is due.
So he says, and then shows corrections to a manual implementation of character length instead of using the standard wcswidth.
It assumes you want a single malloc of Vec3. It tries to behave as if you are doing a 'new' in an OOP language.
Let the programmer decide the size of it.
Mock example (not tested)
```
#include <stdlib.h>

struct Vec3 { float x, y, z; };  // layout assumed for this mock example

struct Vec3* Vec3_new(size_t size) {
    if (size == 0) { // todo: handle properly
        return NULL;
    }
    // allocate `size` elements so the caller decides how many
    struct Vec3* v = malloc(size * sizeof *v);
    return v;
}
```
Why can't you just use an optimizing compiler? Trading that for casting const away doesn't seem right to me.