C Is Best (2025)
Key topics
The debate over whether C is the best programming language has reignited, with some commenters passionately defending its simplicity and performance, while others argue that its syntax is either too simplistic or, conversely, "too rich." As the discussion unfolds, it becomes clear that opinions on C's merits are deeply divided, with some praising its reliability and others pointing out its limitations, such as the need for workarounds like The Lexer Hack to resolve ambiguities. Notably, some commenters took a more nuanced view, suggesting that the choice of language is less important than the quality of the end result. The thread's lively back-and-forth, featuring a mix of C enthusiasts and skeptics, makes for a fascinating read, particularly as it touches on the trade-offs between language complexity and performance.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 5m after posting. Peak period: 122 comments (0-6h). Avg per period: 22.9. Based on 160 loaded comments.
Key moments
- Story posted: Jan 6, 2026 at 7:33 AM EST (3d ago)
- First comment: Jan 6, 2026 at 7:38 AM EST (5m after posting)
- Peak activity: 122 comments in the 0-6h window, the hottest stretch of the conversation
- Latest activity: Jan 9, 2026 at 12:26 PM EST (8h ago)
Plain and simple C is, by far, one of the current _LEAST BAD_ performant alternatives, usable for everything from low-level code up to large applications.
C syntax is already waaaaay too rich and complex (and ISO is pushing feature creep a bit too hard over its 5/10-year cycles).
I love C. When I wrote my first programs in it as a teenager, I loved the simplicity; I love procedural code. Large languages like C++ or Rust are just too much for ME. Too much baggage. I like writing code in a simple way; if I can do that with a small set of tools, I will. You can learn C and become proficient in it within a couple of weeks. C systems are just built better: faster, more stable, more portable, and more! This is my 2 cents.
Most honest software developers know the answer: for any computer language with an ultra-complex syntax (C++, etc.) or which requires an intrusive and tricky runtime (Java, etc.), the answer is obviously "nope". But we all know that for plain and simple C the answer is "yes" (have a look at cproc and QBE, or scc, tinycc, etc.). The new C I am talking about should reduce the effort of such compiler development and "fix" all the implicit C behavior that hides too many things and is error-prone.
There is a huge pitfall though: Linux is not coded in plain and simple C, but in GCC C... which is VERY different. The one GCC C alternative is clang/LLVM, and neither can be called reasonable in any capacity.
If the Linux code cared about having clean assembly source files on top of plain and simple C code, it would unfortunately be slower. BUT, guess what: for the ability to compile Linux with a _REAL ALTERNATIVE_ C compiler (and a small assembler), I would happily accept a "slower" kernel.
>>"the spiral rule"
Not a problem in 99.99% of cases, where it's just `type name = something;` with maybe a * and [] somewhere.
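For illustration, a minimal sketch (identifiers invented for the example): the first declaration is the rare kind where the spiral rule earns its keep; the rest are the everyday shape the comment describes.

```c
#include <stddef.h>

/* The rare case: read it with the spiral/clockwise rule.
   fp is a pointer to a function (taking void) that returns
   a pointer to an array of 8 int. */
int (*(*fp)(void))[8];

/* The 99.99% case: type name = something; with maybe * and []. */
int count = 0;
int *buf = NULL;
int table[16];
```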
We need a new C: fixed, leaner, and less complex. All primitives sized (u64, u8, f64, etc.), only one loop primitive (loop{}), no switch, no enum, no typedef, no typeof or other C11 _Generic/etc., no integer promotion, no implicit casts (except for void* and literal numbers?), real hard compiler constants, the current "const" must go away (I won't even talk about "restrict" and the like), no anonymous code blocks, explicit differentiation between reinterpret/runtime/etc. casts?, anonymous struct/union for complex memory layouts, and everything else I am currently forgetting.
(the expression evaluation of the C preprocessor would require serious modifications too)
We need clean, clear-cut definitions, with regard to ABIs, of passing aggregates like structs/arrays by value, you know, mostly for those pesky complex numbers and div_t.
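To make the by-value aggregate question concrete, a minimal sketch using the standard div()/div_t:

```c
#include <stdlib.h>

/* div_t is a small struct; div() returns it by value, so the ABI must
   pin down how such aggregates travel (registers vs. the stack). */
int main(void)
{
    div_t d = div(7, 2);       /* d.quot == 3, d.rem == 1 */
    return d.quot + d.rem - 4; /* exits 0 when the ABI round-trips it */
}
```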
We need keywords and official very-likely-to-be-inlined-or-hardware-accelerated functions, something in between the standard library and compiler keywords, for modern hardware architecture programming: memory barriers, atomics, byte swapping, some bitwise operations (like popcnt), memcpy, memcmp, memmove, etc.
In the end, I would prefer to have a standard machine ISA (RISC-V) and code everything directly in assembly, or in very high-level languages (like [ecma|java]script, lua, etc.) with an interpreter coded in that standard machine ISA assembly (all of that with a reasonable SDK, ofc).
stdint.h already gives you that.
>> only one loop primitive (loop{}), no switch, no enum
I don't think you will find many fans of that.
>> atomics
check stdatomic.h
>>some bitwise operations (like popcnt)
Check stdbit.h in C23
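Pulling those three replies together, a minimal sketch assuming a C23 toolchain (stdbit.h and stdc_count_ones only exist as of C23):

```c
#include <stdint.h>    /* sized primitives: uint8_t, uint64_t, ... */
#include <stdatomic.h> /* C11 atomics and memory fences */
#include <stdbit.h>    /* C23 bit utilities, e.g. popcount */

static _Atomic uint64_t counter;

int main(void)
{
    atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);          /* memory barrier */
    unsigned ones = stdc_count_ones((uint32_t)0xF0F0u); /* popcnt: 8 */
    return ones == 8 ? 0 : 1;
}
```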
The only language that would make sense for a partial/progressive migration is Zig, in huge part due to its compatibility with C. It's not mentioned in the article though.
Almost certainly correct. It is however being rewritten in Rust by other people https://github.com/tursodatabase/turso. This is probably best thought of as a separate, compatible project rather than a true rewrite.
For example, Rust has additional memory guarantees when compared to C.
Linus also brought this up: https://lkml.org/lkml/2021/4/14/1099
https://rust.docs.kernel.org/next/kernel/alloc/kvec/struct.V...
Vec::push_within_capacity is a nice API to confront the reality of running out of memory. "Clever" ideas that don't actually work are obviously ineffective once we see this API. We need to do something with this T, we can't just say "Somebody else should ensure I have room to store it" because it's too late now. Here's your T back, there was no more space.
Trying to apply backpressure from memory-allocation failures, which can appear anywhere, completely disconnected from their source, rather than capping the current in-memory set, seems like an incredibly hard path to make work reliably.
And that is usually not too difficult in C (in my experience), where allocation is explicit.
In C++, on the other hand, this quickly gets hairy IMO.
That's intentional; IOW, the "most code" that is unable to handle OOM conditions is written that way.
You can write code that handles OOM conditions gracefully, but that way of writing code is the default only in C. In every other language you need to go off the beaten path to gracefully handle OOM conditions.
It's possible. But very very few projects do.
What are you talking about? Every allocation must be checked at the point of allocation, which is "the default"
If you write non-idiomatically, then sure, in other languages you can jump through a couple of hoops and check every allocation, but that's not the default.
The default in C is to return an error when allocation fails.
The default in C++, Rust, etc is to throw an exception. The idiomatic way in C++, etc is to not handle that exception.
C doesn't force you to check the allocation at all. The default behavior is to simply invoke undefined behavior the first time you use the returned allocation if it failed.
In practice I've found most people write their own wrappers around malloc that at least crash - for example: https://docs.gtk.org/glib/memory.html
PS. The current default in Rust is to print something and then abort the program, not panic (i.e. not throw an exception). Though the standard library reserves the right to change that to a panic in the future.
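A minimal sketch of such a crash-on-failure wrapper (xmalloc is a conventional name, not glib's g_malloc itself):

```c
#include <stdio.h>
#include <stdlib.h>

/* A crash-on-failure wrapper in the spirit of glib's g_malloc: callers
   never see NULL, so every use site is "checked" by construction. */
static void *xmalloc(size_t n)
{
    void *p = malloc(n);
    if (p == NULL && n != 0) {
        fprintf(stderr, "fatal: out of memory (%zu bytes)\n", n);
        abort();
    }
    return p;
}

int main(void)
{
    char *buf = xmalloc(64); /* no NULL check needed at the call site */
    free(buf);
    return 0;
}
```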
No one ever claimed it did; I said, and still say, that in C, at any rate, the default is to check the returned value from memory allocations.
And, that is true.
The default in other languages is not to recover.
> No one ever claimed it did;
You specifically said
> Every allocation must be checked at the point of allocation
...
> the default is to check the returned value from memory allocations.
Default has a meaning, and it's what happens if you don't explicitly choose to do something else.
In libc - this is to invoke undefined behavior if the user uses the allocation.
In glib - the library that underpins half the linux desktop - this is to crash. This is an approach I've seen elsewhere as well to the point where I'm comfortable calling it "default" in the sense that people change their default behavior to it.
Nowhere that I've ever seen, in C, is it to make the user handle the error. I assume there are projects with sanitizers that do do that, I haven't worked on them, and they certainly don't make up the majority.
because of OS-level overcommit, which is nearly always a good thing
It doesn't matter about the language you are writing in, because your OS can tell you that the allocation succeeded, but when you come to use it, only then do you find out that the memory isn't there.
It's a place where Windows legitimately is better than Linux.
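A minimal sketch of the behavior described two comments up, assuming a Linux box with the default overcommit policy (the outcome is OS- and configuration-dependent):

```c
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t huge = (size_t)1 << 40;  /* 1 TiB */
    char *p = malloc(huge);
    if (p == NULL)
        return 1;                   /* an honest, checkable failure */
    /* With overcommit, the malloc above typically "succeeds"; the
       failure only surfaces here, when pages are touched, and the OOM
       killer may terminate the process instead of returning an error. */
    memset(p, 1, huge);
    free(p);
    return 0;
}
```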
It may not be as simple as "that's our policy". I worked at one place (embedded C++ code, 2018) that simply reset the device every 24h because they never managed to track down all the leaks.
Finding memory leaks in C++ is a non-trivial and time-consuming task. It gets easier if your project doesn't use exceptions, but it's still very difficult.
Was not available for that specific device, but even with Valgrind and similar tools, you are still going to run into weird destructor issues with inheritance.
There are many possible combinations of virtual, non-virtual, base-class, derived-class, constructors and destructors; some of them will indeed cause a memory leak, and are allowed to by the standard.
In Zig you must handle it. Even if handling means "don't care, panic", you have to spell that out.
No, this is wishful thinking. While plenty of programs out there are in the business of maintaining caches that could be optimistically evicted in order to proceed in low-memory situations, the vast majority of programs are not caching anything. If they're out of memory, they just can't proceed.
First off allocation failure (typically indicated by bad_alloc exception in C++ code, or nullptr in C style code) does not mean that the system is out of memory.
It just means that this particular allocator could not satisfy the allocation request. The allocator could have "ulimit" or such limit that is completely independent from actual process/system limitations.
Secondarily what reason is there to make an allocation failure any different than any other resource allocation failure.
A normal structure for a program is to catch these exceptions at a higher level in the stack close to some logical entry point where they can be understood and possibly shown to the user. It shouldn't really matter if the failure is about failing to allocate socket or failing to allocate memory.
You could make the case that if the system is out of memory, the exception propagation itself is going to fail. Maybe... but IMHO, on a code path taken when the stack is unwound due to an exception, you should only end up releasing resources, not allocating more, anyway.
Are we playing word games here? If a process has a set amount of memory and it's out of it, then that process is OOM; if a VM is out of memory, it's OOM. Yes, OOM is typically used for OS OOM, and Linus is talking about Rust in the kernel, so that's what OOM would mean.
>Secondarily what reason is there to make an allocation failure any different than any other resource allocation failure.
Of course there is, would you treat being out of bread similar to being out of oxygen? Again this can be explained by the context being kernel development and not application development.
In your C++ (or C) program you have one (or more) allocators. These are just pieces of code that juggle blocks of memory into smaller chunks for the program to use. Typically the allocators get their memory from the OS in pages using some OS system call such as sbrk or mmap.
For the sake of argument, let's say I write an allocator that has a limit of 2 MiB, while my system has 64 GiB of RAM. The allocator can then fail some requests once its internal 2 MiB has been exhausted. In the C world it'd return a nullptr. In the C++ world it would normally throw bad_alloc.
If this happens does this mean the process is out of memory? Or the system is out of memory? No, it doesn't.
That being said, where things get murky is that there are allocators that, in the absence of limits, will just map more and more pages from the OS. The OS can "overcommit", which is to say it gives out more pages than can actually fit into the available physical memory (after taking into account what the OS itself uses, etc.). And when the overall system memory demand grows too high, it will just kill some arbitrary process. On Linux this is the infamous OOM killer, which uses the "niceness" score to determine what to kill.
And yes, for the OOM killer there's very little you can do.
But an allocation failure (nullptr or bad_alloc) does not mean OOM condition is happening in the system.
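A minimal sketch of such a limited allocator, matching the 2 MiB example above (names and sizes invented for illustration):

```c
#include <stddef.h>
#include <stdint.h>

/* A fixed 2 MiB bump arena: requests fail with NULL once the arena is
   exhausted, regardless of how much RAM the system has free. */
#define ARENA_SIZE (2u * 1024u * 1024u)

static uint8_t arena[ARENA_SIZE];
static size_t  arena_used;

static void *arena_alloc(size_t n)
{
    if (n == 0 || n > ARENA_SIZE)
        return NULL;
    n = (n + 15u) & ~(size_t)15u;    /* round up to 16-byte alignment */
    if (n > ARENA_SIZE - arena_used)
        return NULL;                 /* allocator OOM, not system OOM */
    void *p = &arena[arena_used];
    arena_used += n;
    return p;
}
```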
This is the far more meaningful part of the original comment:
> and furthermore most code is not in a position to do anything other than crash in an OOM scenario
Given that (unlike a language such as Zig) Rust doesn’t use a variety of different allocator types within a given system, choosing to reliably panic with a reasonable message and stack/trace is a very reasonable mindset to have.
If some allocation fails, the error bubbles up until a safe place, where some pages can be dropped from the cache, and the operation that failed can be tried again.
All this requires is that bubbling up this specific error condition doesn't allocate. Which SQLite purportedly tests.
I'll note that this is not entirely dissimilar to a system where an allocation that can't be immediately satisfied triggers a full garbage collection cycle before an OOM is raised (and where some data might be held through soft/weak pointers and dropped under pressure), just implemented in library code.
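A minimal sketch of that bubble-up-and-retry shape (cache_evict_some is a hypothetical stand-in for the cache's eviction routine; this is not SQLite's actual code):

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical hook: frees some cached memory WITHOUT allocating,
   and returns false once the cache is already empty. */
bool cache_evict_some(void);

/* If an allocation fails, drop cached pages and retry; the error path
   itself never allocates. */
void *alloc_with_retry(size_t n)
{
    for (;;) {
        void *p = malloc(n);
        if (p != NULL)
            return p;
        if (!cache_evict_some())
            return NULL; /* nothing left to evict: genuine OOM */
    }
}
```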
Historically, a lot of C code fails to handle memory allocation failure properly because checking malloc etc. for a null result is too much work; C code tends to skip that a lot.
Bjarne Stroustrup added exceptions to C++ in part so that you could write programs that easily recover when memory allocation fails - that was the original motivation for exceptions.
In this one way, Rust is a step backwards towards C. I hope that Rust comes up with a better story around this, because in some applications it does matter.
If it were any other way then processes could ignore signals and just make themselves permanent, like Maduro or Putin.
No. A single process can have several allocators, switch between them, or use temporary low limits to enforce some kind of safety. None of that has any relation to your system running out of memory.
You won't see any of that in a desktop or a server. In fact, I haven't seen people even discuss that in decades. But it exists, and there are real reasons to use it.
As I just explained an allocator can have its own limits.
A process can have multiple allocators. There's no direct logical step that says that because some allocator failed the process itself cannot allocate more.
"Of course there is, would you treat being out of bread similar to being out of oxygen? Again this can be explained by the context being kernel development and not application development."
The parent comment is talking about overcommitment and OOM as if these are situations that are completely out of the program's control. They aren't.
So I assume there are no real blockers of the kind people in this thread assume; this is just not conventional behavior, it's ad hoc, so we need to wait until well-defined, stable OOM handlers appear.
After those experiences I agree with the sibling comment that calls your position "bullshit". I think people come to your conclusion when they haven't experienced a system that can handle it, so they're biased to think it's impossible to do. Since being able to handle it is not the default in so many languages and one very prominent OS, fewer people understand it is possible.
https://thenewstack.io/why-we-created-turso-a-rust-based-rew...
I also think it's important to have a really solid understanding (which can take a few decades, I imagine) of the bounds of what Rust is good at. For example, I personally think it's unclear how good Rust can be for GUI applications.
As a language, it's too basic. Almost every C project tries to mimic what C++ already has.
https://docs.gtk.org/glib/
https://github.com/antirez/sds
https://www.youtube.com/live/EIKAqcLxtT0?si=J82Us4zBlXLPbZVq
It is a matter of skill.
Because I could not, by scrubbing that video, find anything where immense skill is used to deal with the enormous overhead that standard C++ forces on every program that uses it.
"Rich Code for Tiny Computers: A Simple Commodore 64 Game in C++17"
https://youtu.be/zBkNBP00wJE?si=uqUwVMMEpp4ZPWun
It is a matter of skill: understanding C++ and how to make it useful for embedded systems, and understanding that the standard library isn't a requirement for every executable.
By the way, the famous Arduino and ESP32 have no problem dealing with C++.
As we also didn't, back in MS-DOS, with 640 KB, Turbo Vision and Borland Collection Library (BIDS).
A matter of skill, as mentioned.
So you also agree that C++ libraries are a bad fit for embedded? Because in the video you linked, that person did not use any libraries.
It is one thing to compile small standalone binary, using non-conforming compiler extensions to disable rtti and exceptions. It is another to write C++ library. By standard, even freestanding C++ requires RTTI, exceptions and most of standard library. If you need to implement your own STL subset to satisfy the library, then modify the library to work with your STL, the resulting API is not much of an API, is it?
It takes skill to make use of Arduino, ESP32, and other C++ embedded libraries, being able to write custom C++ libraries, master compiler switches and linker maps.
You cannot make it in C++, because any valid C++ library imposes massive requirements on the environment I already mentioned. C does no such thing!
Arduino, https://docs.arduino.cc/arduino-cloud/guides/arduino-c/
Or even ESP32, https://docs.espressif.com/projects/esp-idf/en/stable/esp32/...
Others do not.
As for "C does no such thing": strangely, there are enough examples from TI, Microchip, and Atmel that prove otherwise.
The ESP32 SDK (I lol'd at your "even"; those Xtensa/RISC-V chips can even run the Linux kernel!) is extremely impressive: they support enabling/disabling RTTI and exceptions (disabled by default, obviously, but the fact that they implemented support for that is amazing). So "real C++" is possible on the ESP32, which is good to know.
For comparison, here are the minimum SQLite dependencies from the submitted article: memcmp(), memcpy(), memmove(), memset(), strcmp(), strlen(), strncmp().
Of course you could run JavaScript and Erlang on an MCU too, but is that API better than C? Your claim of "skill issue" sounds like a Red Bull challenge. Please let us unskilled people simply call library functions.
First of all, ABI is a property of the OS calling conventions, which happen to overlap with C on UNIX/POSIX, given its symbiotic relationship.
Secondly, https://thephd.dev/to-save-c-we-must-save-abi-fixing-c-funct...
Then came the rise of FOSS adoption, with the GNU Manifesto asserting that C should be the only compiled language used.
"Using a language other than C is like using a non-standard feature: it will cause trouble for users. Even if GCC supports the other language, users may find it inconvenient to have to install the compiler for that other language in order to build your program. So please write in C."
The GNU Coding Standard in 1994, http://web.mit.edu/gnu/doc/html/standards_7.html#SEC12
[1] https://web.archive.org/web/20170701061906/https://sqlite.or...
Let’s be serious now.
I believe this article is from a few years ago
Why them? You should do it.
Overall, it makes sense. C is a systems language, and a DB is a system abstraction. You shouldn't need to build a deep hierarchy of abstractions on top of C, just stick with the lego blocks you have.
If the project had started in 2016, maybe they would have gone for C++, which is a different beast from what it was pre-2011.
Similarly, you might write SQLite in Rust if you started today.
https://web.archive.org/web/20250130053844/https://250bpm.co...
* Error handling via exceptions. Rust uses `Result` instead. (It has panics, but they are meant strictly for serious logic errors, where calling `abort` would be fine. There's a `Cargo.toml` option to do exactly that on panic rather than unwinding.) (btw, C++ has two camps here for better or worse; many programs are written in a dialect that doesn't use exceptions.)
* Constructors have to be infallible. Not a thing in Rust; you just make a method that returns `Result<Self, Error>`. (Even in C++ there are workarounds.)
* Destructors have to be infallible. This is about as true in Rust as in C++: `Drop::drop` doesn't return a `Result` and can't unwind-via-panic if you have unwinding disabled or are already panicking. But I reject the characterization of it as a problem compared to C anyway. The C version has to call a function to destroy the thing. Doing the same in Rust (or C++) is not really any different; having the other calls assert that it's not destroyed is perfectly fine. I've done this via a `self.inner.as_mut().expect("not terminated")`. They say that C only has two states: "Not initialised object/memory where all the bets are off and the structure can contain random data. And there is initialised state, where the object is fully functional". The existence of the "all bets are off" state is not as compelling as they make it out to be, even if throwing up your hands is less code.
* Inheritance. Rust doesn't have it.
That section was probably written 20 years ago when Java was all the rage.
If they are getting good results with C and without OOP, and people like the product, then those from outside the project shouldn't really have any say on it. It's their project.
If only programming languages (or GenAI) were tools like hammers and augers and drills.
Even then the cabinets you see that come out of shops that only use hand tools are some of the most sturdy, beautiful, and long lasting pieces that become the antiques. They use fewer cuts, less glue, avoid using nails and screws where a proper joint will do, etc.
Comparing it to AI makes no sense. Invoking it is supposed to bring to mind the fact that it's worse in well-known ways, but then the statement 'better in every way' no longer applies. Using Rust passively improves the engineering quality compared to using anything else, unlike AI which sacrifices engineering quality for iteration speed.
The traditional joints held up very well and even beat the engineered connectors in some cases. Additionally one must be careful with screws and fasteners: if they’re not used according to spec, they may be significantly weaker than expected. The presented screws had to be driven in diagonally from multiple positions to reach the specified strength; driving them straight in, as the average DIYer would, would have resulted in a weak joint.
Glue is typically used in traditional joinery, so less glue would actually have a negative effect.
And a lot of traditional joinery is about keeping the carcase sufficiently together even after the hide glue completely breaks down so that it can be repaired.
Modern glues allow you to use a lot less complicated joinery.
No disrespect intended, but your criticism of the analogy reveals that you are speaking from assumptions, but not knowledge, about furniture construction.
In fact, less glue and fewer fasteners (i.e. design that leverages the strength of the materials) is exactly how quality furniture is made more sturdy.
If the alternative has drawbacks (they always do) or is not as well known by the team, it's perfectly fine to keep using the tool you know if it is working for you.
People who incessantly try to evangelise their tool/belief/preferences to others are often seen as unpleasant to say the least and they often achieve the opposite effect of what they seek.
but everyone with a brain knows the costs are worth the benefits.
And when it comes to programming languages, it's not as clear cut. As exemplified by the article.
So the power tools is a poor analogy.
In reality, this is not the case. Bad code is the result of bad developers. I'd rather have someone writing C code who understands how memory bugs happen than a Rust developer thinking that the compiler is going to take care of everything for them.
There is literally nothing strange or disproportionate. It's incredibly obvious that new languages, that were designed by people who found older languages lacking, are of interest to groups of people interested in new applications of technology and who want to use their new languages.
> then those from outside the project shouldn't really have any say on it. It's their project.
People outside the project are allowed to say whatever the hell they want, the project doesn't have to listen.
Yeah, this is super common. Great comment.
383 more comments available on Hacker News