The C3 Programming Language
Key topics
The C3 programming language is sparking debate about the future of coding with large language models (LLMs), with some arguing that LLMs could make lower-level languages like C3 or Rust more appealing due to their performance benefits. However, others counter that LLMs excel at popular languages like Python and JavaScript, where there's more data to draw from, while klysm notes that "LLMs are not equally good at all languages." Surprisingly, some developers report that LLMs handle Rust remarkably well, thanks to its explicitness and type information, which provides valuable context. As one developer puts it, their machine is "far more skilled at generating quality Rust than I currently am," highlighting a potential shift in how we approach coding with AI assistance.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 22m after posting
- Peak period: 135 comments in 0-12h
- Avg / period: 26.7
- Based on 160 loaded comments
Key moments
- Story posted: Jan 3, 2026 at 11:41 AM EST (6d ago)
- First comment: Jan 3, 2026 at 12:03 PM EST (22m after posting)
- Peak activity: 135 comments in 0-12h (the hottest window of the conversation)
- Latest activity: Jan 9, 2026 at 12:19 AM EST (20h ago)
But ultimately, I agree with you: in most projects, having enough existing style, arranged in a fairly specific way, for Claude to imitate makes results a lot better. Or at least, until you get to that "good-looking codebase", you have to steer it a lot more explicitly, down to telling it what function signatures to use, what files to edit, etc.
Currently, on another project, I've had Claude make ~10 development spikes on ~5 specific high-uncertainty features on separate branches, without ever telling it what the main project structure really is. Some of the spikes implement the same functionality with e.g. different libraries, as I'm exploring my options (ML inference as a library is still a shitshow). I think that approach has some whiff of "future of programming" to it. Previously I would have spent more effort studying the frameworks up front and committed harder to a choice; now it's "let's see if this is good enough".
I'm currently doing this with golang. It is not that bad of an experience. LLMs do struggle with concurrency, though. My current project has proved to be pretty challenging for LLMs to chew through.
In Nim, strings and seqs exist on the heap, but are managed by simple value-semantic wrappers on the stack, where the pointer's lifetime is easy to statically analyze. Moves and destroys can be automatic by default. All string ops return string, there are no special derivative types. Seq ops return seq, there are no special derivative types. Do you pay the price of the occasional copy? Yes. But there are opt-in trapdoors to allocate RC- or manually-managed strings and seqs. Otherwise, the default mode of interacting with heap data is an absolute breeze.
For the life of me, I don't know why other languages haven't leaned harder into such a transformative feature.
Those implicit copies have downsides that make them a bad fit for various reasons.
Swift doesn't enforce value semantics, but most types in the standard library do follow them (even dictionaries and such), and those types go out of their way to use copy-on-write to try and avoid unnecessary copying as much as possible. Even with that optimization there are too many implicit copies! (it could be argued the copy-on-write makes it worse since it makes it harder to predict when they happen).
Implicit copies of very large datastructures are almost always unwanted, effectively a bug, and having the compiler check this (as in Rust or a C++ type without a copy constructor) can help detect said bugs. It's not all that dissimilar to NULL checking. NULL checking requires lots of extra annoying machinery but it avoids so many bugs it is worthwhile doing.
So you have to have a plan on how to avoid unnecessary copying. "Move-only" types is one way, but then the question is which types do you make move-only? Copying a small vector is usually fine, but a huge one probably not. You have to make the decision for each heap-allocated type if you want it move-only or implicitly copyable (with the caveats above) which is not trivial. You can also add "view" types like slices, but now you need to worry about tracking lifetimes.
For these new C-alternative languages, implicit heap copies are a big no-no. They have very few implicit calls: there are no destructors, and allocators are explicit. Implicit copies could be supported with a default temp allocator that follows a stack discipline, but now you are imposing a specific structure on the temp allocator.
It's not something that can just be added to any language.
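For illustration, a stack-discipline temp allocator can be quite small; here is a rough C sketch (the names and the fixed buffer size are arbitrary, not taken from any of these languages):

```c
#include <stddef.h>
#include <stdint.h>

/* A fixed-size temp arena that only supports stack-ordered release:
   save a mark, allocate freely, then reset back to the mark. */
typedef struct {
    uint8_t buf[1 << 16];
    size_t  used;
} TempArena;

static void *temp_alloc(TempArena *a, size_t n) {
    n = (n + 15) & ~(size_t)15;                 /* keep allocations 16-byte aligned */
    if (n > sizeof a->buf - a->used) return NULL;
    void *p = a->buf + a->used;
    a->used += n;
    return p;
}

static size_t temp_mark(const TempArena *a)        { return a->used; }
static void   temp_release(TempArena *a, size_t m) { a->used = m; }
```

An implicit copy could allocate from such an arena and be released wholesale when the enclosing scope pops its mark, but that is exactly the kind of structure the comment above says the language would then be imposing.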
It's a tradeoff I am more than willing to take, if it means the processing semantics are basically straight out of the textbook with no extra memory-semantic noise. That textbook clarity is very important to my company's business, more than saving the server a couple hundred milliseconds on a 1-second process that does not have the request volume to justify the savings.
Obviously for your use case it's not a problem but other use cases are a different story. Games in particular are very sensitive to performance spikes. Even a naive tracing GC would do better than hitting such an implicit copy every few frames.
Meanwhile, a compiler is an enormously complicated story. I personally never ever want to write a compiler, cause I already had more fun than I ever wanted working with distributed systems. While idiomatic C was not the way forward, my choice was a C dialect and Go for higher-level things.
How can we estimate these things? Or let's have fun, yolo?
I don't intend to downplay the effort involved in creating a large project, but it's evident to me that there's a class of "better C" languages for which LLVM is very well suited.
On purely recreational grounds, one can get something small off the ground in an afternoon with LLVM. It's very enjoyable and has a low barrier to entry, really.
Is there something analogous for those wanting to create language interpreters, not compilers? And preferably for interpreters one wants to develop in Python?
Doesn't have to be literally just an afternoon, it could even be a few weeks, but something that will ease the task for PL newbies? The tasks of lexing and parsing, I mean.
Simple enough to do it by hand, but there’s a lot of boilerplate and bureaucracy involved that is painfully time-wasting unless you know exactly what syntax you are going for.
But if you adopt a parser-generator such as Flex/Bison you'll find yourself learning and debugging an obtuse language that has to be forcefully bent to your needs, and I hope your knowledge of parsing theory is up to scratch when you're faced with shift-reduce conflicts or have to decide whether LR or LALR(1) or whatever is most appropriate to your syntax.
Not even PEG is gonna come to your rescue.
but I've never created an interpreter, let alone a compiler.
https://github.com/mongrel/mongrel
On the non-generated side, lexer creation is largely mechanical - even if you write it by hand. For example, if you vaguely understand the idea of expressing a disjunctive regular expression as a state machine (its DFA), you can plug that into skeleton algorithms and get a lexer out (for example, the algorithm shown in Reps' “Maximal-Munch” Tokenization in Linear Time paper). For parsing, taking a day or two to really understand Pratt parsing is incredibly valuable. Then, recursive descent is fairly intuitive to learn and implement, and Pratt parsing is a nice way to structure your parser for the more expressive parts of your language's grammar.
Nowadays, Python has a match (pattern matching) construct - even if its semantics are somewhat questionable (and potentially error-prone). Overall, though, I don't find Python too unenjoyable for compiler-related programming: dataclasses (and match) have really improved the situation.
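To make the Pratt parsing suggestion concrete, here is a toy C sketch that evaluates arithmetic expressions as it parses them (names and binding powers are made up for illustration; a real parser would build an AST instead of evaluating inline):

```c
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy Pratt parser for integer expressions with + - * /, unary minus and parentheses. */
static const char *src;

static void skip_ws(void) { while (isspace((unsigned char)*src)) src++; }

static int binding_power(char op) {
    switch (op) {
        case '+': case '-': return 10;
        case '*': case '/': return 20;
        default:            return 0;   /* not a binary operator */
    }
}

static long parse_expr(int min_bp);

/* The "prefix" (nud) rule: numbers, unary minus, parenthesized expressions. */
static long parse_prefix(void) {
    skip_ws();
    if (*src == '(') {
        src++;
        long v = parse_expr(0);
        skip_ws();
        if (*src == ')') src++;
        return v;
    }
    if (*src == '-') { src++; return -parse_prefix(); }
    char *end;
    long v = strtol(src, &end, 10);
    src = end;
    return v;
}

/* The Pratt loop: keep consuming operators whose binding power is at least
   min_bp, recursing with a higher minimum for the right-hand side. */
static long parse_expr(int min_bp) {
    long lhs = parse_prefix();
    for (;;) {
        skip_ws();
        int bp = binding_power(*src);
        if (bp == 0 || bp < min_bp) return lhs;
        char op = *src++;
        long rhs = parse_expr(bp + 1);   /* +1 keeps these operators left-associative */
        switch (op) {
            case '+': lhs += rhs; break;
            case '-': lhs -= rhs; break;
            case '*': lhs *= rhs; break;
            case '/': lhs /= rhs; break;
        }
    }
}

int main(void) {
    src = "1 + 2 * (3 - -4) / 7";
    printf("%ld\n", parse_expr(0));   /* prints 3 */
    return 0;
}
```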
AST interpreter in Java from scratch, followed by the same language in a tight bytecode VM in C.
Great book; very good introduction to the subject.
What you then realize is that it is possible to generate quality machine code much faster than LLVM and using far fewer resources. I believe both that LLVM has been holding back compiler evolution and that it is close to, if not already at, peak popularity. As LLMs improve, the need for tighter feedback loops will necessitate moving off the bloat of LLVM. Moreover, for all of the magic of LLVM's optimization passes, it does very little to prevent the user from writing incorrect code. I believe we will demand more from a compiler backend than LLVM can ever deliver.
The main selling point of LLVM is that you gain access to all of the targets, but this is for me a weak point in its favor. Firstly, one can write a quality self-hosting compiler with O(20) instructions. Adding new backends should be trivial. Moreover, the more you are thinking about cross-platform portability, the more you are worrying about hypothetical problems as well as the problems of people other than yourself. Get your compiler working well first on your machine and then worry about other machines.
Sure it can't do all the optimizations LLVM can but it is radically simpler and easier to use.
That said, I suspect it’ll never be more than a small niche if it doesn’t target Mac and Windows.
Sounds like famous last words :-P
And I don't really know about faster once you start to handle all the edge cases that invariably crop up.
Point in case: gcc
I'm particularly fond of the organisation of the OCaml compiler: it doesn't really follow a classical separation of concerns, but emits good quality code. E.g. its instruction selection is just pattern matching expressed in the language, various liveness properties of the target instructions are expressed for the virtual IR (as they know which one-to-one instruction mapping they'll use later - as opposed to doing register allocation strictly after instruction selection), garbage collection checks are threaded in after-the-fact (calls to caml_call_gc), its register allocator is a simple variant of Chow et al's priority graph colouring (expressed rather tersely; ~223 lines, ignoring the related infrastructure for spilling, restoring, etc.)
--
As a huge aside, I believe the hobby compiler space could benefit from someone implementing a syntactic subset of LLVM, capable of compiling real programs. You'd get test suites for free and the option to switch to stock LLVM if desired. Projects like Hare are probably a good fit for such an idea: you could switch out the backend for stock LLVM if you want.
So long as only you use your custom C dialect, all is fine. Trouble starts when you'd like others to use it too or when you'd like to use libraries written by people who used a different language, e.g. C.
AT ANY POINT.
Nothing exists that could yield more improvements than a new language. It is the ONLY way to make a paradigm (shift) stick. It is the ONLY way to turn "discipline" into "normal work".
Example:
"Everyone knows that it is hard to mutate things":
* Option 1: DISCIPLINE
* Option 2: you have "let" and you have "var" (or equivalent), and you remove the MILLIONS of times somebody somewhere must think "does this var mutate or not?".
"Manually managing memory is hard"
* Option 1: DISCIPLINE
* Option 2: No need: for TRILLIONS of objects across ALL the codebases with any form of automatic memory management, across ALL the developers and ALL their apps, you come very close to 100% never having to worry about it
* Option 3: And now I can be sure about doing this with more safety, across threads and such
---
Making actual progress with a language is hard, because there is a fractal of competing things in sore need of improvement, and a big subset of users are anti-progress and prefer to suffer decades of C (for example) rather than some gradual progress with something like Pascal (where a "string" exists).
Plus, a language needs to coordinate syntax (important) with the std library (important) with how frameworks will turn out (important) with compile-time AND runtime outcomes (important) with tooling (important).
Miss any of these badly and you've blown it.
But there is no other kind of project (apart from an OS, a filesystem, or a DB) where the potential positive impact extends as far into the future.
What is the difference? Using polite words to communicate?
https://xkcd.com/357/
"Head over heels" is another weird idiom. I'm so in love, I'm standing in a normal orientation.
It's one of those corruptions which flips the meaning (ironically, in this case!) on its head, or just becomes meaningless over time as it's reinterpreted (like "the exception that proves the rule" or "begs the question").
Neither does it force a new memory model on you, nor does it try to be C++. The killer feature for me is the full ABI compatibility. The fact that I no longer have to write bindings and can just mix C3 files into my existing C build system reduces the friction to near zero.
Kudos to the maintainer for sticking to the evolution, not revolution vision. If you are looking for a weekend language to learn that doesn't require resetting your brain but feels more modern than C99, I highly recommend giving this a shot. Great work by the team.
The only thing stopping me from just going full C the rest of my career is cstrings and dangling pointers to raw memory that isn’t cleaned up when the process ends.
For example microcontrollers or aerospace systems.
Without virtual memory, I would either need to force the use of a garbage collector (which is an interesting challenge in itself: designing a GC for a flat address space full of stackless coroutines), or require languages with much stricter memory semantics such as Rust so I can be sure everything is released at the end (though all languages are designed for isolated virtual memory, and even Rust might not help without serious re-engineering).
Do you keep notes of these types of platforms you’re working on? Sounds fun.
The good news is that this work is dying out. There isn’t a need to modernize old war birds anymore.
How do you feel about building special constructs to automatically handle these ?
Yes, it has the same ABI.
I mean… C isn't even an unsafe language. It's just that C implementations and ABIs are unsafe. Some fat pointers, less insanely unsafe varargs implementations, UBSan on by default, MTE… soon you're doing pretty well! (Exceptions apply.)
And the various system ABIs supported by C compilers are the defacto standards for that (contrary to popular belief there is no such thing as a "C ABI" - those ABIs are commonly defined by OS and CPU vendors, C compilers need to implement those ABIs just like any other compiler toolchain if they want to talk to operating system interfaces or call into libraries compiled with different compilers from different languages).
That's the job of an FFI. The internal ABI of most languages isn't anything like their FFI, eg any garbage collected language can't use the OS "C" ABI.
Most operating systems don't use the same ABI for kernel syscalls and userland libraries either. (Darwin is an exception where you do have to link a library instead of making syscalls yourself.)
> contrary to popular belief there is no such thing as a "C ABI"
It is a "C ABI" if it has eg null-terminated strings and varargs with no way to do bounds checking.
C3 provides a module system for cleaner code organization across files, unlike Zig where files act as modules with nesting limitations.
C3 offers first-class lambdas and dynamic interfaces for flexible runtime polymorphism without Zig's struct-based workarounds.
C3's operator overloading enables intuitive math types like vectors, which Zig avoids to prevent hidden control flow.
Here is a comparison to Zig in terms of features: https://c3-lang.org/faq/compare-languages/#zig
And yes, they are all systems programming languages with a similar level of abstraction that are suited for similar problems. It is good to have choice. It is like asking what you need Ruby for when you have Python.
[1] https://github.com/c3lang/c3c/blob/master/test/test_suite/co...
[2] https://c3-lang.org/language-fundamentals/functions/
https://c3-lang.org/language-overview/examples/#enum-and-swi...
I think consistency is the best correlate of least surprise, so having case statements that sometimes fall through, sometimes not, seems awful.
Personally, I'd rather see a different syntax switch (perhaps something like the Java pattern switch) or no switch at all than one that looks the same as in all C-style languages but works just slightly differently.
Some ways C3 differs from C:
- No mandatory header files
- New semantic macro system
- Module-based namespacing
- Slices
- Operator overloading
- Compile-time reflection
- Enhanced compile-time execution
- Generics via generic modules
- "Result"-based zero-overhead error handling
- Defer
- Value methods
- Associated enum data
- No preprocessor
- Less undefined behavior, with added runtime checks in "safe" mode
- Limited operator overloading (to enable userland dynamic arrays)
- Optional pre- and post-conditions
C3 has you covered
https://c3-lang.org/language-fundamentals/functions/#functio...
It also has operator overloading and methods which you could use in place of function overloading I guess.
Why do we still have to recompile the whole program every time we make a change? The only project I am aware of that wants to tackle this is Zig, with binary patching, and that's IMO where we should focus our effort.
C3 does look interesting though; the idea of ABI compatibility with C is pretty ingenious, you get to tap into C's ecosystem for free.
That problem was solved decades ago via object files and linkers. Zig needs a different approach because its language features depend on compiling the entire source code as a single compilation unit, but I don't think that C3 has that same "restriction" (not sure though).
Binary patching is another one. It feels a bit messy and I am sceptical that it can be maintained assuming it works at all.
I think a much better approach would be to make the compilers faster. Why does compiling 1M LOC take more than 1s in unoptimized mode for any language? My guess is part of the blame lies with bloated backends and metaprogramming (including compile-time evaluation, templates, etc.).
Moreover, I view optimization as an anti-pattern in general, especially for a low level language. It is better to directly write the optimal solution and not be dependent on the compiler. If there is a real hotspot that you have identified through profiling and you don't know how to optimize it, then you can run the hotspot through an optimizing compiler and copy what it does.
Are you talking about compiling, or linking, or both?
GNU ld has supported incremental linking for ages, and make systems only recompile things based on file level dependencies.
I guess recompilation could perhaps be smarter based on what changed or was added/deleted to a module definition (e.g C header) file, but this would seem difficult to get right. Maybe you just add a new function to a module, so no need to recompile other modules that use it, right? Except what if there is now a name clash and they would fail if recompiled?
Lisp solved that problem 60 years ago.
A meta answer to your question, I guess.
A tagged union always needs at least as much memory as the biggest type, but even worse, they nudge the programmer towards 'any-types', which basically moves the type checking from compile-time to run-time, but then why use a statically typed language at all?
And even if they are useful in some rare situations, are the advantages big enough to justify wasting 'syntax surface' instead of rolling your own tagged unions when needed?
Tagged enums are everywhere. I am writing a micro kernel in C and how I wish I had tagged enums instead of writing the same boilerplate of
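```c
#include <stddef.h>

/* Illustrative reconstruction -- the tag and payload names here are made up,
   but the shape is the usual enum-tag-plus-union pattern being described. */
typedef enum { MSG_PING, MSG_DATA, MSG_ERROR } MsgTag;

typedef struct {
    MsgTag tag;
    union {
        struct { int seq; }                      ping;
        struct { const void *buf; size_t len; }  data;
        struct { int code; }                     error;
    } as;
} Msg;

void handle(const Msg *m) {
    switch (m->tag) {
        case MSG_PING:  /* ... */ break;
        case MSG_DATA:  /* ... */ break;
        case MSG_ERROR: /* ... */ break;
    }
}
```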
...what else is a select on a tagged union than 'runtime casting' though. You have a single 'sum type' which you don't know what concrete type it actually is at runtime until you look at the tag and 'cast' to the concrete type associated with the tag. The fact that some languages have syntax sugar for the selection doesn't make the runtime overhead magically disappear.
Not having syntactic sugar for this ultra-common use case doesn’t make it disappear. It just makes it more tedious.
As for the memory allocation, I can't see why any object should have the size of the largest alternative. When I do the manual equivalent of a tagged union in C (ie. a struct with a tag followed by a union) I malloc only the required size, and a function receiving a pointer to this object had better not assume any size before looking at the tag. Oh, you mean when the object is automatically allocated on the stack, or stored in an array? Yes then, sure. But that's going to be small change if it's on the stack, and for the array, well, there is no way around it; if it does not suit your design then have only the tags in the array?
Tagged unions are a thing, whether the language helps or not. When I program in a language that has them then it's probably a sizeable fraction of all the types I define. I believe they are fundamental to programming, and I'd prefer the language to help with syntax and some basic sanity checks; like, with a dynamic sizeof that reads the tag so it's easier to malloc the right amount, or a syntax that makes it impossible to access the wrong field (ie. any lightweight pattern matching will do).
In other words, I couldn't really figure out the downside you had in mind :)
That's because every type in a dynamically typed language is a tagged union ;) For instance in Javascript you need to inspect a variable with 'typeof' to find out if it is a string, a boolean, a number or something else.
E.g. in a dynamically typed language, the runtime system needs to carry around information about what type an item actually is, essentially the same thing as the type tag in a tagged union, and Rust's match is the same sort of runtime type inspection as the typeof in JS, just with slightly different syntax sugar.
> As for the memory allocation, I can't see why any object should have the size of the largest alternative.
When you have a Rust enum like this:
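```rust
// Sketch of the enum being described; "AString" is an illustrative variant name.
enum Bla {
    AByte(u8),
    AString(String),
}
```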
...then every Bla object is always at least 16 bytes even when the active item is 'AByte' (assuming an empty String also fits into 16 bytes). Plain unions in C have the same problem of course, but those are rarely used (the one thing where unions are really useful in C is to have different views on the same memory).
> When I program in a language that has them then it's probably a sizeable fraction of all the types I define
...IMHO 'almost always sum types' is a serious design smell, it might be ok in 'everything is a reference' languages like Typescript, but that's because you pay for the runtime overhead anyway, no matter if sum types are used or not.
You mean tagged unions.
https://c3-lang.org/language-fundamentals/functions/#functio...
A result is already the informal name of the outcome or return value of every regular operation or function call, whereas an Optional is clearly not a regular thing.
I also think, from a pragmatic systems-design point of view, it might make sense to only support the Either/Result pattern. It's not too much boilerplate to add a `faultdef KeyNotInMap`, and then it's clear to the consumer why a real answer was not returned.
(I don't really object to the idea of skipping a real Optional<T> type in a language in favor of just Result<T, ()>.)
Also, syntactically it is quite different: it means you add exactly one character to the function head to denote that it's possible to return an error.
So, calling that feature "Result" could also be confusing to people who have not yet learned this language.
Tell me, was it a blunder when Rust swapped in "Result" for the commonly understood name "Either" from OCaml/Haskell?
I don't think that really matters? Result is "A or error" whereas optional is "A or nil".
Admittedly my wording was sloppy. It's technically a subset of the pattern when taken literally. But there's a very strong convention for the error type in C so at least personally I don't find the restriction off putting.
To me the issue is the name clash. This is most definitely not the "optional" pattern. I actually prefer C++'s "expected" over "result" as far as name clarity goes. "Maybe" would presumably also work.
At the end of the day it's all a non-issue thanks to the syntax. I might not agree with what you expressed but I also realize a name that only shows up in the docs isn't going to pose a problem in practice. Probably more than half the languages out there confuse or otherwise subtly screw up remainder, modulus, and a few closely related mathematical concepts but that doesn't get in the way of doing things.
You can name it "Result" or (questionably) "Either."
Not "Option," "Optional," or "Maybe;" those are something else.
Sounds intriguing. But then, the first thing I noticed in their example is a double-colon scope operator.
I understand that it's part of the culture (and Rust, C#, and many other languages), but I find the syntax itself ugly.
I dunno. Maybe I have the visual equivalent of misophonia, in addition to the auditory version, but :: and x << y << z << whatever and things like that just grate.
I like C. But I abhor C++ with a passion, partly because of what, to me, is jarring syntax. A lot of languages have subsequently adopted this sort of syntax, but it really didn't have that much thought put into it at the beginning, other than that Stroustrup went out of his way to use different symbols for different kinds of hierarchies, because some people were confused.
Source: https://medium.com/@alexander.michaud/the-double-colon-opera...
Think about that. The designer of a language that is practically focused on polymorphism went out of his way to _not_ overload the '.' operator for two things that are about as close semantically as things ever get (hierarchical relationships), simply because some of his customers found that overloading to be confusing. (And yet, '<<' is used for completely disparate things in the same language, but, of course, apparently, that is not at all confusing.)
I saw in another comment here just now that one of the differentiators between zig and C3 is that C3 allows operator overloading.
Honestly, that's in C3's favor (in my book), so why don't they start by overloading '.' and get rid of '::' ?
Lisp and APL both have their adherents.
I personally find a bit more syntax than lisp to be nice. Occasionally I long for the homoiconicity of lisp; otoh, many of the arguments for it fall flat with me. For example, DSLs -- yeah, no, it's hard enough to get semi-technical people to use DSLs to start with, never mind lisp-like ones.
It also helps code readability to know that a::b is referring to a namespace, without having to go lookup the definition of "a", while a.b is a variable access.
That's a perspective. Are we talking about the 'bar' that comes from 'foo' or are we talking about the 'bar' that comes from 'baz'?
But another perspective is that 'foo' is important and provides several facilities that are intimately related to foo, so 'bar' is simply one of the features of foo.
> It also helps code readability to know that a::b is referring to a namespace
For you, perhaps. As someone who reads a lot of Python, I don't personally find this argument persuasive.
I'm generally of the camp that code is written once, read many times, and that anything that adds to readability is therefore a win.
Right, the entire question is whether '::' ever adds to readability.
For me, it's a huge negative.
Obviously, YMMV.
In particular, C3's "path shortening", where you're allowed to write `file::open("foo.txt")` rather than having to use the full `std::io::file::open("foo.txt")` is only made possible because the namespace is distinct at the grammar level.
If we play with changing the syntax because it isn't as elegant as `file.open("foo.txt")`, we'd have to pay by actually writing `std.io.file.open("foo.txt")` or change to a flat module system. That is a fairly steep semantic cost to pay for a nicer namespace separator.
I might have overlooked some options, if so - let me know.
I don't see the issue. Just look up the id ? Moreover, if modules are seen as objects, the meaning is quite the same.
> checking is much easier if the namespace is clear from the grammar.
Again (this time by the checker) just look up the symbol table ?
If instead we had foo.bar(), we cannot know if this is the method "bar" on local or global foo, or a function "bar()" in a path matching the substring "foo". Consequently we cannot properly issue 4, since we don't know what the intent was.
So far, not so bad. Let's say it's instead foo::baz::bar(). In the :: case, we don't have any change in complexity, we simply match ::foo::baz instead.
However, for foo.baz.bar(), we get more cases, and let us also bring in the possibility of a function pointer being invoked:
1. It is invoking the method bar() on the global baz in a module that ends with "foo"
2. It is calling a function pointer stored in the member bar on the global variable baz in a module that ends with "foo"
3. It is calling the function bar() in a module that ends with "foo.baz"
4. It is calling the function pointer stored in the global bar in a module that ends with "foo.baz"
5. It is invoking the method bar on the member baz of the local foo
6. It is calling a function pointer stored in the member bar in the member baz of the local foo
This might seem doable, but note that for every module we have that has a struct, we need to speculatively dive into it to see if it might give a match. And then give a good error message to the user if everything fails.
Note here that if we had yet another level, `foo.bar.baz.abc()` then the number of combinations to search increases yet again.
This is exactly the syntax Python uses, and there is no "search" per se.
Either an identifier is in the current namespace or not.
And if it is in the current namespace, there can only be one.
The only time multiple namespaces are searched is when you are scoped within a function or class which might have a local variable or member of the same name.
> find foo::bar(), then we know that the path is <some path>::foo, the function is `bar` consequently we search for all modules matching the substring ::foo,
The only reason you need to have a search and think about all the possibilities is that you are deliberately allowing implicit lookups. Again, in Python:
1) Everything is explicit; but 2) you can easily create shorthand aliases when you want.
> note that for every module we have that has a struct, we need to speculatively dive into it to see if it might give a match. And then give a good error message to the user if everything fails.
Only if you rely on search, as opposed to, you know, if you 'import foo' then 'foo' refers to what you imported.
> In particular, C3's "path shortening" ... we'd have to pay by actually writing `std.io.file.open("foo.txt")` or change to a flat module system.
You can easily and explicitly shorten paths in other languages. For example, in Python "from mypackage.mysubpackage import mymodule; mymodule.myfunc()"
Python even gracefully handles name collisions by allowing you to change the name of the local alias, e.g. "from my_other_package.mysubpackage import mymodule as other_module"
I find the "from .. import" to be really handy to understand touchpoints for other modules, and it is not very verbose, because you can have a comma-separated list of things that you are aliasing into the current namespace.
(You can also use "from some_module import *" to bring everything in, which is highly useful for exploratory programming but is an anti-pattern for production software.)
I don't want to get too far into details, but it's understandable that people misunderstand it if they haven't used it, as it's a novel approach not used by any other language.
I am wondering though: when does one pick C3 for a task/problem?
"Contracts are optional pre- and post-condition checks that the compiler may use for static analysis, runtime checks and optimization. Note that conforming C3 compilers are not obliged to use pre- and post-conditions at all.
However, violating either pre- or post-conditions is unspecified behaviour, and a compiler may optimize code as if they are always true – even if a potential bug may cause them to be violated.
In safe mode, pre- and post-conditions are checked using runtime asserts."
So I'm probably missing something, but it reads to me like you're adding checks to your code, except there's no guarantee that they will run and whether it's at compile or runtime. And sometimes instead of catching a mistake, these checks will instead silently introduce undefined behaviour into your program. Isn't that kinda bad? How are you supposed to use this stuff reliably?
(otherwise C3 seems really cool!)
Only in "fast" mode. The developer has the choice:
> Compilation has two modes: “safe” and “fast”. Safe mode will insert checks for out-of-bounds access, null-pointer deref, shifting by negative numbers, division by zero, violation of contracts and asserts.
The developer has the choice between fast or safe. They don't have a choice for checking pre/post conditions, or at least avoiding UB when they are broken, while getting the other benefits of the "fast" mode.
And all in all the biggest issue is that these can be misinterpreted as a safety feature, while they actually add more possibilities for UB!
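A rough C analogy of what "assumed true in fast mode" means for the optimizer (not C3's actual implementation, just a sketch of the semantics being debated; SAFE_MODE is a made-up build flag):

```c
#include <assert.h>

#if defined(SAFE_MODE)
#  define CONTRACT(cond) assert(cond)                  /* safe mode: checked at runtime */
#elif defined(__GNUC__)
#  define CONTRACT(cond) ((cond) ? (void)0 : __builtin_unreachable())
#else
#  define CONTRACT(cond) ((void)0)                     /* fast mode: silently assumed true */
#endif

int divide(int a, int b) {
    CONTRACT(b != 0);   /* if this is ever false in fast mode, the optimizer may do anything */
    return a / b;
}
```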
https://youtube.com/playlist?list=PLpM-Dvs8t0VYwdrsI_O-7wpo-...
July 2025 (159 comments, 143 points): https://news.ycombinator.com/item?id=44532527