A Comparison of Ada and Rust, Using Solutions to the Advent of Code
Posted 3 months ago · Active 3 months ago
Source: github.com · Tech story · High profile
Key topics: Programming Languages, Ada, Rust, Comparison
A detailed comparison of Ada and Rust programming languages using Advent of Code solutions sparks discussion on their features, adoption, and use cases.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 50m after posting
- Peak period: 109 comments in 0-12h
- Avg per period: 26.7 comments
Based on 160 loaded comments
Key moments
- Story posted: Oct 4, 2025 at 11:10 AM EDT (3 months ago)
- First comment: Oct 4, 2025 at 12:00 PM EDT (50m after posting)
- Peak activity: 109 comments in 0-12h (hottest window of the conversation)
- Latest activity: Oct 11, 2025 at 9:36 PM EDT (3 months ago)
ID: 45473861 · Type: story · Last synced: 11/20/2025, 8:28:07 PM
I envy people who can write foundational, self-contained software. It's so elegant.
In my opinion, don't make thick bindings for your C libraries. It just makes it harder to use them.
For example, I don't really like the thick OpenGL bindings for Ada, because using them is so wildly different from the C examples that I can't really figure out how to do what I want to do.
I can't help but feel that we just went through a huge period of growth at all costs, and now, after 30 years of anything-goes, there is a desire to return to making software that is safer. It would be nice to start building languages based on all the safety lessons learned over the decades; the good ideas keep getting lost in obscure languages and forgotten.
Yes! I would kill to get Ada's number range feature in Rust!
For some strange reason, people always point to Ada for it.
18 year old me couldn't appreciate how beautiful a language it is but in my 40s I finally do.
In my college years (2005-2010), the most interesting language in this direction was Haskell. I don't think any other language like Ada was being taught.
Icon is an amazing language and I wish it was better known.
I admit that the terseness of the syntax of C can be off-putting. Still, it's just syntax; I am sorry you were dissuaded by it.
I dabbled in some of them during some periods when I took a break from work. And also some, during work, in my free time at home.
Pike, ElastiC (not a typo), Icon, Rebol (and later Red), Forth, Lisp, and a few others that I don't remember now.
Not all of those are from the same period, either.
Heck, I can even include Python and Ruby in the list, because I started using them (at different times, with Python being first) much before they became popular.
Yeah, not wanting to waste cycles is how we ended up with the current systems languages, while Electron gets used all over the place.
I distinctly remember arguments for functions working on an array of 10. Oh, you want an array of 12? Copy-paste the function to make it an array of 12. What a load of BS.
It took Pascal years to drop that constraint, but by then C had already won.
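For contrast, a minimal Rust sketch of how const generics solve exactly this: one function is generic over the array length, so no per-size copies are needed (illustrative code, not from the article):

```rust
// One function generic over the array length N; no copy-paste per size.
fn sum<const N: usize>(xs: [i32; N]) -> i32 {
    xs.iter().sum()
}

fn main() {
    println!("{}", sum([1; 10])); // array of 10
    println!("{}", sum([2; 12])); // array of 12, same function
}
```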
I never ever wanted the compiler or runtime to check a subrange of ints. Ever. Overflow as a program crash would be better, and that I do find useful, but arbitrary ranges chosen by the programmer? No thanks. To make matters worse, those checks are applied even to intermediate results.
I realize this opinion is based only on my experience, so I would appreciate a counterexample where it is a benefit (and yes, I worked on production code written in Pascal, a French variant even, and migrating it to C made it hilariously more readable and maintainable).
It still results in overflow and while you are right that it's UB by the standard, it's still pretty certain what will happen on a particular platform with a particular compiler :)
I always found it surprising that people did not reject clang for aggressively optimizing based on UB, but instead complained about the language while still using clang with -O3.
The one exception I know of is CompCert but it comes with a non-free license.
I definitely do think the language committee should have constrained UB more to prevent standards-compliant compilers from generating code that completely breaks the expectations of even experienced programmers. Instead the language committees went the opposite route, removing C89/90 wording from subsequent standards that would have limited what compilers can do for UB.
gcc has -fwrapv and -f[no-]strict-overflow, clang copied both, and MSVC has had a plethora of flags over the years (UndefIntOverflow, for example) so your guess is as good as mine which one still works as expected.
Requiring values to be positive, requiring an index to fall within the bounds of an array, and requiring values to be non-zero so you never divide by zero are very, very common requirements and a common source of bugs when the assumptions are violated.
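Rust's standard library already bakes in one of those invariants; a small sketch using std::num::NonZeroU32, so the divide-by-zero case is handled once, at construction:

```rust
use std::num::NonZeroU32;

// The divisor's type guarantees it is non-zero, so this division
// can never fail on a zero denominator.
fn divide(n: u32, d: NonZeroU32) -> u32 {
    n / d.get()
}

fn main() {
    // `new` returns None for zero, forcing the caller to handle it here.
    match NonZeroU32::new(4) {
        Some(d) => println!("{}", divide(100, d)), // 25
        None => println!("divisor was zero"),
    }
}
```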
[0] https://news.ycombinator.com/item?id=45474777
Can't tell you what the current state is but this should give you the keywords to find out.
Also, here is a talk Oli gave in the Ada track at FOSDEM this year: https://hachyderm.io/@oli/113970047617836816
There were some talks about general pattern types, but it's not even approved as an experiment, let alone an RFC or stabilization.
This is a feature I use a lot in C++. It is not part of the standard library but it is trivial to programmatically generate range-restricted numeric types in modern C++. Some safety checks can even be done at compile-time instead of runtime.
It should be a standard feature in programming languages.
There is the wisdom that it is impossible to deliver C++ without pervasive safety issues, for which there are many examples, and on the other hand there are people delivering C++ in high-assurance environments with extremely low defect rates without heroic efforts. Many stories can be written in that gap. C++ can verify many things that are not verifiable in Rust, even though almost no one does.
It mostly isn’t worth the argument. For me, C++20 reached the threshold where it is practical to design code where large parts can be formally verified in multiple ways. That’s great, this has proven to be robust in practice. At the same time, there is an almost complete absence of such practice in the C++ literature and zeitgeist. These things aren’t that complex, the language users are in some sense failing the language.
The ability to codegen situationally specific numeric types is just scratching the surface. You can verify far weirder situational properties than numeric bounds if you want to. I’m always surprised by how few people do.
I used to be a C++ hater. Modern C++ brought me back almost purely because it allows rich compile-time verification of correctness. C++11 was limited but C++20 is like a different world.
Do you have an example of this? I'm curious where C++ exceeds Rust in this regard.
Why would you want this?
I mean, we've recently discussed on HN how most sorting algorithms have a bug because they use ints to index into arrays when they should be using (at least) size_t. Yet for most cases it's OK, because you only hit the limit rarely. Why would you want to further constrain the field? Would it not just be a source of additional bugs?
I guess you can just catch the exception in Ada? In Rust you might instead manually check the age validity and return Err if it's out of range. Then you need to handle the Err. It's the same thing in the end.
> Why would you want to further constrain the field
You would only do that if it's a hard requirement (this is the problem with contrived examples, they make no sense). And in that case you would also have to implement some checks in Rust.
In almost all the cases I have seen, it eventually breaks out of confinement. So, it has to be handled sensibly. And, again, in my experience, if it's built into the constraints, it invariably is not handled properly.
So too-large time steps cannot be used, but constant-size steps are wasteful. It seems good to know the integrator can never quietly be wrong, even if you have to pay the price that the integrator could crash.
And yes... handle errors on the input and you'd be fine. How would you write code that is cognizant enough to catch an out-of-range error for every +1 done on the field? Seriously, production code then devolves into copying the value into something else where operations don't cause unexpected exceptions, which is a workaround for a silly restriction that should not reside at the runtime level.
Making the crash happen at the same time and space as the error means you don’t have to trace a later crash back to the root cause.
This makes your system much easier to debug at the expense of causing some crashes that other systems might not have. A worthy trade off in the right context.
I could go into many more examples, but I hope I am understood. I think these hard-coded definitions of ranges at compile time cause far more issues than they solve.
Let's take a completely different example: size of a field in a database for a surname. How much is enough? Turns out 128 varchars is not enough, so now they've set it to 2048 (not a project I work(ed) on, but am familiar with). Guess what? Not in our data set, but theoretically, even that is not enough.
So you validate user input, we've known how to do that for decades. This is a non-issue. You won't crash the program if you require temperatures to be between 0 and 1000 K and a user puts in 1001, you'll reject the user input.
If that user input crashes your program, you're not a very good programmer, or it's a very early prototype.
E.g., if the constraint is 0..200 and the user inputs one value that is multiplied by our constant, it's trivial to ensure the user input is less than the range maximum divided by our constant.
However, if we have to multiply by a second, third, and so on piece of user input, we get to the position where we have to divide our currently held value by a piece of user input, check that the next piece of user input isn't higher, and work from there (this assumes the division hasn't caused an exception, which we need to ensure doesn't happen, e.g. if we have a divide by zero going on).
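A hedged Rust sketch of that discipline using checked arithmetic instead of pre-dividing (the 0..=200 range is carried over from the example above):

```rust
// Multiply two user-supplied values while keeping the result in 0..=200.
// checked_mul catches overflow; the comparison catches range violations.
fn constrained_mul(a: u32, b: u32) -> Result<u32, String> {
    let product = a.checked_mul(b).ok_or_else(|| "overflow".to_string())?;
    if product <= 200 {
        Ok(product)
    } else {
        Err(format!("{product} is out of range 0..=200"))
    }
}

fn main() {
    assert_eq!(constrained_mul(10, 5), Ok(50));
    assert!(constrained_mul(30, 30).is_err()); // 900 > 200
}
```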
Logic errors should be visible so they can be fixed?
Weirdly, when going through the higher assurance levels in aviation, defensive programming becomes more costly, because it complicates satisfying the assurance objectives. SQLite (whose test suite reaches MC/DC coverage, the most rigorous coverage criterion required in aviation) has a nice paragraph on the friction between MC/DC and defensive programming:
https://www.sqlite.org/testing.html#tension_between_fuzz_tes...
Comptime constant expression evaluation, as in your example, may suffice for the compiler to be able to prove that the result lies in the bounds of the type.
I see there are some hits for it on lib.rs, but I don't know how ergonomic they are.
[1] https://en.wikipedia.org/wiki/Interval_arithmetic
Modifying a compiler to emit a message at every point that a runtime check is auto-inserted should be pretty simple. If this was really that much of an issue it would have been addressed by now.
Ada's compile time verification is very good. With SPARK it's even better.
Runtime constraints are removable via Pragma so there's no tradeoff at all with having it in the language. One Pragma turns them into static analysis annotations that have no runtime consequences.
I assume it’s a runtime error or does the compiler force you to handle this?
If it fails at run time, it could be the reason you get paged at 1am because everything's broken.
It's a good example for the "Parse, don't validate" article (https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...). Instead of creating a function that accepts `int` and returns `int` or throws an exception, create a new type that enforces "`int` less than or equal to 200".
Something like this is possible to simulate with Java's classes, but it's certainly not ergonomic and very much unconventional. It is beneficial if you're trying to create a lot of compile-time guarantees, reducing the risk of doing something like `hmmm = works + 1;`. This kind of compile-time type voodoo requires a different mindset compared to cargo-cult Java OOP. Whether something like this is ergonomic or performance-friendly depends on the language's support itself.
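A minimal Rust sketch of that "parse, don't validate" newtype (the Age name and the 0..=200 bound are illustrative):

```rust
// An Age can only be built through `new`, so every value of this type
// is already known to be in 0..=200; no re-validation needed downstream.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Age(u8);

impl Age {
    fn new(years: u8) -> Result<Age, String> {
        if years <= 200 {
            Ok(Age(years))
        } else {
            Err(format!("{years} exceeds 200"))
        }
    }
}

fn main() {
    assert!(Age::new(42).is_ok());   // validated once, at the boundary
    assert!(Age::new(201).is_err()); // rejected before it can spread
}
```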
https://learn.adacore.com/courses/intro-to-ada/chapters/cont...
https://docs.adacore.com/gnat_ugn-docs/html/gnat_ugn/gnat_ug...
https://learn.microsoft.com/en-us/dotnet/fsharp/language-ref...
FWIW, physical dimensions like meters were the original apples-to-oranges type system that pre-dates all modern notions of things beyond arithmetic. I'm a little surprised it wasn't added to early FORTRAN. In a different timeline, maybe. :)
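For flavor, a tiny Rust sketch of the newtype version of dimensions, where adding meters to seconds simply fails to type-check (hypothetical types, not a real units library):

```rust
use std::ops::Add;

#[derive(Debug, Clone, Copy)]
struct Meters(f64);
#[derive(Debug, Clone, Copy)]
struct Seconds(f64);

// Only like dimensions can be added.
impl Add for Meters {
    type Output = Meters;
    fn add(self, rhs: Meters) -> Meters {
        Meters(self.0 + rhs.0)
    }
}

fn main() {
    let d = Meters(1.5) + Meters(2.5);
    let t = Seconds(3.0);
    println!("{d:?} over {t:?}");
    // Meters(1.0) + Seconds(1.0) // would be a compile-time error
}
```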
I think what is in "the" "stdlib" or not is a tricky question. For most general/general purpose languages, it can be pretty hard to know even the probability distribution of use cases. So, it's important to keep multiple/broad perspectives in mind as your "I may be biased" disclaimer. I don't like the modern (well, it kind of started with CTAN where the micros seemed meant more for copy-paste and then CPAN where it was not meant for that) trend toward dozens to hundreds of micro-dependencies, either, though. I think Python, Node/JS, and Rust are all known for this.
I write Rust at work. I learned Ada in the early 1990s as the language of software engineering. Back then a lot of the argument against Ada was it was too big, complex, and slowed down development too much. (Not to mention the validating Ada 83 compiler I used cost about $20,000 a seat in today's money). I think the world finally caught up with Ada and we're recognizing that we need languages every bit as big and complex, like Rust, to handle issues like safe, concurrent programming.
I agree Rust's safety is very clearly (and maybe narrowly) defined, but it doesn't mean there isn't focus on general correctness - there is. The need to define safety precisely arises because it's part of the language (`unsafe`).
You may choose to think of it as a safety-guarantee hierarchy (bottom = foundation, top = highest assurance):
- Layer 6: Formal proofs (functional correctness, no runtime errors). Ada/SPARK: built-in (GNATprove). Rust: external tools (Kani, Prusti, Verus).
- Layer 5: Timing / real-time analysis (WCET, priority bounds). Ada: Ravenscar profile + scheduling analysis. Rust: frameworks (RTIC, Embassy).
- Layer 4: Concurrency determinism (predictable schedules). Ada: protected objects + task priorities. Rust: data-race freedom; determinism via design.
- Layer 3: Logical contracts & invariants (pre/post, ranges). Ada: Pre/Post aspects, type predicates (built-in). Rust: type states, assertions, external DbC tools.
- Layer 2: Type safety (prevent invalid states). Ada: range subtypes, discriminants. Rust: newtypes, enums, const generics.
- Layer 1: Memory safety & data-race freedom. Ada: runtime checks; SPARK proves statically. Rust: compile-time via ownership + Send/Sync.
As pjmlp says in a sibling comment, Pascal had this feature, from the beginning, IIRC, or from an early version - even before the first Turbo Pascal version.
IME, being able to express constraints in a type system lends itself to producing better-quality code. A simple example from my experience with Rust and Go is mutex handling: Rust just won't let you leak a guard handle, while Go happily lets you run into a deadlock.
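A small sketch of the Rust side of that, using the standard std::sync::Mutex, where unlocking is tied to the guard's scope:

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0);

    {
        let mut guard = counter.lock().unwrap();
        *guard += 1;
    } // guard dropped here: the mutex unlocks automatically

    // The guard borrows `counter`, so it can't silently outlive it;
    // this second lock can't deadlock on a forgotten unlock.
    println!("{}", counter.lock().unwrap());
}
```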
https://prunt3d.com/
https://github.com/Prunt3D/prunt
It's kind of an esoteric choice, but struck me as "ya know, that's really not a bad fit in concept."
How does the cancellation story differ between threads and async in Rust? Or vs async in other languages?
There's no inherent reason they should be different, but in my experience (in C++, Python, C#) cancellation is much better in async than with simple threads and blocking calls. It's near impossible to have an organised socket shutdown in many languages with blocking calls, assuming a standard read thread + write thread per socket. Often the only reliable way to interrupt a socket thread is to close the socket, which may not be what you want, and in principle can leave you vulnerable to file-handle reuse bugs.
Async cancellation is, depending on the language, somewhere between hard but achievable (already an improvement) and fabulous. With Trio [1] you even get the guarantee that cancelled socket operations are either completed or have no effect.
Did this work any better in Rust threads / blocking calls? My uneducated understanding is that things are actually worse in async than other languages because there's no way to catch and handle cancellations (unlike e.g. Python which uses exceptions for that).
I'm also guessing things are no better in Ada but very happy to hear about that too.
There is... They're totally different things.
And yeah Rust thread cancellation is pretty much the same as in any other language - awkward to impossible. That's a fundamental feature of threads though; nothing to do with Rust.
Now I've set (and possibly moved) the goalposts, I can prove my point: C# already does this! You can use async across multiple threads and cancellation happens with cancellation tokens that are thread safe. Having a version where interruptable calls are blocking rather than async (in the language sense) would actually be easier to implement (using the same async-capable APIs under the hood e.g., IOCP on Windows).
If you need cleanup, that still needs to be handled manually. Hopefully the async Drop trait lands soon.
Dropping a future does not cancel a concurrently running (tokio::spawn) task. It will also not magically stop an asynchronous I/O call; it just won't block/switch to your code anymore while that continues to execute. If you have created a future but not hit .await or tokio::spawn or any of the futures:: queue handlers, then it won't cancel it either; it just won't begin it.
Cancellation of a running task from outside that task actually does require explicit cancelling calls IIRC.
Edit: here, try this: https://cybernetist.com/2024/04/19/rust-tokio-task-cancellat...
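A sketch of the explicit-cancellation pattern that article covers, using tokio_util::sync::CancellationToken (assumes the tokio and tokio-util crates):

```rust
use std::time::Duration;
use tokio_util::sync::CancellationToken;

#[tokio::main]
async fn main() {
    let token = CancellationToken::new();
    let child = token.child_token();

    let task = tokio::spawn(async move {
        tokio::select! {
            // Resolves once someone cancels the parent token.
            _ = child.cancelled() => "cancelled",
            // Stands in for the task's long-running real work.
            _ = tokio::time::sleep(Duration::from_secs(60)) => "done",
        }
    });

    token.cancel();
    println!("{}", task.await.unwrap()); // prints "cancelled"
}
```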
If you can't cancel a task and its direct dependents, and wait for them to finish as part of that, I would argue that you still don't have "real" cancellation. That's not an edge case, it's the core of async functionality.
[1] https://vorpus.org/blog/notes-on-structured-concurrency-or-g...
Hmm, maybe it's possible to layer structured concurrency on top of what Rust does (or will do with async drop)? Like, if you have a TaskGroup class and demand all tasks are spawned via that, then internally it could keep track of child tasks and make sure that they're all cancelled when the parent one is (in the task group's drop). I think? So maybe not such an issue, in principle.
Under the hood, there's nothing stopping a future from polling one or more other futures. Keeping in mind that it isn't the dropping that cancels but rather the lack of polling, you could achieve what you're describing by having each future in the tree poll its children in its own poll implementation. Once you stop polling the "root" future of the tree, all the others in the tree will, by extension, no longer get polled. You don't actually need any async Drop implementation for this, because there's no special logic needed on drop; you just stop polling, which happens automatically since you can't poll something that's been dropped anyhow.
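A tiny illustration of that tree-of-polls idea with the futures crate: the children below are only ever polled from inside the parent, so dropping the parent means they never run again:

```rust
async fn child(id: u32) -> u32 {
    id * 10
}

// `join!` polls both children from within `parent`'s own poll, so the
// whole tree stops as soon as `parent` itself stops being polled.
async fn parent() {
    let (a, b) = futures::join!(child(1), child(2));
    println!("{a} {b}");
}

fn main() {
    // Dropping the root future: no child is ever polled.
    drop(parent());

    // Driving the root to completion polls the whole tree.
    futures::executor::block_on(parent());
}
```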
Regular futures don't behave like this. They're passive, and can't force their owner to keep polling them, and can't prevent their owner from dropping them.
When a Future is dropped, it has only one chance to immediately do something before all of its memory is obliterated, and all of its inputs are invalidated. In practice, this requires immediately aborting all the work, as doing anything else would be either impossible (risking use-after-free bugs), or require special workarounds (e.g. io_uring can't work with the bare Future API, and requires an external drop-surviving buffer pool).
In her presentation on async cancellation in Rust, she spoke pretty extensively on cancel safety and correctness, and I would recommend giving it a watch or read.
https://sunshowers.io/posts/cancelling-async-rust/
It might be quite small, as I found for Maps (if we're putting 5 things in the map then we can just do the very dumbest thing which I call `VecMap` and that's fine, but if it's 25 things the VecMap is a little worse than any actual hash table, and if it's 100 things the VecMap is laughably terrible) but it might be quite large, even say 10x number of cores might be just fine without stealing.
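A minimal sketch of what such a VecMap could look like (the name comes from the comment; this is an illustrative linear-scan implementation, not a published crate):

```rust
// Linear-scan "map": fine at ~5 entries, laughably terrible at 100.
struct VecMap<K, V> {
    entries: Vec<(K, V)>,
}

impl<K: PartialEq, V> VecMap<K, V> {
    fn new() -> Self {
        VecMap { entries: Vec::new() }
    }

    fn insert(&mut self, key: K, value: V) {
        match self.entries.iter().position(|(k, _)| *k == key) {
            Some(i) => self.entries[i].1 = value, // overwrite existing key
            None => self.entries.push((key, value)),
        }
    }

    fn get(&self, key: &K) -> Option<&V> {
        self.entries.iter().find(|(k, _)| k == key).map(|(_, v)| v)
    }
}

fn main() {
    let mut m = VecMap::new();
    m.insert("a", 1);
    m.insert("a", 2);
    assert_eq!(m.get(&"a"), Some(&2));
}
```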
For example, you have lots of concurrent tasks, and they're waiting on slow external IO. Each task needs its IO to finish so you can make forward progress. At any given time, it's unlikely more than a couple of tasks can make forward progress, due to waiting on that IO. So most of the time, you end up checking on tasks that aren't ready to do anything, because the IO isn't done. So you're waiting on them to be ready.
Now, if you can do that "waiting" (really, checking if they're ready for work or not) on them faster, you can spend more of your machine time on whatever actual work _is_ ready to be done, rather than on checking which tasks are ready for work.
Threads make sense in the opposite scenario: when you have lots of work that _is_ ready, and you just need to chew through it as fast as possible. E.g. numbers to crunch, data to search through, etc.
I'd love if someone has a more illustrative metaphor to explain this, this is just how I think about it.
[0] https://en.wikipedia.org/wiki/Scheduler_activations, https://dl.acm.org/doi/10.1145/121132.121151 | Akin to thread-per-core
[1] Stackless coroutines and event-driven programming
[2] User-level virtual/green threads today, plus responsiveness to blocking I/O events
But the most obvious difference, and maybe most important to a user, was left unstated: the adoption and ecosystem such as tooling, libraries, and community.
Ada may have a storied history of success in aerospace, life safety, etc., and it might have an okay standard library, which is fine for AoC problems and maybe embedded bit-poking, in which case it makes sense to compare it to Rust. But if you're going to sit down for a real-world project, i.e. a distributed system or OS component, interfacing with modern data formats, protocols, IDEs, people, etc. is going to influence your choice on day one.
This is part of the effort of Ferrocene to provide a safety-certified compiler. And it is already available now.
Specs for other languages are also for a specific version/snapshot.
It's also a specific version of a compiler that gets certified, not a compiler in perpetuity, no matter what language.
Usually the standard comes first, compiler vendors implement it, and between releases of the spec the language is fixed. Using Ada as an example, there was Ada 95 and Ada 2005, but between 95 and 2005 there was only Ada 95. There was no in-progress version, the compiler vendors weren't making changes to the language, and an Ada 95 compiler today compiles the same language as an Ada 95 compiler 30 years ago.
Looking at the changelog for the Rust spec (https://rust-lang.github.io/fls/changelog.html), it's just the changelog of the language as each compiler version is released, and there doesn't seem to be any intention of supporting previous versions. Would there be any point in an alternative compiler implementing "1.77.0" of the Rust spec?
And the alternative compiler implementation can't start implementing a compiler for version n+1 of the spec until that version of rustc is released because "the spec" is just "whatever rustc does", making the spec kind of pointless.
This is not how C or C++ were standardized, nor most computer standards in the first place. Usually, vendors implement something, and then they come together to agree upon a standard second.
When updating standards, sometimes things are put in the standard before any implementations, but that's generally considered an antipattern for larger designs. You want real-world evaluation of the usefulness of something before it's been standardized.
In rust, there is currently only one compiler so it seems like there's no problem
What the GP is suggesting is that the rust compiler should be written and then a spec should be codified after the fact (I guess just for fun?).
You have to squint fairly hard to get here for any of the major C++ compilers.
I guess maybe someone like Sean Baxter will know the extent to which, in theory, you can discern the guts of C++ by reading the ISO document (or, more practically, the freely available PDF drafts; essentially nobody reads the actual document, no, not even Microsoft bothers to spend $$$ to buy an essentially identical PDF).
My guess would be that it's at least helpful, but nowhere close to enough.
And that's ignoring the fact that the popular implementations do not implement any particular ISO standard; in each case their target is just C++ in some more general sense. They might offer "version" switches, but they explicitly do not promise to implement the actual versions of the ISO C++ programming language standard denoted by those versions.
(Already mentioned) CakeML would be another example, together maybe with its Pancake sibling.
Also: WebAssembly!
Another, https://cakeml.org/
[1]: https://en.wikipedia.org/wiki/ATS_(programming_language)
[0] https://apps.dtic.mil/sti/tr/pdf/ADA249418.pdf
[1] https://ntrs.nasa.gov/citations/19960000030
In ADA you can subtype the index type of an array, i.e. constrain the range of allowed values.
EDIT: Seems I'm getting downvoted, do people not know that ADA is not the name of the programming language? It's Ada, as in Ada Lovelace, whose name was also not generally SHOUTED as ADA.
93 more comments available on Hacker News