Zig Feels More Practical Than Rust for Real-World CLI Tools
Posted 3 months ago · Active 3 months ago
Source: dayvster.com
Key topics
Rust
Zig
Programming Languages
Memory Safety
CLI Tools
The article compares Rust and Zig for building CLI tools, sparking a debate about the trade-offs between memory safety and developer ergonomics.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment after 25m · Peak period: 132 comments in 0-12h · Avg per period: 26.7
Based on 160 loaded comments
Key moments
- Story posted: Sep 23, 2025 at 8:56 AM EDT (3 months ago)
- First comment: Sep 23, 2025 at 9:21 AM EDT (25m after posting)
- Peak activity: 132 comments in 0-12h (hottest window of the conversation)
- Latest activity: Oct 1, 2025 at 7:46 AM EDT (3 months ago)
The words of every C programmer who created a CVE.
Also, https://github.com/ghostty-org/ghostty/issues?q=segfault
All jokes aside, it doesn’t actually take much discipline to write a small utility that stays memory safe. If you keep allocations simple, check your returns, and clean up properly, you can avoid most pitfalls. The real challenge shows up when the code grows, when inputs are hostile, or when the software has to run for years under every possible edge case. That’s where “just be careful” stops working, and why tools, fuzzing, and safer languages exist.
- Every C programmer I've talked to
No it's not. If it were that easy, C wouldn't have this many memory-related issues...
Avoiding all memory management mistakes is not easy, and the bigger the codebase becomes, the faster the chance of disaster grows.
C and Zig aren't the same. I would wager that syntax differences between languages can help you see things in one language that are much harder to see in another. I'm not saying that Zig or C are good or bad for this, or that one is better than the other in terms of the ease of seeing memory problems with your eyes, I'm just saying that I would bet that there's some syntax that could be employed which make memory usage much more clear to the developer, instead of requiring that the developer keep track of these things in their mind.
Even something that required you to manually annotate each function, so that some metaprogram running at compile time could check that nothing is out of place, could help detect memory leaks, I would think. Or something; that's just an idea. There's a whole metaprogramming world of possibilities here that Zig allows that C simply doesn't. I think there's a lot of room for tooling like this to detect problems without forcing you to contort yourself into strange shapes simply to make the compiler happy.
On both your average days and your bad days.
Over the 40 to 50 years that your career lasts.
I guess those kind of developers exist, but I know that I'm not one of them.
I am not a computer scientist (I have no degree in CS), but it sure seems like it would be possible to determine statically whether a reference could be misused in code as written, without requiring that you be the Rust Borrow Checker, if the language were designed with those kinds of things in mind from the beginning.
Probably both. They're words of hubris.
C and Zig give the appearance of practicality because they allow you to take shortcuts under the assumption that you know what you're doing, whereas Rust does not; it forces you to confront the edge cases in terms of ownership and provenance and lifetime and even some aspects of concurrency right away, and won't compile until you've handled them all.
And it's VERY frustrating when you're first starting because it can feel so needlessly bureaucratic.
But then after a while it clicks: Ownership is HARD. Lifetimes are HARD. And suddenly, when going back to C and friends, you find yourself thinking about these things at the design phase rather than at the debugging phase - and you write better, safer code because of it.
And then when you go back to Rust again, you breathe a sigh of relief because you know that these insidious things are impossible to screw up.
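A minimal, contrived sketch (not from the article) of the kind of edge case Rust forces you to confront up front: the compiler rejects aliased mutation, so the conflict has to be resolved at design time rather than found in a debugger.

```rust
fn main() {
    let mut scores = vec![10, 20, 30];

    // Holding a reference into the Vec...
    let first = &scores[0];

    // ...while also mutating it would be rejected by the borrow checker:
    // scores.push(40); // error[E0502]: cannot borrow `scores` as mutable
    //                  // because it is also borrowed as immutable

    println!("first = {first}");

    // Once the immutable borrow ends, mutation is fine again.
    scores.push(40);
    println!("len = {}", scores.len());
}
```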
The question is, then, what price in language complexity are you willing to pay to completely avoid the 8th most dangerous cause of vulnerabilities as opposed to reducing them but not eliminating them? Zig makes it easier to find UAF than in C, and not only that, but the danger of UAF exploitability can be reduced even further in the general case rather easily (https://www.cl.cam.ac.uk/~tmj32/papers/docs/ainsworth20-sp.p...). So it is certainly true that memory unsafety is a cause of dangerous vulnerabilities, but it is the spatial unsafety that's the dominant factor here, and Zig eliminates that. So if you believe (rightly, IMO) that a language should make sure to reduce common causes of dangerous vulnerabilities (as long as the price is right), then Zig does exactly that!
I don't think it's unreasonable to find the cost of Rust justified to eliminate the 8th most dangerous cause of vulnerabilities, but I think it's also not unreasonable to prefer not to pay it.
[1]: https://cwe.mitre.org/top25/archive/2024/2024_cwe_top25.html
Second, I don't care if my bank card details leak because of CSRF or because of a bug in Chromium. Now, to be fair, the list of dangerous vulnerabilities weighs things by number of incidents and not by number of users affected, and it is certainly true that more people use Chrome than those who use a particular website vulnerable to CSRF. But things aren't so simple, there, too. For example, I work on the JVM, which is largely written in C++, and I can guarantee that many more people are affected by non-memory-safety vulnerabilities in Java programs than by memory-safety vulnerabilities in the JVM.
Anyway, the point is that the overall danger and incidence of vulnerabilities - and therefore the justified cost in addressing all the different factors involved - is much more complicated than "memory unsafety bad". Yes, it's bad, but different kinds of memory unsafety are bad to different degrees, and the harm can be controlled separately from the cause.
Now, I think it's obvious that even Rust fans understand there's a complex cost/benefit game here, because most software today is already written in memory-safe languages, and the very reason someone would want to use a language like Rust in the first place is because they recognise that sometimes the cost of other memory-safe languages isn't worth it, despite the importance of memory safety. If both spatial and temporal safety were always justified at any reasonable cost (that is happily paid by most software already), then there would be no reason for Rust to exist. Once you recognise that, you have to also recognise that what Rust offers must be subject to the same cost/benefit analysis that is used to justify it in the first place. And it shouldn't be surprising that the outcome would be similar: sometimes the cost may be justified, sometimes it may not be.
Sure, but just by virtue of what these languages are used for, almost all CSRF vulnerabilities are not in code written in C, C++, Rust, or Zig. So if I’m targeting that space, why would I care that some Django app or whatever has a CSRF when analyzing what vulnerabilities are important to prevent for my potential Zig project?
You’re right that overall danger and incidence of vulnerabilities matter - but they matter for the actual use-case you want to use the language for. The Linux kernel for example has exploitable TOCTOU vulnerabilities at a much higher rate than most software - why would they care that TOCTOU vulnerabilities are rare in software overall when deciding what complexity to accept to reduce them?
The rate of vulnerabilities obviously can't be zero, but it also doesn't need to be. It needs to be low enough for the existing coping processes to work well, and those processes need to be applied anyway. So really the question is always about cost: what's the cheapest way for me to get to a desired vulnerability rate?
Which brings me to why I may prefer a low-level language that doesn't prevent UAF: because the language that does present UAF has a cost that is not worth it for me, either because UAF vulnerabilities are not a major risk for my application or because I have cheaper ways to prevent them (without necessarily eliminating the possibility of UAF itself), such as with one of the modern pointer-tagging techniques.
To your point about V8 and CPython: that calculus makes sense if I’m Microsoft and I could spend time/money on memory safety in CPython or on making CSRF in whatever Python library I use harder. My understanding is that the proportions of the budget for different areas of vulnerability research at any tech giant would in fact vindicate this logic.
However, if I’m on the V8 team or a CPython contributor and I’m trying to reduce vulnerabilities, I don’t have any levers to pull for CSRF or SQL injection without just instead working on a totally different project that happens to be built on the relevant language. If my day job is to reduce vulnerabilities in V8 itself, those would be totally out of scope and everybody would look at my like I’m crazy if I brought it up in a meeting.
Similarly, if I’m choosing a language to (re)write my software in and Zig is on the table, I am probably not super worried about CSRF and SQL injection - most likely I’m not writing an API accessed by a browser or interfacing with a SQL database at all! Also I have faith that almost all developers who know what Zig is in the first place would not write code with a SQL injection vulnerability in any language. That those are still on the top ten list is a condemnation of our entire species, in my book.
Maybe (and I'll return to that later), but even if the job were to specifically reduce vulnerabilities in V8, it may not be the case that focusing on UAF is the best way to go, and even if it were, it doesn't mean that eliminating UAF altogether is the best way to reduce UAF vulnerabilities. More generally, memory safety => fewer vulnerabilities doesn't mean fewer vulnerabilities => memory safety.
When some problem is a huge cause of exploitable vulnerabilities and eliminating it is cheap - as in the case of spatial memory safety - it's pretty easy to argue that eliminating it is sensible. But when it's not as big a cause, when the exploits could be prevented in other ways, and when the cost of eliminating the problem at the source is high, it's not so clear cut that that's the best way to go.
The costs involved could actually increase vulnerabilities overall. A more complex language could have negative effects on correctness (and so on security) in some pretty obvious ways: longer build times could mean less testing; less obvious code could mean more difficult reviews.
But I would say that there's even a problem with your premise about "the job". The more common vulnerabilities are in JS, the less value there is in reducing them in V8, as the relative benefit to your users will be smaller. If JS vulnerabilities are relatively common, there could, perhaps, be more value to V8 users in improving V8's performance than in reducing its vulnerabilities.
BTW, this scenario isn't so hypothetical for me, as I work on the Java platform, and I very much prefer spending my time on trying to reduce injection vulnerabilities in Java than on chasing down memory-safety-related vulnerabilities in HotSpot (because there's more security value to our users in the former than in the latter).
I think Zig is interesting from a programming-language design point of view, but I also think it's interesting from a product design point of view in that it isn't so laser-focused on one thing. It offers spatial memory safety cheaply, which is good for security, but it also offers a much simpler language than C++ (while being just as expressive) and fast build times, which could improve productivity [1], as well as excellent cross-building. So it has something for everyone (well, at least people who may care about different things).
[1]: These could also have a positive effect on correctness, which I hinted at before, but I'm trying to be careful about making positive claims on that front, because if there's anything I've learnt in the field of software correctness, it's that things are very complicated, and it's hard to know how to best achieve correctness. Even the biggest names in the field have made some big, wrong predictions.
That's a good example and I agree with you there. I think the difference with V8 though is twofold:
1. Nobody runs fully untrusted code on HotSpot today and expects it to stop anybody from doing anything. For browser JavaScript engines, of course the expectation is that the engine (and the browser built on it) are highly resistant to software sandbox escapes. A HotSpot RCE that requires a code construction nobody would actually write is usually unexploitable - if you can control the code the JVM runs, you already own the process. A JavaScript sandbox escape is in most cases a valuable part of an exploit chain for the browser.
2. Even with Google's leverage on the JS and web standardization processes, they have very limited ability to ship user-visible security features and get them adopted. Trusted Types, which could take a big chunk out of very common XSS vulnerabilities and wasn't really controversial, was implemented in Safari 5 years after Chrome shipped it. Firefox still doesn't support it. Let's be super optimistic and say that after another 5 years it'll be as common as CSP is today - that's ten years to provide a broad security benefit.
These are of course special aspects of V8's security environment, but having a mountain of memory safe code you can tweak on top of your unsafe code like the JVM has is also unusual. The main reason I'd be unlikely to reach for Zig + temporal pointer auth on something I work on is that I don't write a lot of programs that can't be done in a normie GC-based memory safe programming language, but for which having to debug UAF and data race bugs (even if they crash cleanly!) is a suitable tradeoff for the Rust -> Zig drop in language complexity.
As to your last point, I certainly accept that that could be the case for some, but the opposite is also likely: if UAF is not an outsized cause of problems, then a simpler language that, hopefully, can make catching/debugging all bugs easier could be more attractive than one that could be tilting too much in favour of eliminating UAF possibly at the expense of other problems. My point being that it seems like there are fine reasons to prefer a Rust-like approach over a Zig-like approach and vice-versa in different situations, but we simply don't yet know enough to tell which one - if any - is universally or even more commonly superior to the other.
Languages like Modula-3 or Oberon would have taken over the world of systems programming.
Unfortunately there are too many non-believers for systems programming languages with automatic resource management to take off as they should.
Despite everything, kudos to Apple for pushing Swift no matter what, as it seems to be the only way for adoption.
Or those languages had other (possibly unrelated) problems that made them less attractive.
I think that in a high-economic-value, competitive activity such as software, it is tenuous to claim that something delivers a significant positive gain and at the same time that that gain is discarded for irrational reasons. I think at least one of these is likely to be false, i.e. either the gain wasn't so substantial or there were other, rational reasons to reject it.
Even for teams further toward the right of the bell curve, historical contingencies have a greater impact than they do in more grounded engineering fields. There are specialties of course, but nobody worries that when they hire a mechanical engineer someone needs to make sure the engineer can make designs with a particular brand of hex bolt because the last 5 years of the company’s designs all use that brand.
In fact, when we look at the long list of languages that have become super-popular and even moderately popular - including languages that have grown only to later shrink rather quickly - say Fortran, COBOL, C, C++, JavaScript, Java, PHP, Python, Ruby, C#, Kotlin, Go, TypeScript, we see languages that are either more specific to some domains or more general, some reducing switching costs (TS, Kotlin) some not, but we do see that the adoption rate is proportional to the language's peak market share, and once the appropriate niche is there (think of a possibly new/changed environment in biological evolution) we see very fast adoption, as we'd expect to see from a significant fitness increase.
So given that many languages displace incumbents or find their own niches, and that the successful ones do it quickly, I think that the most reasonable assumption to start with when a language isn't displaying that is that its benefits just aren't large enough in the current environment(s). If the pace of your language's adoption is slow, then: 1. the first culprit to look for is the product-market fit of the language, and 2. it's a bad sign for the language's future prospects.
I guess it's possible for something with a real but low advantage to spread slowly and reach a large market share eventually, but I don't think it's ever happened in programming languages, and there's the obvious risk of something else with a bigger advantage getting your market in the meantime.
Projects like Midori, Swift, Android, MaximeVM, GraalVM, only happen when someone high enough is willing to keep it going until it takes off.
When they fail, it is usually because management backing fell through, not because there wasn't a way to sort out whatever was the cause.
Even Java had enough backing from Sun, IBM, Oracle and BEA during its early uncertainty days outside being a language for applets, until it actually took off on server and mobile phones.
If Valhalla never makes it, will it be because Oracle gave up funding the team after all these years, or because it is impossible and it was a waste of money?
It's just pig-headedness by Apple, nothing more.
Instead, Swift was designed around the use-cases the team was familiar with, which would be C++ and compilers. Let's just say that the impedance mismatch between that and rapid UI development was pretty big. From C++ they also got the tolerance for glacial compile times (10-50 times as slow as compiling the corresponding Objective-C code).
In addition to that they did big experiments, such as value semantics backed by copy-on-write, which they thought was cool, but is – again – worthless in terms of the common problem domains.
Since then, the language's just been adding features at a speed even D can't match.
However, one thing the language REALLY GETS RIGHT, and which is very under-appreciated, is that they duplicated Objective-C's stability across API versions. ObjC is best in class when it comes to the ability to do forward and backwards compatibility, and Swift has some AWESOME work to make that work despite the difficulties.
Rust has a strict model that effectively prevents certain kinds of logic errors/bugs. So that's good (if you don't mind the price). But it doesn't address all kinds of other logic errors/bugs. It's like closing one door to the barn, but there are six more still wide open.
I see rust as an incremental improvement over C, which comes at quite a hefty price. Something like zig is also an incremental improvement over C, which also comes at a price, but it looks like a significantly smaller one.
(Anyway, I'm not sure zig is even the right comp for rust. There are various languages that provide memory safety, if that's your priority, which also generally allow dropping into "unsafe" -- typically C -- where performance is needed.)
Could you point at some language features that exist in other languages that Rust doesn't have that help with logic errors? Sum types + exhaustive pattern matching is one of the features that Rust does have that helps a lot to address logic errors. Immutability by default, syntactic salt on using globals, trait bounds, and explicit cloning of `Arc`s are things that also help address or highlight logic bugs. There are some high-level bugs that the language doesn't protect you from, but I know of no language that would. Things like path traversal bugs, where passing in `../../secret` lets an attacker access file contents that weren't intended by the developer.
The only feature that immediately comes to mind that Rust doesn't have that could help with correctness is constraining existing types, like specifying that an u8 value is only valid between 1 and 100. People are working on that feature under the name "pattern in types".
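A hedged sketch of the two features mentioned above: a sum type with exhaustive matching, and (since range-constrained types aren't in Rust yet) the usual newtype-with-validated-constructor workaround. All names here are made up for illustration.

```rust
// Sum type: the compiler forces every variant to be handled.
enum PaymentState {
    Pending,
    Settled { amount_cents: u64 },
    Failed(String),
}

fn describe(state: &PaymentState) -> String {
    // Removing any arm here is a compile error, not a latent logic bug.
    match state {
        PaymentState::Pending => "still pending".to_string(),
        PaymentState::Settled { amount_cents } => format!("settled: {amount_cents} cents"),
        PaymentState::Failed(reason) => format!("failed: {reason}"),
    }
}

// Stand-in for "a u8 valid between 1 and 100": a newtype that can only be
// constructed through a validating function.
struct Percent(u8);

impl Percent {
    fn new(value: u8) -> Option<Percent> {
        (1..=100).contains(&value).then_some(Percent(value))
    }
}

fn main() {
    println!("{}", describe(&PaymentState::Failed("card declined".into())));
    assert!(Percent::new(42).is_some());
    assert!(Percent::new(0).is_none());
}
```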
There's a complexity cost to adding features, and while each one may make sense on its own, in aggregate they may collectively burden the developer with too much complexity.
Go tries to hide the issues until data loss happens, because it has had trouble dealing with non-UTF-8 filenames: strings are UTF-8 by convention but not actually guaranteed to be, and some functions expect UTF-8 while others can work with any collection of bytes.
https://blog.habets.se/2025/07/Go-is-still-not-good.html
Or the Go time library which is a monster of special cases after they realized they needed monotonic clocks [1] but had to squeeze it into the existing API.
https://pkg.go.dev/time
Rust is on the other end of the spectrum: explicit over implicit, though you can still implicitly assume stuff works by panicking on these unexpected errors. That makes the problem easy to fix if you stumble upon it after years of added cruft and changing requirements.
[1]: https://github.com/golang/go/issues/12914
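For contrast, a small sketch of the explicit split Rust's standard library makes (which the Go `time` package had to retrofit): `Instant` is the monotonic clock, `SystemTime` is wall-clock time, and the fallible conversion has to be handled or explicitly panicked on.

```rust
use std::time::{Duration, Instant, SystemTime, UNIX_EPOCH};

fn main() {
    // Monotonic clock: suitable for measuring elapsed time, never goes backwards.
    let start = Instant::now();
    std::thread::sleep(Duration::from_millis(10));
    println!("elapsed: {:?}", start.elapsed());

    // Wall clock: can jump around, so even "time since epoch" returns a Result.
    let since_epoch = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is set before 1970"); // explicit choice to panic
    println!("unix time: {}s", since_epoch.as_secs());
}
```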
There is a significant crowd of people who don't necessarily love the borrow checker, but traits, proper generic types, and enums win them over from Go/Python. But yes, it takes significant maturity to recognize and know how to use types properly.
Much of Zig's user base seems to be people new to systems programming. Coming from a managed-code background, writing native code feels like being a powerful wizard casting fireball everywhere. After you write a few unsafe programs without anything going obviously wrong, you feel invincible. You start to think the people crowing about memory safety are doing it because they're stupid, or cowards, or both. You find it easy to allocate and deallocate when needed: "just" use defer, right? Therefore, if someone screws up, that's a personal fault. You're just better, right?
You know who used to think that way?
Doctors.
Ignaz Semmelweis famously discovered that hand-washing before childbirth decreased mortality by an order of magnitude. He died poor and locked in an asylum because doctors of the day were too proud to acknowledge the need to adopt safety measures. If a mandatory pre-surgical hand-washing step prevented complications, that implied the surgeon had a deficiency in cleanliness and diligence, right?
So they demonized Semmelweis and patients continued for decades to die needlessly. I'm sure that if those doctors had been on the internet today, they would say, as the Zig people do say, "skill issue".
It takes a lot of maturity to accept that even the most skilled practitioners of an art need safety measures.
What happens in those cases is that you drop a whole lot of disorganized dynamic and stack allocations and just handle them in a batch. So in all cases where the problem is tracking temporary objects, there's no need to track ownership and such. It's a complete non-problem.
So if you're writing code in domains where the majority of effort to do manual memory management is tracking temporary allocations, then in those cases you can't really meaningfully say that because Rust is safer than a corresponding malloc/free program in C/C++ it's also safer than the C3/Jai/Odin/Zig solution using arenas.
And I think a lot of the disagreement comes from this. Rust devs often don't think that switching the use of the allocator matters, so they argue against what's essentially a strawman built from assumed malloc/free based memory patterns that are incorrect.
ON THE OTHER HAND, there are cases where this isn't true and you need to do things like safely passing data back and forth between threads. Arenas don't help with that at all. So in those cases I think everyone would agree that Rust or Java or Go is much safer.
So the difference between domains where the former or the latter dominates needs to be recognised, or there can't possibly be any mutual understanding.
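A rough Rust analogue of the arena pattern described above, using the bumpalo crate (an assumption on my part; the comment is about languages like Zig/Odin/Jai/C3 where this style is idiomatic): temporaries are allocated into an arena and released in one batch when it is dropped, so individual ownership of each temporary never needs tracking.

```rust
use bumpalo::Bump; // assumed dependency: bumpalo

// Hypothetical request-scoped work: all temporaries live in one arena.
fn handle_request(input: &str) -> usize {
    let arena = Bump::new();

    // Allocate as many temporaries as we like; none need individual frees.
    let upper = arena.alloc_str(&input.to_uppercase());
    let words: Vec<&str> = upper.split_whitespace().collect();

    words.len()
    // `arena` is dropped here: every allocation above is released in one shot.
}

fn main() {
    println!("{}", handle_request("zig odin jai c3"));
}
```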
http://www.3kranger.com/HP3000/mpeix/doc3k/B3150290023.10194...
What is old is new again.
I feel like I am most interested in Nim, given how easy it was to pick up and how interoperable it is with C. It has a garbage collector and lets you change it, which seems great for someone like me who doesn't want to worry about manual memory management right now; if it becomes a bottleneck later, I can at least fix it without worrying too much..
Out of all of them, from what little I know and my very superficial knowledge, Odin seems the most appealing to me. Its primary use case, from what I know, is game development, and I feel like that could easily pivot into native desktop application development. I was tempted to make a couple of those in Odin in the past but never found the time.
Nim, I like the concept and the idea of, but the Python-like syntax just irks me. Haha, I can't seem to get into languages where indentation replaces brackets.
But the GC part of it is pretty neat, have you checked Go yet?
I haven't really looked into odin except joining their discord and asking them some questions.
It seems that, aside from some of the syntax, it is sort of different from Golang under the hood, as compared to V-lang, which is massively inspired by Golang.
After reading the HN post about SQLite, which recommended using SQLite as an odt alternative or something similar (which I agreed with), I thought of creating an app in Flutter similar to LocalSend. Except Flutter only supports C-esque interop, and it would've been weird to take Golang, pass it through C, and then through Flutter or something, so I gave up...
I thought that Odin could compile to C and I could use that, but it turns out that Odin doesn't really compile to C, as compared to Nim and V-lang, which do compile to C.
I think that Nim and V-lang are the best ways to write an app like that with Flutter, though, and I am now somewhat curious what you guys think would be the best way of writing highly portable apps, ideally with a dev-ex similar to Golang..
I have actually thought about using something like Godot for this project too, and seeing if Godot supports something like Golang or TypeScript or anything really. Idk, I was just messing around and having a bit of fun, lol, I think.
But I like Nim in the sense that I sometimes feel in Golang that I can't change its GC, and although I do know that for most things it wouldn't be a deal-breaker,
I still sometimes feel like I should have some freedom to add memory management later without restarting from scratch or something, y'know?
Golang is absolutely goated. This was why I also recommended V-lang: V-lang is really similar to Golang except it can have memory management...
They themselves say on the website that, IIRC, if you know Golang, you know 70% of V-lang.
I genuinely prefer Golang over everything, but I still like Nim/V-lang too as fun languages, even though I feel like their ecosystem isn't that good; I know that yes, they can interop with C, but still...
We don't need yet another language with manual memory management in the 21st century, and V doesn't look like it would ever be that relevant.
V is also similar to golang in syntax, something that I definitely admire tbh.
I am interested in Nim and V more, tbh, as compared to D-lang.
In fact, I was going to omit D-lang from my comment, but I know that those folks are up to something great too and I will try to look into them more. Nim definitely piques my interest as a production-ready-ish language, imo, as compared to V-lang or even D-lang.
I think people prefer what's familiar to them, and Swift definitely looks closer to existing C++ to me, and I believe has multiple people from the C++ WG working on it now as well, supposedly after getting fed up with the lack of language progress on C++.
The most recent versions gained a lot in the way of cross-platform availability, but the lack of a native UI framework and its association with Apple seem to put off a lot of people from even trying it.
I wish it was a lot more popular outside of the Apple ecosystem.
https://docs.swift.org/swift-book/documentation/the-swift-pr...
https://swift.org/documentation/cxx-interop/
https://swift.org/blog/swift-everywhere-windows-interop/
Seasoned Rust coders don’t spend time fighting the borrow checker - their code is already written in a way that just works. Once you’ve been using Rust for a while, you don’t have to “restructure” your code to please the borrow checker, because you’ve already thought about “oh, these two variables need to be mutated concurrently, so I’ll store them separately”.
The “object soup” is a particular approach that won’t work well in Rust, but it’s not a fundamentally easier approach than the alternatives, outside of familiarity.
My experience is that what makes your statement true is that _seasoned_ Rust developers just sprinkle `Arc` all over the place, thus effectively switching to automatic garbage collection. Because 1) statically checked memory management is too restrictive for most kinds of non-trivial data structures, and 2) the hoops of lifetimes you have to jump through to please the static checker whenever you start doing anything non-trivial are just above human comprehension level.
Do you tend to use a lot of Arenas?
The first is a fairly generic input -> transform -> output. This is your generic request handler for instance. You receive a payload, run some transform on that (and maybe a DB request) and then produce a response.
In this model, Arc is very fitting for some shared (im)mutable state. Like DB connections, configuration and so on.
The second pattern is something like: state + input -> transform -> new state. E.g. you're mutating your app state based on some input. This fits stuff like games, but also retained UIs, programming language interpreters and so on.
Using Arcs here muddles the ownership. The gamedev ecosystem has found a way to manage this by employing ECS, and while it can be overkill, the base DOD principles can still be very helpful.
Treat your data as what it is; data. Use indices/keys instead of pointers to represent relations. Keep it simple.
Arenas can definitely be a part of that solution.
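A small sketch of the "indices instead of pointers" idea from that second pattern: entities refer to each other by index into a flat Vec, so there is shared-looking structure without Rc/Arc or lifetime gymnastics. The names here are made up for illustration.

```rust
// Relations are plain indices into `units`, not pointers between them.
struct Unit {
    hp: i32,
    target: Option<usize>, // index of another unit, if any
}

struct World {
    units: Vec<Unit>,
}

impl World {
    // Mutating one unit based on another: no aliasing problem, because the
    // relation is just a copied-out index at the point of use.
    fn attack(&mut self, attacker: usize, damage: i32) {
        let target = self.units[attacker].target;
        if let Some(t) = target {
            self.units[t].hp -= damage;
        }
    }
}

fn main() {
    let mut world = World {
        units: vec![
            Unit { hp: 100, target: Some(1) },
            Unit { hp: 80, target: None },
        ],
    };
    world.attack(0, 25);
    println!("unit 1 hp = {}", world.units[1].hp);
}
```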
Even then, I’d agree that while Arc is used in lots of places in work stealing runtimes, I disagree that it’s used everywhere or that you can really do anything else if you want to leverage all your cores with minimum effort and not having to build your application specialized to deal with that.
I don't care that they have a good work-stealing event loop, I care that it's the default and their APIs all expect the work-stealing implementation and unnecessarily constrain cases where you don't use that implementation. It's frustrating and I go out of my way to avoid Tokio because of it.
Edit: the issues are in Axum, not the core Tokio API. Other libs have this problem too due to aforementioned defaults.
At $dayjob we have built a large codebase (high-throughput message broker) using the thread-per-core model with tokio (ie one worker thread per CPU, pinned to that CPU, driving a single-threaded tokio Runtime) and have not had any problems. Much of our async code is !Send or !Sync (Rc, RefCell, etc) precisely because we want it to benefit from not needing to run under the default tokio multi-threaded runtime.
We don't use many external libs for async though, which is what seems to be the source of your problems. Mostly just tokio and futures-* crates.
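A minimal sketch of the setup described above (CPU pinning and the actual broker logic omitted): a current-thread tokio runtime driving !Send tasks via a LocalSet, so Rc/RefCell are fine inside.

```rust
use std::rc::Rc;
use tokio::task::LocalSet;

fn main() {
    // Single-threaded runtime: no work stealing, so !Send types are fine.
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();

    let local = LocalSet::new();
    local.block_on(&rt, async {
        let shared = Rc::new(42); // Rc is !Send; ok on a LocalSet
        let shared2 = Rc::clone(&shared);
        tokio::task::spawn_local(async move {
            // Note the 'static bound still applies, hence the Rc clone.
            println!("value = {}", shared2);
        })
        .await
        .unwrap();
    });
}
```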
But in this case, the data hiding behind the Arc is almost never mutable. It's typically some shared, read-only information that needs to live until all the concurrent workers are done using it. So this is very easy to reason about: Stick a single chunk of read-only data behind the reference count, and let it get reclaimed when the final worker disappears.
There are some cases where someone new to Rust will try to use Arc as a solution to every problem, but I haven't seen much code like this outside of reviewing very junior Rust developers' code.
In some application architectures Arc is a common feature and it's fine. Saying that seasoned Rust developers rarely use Arc isn't true, because some types of code require shared references with Arc. There is nothing wrong with Arc when used properly.
I think this is less confusing to people who came from modern C++ and understand how modern C++ features like shared_ptr work and when to use them. For people coming from garbage collected languages it's more tempting to reach for the Arc types to try to write code as if it was garbage collected.
No, this couldn't be further from the truth.
If you use Rust for web server backend code then yes, you see `Arc`s everywhere. Otherwise their use is pretty rare, even in large projects. Rust is somewhat unique in that regard, because most Rust code that is written is not really a web backend code.
To some extent this is unavoidable. Non-'static lifetimes correspond (roughly) to a location on the program stack. Since a Future that suspends can't reasonably stay on the stack it can't have a lifetime other than 'static. Once it has to be 'static, it can't borrow anything (that's not itself 'static), so you either have to Copy your data or Rc/Arc it. This, btw, is why even tokio's spawn_local has a 'static bound on the Future.
It would be nice if it were ergonomic for library authors to push the decision about whether to use Rc<RefCell<T>> or Arc<Mutex<T>> (which are non-threadsafe and threadsafe variants of the same underlying concept) to the library consumer.
https://docs.rs/smol/latest/smol/fn.block_on.html
smol's spawn also requires the Future to be 'static (https://docs.rs/smol/latest/smol/fn.spawn.html), while tokio's local block_on also does not require 'static or Send + Sync (https://docs.rs/tokio/latest/tokio/task/struct.LocalSet.html...).
- 151 instances of "Arc<" in Servo: https://github.com/search?q=repo%3Aservo%2Fservo+Arc%3C&type...
- 5 instances of "Arc<" in AWS SDK for Rust https://github.com/search?q=repo%3Arusoto%2Frusoto%20Arc%3C&...
- 0 instances for "Arc<" in LOC https://github.com/search?q=repo%3Acgag%2Floc%20Arc%3C&type=...
Plus the html processing needs to be Arc as well, so that tracks.
- 6 instances of "Rc<" in AWS SDK for Rust: https://github.com/search?q=repo%3Arusoto%2Frusoto+Rc%3C&typ...
- 0 instance for "Rc<" in LOC: https://github.com/search?q=repo%3Acgag%2Floc+Rc%3C&type=cod...
(Disclaimer: I don't know what these repos are except Servo).
Arc isn't really garbage collection. It's a reference-counted smart pointer, like C++'s shared_ptr.
If you drop an Arc and it's the last reference to the underlying object, it gets dropped deterministically.
Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner.
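A small illustration of that determinism (contrived type, not from the thread): the value's destructor runs at the exact point the last Arc clone goes away, not at some later collection pass.

```rust
use std::sync::Arc;

struct Connection(&'static str); // hypothetical resource

impl Drop for Connection {
    fn drop(&mut self) {
        // Runs exactly when the last Arc is dropped.
        println!("closing {}", self.0);
    }
}

fn main() {
    let conn = Arc::new(Connection("db-1"));
    let clone = Arc::clone(&conn);

    drop(conn);                  // one reference left, nothing printed yet
    println!("before last drop");
    drop(clone);                 // last reference: "closing db-1" prints here
    println!("after last drop");
}
```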
Large scale teams always get pointer ownership wrong.
Project Zero has enough examples.
No, this is a subset of garbage collection called tracing garbage collection. "Garbage collection" absolutely includes refcounting.
Chapter 5, https://gchandbook.org/contents.html
Other CS quality references can be provided with similar table of contents.
If you need a reference-counted garbage collector for more than a tiny minority of your code, then Rust was probably the wrong choice of language - use something that has a better (mark-and-sweep) garbage collector. Rust is good for places where you can almost always find a single owner, and you can use reference counting for the rare exception.
However, the difference between Arc and a Garbage Collector is that the Arc does the cleanup at a deterministic point (when the last Arc is dropped) whereas a Garbage Collector is a separate thing that comes along and collects garbage later.
> If you need a reference-counted garbage collector for more than a tiny minority of your code
The purpose of Arc isn't to have a garbage collector. It's to provide shared ownership.
There is no reason to avoid Rust if you have an architecture that requires shared ownership of something. These reductionist generalizations are not accurate.
I think a lot of new Rust developers are taught that Arc shouldn't be abused, but they internalize it as "Arc is bad and must be avoided", which isn't true.
That is the most common implementation, but that is still just an implementation detail. Garbage collectors can run deterministically which is what reference counting does.
> There is no reason to avoid Rust if you have an architecture that requires shared ownership of something.
Rust can be used for anything. However the goals are still something good for system programming. Systems programming implies some compromises which makes Rust not as good a choice for other types of programming. Nothing wrong with using it anyway (and often you have a mix and the overhead of multiple languages makes it worth using one even when another would be better for a small part of the problem)
> I think a lot of new Rust developers are taught that Arc shouldn't be abused, but they internalize it as "Arc is bad and must be avoided", which isn't true.
Arc has a place. However, in most places where you use it, a little design work could eliminate the need. If you don't understand what I'm talking about, then "Arc is bad and must be avoided" is better than putting Arc everywhere, even though that would work and is less effort in the short run (and for non-systems programming it might even be a good design).
As a rough approximation, if you're very heavy-handed with Arc then you probably shouldn't be using Rust for that project.
[0] The term "leak" can be a bit hard to pin down, but here I mean something like space which is allocated and which an ordinary developer would prefer to not have allocated.
However, I disagree with generalizations that you can judge the quality of code based on whether or not it uses a lot of Arc. You need to understand the architecture and what's being accomplished.
That wasn't really my point, but I disagree with your disagreement anyway ;) Yes, you don't want to over-generalize, but Arc has a lot of downsides, doesn't have a lot of upsides, and can usually be relatively easily avoided in lieu of something with a better set of tradeoffs. Heavy use isn't bad in its own right, but it's a strong signal suggestive of code needing some love and attention.
My point though was: if you are going to heavily use Arc, Rust isn't the most ergonomic language for the task, and while the value proposition of Rust is more apparent for other memory management techniques, the gap versus those more ergonomic choices is much narrower if you use Arc a lot. Maybe you have to (or want to) use Rust anyway for some reason, but it's usually a bad choice conditioned on that coding style.
Reference counting is a valid form of garbage collection. It is arguably the simplest form. https://en.wikipedia.org/wiki/Garbage_collection_(computer_s...
The other forms of GC are tracing followed by either sweeping or copying.
> If you drop an Arc and it's the last reference to the underlying object, it gets dropped deterministically.
Unless you have cycles, in which case the objects are not dropped. And then scanning for cyclic objects almost certainly takes place at a non-deterministic time, or never at all (and the memory is just leaked).
> Garbage collection generally refers to more complex systems that periodically identify and free unused objects in a less deterministic manner.
No. That's like saying "a car is a car; a vehicle is anything other than a car". No, GC encompasses reference counting, and GC can be deterministic or non-deterministic (asynchronous).
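A sketch of the cycle case mentioned above (using Rc for brevity; Arc behaves the same way): two strongly referenced nodes pointing at each other never hit refcount zero, and the usual fix is to make one direction a Weak.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    name: &'static str,
    // Strong edge in one direction...
    child: RefCell<Option<Rc<Node>>>,
    // ...weak edge back, so the cycle doesn't keep both alive forever.
    parent: RefCell<Weak<Node>>,
}

impl Drop for Node {
    fn drop(&mut self) {
        println!("dropping {}", self.name);
    }
}

fn main() {
    let parent = Rc::new(Node {
        name: "parent",
        child: RefCell::new(None),
        parent: RefCell::new(Weak::new()),
    });
    let child = Rc::new(Node {
        name: "child",
        child: RefCell::new(None),
        parent: RefCell::new(Rc::downgrade(&parent)),
    });
    *parent.child.borrow_mut() = Some(Rc::clone(&child));

    // With parent->child strong and child->parent weak, both "dropping ..."
    // lines print when main ends. Had both edges been strong Rc's, neither
    // destructor would ever run: the memory would simply be leaked.
}
```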
In c++ land this is very often called garbage collection too
How else would you safely share data in multi-threaded code? Which is the only reason to use Atomic reference counts.
I do find myself running into lifetime and borrow-checker issues much less these days when writing larger programs in rust. And while your comment is a bit cheeky, I think it gets at something real.
One of the implicit design mentalities that develops once you write rust for a while is a good understanding of where to apply the `UnsafeCell`-related types, which includes `Arc` but also `Rc` and `RefCell` and `Cell`. These all relate to inner mutability, and there are many situations where plopping in the right one of these effectively resolves some design requirement.
The other idiomatic thing that happens is that you implicitly begin structuring your abstract data layouts in terms of thunks of raw structured data and connections between them. This usually involves an indirection - i.e. you index into an array of things instead of holding a pointer to the thing.
Lastly, where lifetimes do get involved, you tend to have a prior idea of what thing they annotate. The example in the article is a good case study of that. The author is parsing a `.notes` file and building some index of it. The text of the `.notes` file is the obvious lifetime anchor here.
You would write your indexing logic with one lifetime 'src: `fn build_index<'src>(src: &'src str)`
Internally to the indexing code, references to 'src-annotated things can generally pass around freely as their lifetime converges after it.
Externally to the indexing code, you'd build a string of the notes text and pass a reference to that to the `build_index` function, as in the sketch below.
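A compilable skeleton of that pattern (the function name follows the comment; the internals are made up): the index borrows slices of the source text via one 'src lifetime, and the caller owns the String that anchors it.

```rust
// The index only borrows slices of the source text; 'src ties them together.
struct NoteIndex<'src> {
    headings: Vec<&'src str>,
}

fn build_index<'src>(src: &'src str) -> NoteIndex<'src> {
    let headings = src
        .lines()
        .filter(|line| line.starts_with('#'))
        .collect();
    NoteIndex { headings }
}

fn main() {
    // The caller owns the notes text; the index lives no longer than it.
    let notes = String::from("# todo\nbuy milk\n# ideas\nwrite a CLI in rust");
    let index = build_index(&notes);
    println!("{} headings", index.headings.len());
}
```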
For simple CLI programs, you tend not to really need anything more than this.
It gets more hairy if you're looking at constructing complex object graphs with complex intermediate state, partial construction of sub-states, etc. Keeping track of state that's valid at some level, while temporarily broken at another level, is where it gets really annoying with multiple nested lifetimes and careful annotation required.
But it was definitely a bit of a hair-pulling journey to get to my state of quasi-peace with Rust's borrow checker.
That doesn’t mean there aren’t other legitimate use cases, but “all the time” is not representative of the code I read or write, personally.
No true scotsman would ever be confused by the borrow checker.
I've seen plenty of Rust projects, open source and otherwise, that utilise Arc heavily or use clone and/or copy all over the place.
> No true scotsman would ever be confused by the borrow checker.
I'd take that No true scotsman over the "Real C programmers write code without CVE" for $5000.
Also you are strawmanning the argument. GP said "as a seasoned veteran of Rust you learn to think like the borrow checker", not "real Rust programmers were born with knowledge of the borrow checker".
Would you also say the same for a C++ project that uses shared_ptrs everywhere?
The clone quip doesn't work super well when comparing to C++ since that language "clones" data implicitly all the time
They are clearly just saying as you become more proficient with X, Y is less of a problem. Not that if the borrow checker is blocking you that you aren't a real Rust programmer.
Let's say you're trying to get into running. You express that you can't breathe well during the exercise and it's a miserable experience. One of your friends tells you that as an experienced runner they don't encounter that in the same way anymore, and running is thus more enjoyable. Do you start screeching No True Scotsman!! at them? I think not.
My beef is sometimes with the way traits are implemented, or with how AWS implemented Errors for their library, which is just pure madness.
Here is one piece of the problem:
My problem is that I should have something like
(http_status, reason), where http_status is a String or u16 and reason is an enum with a SomeError(String) structure. So essentially having a flat, meaningful structure instead of what we currently have. I do not have any mental model of the error structure of the AWS libs, and don't even know where to start to create that mental model. As a result I just try to turn everything into a string and return it all together, hoping that the real issue is there somewhere in that structure.
I think the AWS library error handling is way too complex for what it does, and one way we could improve that is if Rust had a great example of a binary (bin) project with, let's say, 2 layers of functions, showing how to organize your errors effectively.
Now do this for a lib project. Without this you end up with this hot mess. At least this is how I see it. If you have a suggestion for how I should return errors from a util.rs that has s3_list_objects() to my http handler, then I would love to hear what you have to say.
Thanks for your suggestions anyway! I am going to re-implement my error handling and see if it gives us more clarity with impl.
https://momori.dev/posts/rust-error-handling-thiserror-anyho...
burntsushi has a good writeup about their difference in usecase here:
https://www.reddit.com/r/rust/comments/1cnhy7d/whats_the_wis...
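Not the AWS SDK's actual types, just a hedged sketch of the flat (status, reason) shape asked for above, using thiserror: a util-layer error enum that the HTTP handler can map cleanly, without spelunking through nested SDK errors. StorageError and handle_list are hypothetical names.

```rust
use thiserror::Error;

// Flat, application-level error for the storage utilities (hypothetical).
#[derive(Debug, Error)]
enum StorageError {
    #[error("bucket not found: {0}")]
    BucketNotFound(String),
    #[error("access denied: {0}")]
    AccessDenied(String),
    #[error("unexpected SDK failure: {0}")]
    Sdk(String),
}

// util.rs: translate whatever the SDK returned into StorageError at the boundary.
fn s3_list_objects(bucket: &str) -> Result<Vec<String>, StorageError> {
    if bucket.is_empty() {
        return Err(StorageError::BucketNotFound(bucket.to_string()));
    }
    Ok(vec!["a.txt".to_string(), "b.txt".to_string()])
}

// HTTP handler: the flat enum maps directly onto (status, reason).
fn handle_list(bucket: &str) -> (u16, String) {
    match s3_list_objects(bucket) {
        Ok(keys) => (200, keys.join(",")),
        Err(StorageError::BucketNotFound(b)) => (404, format!("no such bucket {b}")),
        Err(StorageError::AccessDenied(reason)) => (403, reason),
        Err(e @ StorageError::Sdk(_)) => (500, e.to_string()),
    }
}

fn main() {
    println!("{:?}", handle_list(""));
    println!("{:?}", handle_list("my-bucket"));
}
```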
I really hope it’s an Rc/Arc that you’re cloning. Just deep cloning the value to get ownership is dangerous when you’re doing it blindly.
Yes, they know when to give up.
I like the fact that "fighting the borrow checker" is an idea from the period when the borrowck only understood purely lexical lifetimes. So you had to fight to explain why the thing you wrote, which is obviously correct, is in fact correct.
That was already ancient history by the time I learned Rust in 2021. But this idea that Rust means "fighting the borrow checker" took off anyway, even though the actual thing it's about was solved.
Now for many people it really is a significant adjustment to learn Rust if your background is exclusively say, Python, or C, or Javascript. For me it came very naturally and most people will not have that experience. But even if you're a C programmer who has never had most of this [gestures expansively] before you likely are not often "fighting the borrow checker". That diagnostic saying you can't make a pointer via a spurious mutable reference? Not the borrow checker. The warning about failing to use the result of a function? Not the borrow checker.
Now, "In Rust I had to read all the diagnostics to make my software compile" does sound less heroic than "battling with the borrow checker" but if that's really the situation maybe we need to come up with a braver way to express this.
But today's borrowck just goes duh, the reference y goes away right before the variable z is created and everything is cool.
These are called "Non-lexical lifetimes" because the lifetime is no longer strictly tied to a lexical scope - the curly braces in the program - but can have any necessary extent to make things correct.
Further improving the ability of the borrowck to see that what you're doing is fine is an ongoing piece of work for Rust and always will be†, but NLL was the lowest hanging fruit, most of the software I write would need tweaks to account for a strict lexical lifetime and it'd be very annoying when I know I am correct.
† Rice's theorem tells us we can either have a compiler where sometimes illegal borrows are allowed or a compiler where sometimes borrows that should be legal are forbidden (or both, which seems useless), but we cannot have one which is always right, so, Rust chooses the safe option and that means we're always going to be working to make it just a bit better.
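A hedged reconstruction of the kind of snippet being described (the original example isn't shown here; variable names chosen to match the comment's y and z): under strictly lexical rules the borrow `y` would have blocked the later mutable use, while today's NLL borrow checker sees that `y` is last used before `z` exists.

```rust
fn main() {
    let mut x = vec![1, 2, 3];

    let y = &x[0];       // shared borrow of x
    println!("y = {y}"); // last use of y: under NLL, the borrow ends here

    let z = &mut x;      // fine today; a strictly lexical checker would have
    z.push(4);           // rejected this, since y's scope lasts to the `}`
    println!("{z:?}");
}
```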
When I was learning rust (coming from python/java) it certainly felt like a battle because I "knew" the code was logically sound (at least in other languages) but it felt like I had to do all sorts of magic tricks to get it to compile. Since then I've adapted and understand better _why_ the compiler has those rules, but in the beginning it definitely felt like a fight and that the code _should_ work.
Even though Rust can end up with some ugly/crazy code, I love it overall because I can feel pretty safe that I'm not going to create hard-to-find memory errors.
Sure, I can (and do) write code that causes my (rust) app to crash, but so far they've all been super trivial errors to debug and fix.
I haven't tried Zig yet though. Does it give me all the same compile time memory usage guarantees?
"This chair is guaranteed not to collapse out from under you. It might be a little less comfortable and a little heavier, but most athletic people get used to that and don't even notice!"
Let's quote the article:
> I’d say as it currently stands Rust has poor developer ergonomics but produces memory safe software, whereas Zig has good developer ergonomics and allows me to produce memory safe software with a bit of discipline.
The Rust community should be upfront about this tradeoff - it's a universal tradeoff, that is: Safety is less ergonomic. It's true when you ride a skateboard with a helmet on, it's true when you program, it's true for sex.
Instead you see a lot of arguments with anecdotal or indeterminate language. "Most people [that I talk to] don't seem to have much trouble unless they're less experienced."
It's an amazing piece of rhetoric. In one sentence the ergonomic argument has been dismissed by denying subjectivity exists or matters and then implying that those who disagree are stupid.
I have some issues with Zig's design, especially around the lack of explicit interface/trait, but I agree with the post that it is a more practical language, just because of how much simpler its adoption is.
Edits mine.
I like to keep the spacetime topologies complete.
Constant = time atom of value.
Register = time sequence of values.
Stack = time hierarchy of values.
Heap = time graph of values.
237 more comments available on Hacker News