Memory Safety for Skeptics
Posted about 2 months ago · Active about 1 month ago
queue.acm.org · Tech · Story
heated / mixed · Debate · 80/100
Key topics
Memory Safety
Rust
Programming Languages
The article 'Memory Safety for Skeptics' discusses the importance of memory safety in programming, sparking a debate among commenters about the effectiveness of Rust and other languages in achieving memory safety.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 3h after posting
Peak period: 130 comments (Day 1)
Avg / period: 40
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- 01Story posted
Nov 10, 2025 at 1:23 PM EST
about 2 months ago
Step 01 - 02First comment
Nov 10, 2025 at 3:59 PM EST
3h after posting
Step 02 - 03Peak activity
130 comments in Day 1
Hottest window of the conversation
Step 03 - 04Latest activity
Nov 23, 2025 at 1:12 PM EST
about 1 month ago
Step 04
ID: 45879012 · Type: story · Last synced: 11/20/2025, 8:56:45 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
Doubtlessly, there is some of that going on. I doubt the risk compensation erases the benefit of memory safety, but let's not kid ourselves.
The latter can often be more easily exploited than the former, but the former can remain undetected longer, affect more components and installations, and be harder to reproduce in order to identify and resolve.
As an example of "more easily exploited": say that you have a web application that generates session cookies that are easy to forge, leading to session hijacking. Not much skill is needed to do that, compared to exploiting a memory safety problem (particularly if the platform has some layers of defense against it: scrambled address space, non-executable stacks, and whatnot).
Tell me more about memory safety, any time; just hold the Rust.
Rust skeptics are not memory safety skeptics. Hopefully, there are no memory safety skeptics, other than rhetorical strawmen.
So, I'd say that there is still some outreach to do on the topic.
On the other hand, you're absolutely right that Rust is only one of the many ways to get there.
And it's not even about tooling per se, since achieving safety "on their own" doesn't literally mean on their own; they are relying on tooling.
The true Scotsman's "on your own" means working in assembly language, in which you have to carefully manage even just calling a function and returning from it: not leaving stray arguments on the stack, saving and restoring all callee-saved registers that are used in the function, and so on.
Someone who thinks that their job is not to have segfaults is pretty green, obviously.
I've actually been trying to figure out if it's practical to write assembly but then use a proof assistant to guarantee that it maintains safety. Weirdly, it feels easier than C insofar as you don't have (C's) undefined behavior to contend with. But formal methods are a whole thing and I'm very, very new to them, so it remains to be seen whether this is actually a good idea, or awful and stupid :) (Edit: ...or merely impractical, for that matter)
E.g. Trying to reject string operations which write beyond the trailing \0. At assembly level, \0 is only one of many possible conventions for bounding a string. E.g. maybe free() is allowed to write past 0s. So you'd need to decide whether it's safe depending on context.
These C++ developers I'm mentioning are all at least senior (some of them staff+), which makes their remarks on segfaults scary, because clearly, they haven't realized that a segfault is the best case scenario, not the worst one. This means that they very much need a refresher course on memory safety and why it matters.
The fact that they assume that they're good enough to avoid memory errors without tooling, despite the fact that most of these errors are invisible and may remain invisible for years before being discovered, is a second red flag, because it strongly suggests that they misunderstand the difficulty.
Of course, the conversation between you and me is complicated by the fact that the same words "memory safety" could apply to the programming language/tooling or to the compiled binary.
[0]: https://materialize.com/blog/rust-concurrency-bug-unbounded-...
Furthermore, memory bugs still can be considered by teams as just another bug, so they might not get prioritised.
The only significant difference is that there’s lots of criminal energy targeting them, otherwise nobody would care much.
There are plenty of such skeptics. It's why Google, Microsoft, etc all needed to publish things like "70% of our vulnerabilities are memory-safety linked".
Even today, the increasing popularity of Zig indicates that memory-safety is not taken as a baseline.
I can also point to more extreme skeptics like Dan O'Dowd, who argue that memory safety is just about getting gud and you don't actually need language affordances.
Discussions about this topic would be a lot less heated if everyone was on the same page to start. They're not. It's taken advocates years of effort to get to the point where we can start talking about memory safety without immediate negative reactions and that process is still ongoing.
Same thing with tests: get the coverage you need to build confidence in your codebase, but don't tie yourself in knots trying to get that last 10%. It's not worth it. Create some manual and integration tests and move on.
I feel like type safety, memory safety, thread safety, etc. are all similar. Building a physics core to simulate the stability of your nuclear stockpile? The typing should be second to none. Building yet another CSV exporter? Who gives a damn.
Context is so damn important.
A logic bug in a library doesn't break unrelated code. It's meaningful to talk about the continued execution of a program in the presence of logic bugs. Logic bugs don't time travel. There are ways to exhaustively prove the absence of logic bugs, e.g. MC/DC or state space exploration, even if they're expensive.
None of these properties are necessarily true of memory safety. A single memory safety violation in a library can smash your stack, or allow your code to be exploited. You can't exhaustively defend against this with error handling either. In C and C++, it's not meaningful to even talk about continued execution in the presence of memory safety violations. In C++, memory safety violations can time travel. You typically can't prove the absence of memory safety violations, except in languages designed to allow that.
With appropriate caveats noted (Fil-C, etc), we don't have good ways to retrofit memory safety onto languages and programs built without it or good ways to exhaustively diagnose violations. All we can do is structurally eliminate the possibility of memory unsafety in any code that might ever be used in a context where it's an important property. That's most code.
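To make that contrast concrete, here is a minimal Rust sketch (mine, not from the thread): the logic bug returns a wrong value and nothing more, while the out-of-bounds write cannot even be expressed without an unsafe block, and once it runs, no further reasoning about the program holds.

```rust
// Sketch only: a contained logic bug next to a memory-safety violation.
fn average(a: i64, b: i64) -> i64 {
    (a + b) / 3 // logic bug: should divide by 2; the damage is limited to the wrong result
}

// The out-of-bounds write below can only be written inside `unsafe`;
// the equivalent line in C or C++ compiles without complaint.
#[allow(dead_code)]
fn smash(buf: &mut [u8; 4]) {
    unsafe {
        // Undefined behavior: writes one element past the end of `buf`.
        *buf.as_mut_ptr().add(4) = 0xFF;
    }
}

fn main() {
    println!("{}", average(2, 4)); // prints 2 instead of 3: wrong, but contained
    // Calling smash(&mut some_buf) would be UB: every statement after it is suspect.
}
```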
Memory bugs have a high risk of exploitability. That’s it; the threat model will tell the team what they need to focus on.
Nothing in software or engineering is absolute. Some projects have decided they need compile-time guarantees about memory safety, others are experimenting with it, many still use C or C++ and the Earth keeps spinning.
https://georgemauer.net/2017/10/07/csv-injection.html
The problem with memory-unsafe code is that it can have unexpected and unpredictable side effects, such as subtly altering the critical data you're exporting, or letting an attacker take control of your CSV exporter.
In other words, you need quite a lot of context to figure out that a memory bug in your CSV exporter won't be used for escalation. Figuring out that context, documenting it, and making sure that the context never changes for the lifetime of your code? That sounds like a much more complex proposition than using memory-safe tools in the first place.
I don't really see how that's a) scepticism of memory safety, or b) why it's not seen as a reasonable position. Just because someone doesn't think X is the most important thing ever doesn't mean they are skeptical of it, but rather that the person holding the 100% viewpoint is probably the one with the extreme position.
Materials to be consumed by engineers are often unsafe when misused: not just programs like toolchains with undefined behaviors, but in general. Steel beams buckle if overloaded. Transistors overheat and explode outside of their SOA (safe operating area).
When engineers make something for the public, their job is to combine the unsafe bits, but make something which is safe, even against casual misuse.
When engineers make something for other engineers, that is less so; engineers are expected to read the data sheet.
Even if you know what the data sheet says, it's easier said than done, especially when the tool gives you basically no help. You are just praying people will magically just git gud.
It's also not a meaningful concept within the C++ language standard written by the committee Herb Sutter chairs. Memory unsafety is undefined behavior (UB). C++ code containing UB has no defined semantics and is inherently incorrect, whether that's 1 violation or 1000.
Now, we can certainly discuss the practical ramifications of 95% vs 100%, but even here Herb's arguments have fallen notoriously flat. I'll link Sean Baxter's piece on why Herb's actual proposals fail to achieve even these more modest goals as an entry point [0]. No need to rehash the volumes of digital ink already spilled on this subject in this particular comment thread.
[0] https://www.circle-lang.org/draft-profiles.html
It's like saying that people skeptical of formal verification are actually skeptical of eliminating bugs. Most people are not skeptical of eliminating bugs, but they might be skeptical of extreme approaches to do so.
If you think that's impossibly difficult, you're starting to understand the basic problem. We already know from other languages that memory safety is possible. I've already linked one proposal to retrofit similar safety onto C++. The author of Fil-C is elsewhere in these comments arguing for another way.
It doesn't, because logic bugs generally have, or can be made to have, limited scope.
> And likewise in reverse - you can have a memory safety issue that doesn't result in a vulnerability or crash.
No you can't, not in standard C. Any case of memory unsafety is undefined behaviour, therefore a conforming implementation may implement it as a vulnerability and/or crash. (You can have a memory safety issue that happens to not result in a vulnerability or crash in the current version of gcc/clang, but that's a lot less reassuring)
It’s also trivial to discount, since the classical evaluation of bugs is based on actual impact, not some nebulous notions of scope or what-may-happen.
In practice, the program will crash most of the time. Maybe it will corrupt or erase some files. Maybe it will crash the Windows kernel and cause 10 billion in damages; just like a Rust panic would, by the way.
However, I imagine he would probably take into consideration the context. Who and what is the program for? And does the issue only reproduce if the program is misused? Does the program handle untrusted inputs? Or are there conceivable situations in which a user of the program could be duped by a bad actor into feeding the program a malicious input?
Imagine Sutter wrote a C compiler, and someone found a way to crash it. But the only way to reproduce that crash is via code that invokes undefined behavior. Why would Herb prioritize fixing that over other work?
Suppose the user insists that he's running the compiler as a CGI script, allowing unauthenticated visitors to their site to compile programs, making it a security issue.
How should Herb reasonably reply to that?
Because submitting code that invokes undefined behavior to one's compiler is a very normal thing that most working C developers do dozens of times per day, and something that a decent C compiler should behave reasonably in response to. (One could argue that crashing is acceptable, but erasing the developer's hard drive is not; by definition that means undefined behaviour in this situation is not acceptable).
> Suppose the user insists that he's running the compiler as a CGI script, allowing unauthenticated visitors to their site to compile programs, making it a security issue. How should Herb reasonably reply to that?
By fixing the bug?
What would you even call such a thing? "Compiler Explorer"?
I guess maybe if Herb had helped the guy who owned that web site, say, Matt Godbolt, to enable his "Syntax 2 for C++" compiler cppfront on that site, it would feel like Herb ought to take some responsibility, right?
Or maybe I am being unreasonable?
Herb is usually talking about the latter because of the nature of his role, like he does here [0]. I'm willing to give him the benefit of the doubt on his opinions about specific programs, because I disagree with his language opinions.
[0] https://herbsutter.com/2024/03/11/safety-in-context/
I wonder how you figure out when your codebase has reached 95% safety? Or is it OK to stop looking for memory unsafety when you hit, say, 92% safe?
One thing I've noticed when people make these arguments is that they tend to ignore the fact that most (all?) of these other safeties they're talking about depend on being able to reason about the behaviour of the program. But when you violate memory safety a common outcome is undefined behaviour, which has unpredictable effects on program behaviour.
These other safeties have a hard dependency on memory safety. If you don't have memory safety, you cannot guarantee these other safeties because you can no longer reason about the behaviour of the program.
For C/C++, memory safety is a retrofit to a language never designed for it. Many people, including me, have tried to improve the safety of C/C++ without breaking existing code. It's a painful thing to attempt. It doesn't seem to be possible to do it perfectly. Sutter is taking yet another crack at that problem, hoping to save C/C++ from becoming obsolete, or at least disfavored. Read his own words to see where he's coming from and where he is trying to go.
Any new language should be memory safe. Most of them since Java have been.
The trouble with thinking about this in terms of "95% safe" is that attackers are not random. They can aim at the 5%.
[1] https://herbsutter.com/2024/03/11/safety-in-context/
The most popular ones have not been necessarily. Notably Go, Zig, and Swift are not fully memory safe (I’ve heard this may have changed recently for swift).
(Yes I'm aware that Go literature 'encourages' the use of Channels and certain patterns)
Let's not let the perfect be the enemy of the good.
Even with all my criticism of many of Go's design decisions, I'd rather have more infrastructure code written in Go than in C and derived languages.
Or any of the supposed better C alternatives, with manual memory management and use after free.
That is not true of Rust.
It's not supposed to be, but Rust does have plenty of outstanding soundness bugs: https://github.com/rust-lang/rust/issues?q=state%3Aopen%20la...
Rust as intended is safe and sound unless you write u-n-s-a-f-e, but as implemented has a ways to go.
Also for what it’s worth Rust ports tend to perform faster according to Russinovich. Part of that may be second system syndrome although the more likely explanation is that the default std library is just better optimized (eg hash tables in Rust are significantly better than unordered_map)
All benchmarks between Ada, C, C++, and Rust (and others) should come down to a wash. A skilled programmer can find a difference but it won't be significant. A skilled C++ programmer wouldn't be using unordered_map so it is unfair to point out you can use something bad.
> A skilled C++ programmer wouldn't be using unordered_map so it is unfair to point out you can use something bad.
Pretending defaults don't matter is naive, especially in a language where adding third-party dependencies is so painful (and even without that, defaults matter).
There are other examples for other data structures.
C++ isn't my primary language. Pray tell - what's wrong with unordered_map, and what's the alternative?
It may be that they've implemented it differently in a way that is more performant but has fewer features. A "rust port" is not automatically or apparently a 1:1 comparison.
Better types like VecDeque<T>, better implementations of common ideas like sorting, even better fundamental concepts like providing the destructive move, or the owning Mutex by default.
Even the unassuming growable array type, Rust's Vec<T>, is just plain better than C++ std::vector<T>. It's not a huge difference and for many applications it won't matter, but that's the sort of pervasive quality difference I'm talking about and so I can well believe that in practice this ends up showing through.
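As one hedged illustration of that pervasive-quality point, Rust's standard Mutex owns the value it guards, so the only path to the data runs through the lock (a sketch of mine, not code from the thread):

```rust
use std::sync::Mutex;

// Sketch only: the Mutex owns the value it protects, unlike C++'s std::mutex,
// which is unrelated to whatever data it is supposed to guard.
struct Counter {
    count: Mutex<u64>,
}

fn bump(c: &Counter) {
    let mut guard = c.count.lock().unwrap(); // the data is only reachable through the guard
    *guard += 1;
} // guard dropped here; the lock is released

fn main() {
    let c = Counter { count: Mutex::new(0) };
    bump(&c);
    println!("{}", *c.count.lock().unwrap());
}
```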
---
1. Using 8 cores.
2. Single-threaded
But critics often seem to never engage with the actual data and just blindly get knee-jerk defensive.
2. That's a language feature too. Writing non-trivial multi-core programs in C or C++ takes a lot of effort and diligence. It's risky, and subtle mistakes can make programs chronically unstable, so we've had decades of programmers finding excuses for why a single thread is just fine, and people can find other uses for the remaining cores. OTOH Rust has enough safety guarantees and high-level abstractions that people can slap .par_iter() on their weekend project, and it will work.
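A hedged sketch of that ".par_iter()" point, assuming the rayon crate is available as a dependency:

```rust
use rayon::prelude::*; // assumes rayon is listed in Cargo.toml

fn main() {
    let inputs: Vec<u64> = (1..=1_000_000).collect();

    // Swapping .iter() for .par_iter() spreads the work across cores; if the closure
    // introduced a data race, the compiler would reject it rather than compile it.
    let total: u64 = inputs.par_iter().map(|n| n * n).sum();
    println!("{}", total);
}
```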
This matches my experience with the runtimes in question—I tried Bun out for a very small project and ran into 3 distinct crashes, often in very simple parts of the code like command line processing. Obviously not every crash / null-pointer dereference is a memory safety issue, but I think most people would agree that Zig does not seem to make bug-free software easier to write.
Probably segfaults imply the absence of memory safety.
Many have no use for the borrow checker, and there is already enough choice with ML-inspired type systems.
What is it you want to hear about memory safety? If you’re willing to accept the tradeoffs of an automatic garbage collector, memory safety has been a solved problem for decades, and there’s not a whole lot to tell you about it other than learning about widespread, mature technology.
But if you have some good reason to avoid that technology, then your options are far more limited, and there are good reasons that Rust dominates that conversation.
So the question stands - what is it you want to hear more about?
Definitely not all of them, yes.
> Hopefully, there are no memory safety skeptics, other than rhetorical strawmen.
You'll find the reality disappointing then…
I guess it is similar to Rust code that uses `unsafe {}` as the other poster mentioned (maybe `unsafe fn` for a closer analogy). My knowledge of Ada/SPARK is much greater than what I know about Rust, so I might be guessing wrong.
Unsafe functions mark that the caller is responsible for upholding the invariants necessary to avoid UB. In the 2021 and earlier editions, they also implicitly created an unsafe block in the body, but don't in 2024.
Or, in a more pithy tone: an unsafe block is the "hold my beer" block, while an unsafe function is a "put down your beer" function.
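A small sketch of that split under the 2024-edition rules described above (the names are mine, purely illustrative):

```rust
/// # Safety
/// The caller must guarantee that `index < data.len()`.
unsafe fn read_unchecked(data: &[u8], index: usize) -> u8 {
    // Since the 2024 edition, the body of an unsafe fn is not an implicit unsafe block,
    // so the unchecked access still needs its own "hold my beer" block.
    unsafe { *data.get_unchecked(index) }
}

fn main() {
    let data = [1u8, 2, 3];
    // The unsafe fn is the "put down your beer" part: the caller discharges the contract.
    let v = unsafe { read_unchecked(&data, 1) };
    println!("{}", v);
}
```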
What are today's definitions? If Ada simply had more thorough rules but introduced an "unsafe {}" construct, then what would the practical difference actually be? Compiler defaults?
Firmware is written in TamaGo.
PTC and Aicas JVMs and AOT compilers are proper Java; there is nothing in the standard regarding what kind of GC or real-time constraints are required.
Additionally, real time Java is an industry standard.
Even PL/I had a higher security score given by the DoD, than C.
Small nit: as someone curious about a definition of memory safety, I had come across Michael Hicks' post. He does not use the list of errors as a definition, and argues that such a definition is lacking rigor, and he is right. He says:
> Ideally, the fact that these errors are ruled out by memory safety is a consequence of its definition, rather than the substance of it. What is the idea that unifies these errors?
He then offers a technical definition (model) involving pointers that come with capability of accessing memory (as if carrying the bounds), which seems like one way to be precise about it.
I have come to the conclusion that language safety is about avoiding untrapped errors, also known as "undefined behavior". This is not at all new, it just seems to have been forgotten or was never widely known somehow. If interested, find the argument here https://burakemir.ch/post/memory-safety-the-missing-def/
I believe everyone who cares about memory safety appreciates that certain bugs cannot occur in Java and Go, and if the world calls that memory safe, that is OK.
There are hard, well-defined guarantees that a language and implementation must make, and a space of trade-offs. We need language and recognition for the ability to push the boundary of hard, well-defined guarantees further. That, too, is memory safety and it will be crucial for moving the needle beyond what can be achieved with C and C++.
No one has a problem with applications being ported from low-level to GC-ed languages, the challenge is the ones where this is not possible. We need to talk about memory safety in this specific context, and mitigations and hardening will not solve the entire problem, only pieces of it.
There is art and there is science. What both have in common is that their protagonists do not intend to become obstacles of progress.
I'm afraid GC'd languages have been around for a very long time and yet we continue to talk about memory safety as an urgent problem. Now what?
How does pretending that low-level memory safety is not its own complex domain deserving of its own technical definitions help with anything?
TFA hints at memory safety requiring static checking, in the sense that it's written in a way that would satisfy folks who think that way, by saying things like "never occur" and including null pointer safety.
Is it necessary for the checking to be static? No. I think reasonable folks would agree that Java is memory safe, yet it does so much dynamic checking (null and bounds). Even Rust does dynamic checking (for bounds).
But even setting that aside, I don't like how the way that the definition is written in TFA doesn't even make it unambiguous if the author thinks it should be static or dynamic, so it's hard to debate with what they're saying.
EDIT: The definition in TFA has another problem: it enumerates things that should not happen from a language standpoint, but I don't think that definition is adequate for avoiding weird execution. For example, it says nothing about bad casts, or misuses of esoteric language features (like misusing longjmp). We need a better definition of memory safety.
I don't see where you're seeing the article drawing a line between static and dynamic defenses. The article opens by noticing Rust isn't the first memory safe language. It is by implication referring to things like Java, which have dynamic, runtime-based protections against memory corruption.
This piece does not define memory safety as "not admitting memory corruption vulnerabilities". If it was using that definition, then:
- You and I would be on the same page.
- I would have a different complaint, which is that now we have to define "memory corruption vulnerability". (Admittedly, that's maybe not too hard, but it does get a bit weird when you get into the details.)
The definition in TFA is quoted from Hicks, and it enumerates a set of things that should never happen. It's not defining memory safety the way you want.
I'm always a little guarded about message board definitions of "memory safety", because they tend to be axiomatically derived from the words "memory" and "safety", and they tend to have an objective of saying that there's only one mainstream language that provides memory safety.
Yeah!
I agree it's hard to do it without some kind of axioms or circularity, but it's also important to get the definition right, because at some point, we'll need to be able to objectively say whether some piece of critical software is memory safe, or not.
So we have to keep trying to find a good definition. I think that means rejecting the bad definitions.
Rust's big step function was to offer memory safety at compile time through the use of static analysis borrowed and grown out of prior efforts such as Cyclone, a research programming language formulated as a safe subset of C.
In other words, Rust has solved the halting problem since the static checking of array bounds is undecidable in the general case!
No one is making that claim.
For arrays, this problem is not computable at compile time, hence the sarcastic remark that, IF THE ABOVE DEFINITION IS TAKEN AT FACE VALUE, Rust must have solved the halting problem. Downvoters are so dumb here.
Why are you shouting? That's what twits do. You don't want to be a twit, do you? Read the site guidelines, emphasis is done with italicized text marked up with an * at the beginning and another * at the end.
But to respond to the topic at hand: Are you familiar with the distinction between sound (what Rust aims for) and complete analyses?
https://stackoverflow.com/questions/28389371/why-does-rust-c...
> Rust's big step function was to offer memory safety at compile time through the use of static analysis borrowed and grown out of prior efforts such as Cyclone, a research programming language formulated as a safe subset of C.
It is absolutely true that memory safety is offered at compile time in Rust which is a novel thing. You then pivoted this to start talking about bounds safety of arrays which is a strawman interpretation of what was written.
"Memory safety—the property that makes software devoid of weaknesses such as buffer overflows, double-frees, and similar issues—has been a popular topic in software communities over the past decade"
And the buffer overflows are not detected statically except for the cases when the compiler can prove them. And Rust proponents keep ignoring the topic of this subthread, which is memory safety by static analysis.
RefCell enforces memory safety too by doing the lifetime enforcement at runtime by using UnsafeCell to provide certain semantics. But the compiler ensures that the RefCell itself still has correct lifetimes and is used correctly, resulting in a compile time guarantee that the code that’s run is memory safe.
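A hedged illustration of that division of labor: the aliasing check moves to run time, but a violation panics instead of becoming undefined behavior, while the compiler still checks that the RefCell itself is used with correct lifetimes.

```rust
use std::cell::RefCell;

fn main() {
    let cell = RefCell::new(vec![1, 2, 3]);
    {
        let reader = cell.borrow(); // shared borrow, checked at run time
        println!("len = {}", reader.len());
        // A cell.borrow_mut() here would panic rather than alias mutably:
        // the violation is caught, it never becomes undefined behavior.
    }
    cell.borrow_mut().push(4); // fine: the earlier borrow has ended
    println!("{:?}", cell.borrow());
}
```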
https://blog.adacore.com/memory-safety-in-rust
Notice that in-bounds indexing is included, same as in the definition from this submission that I quoted at you.
Firstly, let's attack the direct claim. The reason you'd reduce to the halting problem is via Rice's theorem. But Rice only matters if we want to allow exactly all the programs with the desired semantics. In practice what you do is either allow some incorrect programs too (like C++ does; that's what IFNDR is about: now some programs have no defined behaviour at all, but oh well, at least they compiled) or you reject some correct programs too (as Rust does). Now what we're doing is merely difficult rather than provably impossible, and people do difficult things all the time.
This is an important choice (and indeed there's a third option but it's silly, you could do both, rejecting some correct programs while allowing some incorrect ones, the worst of both worlds) but in neither case do we need to solve an impossible problem.
Now, a brief aside on what Rust is actually doing here, because it'll be useful in a moment. Rust's compiler does not need to perform a static bounds check on array indexing; what it needs to statically check is only that somebody wrote the runtime check. This satisfies the requirement.
But now back to laughing at you. While in Rust it's common to have fallible bounds checks at runtime (only their presence being tested at compile time) in WUFFS it's common for all the checks to be at compile time and so to have your code rejected if the tools can't see why your indexing is always in bounds.
When WUFFS sees you've written arr[k], it considers this a claim that you've proved 0 <= k < arr.len(); if it can't see how you've proved that, then your code is wrong and you get an error. The result is that you're going to write a bunch of math when you write software, but the good news is that instead of the math going unread because nobody reviewing the code bothered to read it, the machine reads your math and runs a proof checker, so if you were wrong it won't compile.
Edited: Fix off-by-one error
A panic is not a violation of memory safety; if you wanted to violate memory safety you'd need to have caused that deref to succeed, and println to spit out the result.
EDIT: let me guess, did the panic message include "index out of bounds: the len is 5 but the index is 10"?
An index OOB error? Here, it's important to remember the Rust panic is still memory safe. Perhaps you should read the article, or read up on what undefined behavior is?[0] Here, the Rust behavior is very well defined. It will either abort or unwind.[1]
If you prefer different behavior, there is the get method on slices.[2]
[0]: https://en.wikipedia.org/wiki/Undefined_behavior [1]: https://doc.rust-lang.org/reference/panic.html [2]: https://doc.rust-lang.org/std/primitive.slice.html#method.ge...
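For illustration, a minimal sketch (mine) of the two access styles mentioned above; the out-of-range index mirrors the panic message quoted earlier:

```rust
fn main() {
    let data = [10, 20, 30, 40, 50];
    let i: usize = 10; // imagine this comes from user input

    // Indexing is bounds-checked: an out-of-range index panics (defined behavior),
    // it never reads outside the slice:
    // let _boom = data[i]; // panics: "index out of bounds: the len is 5 but the index is 10"

    // slice::get turns the same failure into a value instead of a panic.
    match data.get(i) {
        Some(v) => println!("got {}", v),
        None => println!("index {} is out of bounds", i),
    }
}
```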
Static analysis has a specific meaning, and rote insertion of bounds checking isn't it.
The equivalent of vector[i] in Rust is Vec::get_unchecked, which is marked as unsafe, not the default that people reach for normally.
I refuted that point by pointing out that the same process, if done manually in C++, would not be considered "static analysis that provides memory safety for array access".
C++ can have UB, compilable non-unsafe Rust can’t, that’s what static analysis of memory safety is.
Main point here is you don’t know (and refuse to learn) new knowledge.
Oh, I read it.
Rust, and for that matter, the person to whom you are replying, above, never claimed that Rust could statically check array bounds. You created that straw-man. Yes, Rust does use static analysis, as one important method to achieve memory safety, but Rust doesn't require static analysis in every instance for it to be Rust's big step function, or for Rust to achieve memory safety.
Yes, certain elements of memory safety can only be achieved at runtime. Fine? But using static analysis to achieve certain elements of memory safety at compile time is obviously better where possible, rather than only at runtime, such as re: Java or Fil-C?
We understand you're saying it's not possible in the general case to assert that all memory accesses are in bounds. Instead of that, if you ensure all memory accesses are either in bounds or that they at least do not violate memory safety, you've achieved the requirement of "memory safety", regardless of runtime inputs.
Pfft, the simply typed lambda calculus solved the halting problem back in 1940.
[1] https://community.intel.com/t5/Blogs/Tech-Innovation/open-in...
[2] http://fishshell.com/blog/rustport/
I read this in the opposite way: if the hardware is going to be stricter about memory accesses being valid, that suggests that software is going to have to meet a higher standard in order to successfully run.
Imagine if you had to satisfy the Rust borrow checker, except you're still writing C and don't have additional tooling during compilation to show how a problem could trigger, you just have more crashes.
Weird misrepresentation of your source... they had to drop support for only the obscurest of platforms, and concluded "We don’t see a big problem here".
Rust has enormous platform support. See https://doc.rust-lang.org/nightly/rustc/platform-support.htm... for a list. It also has the backend/frontend separation you describe, because it is built on LLVM. There is also ongoing work to plug it into GCC, as well as Rust compilers that can output C code directly to target dead embedded platforms that only have a single proprietary C compiler.
So viewing their recommendations with caution seems wise.
0: https://lwn.net/Articles/342330/
You stated explicitly it isn't, but the compiler optimizing away null pointer checks or otherwise exploiting accidental UB literally is a thing that's come up several times for known security vulnerabilities. Its probability of incidence is less than just crashing in your experience, but that doesn't necessarily mean it's not exploitable either; it could just mean it takes a more targeted attack to exploit and thus your Bayesian prior for exploitability is incorrectly trained.
But not in reality. For example, a signed overflow is most likely (but not always) compiled in a way that wraps, which is expected. A null pointer dereference is most likely (but not always) compiled in a way that segfaults, which is expected. A slightly less usual thing is that a loop is turned into an infinite one or an overflow check is elided. An extremely unusual and unexpected thing is that signed overflow directly causes your x64 program to crash. A thing that never happens is that your demons fly out of your nose.
You can say "that's not expected because by definition you can't expect anything from undefined behaviour" but then you're merely playing a semantic game. You're also wrong, because I do expect that. You're also wrong, because undefined behaviour is still defined to not shoot demons out of your nose - that is a common misconception.
Undefined behaviour means the language specification makes no promises, but there are still other layers involved, which can make relevant promises. For example, my computer manufacturer promised not to put demon-nose hardware in my computer, therefore the compiler simply can't do that. And the x64 architecture does not trap on overflow, and while a compiler could add overflow traps, compiler writers are lazy like the rest of us and usually don't. And Linux forbids mapping the zero page.
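As an aside on the overflow case, a hedged Rust sketch (not from the thread) where the wrap-or-check decision is written in the source instead of being left to whatever the compiler happens to emit:

```rust
fn main() {
    let big: i32 = i32::MAX;

    // Two's-complement wraparound as an explicit, defined choice:
    println!("{}", big.wrapping_add(1)); // prints -2147483648

    // Or surface the overflow as a value instead of relying on the backend:
    match big.checked_add(1) {
        Some(v) => println!("{}", v),
        None => println!("overflow detected"),
    }
}
```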
The worst case of UB is worse than the worst case of most kinds of non-UB memory safety issues.
> NULL pointer dereferences are almost never exploitable
Disagree; we've seen enough cases where they become exploitable (usually due to the impact of optimisations) that we can't say "almost never". They may not be the lowest hanging fruit, but they're still too dangerous to be acceptable.
Couldn't find this in the reference text. Is it my interpretation? https://www.memorysafety.org/docs/memory-safety/#how-common-...
37 more comments available on Hacker News