Safe C++ Proposal Is Not Being Continued
Posted 4 months ago · Active 4 months ago
Source: sibellavia.lol · Tech story · High profile · Debate tone: heated, negative (80/100)
Key topics: C++, Memory Safety, Rust
The C++ community is discussing the abandonment of the Safe C++ proposal and the potential of the Profiles proposal to improve memory safety, with many commenters expressing skepticism and frustration.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 26m after posting. Peak period: 93 comments in 0-12h. Average per period: 16. Comment distribution: 160 data points, based on 160 loaded comments.
Key moments
1. Story posted: Sep 13, 2025 at 3:00 PM EDT (4 months ago)
2. First comment: Sep 13, 2025 at 3:26 PM EDT (26m after posting)
3. Peak activity: 93 comments in 0-12h, the hottest window of the conversation
4. Latest activity: Sep 21, 2025 at 1:03 PM EDT (4 months ago)
Profiles cannot achieve the same level of safety as Rust and it's obvious to anyone who breathes. Profiles just delete stuff from the language. Without lifetimes reified as types you can't express semantics with precision enough to check them. The moment string_view appears, you're horked.
Okay, so you ban all uncounted reference types too. Now what you're left with isn't shit Rust but instead shit Swift, one that combines the performance of a turtle with the ergonomics of a porcupine.
There's no value in making things a little bit safer here and there. The purpose of a type system is to embed proofs about invariants. If your system doesn't actually prove the invariant, you can't rely on it, and you've made a shitty linter.
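A minimal illustration of the string_view hazard mentioned above (a hypothetical example, not from the thread):

```cpp
#include <string>
#include <string_view>

// Returns a view into its argument; nothing in the type records that
// the view must not outlive the string it points into.
std::string_view first_word(const std::string& s) {
    return std::string_view{s}.substr(0, s.find(' '));
}

int main() {
    // The temporary std::string dies at the end of this statement, so
    // sv dangles; reading through it afterwards is UB, and no check on
    // the declarations alone can see it.
    std::string_view sv = first_word(std::string("hello world"));
    (void)sv;
}
```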
Continue the safe C++ work outside the context of the C++ standards committee. Its members, if you ignore their words and focus on the behaviors, would rather see the language become irrelevant than change decades-old practices. Typical iron law of bureaucracy territory.
With C++, my impression is that most implementers simply aren't interested. And, conversely, most people who might be interested enough to roll a new implementation have already moved to Rust and make better use of their time improving that.
There are lots of cool innovations C++ made that will just disappear from the Earth, forever, if C++ can't be made memory safe in the same sense Rust is. I mean, Rust doesn't even support template specialization.
I don't think it's too late for someone to fork both C++ and Clang and make something that's actually a good synthesis of the old and the new.
But yeah, the most likely future is one in which C++ goes the way of Fortran (which still gets regular updates, however irrelevant) and the energy goes into Rust. But I like to rage, rage, against the dying of the type-based metaprogramming.
Not that long ago tsoding was like "I should learn Fortran" and wrote a bunch of Fortran. Obviously from his perspective some things about Fortran are awful because it's very old, but it wasn't somehow impossible to do.
There are a few really amazing things which have been achieved in C++ like fmt, a compile time checked, userspace library for arbitrarily formatting variadic generic parameters. That's like man on the moon stuff, genuinely impressive. Mostly though C++ is a garbage fire and so while it's important to learn about it and from it we should not keep doing that.
Anecdotal, but that's hardly unique to C++. So even if C++ were to disappear overnight (which we all agree won't happen), this wouldn't be a burning-library-of-Alexandria moment.
To me the things which come to mind are either compiler magic (e.g. C printf) or they rely on RTTI (e.g. Odin and similar C-like languages) and neither of those is what fmt does, they're "cheating" in some sense that actually matters.
https://zig.guide/standard-library/formatting/
You're right about Zig, and reading the source [1] I'm still kind of impressed how unified they made comptime / runtime.
[1]: https://github.com/ziglang/zig/blob/32a1aabff78234b428234189...
etc.
People have tried variants of this already: Carbon, for example. I don’t think anyone outside of Google uses it, though, and even within Google I suspect it’s dwarfed by regular C++.
I don’t think C++ will become irrelevant for a long time. Recent standards have added some cool new features (like std::expected), and personally I feel like the language is better than ever (a biased opinion obviously).
Memory management is still a huge elephant in the room, but I don’t think it’s becoming irrelevant.
They are quite open that everyone else should use a managed language or Rust.
Great! What are you waiting for?
If you try to answer that question, you'll also find why other similar projects are not finding much traction yet.
This doesn't mean that it's not possible to achieve a safe subset of C++ that supports template specialization, but it suggests that we aren't going to see it any time soon.
Apple with Swift, Google with Go, Java/Kotlin, Rust, Microsoft with C#, Go, Java and Rust.
Most modules stuff on GCC was done by a single developer, if I am not mistaken.
Notice how Visual C++ blog posts are mostly about game development related improvements, Visual Studio tooling.
Everyone else's compilers are still stuck on C++14, or C++17 if lucky.
So the claim is that the scpptool approach[1] can, while remaining closer to traditional C++, and not requiring the introduction of new language elements. Since the scpptool-enforced safe subset of C++ is an actual subset of C++, conforming code continues to build with your existing compiler. It just uses an additional static analyzer to check conformance.
For the 90% or whatever of C++ code that is not actually performance sensitive, the associated SaferCPlusPlus library provides drop-in and "one-to-one" safe replacements for unsafe C++ elements (like standard library containers and raw pointers). (For example, if you're worried about potentially invalid vector iterators, you can just replace your std::vector<>s with mse::mstd::vector<>s.) With these elements, most of the safety is enforced in the type system and not reliant on the static analyzer.
Conforming implementations of performance-sensitive code would be more restricted and more reliant on the static analyzer for safety enforcement, and would sometimes require the use of library elements, like "borrowing objects", which may not have analogues in traditional C++. But overall, even high-performance conforming code remains very recognizable C++.
The claim is that the scpptool approach is a straightforward path to full memory (and data race) safety for C++, and the one that requires the least code migration effort. (And again, as an actual subset of existing C++, not technically dependent on standard committees or compiler vendors for its implementation or deployment.)
[1]: https://github.com/duneroadrunner/scpptool/blob/master/appro...
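A sketch of what that drop-in replacement looks like (the mse::mstd::vector name comes from the comment above; the header name is assumed):

```cpp
#include "msemstdvector.h"  // SaferCPlusPlus; header name assumed

int main() {
    // Same interface as std::vector, but iterator accesses are checked,
    // so using an invalidated iterator becomes a defined error rather
    // than silent UB.
    mse::mstd::vector<int> v{1, 2, 3};
    auto it = v.begin();
    v.clear();      // would invalidate a std::vector iterator
    // *it;         // with std::vector: UB; here: caught at runtime
}
```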
What infuriates me about the C++ safety situation is that C++ is by and large a better, more expressive language than Rust is, particularly with respect to compile time type level metaprogramming. And I am being walked hands handcuffed behind my back, alongside everyone else, into the Rust world with its comparatively anemic proc macro shit because the C++ committee can't be bothered to care about memory safety.
Because of the C++ standards committee's misfeasance, I'm going to have to live in a world where I don't get to use some of my favorite programming techniques.
To be more precise, it's old Python. Recent versions of Python use a gc.
> And I am being walked hands handcuffed behind my back, alongside everyone else, into the Rust world with its comparatively anemic proc macro shit because the C++ committee can't be bothered to care about memory safety.
Out of curiosity (as someone working on static analysis), what properties would you like your compiler to check?
Have you worked much with SAL and MIDL from Microsoft? Using SAL (an aesthetically hideous but conceptually beautiful macro-based gradual typing system for C and C++), you can overlay guarantees about not only reference safety but also sign-comparison restrictions, maximum buffer sizes, and so on.
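For readers who haven't seen SAL, a small hedged example of the kind of contract it overlays on a plain declaration (the function and parameter names are made up; the annotations are real SAL macros checked by MSVC's /analyze):

```cpp
#include <sal.h>  // Microsoft's Source-code Annotation Language macros

// The annotations express buffer sizes and value ranges that the
// analyzer checks at call sites and inside the definition:
_Success_(return != 0)
int CopyWidgets(
    _In_reads_(count) const int* src,    // src: at least `count` readable ints
    _Out_writes_(count) int* dst,        // dst: room for `count` writable ints
    _In_range_(1, 1024) size_t count);   // count: constrained to [1, 1024]
```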
But first: we need to take step-zero and introduce a type "r64": a "f64" that is not nan/inf.
Rust has its uint-thats-not-zero - why not the same for floating point numbers??
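Nothing stops you from sketching such a type today as a checked wrapper; what the comment asks for is language-level support so the invariant lives in the type system. A hypothetical r64 in C++:

```cpp
#include <cmath>
#include <stdexcept>

// Hypothetical "r64": a double guaranteed finite (never NaN or Inf).
// The invariant is checked once at the construction boundary, so
// downstream code can rely on it without re-checking.
class r64 {
    double v_;
public:
    explicit r64(double v) : v_(v) {
        if (!std::isfinite(v)) throw std::domain_error("r64: non-finite value");
    }
    double value() const { return v_; }
};

int main() {
    r64 ok{1.5};            // fine
    // r64 bad{0.0 / 0.0};  // would throw: NaN is not representable as r64
    (void)ok;
}
```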
Why do we need to single out a specific value? It would be way better if we could also have uint-without-5-and-42. What I would wish for is type attributes that really belong to the type.
While I think it's a great idea, this also sounds like it would require fairly major rewrites (and possibly specialized libraries?), which suggests that it would be hard to get much buy-in.
> Reference counting is the primary mechanism of garbage collection, however, it doesn’t work when the references have cycles between them and for handling such cases it employs the GC.
I don't see how that follows at all. What makes Python Python (and slow!) is dynamic dispatch everywhere down to the most primitive things. Refcounted smart pointers are a very minor thing in the big picture, which is why we've seen Python implementations without them (Jython, IronPython). Performance-wise, yes, refcounting certainly isn't cheap, but if you just do that and keep everything else C++-like, the overall performance profile of such a language is still much closer to C++ than to something like Python.
You can also have refcounting + something like `ref` types in modern C# (which are essentially restricted-lifetime zero-overhead pointers with inferred or very simplistic lifetimes):
https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
It doesn't cover all the cases that a full-fledged borrow checker with explicit lifetime annotations can, but it does cover quite a few; perhaps enough to adopt the position that refcounting is "good enough" for the rest.
Both of which have modern, concurrent, parallel, and generational garbage collectors.
For example, their tooling prevents code like the first snippet below from compiling, forcing you to explicitly create an owning local reference, like the second, unless it's a trivial inlined function, like a simple getter. [0]
[0]: https://www.youtube.com/watch?v=RLw13wLM5Ko
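(The actual snippets didn't survive extraction; this is a hedged reconstruction using WebKit's real RefPtr type but invented class and function names.)

```cpp
#include <wtf/RefPtr.h>  // WebKit's WTF smart pointers; include path assumed

class Document;
void scheduleRepaint(Document*);  // may run code that drops references

class Widget {
    RefPtr<Document> m_document;

    void updateRejected() {
        // Rejected by the checker: handing the raw pointer of a
        // refcounted member to a call that could free the Document.
        scheduleRepaint(m_document.get());
    }

    void updateAccepted() {
        // Accepted: an owning local reference pins the Document for
        // the duration of the call.
        RefPtr<Document> protectedDocument = m_document;
        scheduleRepaint(protectedDocument.get());
    }
};
```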
My interpretation of Geoff's presentation is that some version of profiles might work, at least in the sense of making it possible to write C++ code that is substantially safer than what we have today.
The most hopeful thing I saw in Geoff's talk was cultural. It sounds like Geoff's team wanted to get to safer code. Things went faster than expected, people landed "me too" unsolicited patches, that sort of thing. Of course this is self-reported, but assuming Geoff wasn't showing us a very flattering portrait of a grim reality, which I can't see any incentive for, this team sounds like while they'd get value from Rust they're delivering many of the same security benefits in C++ anyway.
Bureaucrats don't like culture because it's hard to measure. "Make sure you hire programmers with a good culture" is hard to chart. You're probably going to end up running some awful quiz your team hates "Answer D, A, B, B, E, B to get 100%". Whereas "Use Rust not C++" is measurable, team A has 93% of code in Rust, but team B scored 94.5% so that's more Rust, they win.
That's not true at all.
- The bounds safety part of it prevents those C operations that Fil-C or something like it would dynamically check. You have to use hardened APIs instead.
- The cast safety part of it prevents C casts except if they're obviously safe.
- The lifetime safety part of it forces you to use WebKit's smart pointers except when you have an overlooking root.
Those are type safety rules. It's disingenuous to call them heuristics.
It is true, however, that Geoff's rules don't go to 100% because:
- There are nasty corners of C that aren't covered by any of those rules.
- WebKit still has <10%-ish of its code that isn't opted into those rules.
- WebKit has JITs.
r/cpp is full of people with such heuristics, ways that they personally have fewer safety bugs in their software. That's how C++ got its "core guidelines", and it is clearly the foundation of Herb's profiles. You can't get to safety this way, you can get closer than you were in a typical C++ codebase and for Geoff that was important.
“Prevent something unless obviously safe” is a core pattern of rules in type systems. For example variable assignment in Java. If it’s possibly unsafe (RHS is not a subtype of LHS) then it’s prevented.
Are you saying Java and all of classic type theory is just heuristics?
You could compile your program with AddressSanitizer; then it at least crashes in a defined way at runtime when memory corruption would happen. TCC (the Tiny C Compiler, initially written by Fabrice Bellard) also has such a feature, I think.
This of course makes it significantly slower.
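For example, a minimal repro (the -fsanitize=address flag is the standard one for GCC and Clang):

```cpp
// heap-overflow.cpp
// Build: g++ -fsanitize=address -g heap-overflow.cpp && ./a.out
// ASan aborts with a report pinpointing the out-of-bounds write,
// instead of letting it silently corrupt adjacent memory.
int main() {
    int* a = new int[4];
    a[4] = 42;   // one past the end: reported as heap-buffer-overflow
    delete[] a;
}
```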
You’d possibly just be trading one problem for another though - ask anyone who’s had to debug a shared ownership issue.
Regardless of the technology the big thing Rust has that C++ does not is safety culture, and that's dominant here. You could also see at the 2024 "Fireside chat" at CppCon that this isn't likely to change any time soon.
The profiles technology isn't very good. But that's insignificant next to the culture problem, once you decided to make the fifteen minute bagpipe dirge your lead single it doesn't really matter whether you use the colored vinyl.
Profiles and sanitizers just aren't sufficient.
I think there are plenty of people that must use C++ due to legacy, management or library reasons and they care about safety. But those people aren't going to join language committees.
These include:
1. bounds checked arrays (you can still use raw pointers instead if you like)
2. default initialization
3. static checks for escaping pointers
4. optional use of pure functions
5. transitive const and immutable qualifiers
6. ranges based on slices rather than pointer pairs
Why does Rust sometimes require program redesigns? Because these programs are flawed at some fundamental level. D lacks the most important and hardest kind of safety, and that is reference safety - curiously, C++ profiles also lack any solution to that problem. A significant amount of production C++ code is riddled with UB and will never be made safe by repainting it and bounds checking.
Claiming that not being forced to fix something fundamentally broken is an advantage when talking about safety doesn't make you look like a particularly serious advocate for the topic.
I'm familiar with borrow checkers, as I wrote one for D.
Not following the rules of the borrow checker does not mean the program is flawed or incorrect. It just means the borrow checker is unable to prove it correct.
> D lacks the most important and hardest kind of safety and that is reference safety
I look at compilations of programming safety errors in shipped code now and then. Far and away the #1 bug is out-of-bounds array access. D has solved that problem.
BTW, if you use the optional GC in D, the program will be memory safe. No borrow checker needed.
Do you have good data on that? Looking at the curl and Chromium reports they show that use-after-free is their most recurring and problematic issue.
I'm sure you are aware, but I want to mention this here for other readers. Reference safety extends to things like iterators and slices in C++.
> Not following the rules of the borrow checker does not mean the program is flawed or incorrect.
At a scale of 100k+ LoC every single measured program has been shown to be flawed because of it.
Edit: I just googled "causes of memory safety bugs in C++". Number 1 answer: "Buffer Overflows/Out-of-Bounds Access"
"Undefined behavior in C/C++ code leads to security flaws like buffer overflows" https://www.trust-in-soft.com/resources/blogs/memory-safety-...
"Some common types of memory safety bugs include: Buffer overflows" https://www.code-intelligence.com/blog/memory_safety_corrupt...
"Memory Safety Vulnerabilities 3.1. Buffer overflow vulnerabilities We’ll start our discussion of vulnerabilities with one of the most common types of errors — buffer overflow (also called buffer overrun) vulnerabilities. Buffer overflow vulnerabilities are a particular risk in C, and since C is an especially widely used systems programming language, you might not be surprised to hear that buffer overflows are one of the most pervasive kind of implementation flaws around." https://textbook.cs161.org/memory-safety/vulnerabilities.htm...
Temporal safety is usually also much harder to reason about for humans, since it requires more context.
That's a very strong statement. How do you support it with arguments?
No. Programs that pass borrow checking are a strict subset of programs that are correct with respect to memory allocation; an infinite number of correct programs do not pass it. The borrow checker is a good idea, but it's (necessarily) incomplete.
Your claim is like saying that a program that uses any kind of dynamic memory allocation at all is fundamentally broken.
Simply not true, and this stance is one of the reasons we have people talking about a Rust sect.
1. The use of garbage collection. If you accept GC there are many other languages you can use. If you don't want GC the only realistic option was C++. Rust doesn't rely on GC.
IIRC GC in D is optional in some way, but the story always felt murky to me and that always felt like a way of weaseling out of that problem - like if I actually started writing D I'd find all the libraries needed GC anyway.
2. The awkward standard library schism.
3. Small community compared to C++. I think it probably just didn't offer enough to overcome this, whereas Rust did. Rust also had the help of backing from a large organisation.
I don't recall anyone ever mentioning its improved safety. I had a look on Algolia back through HN and most praise is about metaprogramming or it being generally more modern and sane than C++. I couldn't find a single mention of anything to do with safety or anything on your list.
Whereas Rust shouts safety from the rooftops. Arguably too much!
D has a lot of very useful features. Memory safety features are just one aspect of it.
> The awkward standard library schism.
???
Don't underestimate the backing of a large and powerful organization.
Yes but people wanted a language where you can't use GC.
> ???
"Which standard library should I use?" is not a question most languages have:
https://stackoverflow.com/q/693672/265521
Surely... you were aware of this problem? Maybe I misunderstood the "???".
> Don't underestimate the backing of a large and powerful organization.
Yeah it definitely matters a lot. I don't think Go would have been remotely as successful as it has been without Google.
But also we shouldn't overstate it. It definitely helped Rust to have Mozilla, but Mozilla isn't nearly as large and powerful as Google. The fact that it is an excellent language with generally fantastic ergonomics and first-of-its-kind practical memory safety without GC... probably more important. (Of course you could argue it wouldn't have got to that point without Mozilla.)
What do you think of C and C++ coming with extensive guides for best practices and what features to not use? Even so, D comes with an @nogc attribute which won't let you use the GC. Ironically, people complain that @nogc actually does not allow use of the GC. You can also use the -betterC compiler switch to not use the GC.
Interestingly, Compile Time Function Execution works great with the GC, as one needn't have to do backflips to allocate some memory.
Mozilla is orders of magnitude larger and more powerful than the D Language Foundation.
I feel that this is a disingenuous point and that you know better than this.
For example, the poster child of C++'s "don't use this feature" cliche is exceptions, and its origins are primarily in Google's C++ style guide.
https://google.github.io/styleguide/cppguide.html#Exceptions
If you cross-reference your claims with what Google's rationale was, you will be forced to admit your remark misrepresents the whole point.
You do not need to read too far to realize Google's point is that they have a huge stack of legacy code that is not exception-safe and not expected to be refactored, and introducing exceptions would lead their legacy code to break in ways that is not easy to remediate.
So Google had to make a call, and they decided to add the caveat that if your code is expected to be invoked by exception-free code, it should not throw exceptions.
Taken from the guide:
> Because most existing C++ code at Google is not prepared to deal with exceptions, it is comparatively difficult to adopt new code that generates exceptions.
I wonder why you left this bit out.
If this is how you try to get D to shine, then you should know why it isn't.
Google's guide is not the only one. There is the Scott Meyers series "Effective C++" with things like "declare destructors virtual in polymorphic base classes". D's destructors in polymorphic classes are always virtual.
This brings up another issue with C++ - conflation of polymorphic structs with non-polymorphic structs. The former should always be passed by reference, the latter maybe or maybe not. What C++ should have done is what D does - structs are for aggregation, classes are for OOP. The fundamental differences are enforced.
How does one enforce not passing a polymorphic object by value in C++? Some googling of the topic results in variations on "don't do that".
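The closest C++ gets is convention rather than enforcement, e.g. the common idiom of giving the polymorphic base a protected copy constructor so that by-value passing (slicing) fails to compile while derived classes can still copy themselves. A sketch, not from the thread:

```cpp
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const { return 0.0; }
protected:
    // Protected copying: derived classes can still copy their Shape
    // subobject, but outside code cannot pass (and slice) by value.
    Shape() = default;
    Shape(const Shape&) = default;
    Shape& operator=(const Shape&) = default;
};

struct Circle final : Shape {
    double r = 1.0;
    double area() const override { return 3.14159 * r * r; }
};

void byRef(const Shape& s) { (void)s.area(); }  // fine
void byValue(Shape s);  // legal to declare...

int main() {
    Circle c;
    byRef(c);
    // byValue(c);  // ...but any call fails: Shape's copy ctor is protected
}
```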
There is no such question when using D2 either. It was only an issue with D1, which was discontinued almost 15 years ago and was irrelevant for longer.
For me, it was missing presence in the IT news that did it. D might be great, but it makes no noise.
Rust or Go had a lot of articles and blog posts going deep into specific topics, appearing at a regular rate. They tended to appear on e.g. this Hacker News, or Reddit, etc... This caused a drip feed of tutoring, giving me a slow but steady feel for these languages. There were people tirelessly correcting misinformation. There were non-stop code examples of people doing stuff with the language, proving the language usable in all kinds of situations.
Nginx had the guy translating the manual from Russian. Ruby had the person with the weird poetic manuals. Rust had Steve Klabnik and others.
You need someone who is both a technical person and a communicator, being active on fora, doing advent calendars, blog posts, etc... Rust jumped into a hole where D could also fit, and D might be easier than Rust. I think the background interest in D exists, but has nothing to crystallize around.
Software never was a technical meritocracy.
There does not have to have been anything technically wrong with D for it to not have been widely adopted. I think HN doesn’t like that because it often means that there’s nothing obvious they can necessarily do to fix it
D's only selling proposition was providing C++11 features at a point in time between C++98 and C++11 when the C++ committee struggled to get a new standard out of the door.
Once C++11 was out, D's sales pitch was moot, and whatever wind it had in its sails was lost and never recovered again.
It's interesting to note that Rust, in spite of all odds and also its community, managed to put together a far more compelling sales pitch than D.
Unfortunately that was already lost. Java/Kotlin, Go, C# and Swift are the platform holders' darlings for safe languages with GC, being improved for low-level programming on each release, many with features that you may argue were in D first, and Rust for everything else.
Microsoft recently announced first class support for writing drivers in Rust, while I am certain that NVidia might be supportive of future Rust support on CUDA, after they get their new Python cu tiles support going across the ecosystem.
Two examples out of many others.
The language is a great systems programming language; what is missing is the rest of the owl.
What intent do you think it has? That proposals are still being worked on and haven't been published in a specification yet?
I'm afraid context is required to actually understand what was said. For example, it can mean anything including very obvious things like stating that the committee is still working on proposals to provide guarantees and they won't feature in a standard until the work is done and a new standard is published. Which would be stating the obvious.
This is amusingly wrong in the worst way. In the case of C++, they were there, but they left years ago when it became clear the committee didn't see this problem as existential.
Lots of people in mega-companies set forth to reinvent the wheel. I think we have enough track record to understand that the likes of Google don't walk over water and some of the output is rather questionable and far from the gold standard. Appeals to authority are a logical fallacy for a reason.
But, based on the initiative of some Rust enthusiasts on one team, we tried it. The result after half a year was not to use it. Learning a new language is difficult, Rust is not fun to write for many people, and a newbie Rust programmer writes worse code than a senior C/C++ programmer, even if it's the same person.
Aside from people hyped by Rust, there is not much interest in replacing C/C++. Currently I see no existential risk at all. On the other hand, Rust is currently overhyped; I would not bet that it will be easy to find developers with long Rust experience to maintain your code in a decade.
Some of the biggest vulnerabilities of recent years (e.g. Heartbleed) were out-of-bounds access. The most common vulnerability sources are things that are impossible in Rust, but cannot be fully solved via C++ static checkers.
> Some of the biggest vulnerabilities of recent years (e.g. Heartbleed) were out-of-bounds access.
If I understand the Heartbleed bug correctly, it did not involve buffer overflows. It was a logical bug where they "trusted" the user-provided payload length (which can be much larger than the actual payload) and allocated the response buffer accordingly without zeroing it (malloc vs calloc). The "trash" in the uninitialized memory turned out to be quite valuable.
"xkcd: Heartbleed Explanation":
https://xkcd.com/1354/
"Add heartbeat extension bounds check.":
https://github.com/openssl/openssl/commit/731f431497f463f3a2...
How so? By listing all dependencies in an easy to digest way?
Making it hard to have a dependency does not stop devs from reusing code; it just leads to that code being copy/pasted into code bases. I found several copies of gzip in every large C++ code base I ever looked at. Sometimes the functions get renamed, as somebody reported that linking failed when pulling in gzip as a library elsewhere. Most of the time with some patches applied. Never with documentation on where the code came from or how to update it.
"Header only libraries" are taking this approach and turn it into a best practice: Just copy this file somewhere into your source tree and you are done.
Even if you use proper dependencies, you typically depend on huge kitchen sink libraries that depend on the world themselves... often with the world being "vendored" (== copied into the library repository).
Those hidden dependencies are the worst kind of supply chain security issue you can have, as it is costly to even know about them being there.
> Currently I see no existential risk at all.
I see rust as a continuation of a trend that was started with Java... C++ has lost entire markets to memory safe languages in the last 25 years, be it the enterprise applications (Java), windows software (C#) or scientific computing (python). With these markets, C++ also has lost mind share and that shows in the new features being proposed.
When I visit a rust conference I am the old guy. When I go to a C++ conference I am of average age. The old guards are retiring with few new people filling up the ranks.
Can you be very specific about why?
Here's the argument for why profiles might work: with all of the profiles enabled, you are only allowed to use the safe subset of C++ and all of the unsafe stuff is hidden behind APIs whose implementations don't have those profiles enabled. Those projects that enable all profiles by default effectively get Swift-like or Rust-like protection.
Like, you could force all array operations to use C++ stdlib primitives, enable full hardening of the stdlib, and then have bounds safety.
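A hedged illustration of that point, using the standard checked accessor (hardened stdlib builds go further and check even plain operator[]):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

int sum_first(const std::vector<int>& v, std::size_t n) {
    int s = 0;
    for (std::size_t i = 0; i < n; ++i)
        s += v.at(i);  // bounds-checked: throws std::out_of_range, never UB
    return s;
}

int main() {
    std::vector<int> v{1, 2, 3};
    try {
        sum_first(v, 5);  // n too large: caught deterministically
    } catch (const std::out_of_range&) { /* handle */ }
}
```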
And you could force all lifetime operations to use C++ stdlib refcounting primitives, and then have lifetime safety in a Swift-like way (i.e. eager refcounting everywhere).
I can imagine how this falls over but then it might just be a matter of engineering to make it not fall over.
(I'm playing devil's advocate a bit since I prefer Fil-C++.)
And C++ code simply doesn't have the necessary info to make safety decisions. Sean explains it better than I can https://www.circle-lang.org/draft-profiles.html
E.g., the first case is "Inferring aliasing". He presents some examples and states, "The compiler cannot infer a function’s aliasing requirements from its declaration or even from its definition."
But why not?
The aliasing requirements come directly from vector. If the compiler has those then determining the aliasing requirements of those functions is straightforward.
Now, maybe there is some argument that a C++ compiler cannot determine the aliasing requirements of vector, but if that's the claim, then the paper should make it, and back it up.
The paper continues in the same vein in the next section, as if the lifetime requirements of map and min cannot be known or cannot bubble up through the functions that call them.
As written, the paper says almost nothing about the feasibility of static analysis of C++ to achieve safety goals for C++.
For example, given a declaration like the following (the original snippet was elided from this snapshot; this reconstruction is inferred from the sentence after it):
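```cpp
// Hypothetical stand-in for the elided declaration:
int* func(int* input);
```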
What's the relationship between the return value and the input? You can't know without diving into 'func' itself; they could be the same pointer or it could return a freshly allocated pointer, without getting into the even more esoteric options. Trying to solve this without recursively analysing a whole program at once is infeasible.
Rust's approach was to require more information to be provided by function definitions, but that's new syntax, and not backwards compatible, so not a palatable option for C++.
Why, though?
Perhaps it's unfeasibly complex? But if that's the argument, then that's an argument that needs to be made. The paper sets out to refute the idea that C++ already has the information needed for safety analysis, but the examples throw away most of the information C++ does have, without explanation. I can't really take it seriously.
1. Complexity. This manifests as compile times. It takes much longer.
2. Usability. Error messages are poor, because changes have nonlocal effects.
3. Stability. This is related to 2. Without requirements expressed in the signature, changes in the body change the API, meaning keeping APIs stable is much harder.
There’s really a simple reason why it’s not fully feasible in C++ though: C++ supports separate compilation. This means the whole program is not required to be available. Therefore you don’t have the whole program for analysis.
Trying to establish proofs that the pointer is one way or the other can't work, because the pointer doesn't have to be only one or the other.
The fact that you then have to treat the pointer one way or the other is a problem; if you reduce the allowed programs so that the pointer must be one of the two that's a back-compat hazard. If you don't constrain it, you need to require additional information be carried somewhere to determine how to treat it.
If you do neither, you don't have the information needed to safely dispose of the pointer.
The entire point of having a static-type-system, is to enable local reasoning. Otherwise, we would just do whole program analysis on JS instead of inventing typescript.
They are clear that Profiles infers everything from function types and not function bodies. Obviously that won't work, but that's what they say.
But local analysis means analysis of function definitions. At least it does to me. I can't think of what else it could mean. I think there must be some aspect of people talking past each other here, using the same words to mean different things.
Further, I don't think local analysis of the code comprising a function means throwing away the results of that analysis rather than passing it up the line to the analysis of callers of the function. E.g., local analysis of std::sort would establish its aliasing limitations, which would be available to analysis of the body of "f1" from the example in the paper (the results of which, in turn, would be available to callers of f1).
Now, I don't know if that's actually feasible/workable without the "heavy" annotation that C++ profiles wants to forbid. That's the key question to me.
That's going to be a non-starter for 99% of serious C++ projects there. The performance hit is going to be way too large.
For bounds checking, sure I think the performance penalty is so small that it can be done.
Managed C++, C++/CLI, C++/CX, C++ Builder, Unreal C++
But those aren't extensions or approaches WG21 cares about having.
The C++11 GC design didn't even take those experiences into consideration, thus it got zero adoption, and it was dropped in C++23.
Depends on how many times it's inlined and/or if it's in hot code. It can result in much worse assembly code.
Funny thing: C++17 string_view::substr has a bounds check + exception throw, whereas span::subspan has neither; I can see substr's approach being problematic performance- and code-size-wise if called many times when already validated by the caller.
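Concretely (a sketch; both behaviors are as specified in the standard):

```cpp
#include <span>
#include <string_view>

void demo(std::string_view sv, std::span<int> sp) {
    auto tail1 = sv.substr(100);   // if 100 > sv.size(): throws std::out_of_range
    auto tail2 = sp.subspan(100);  // if 100 > sp.size(): undefined behavior,
                                   // no check and no throw
    (void)tail1; (void)tail2;
}
```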
See https://www.youtube.com/watch?v=RLw13wLM5Ko
Note that they also allowed other kinds of pointers so long as their use could be statically verified using very simple rules.
This is not the goal of profiles. It’s to be “good enough.” Guaranteed safety isn’t in the cards.
- Rust isn’t totally guaranteed safe since folks can and do use unsafe code.
- Exact same situation in Swift
- Go has escape hatches, like if you race, but not only that.
So most “safe” things are really “safe enough” for some definition of “enough”.
Profiles do not, even for code that is 100% using profiles, guarantee safety.
So, no matter what safe language we talk about, "safety" always has its caveats.
Can you be specific about what missing safety feature of profiles leads you to be so negative about them?
It does not guarantee that code in any function being called within that block is free of it, but it does guarantee this block of code is.
Profiles don't give you that.
An entire program ported to Rust will call into unsafe APIs in at least a few places, somewhere down the call stacks.
But you'll still have swathes of code that doesn't ultimately end up calling an unsafe API, which can be trivially considered memory safe.
It’s not about specifics, it’s about the stated goals of profiles. They do not claim to prove memory safety even with all of them turned on.
Rust does a pretty good job formalizing what the safety guarantees are and when you can assume them. Other languages don't, but they also don't support use cases that C++ nominally does, like safety-critical systems. "Good enough" can be perfectly fine for a web service in Go while being grossly inadequate for HPC or safety-critical work.
Firstly, you need composition. Rust's safety composes. The safe Rust library for farm animals from Geoff, the safe Rust library for cooking recipes by Alice and the safe Rust library for web server by Bert together with my safe program code adds up to my safe Rust farm foods web site.
By having N profiles, where N is intended to be at least five and might grow arbitrarily and be user extensible, C++ guarantees it cannot deliver composition this way.
Maybe they can define some sort of composition and maybe everybody will ship software which conforms to that definition and so eventually they get composition, that's not there today, so it's just a giant unknown at best.
Secondly, of the profiles described so far, most of them are just solving parts of the single overarching problem Rust addresses, for the serial case. So if they ship that, which already involves some amount of new work yet to be finished, you need all of those profiles to get to only partial memory safety.
Which comes to the third part. Once you start down this path, as they found, you realise you actually want a borrowck. You won't call it that of course, because that would be embarrassing. But you'll need to track reference lifetimes and you'll need annotation and you end up building most of the stuff you insisted you didn't want. For now, you can handwave, this is an unsolved static analysis problem. Well, not so much unsolved as you know the solution and you don't like it.
Your idea to do the reference counting everywhere is not something WG21 has looked at, I think the perf cost is sufficiently bad that they won't even glance at it. They're also not going to ship a GC.
Finally though, C++ is a concurrent language. It has a whole memory model which doesn't even make sense if you aren't thinking about concurrency. But to deliver concurrent memory safety without Fil-C's overheads you would want... well, Rust's Send and Sync traits, which sure enough have eerie twins in the Safe C++ proposal. No attempt to solve this is even hinted at in the current profiles proposal, and they would need to work one out and if it's not Send + Sync again they'd need to prove it is correct.
> Which comes to the third part. Once you start down this path, as they found, you realise you actually want a borrowck.
That's a bold statement. It might be true for some very loose definition of "borrow checker". See the super simple static analysis that WebKit uses (that presentation is now linked in at least two places on this HN discussion, so I won't link it again).
> Your idea to do the reference counting everywhere is not something WG21 has looked at, I think the perf cost is sufficiently bad that they won't even glance at it. They're also not going to ship a GC.
The point isn't to have ref counting on every pointer at the language level, but rather: if you prevent folks from calling `delete` directly (as one of the profiles does), then you're effectively forcing folks to use smart pointers.
Reference counting that happens by smart pointers is something that they would ship. We know this because it's already happened.
I imagine this would really mean that some references are ref counted (if you use shared_ptr or similar) while other references use some other policy.
> Finally though, C++ is a concurrent language. It has a whole memory model which doesn't even make sense if you aren't thinking about concurrency. But to deliver concurrent memory safety without Fil-C's overheads you would want... well, Rust's Send and Sync traits
Yeah, this might be an area where they leave a hole. Like, you might have reference counting that is only partially thread safe:
- The refcount of any object is atomic.
- The smart pointer itself is racy. So, racing on pointers can pop the protections.
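std::shared_ptr already has exactly this shape, which is what makes the scenario plausible (a sketch of the hole described above; the race is deliberate):

```cpp
#include <memory>
#include <thread>

int main() {
    auto p = std::make_shared<int>(42);

    // Safe: the control block's refcount is atomic, so concurrent
    // copies made from *distinct* shared_ptr objects are fine.
    std::shared_ptr<int> q = p;
    std::thread t1([q] { auto c = q; });

    // Unsafe: unsynchronized reads and writes of the *same* shared_ptr
    // object are a data race; this is what can "pop the protections".
    std::thread t2([&p] { p = std::make_shared<int>(7); });
    std::thread t3([&p] { auto c = p; });  // races with t2

    t1.join(); t2.join(); t3.join();
}
```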
If they got that far, then that wouldn't be so bad. The marginal safety advantage of Rust would be very slim at that point.
I doubt it, because the reason I favoured C++ over C back in 1993, was the safety culture, as someone coming from Turbo Pascal.
Somehow this has been deteriorating since 2000, as C++ kept getting C refugees that would rather keep using C, but work required C++ now.
Most of the hardening capabilities that are being added now were already part of the compiler frameworks during the 1990's, e.g. Turbo Vision, OWL, MFC, CSet++, MacApp, PowerPlant,...
C++ / profiles will not be able to do much less or much different to achieve the same goals.
Instead, for example, the lifetime safety profile (https://github.com/isocpp/CppCoreGuidelines/blob/master/docs...) is a Rust-like compile time borrow checker that relies on annotations like [[clang::lifetimebound]], yet they also repeatedly insist that profiles will not require this kind of annotation (see the papers linked from https://www.circle-lang.org/draft-profiles.html#abstract).
Their messaging is just not consistent with the concrete proposals they have described, let alone actually implemented.
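For reference, here is what the [[clang::lifetimebound]] annotation mentioned above looks like in use (the function is made up; the attribute and the -Wdangling warning behavior are Clang's):

```cpp
#include <cstddef>
#include <vector>

// The attribute binds the return value to the lifetime of `v`,
// letting Clang warn when the result outlives it.
const int& pick(const std::vector<int>& v [[clang::lifetimebound]],
                std::size_t i) {
    return v[i];
}

int main() {
    // Clang warns here (-Wdangling): the temporary vector dies at the
    // end of the full-expression, leaving r dangling.
    const int& r = pick(std::vector<int>{1, 2, 3}, 0);
    (void)r;
}
```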
Microsoft even has blog posts admitting that only with SAL like annotations it can be improved, while keeping the usual C++ semantics.
Yet WG21 has ignored this field experience.
1. They're either unimplementable or useless (too many false positives and false negatives).
I think this is pretty evident based on the fact that profiles have been proposed for a while and that no real implementation exists. Worse, out of all of the open source projects and for-profit companies, no one has been able to implement any sort of static analysis that would even begin to approach the guarantees Rust makes.
2. The language doesn't give you any tools to actually write safe code.
Ok, let's say that someone actually implements safety profiles. And it highlights your usage of a standard library type. What do you do?
Safe C++ didn't require a new standard library just because. The current stdlib is riddled with safety issues that can't really be fixed and would not be fixed because of backwards compatibility.
You're stuck. And so you turn the safety profile off.
To address your points: 1. The safe subset of C++ is too small to do anything with. 2. The Standard Library is not written in the safe subset.
My favorite example from the above paper is the problem of std::sort -- the compiler has no idea if both operands are iterators into the same allocation. The function is fundamentally unsafe. Which C++ profile do you turn on to make that safe? Does it ban use of std::sort? Does it ban use of all <algorithms>, all of which work on pointers/iterators that are susceptible to use-after-free UB?
The whole Standard Library is unsafe. I proposed a rigorously safe std2, and that was rejected. And now you propose a safe std2 (using refcounting primitives)--why would that fare better? What does Profiles actually propose? No change in existing code. The compiler simply finds all UB. Right.
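The std::sort example, concretely; nothing in the call site or signature distinguishes the valid call from the undefined one:

```cpp
#include <algorithm>
#include <vector>

int main() {
    std::vector<int> a{3, 1, 2};
    std::vector<int> b{9, 8, 7};

    std::sort(a.begin(), a.end());  // fine: one range, one allocation
    std::sort(a.begin(), b.end());  // compiles identically, but the
                                    // iterators belong to different
                                    // allocations: undefined behavior
}
```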
True. So many proposals have gone by over the years. Here's one of mine from 2001.[1] Bad idea. The layers of cruft in C++ have become so deep that it's a career just to understand them.
DARPA has something called the TRACTOR program, "Translate All C to Rust". It's been underway for a year, and they have a consortium of universities working on it. Not much, if anything, has come out. Disappointing.
Rust is probably too hard. I write 100% safe Rust, and there are times when I hit an ownership structure wall and have to spend several days re-planning. So far I've always succeeded without using "unsafe" or indices, but it drags down productivity.
Although object-oriented programming is out of fashion, classes with inheritance are useful. It's really hard to do something comparable in Rust. Traits are not that helpful for this.
Go is a good compromise. Safety at a minor cost in performance. Go is good enough for web back end stuff. Go has both GC and "green threads". This automates the problems that wear people down in C++ and Rust.
[1] https://www.animats.com/papers/languages/cppstrictpointers.h...
I really don’t understand this perspective. The whole philosophy of Rust is one where you document why “unsafe” is safe. It is not and never has been a goal to make everything safe because that is an impossible goal to merge with high performance systems language because hardware itself is unsafe. It’s why the unsafe keyword exists. If that wasn’t the goal, unsafe wouldn’t.
You shouldn’t go out of your way to use unsafe, but between that and 2 weeks refactoring, I’ll take the unsafe and use tools like miri or ASAN to provide extra guards. Engineering is inherently about making practical choices.
Even ignoring the practicality approach, there's a reason you see it in things like crossbeam or zerocopy - not everything worthwhile expressing can even be expressed in purely safe code, whether because of performance or purely because the ownership lifetime cannot be understood due to the limitations of the borrow checker, even when it is indeed safe code.
Understanding the implicit constraints of C/C++ functions is something where an LLM can help. Once you have the constraints recorded, they become formal constraints at the call. Historic C isn't expressive enough for even basic constraints.
Most of the constraints mentioned are at least expressible in Rust.
There is a common perception that Rust is less productive than competing languages, but empirical research by Google and others has found this to be wrong. Rust just shifts the effort earlier in the development phase, where the costs are often orders of magnitude lower. You may spend a few hours struggling with the borrow checker, but that saves you countless days of debugging highly non-trivial defects, especially in a larger codebase.
> Although object-oriented programming is out of fashion, classes with inheritance are useful. It's really hard to do something comparable in Rust. Traits are not that helpful for this.
FWIW, "classes with inheritance" in Rust can be very elegantly modeled with generic typestate. (Traits are used as part of this pattern, but are not the full story.) It might look clunky at first glance, but it accurately reflects the underlying semantics.
That works fantastically when you're rewriting something - you already have the idea and final product nailed down.
It works poorly when you don't have everything nailed down and might switch a lot of stuff around, or remove stuff that isn't needed, etc.
If you're prototyping code you can just do defensive .clone() calls and use Rc<> to avoid borrow checker issues. You don't need maximum efficiency, and the added boilerplate doesn't hurt that much: in fact, it helps should you want to refactor the code later.
I do prototype applications in Rust and it involves heavy refactoring, including deletions. Those steps are the easiest ones for me and rarely give me any headache. Part of the reason is the interfaces that you're forced to define clearly early on. Even the unrelated friction of satisfying the borrow checker gently nudges you towards that.
The real problems are often caused by certain operations that the type system can't prove to be safe, even when they are. For example, you couldn't write async closures until recently. Such situations often require lots of thought to resolve. You may have to restructure your code or use a workaround like Rc.
The point is, these sorts of assumptions often don't seem to hold in practice, at least in my experience. My personal experience doesn't agree with the assertion that prototyping is hard in Rust.
63 more comments available on Hacker News