C++26: Erroneous Behaviour
Source: sandordargo.com
Key topics: C++, Programming Languages, Software Safety
The article discusses the new 'erroneous behaviour' feature in C++26, sparking a debate among commenters about the language's safety, ergonomics, and future prospects.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment after 38m; peak of 49 comments in the 0-6h window; average of 11.1 comments per period, based on 100 loaded comments.
Key moments
- Story posted: Sep 6, 2025 at 6:51 PM EDT (4 months ago)
- First comment: Sep 6, 2025 at 7:29 PM EDT (38m after posting)
- Peak activity: 49 comments in the 0-6h window (hottest window of the conversation)
- Latest activity: Sep 9, 2025 at 10:53 AM EDT (4 months ago)
Is this to cover cases that would be hard/costly to detect? For example you pass the address of an uninitialized variable to a function in another source file that might read or just write to it, but the compiler can't know.
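To make the scenario concrete, a minimal sketch (the function names and the split across files are made up for illustration):

```cpp
// a.cpp
void init(int* out);  // defined in another translation unit

int use() {
    int x;     // uninitialized
    init(&x);  // does init() write to x, or read it first? without
               // inter-procedural analysis the compiler can't tell
    return x;  // fine if init() wrote x; garbage if it only read it
}

// b.cpp -- one possible definition
void init(int* out) { *out = 42; }  // write-only: this pattern is safe
```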
I've used C++ for so long, and I'm a good way into thinking that the language is just over. It missed its mark because of backwards compatibility with fundamental language flaws. I think we can continue to add features - which are usually decent ideas given the context, kudos to the authors for the effort - but the language will decline faster than it can be fixed. Furthermore, the language seems to be becoming continually harder to implement, support, and write due to the constant feature additions and the increase in semantic interconnectivity.

To me it's mostly a theoretical exercise to add features at this point: practically, we end up with oddly specced features which mostly work but are fundamentally crippled, because they need to dodge an encyclopedia of edge cases. The committee are really letting the vision of a good C++ down by refusing to break backwards compatibility to fix core problems. I'm talking fundamental types, implicit conversions, initialisation, the preprocessor, undefined / ill-formed-NDR behaviour. The C++ I'm passionate about is dead without some big changes I don't think the community can or will handle.
Sure, we are looking at options - but Rust and C++ don't interoperate well (the C API is too limiting). D was looking interesting for a while, but I'm not sure how it fits (D supports the C++ ABI).
Many developers never do the math of what their salaries actually cost.
We in consulting are deeply aware of the impact of mapping hours and days into money, plus license and support costs, and infrastructure.
Hence why I dislike all those "I rewrote X into Y" stories: great, and how many thousands did that fun endeavour cost the employer?
you can perhaps guess based on my comment history but I'm not supposed to state directly.
We do these night walks out in the Forest. 2-4 hours in the countryside, when it's still pleasantly warm but no sun. Owls, bats, insects and occasionally horses or deer that were not expecting humans - no sudden moves because horses or deer can easily kill you by mistake if startled. Without much light it's hard to be sure whether you're on a trail and if you've reached the fork you expected or just a place where there's a tree in the way.
So, it's not rare to realise we've made a mistake. Now, if you realise after 5 minutes you can "just" backtrack. It's nowhere near as easy in darkness, but it's possible and arguably preferable. However it is also possible to realise after say an hour. You thought you'd be in a clearing, on a ridge, but you're a mile away approaching a river crossing. Oops. It is usually not correct to retrace your steps in this case, you need a new plan taking into account what you know now about your position.
Memory safety semantics aside (needed and will be disruptive, even if done gradually) ---
You could get 80% of the way to ergonomic parity via a 1:1 re-syntaxing, just like Reason (new syntax for OCaml) and Elixir (new syntax for Erlang). C++ has good bones but bad defaults. Why shouldn't things be const by default? Why can't we do destructuring in more places? Why is it so annoying to define local functions? Why do we have approximately three zillion ways of initializing a variable?
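To make the "three zillion ways" complaint concrete, a quick (non-exhaustive) sketch:

```cpp
#include <vector>

int main() {
    int a = 1;    // copy initialization
    int b(2);     // direct initialization
    int c{3};     // direct list initialization
    int d = {4};  // copy list initialization
    auto e = 5;   // copy initialization with type deduction
    auto f{6};    // deduces int since C++17 (not initializer_list)
    int g{};      // value initialization: zero
    int h;        // default initialization: indeterminate value

    std::vector<int> v1(3, 7);  // three elements, each 7
    std::vector<int> v2{3, 7};  // two elements: 3 and 7
}
```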
You can address a lot of the pain points by making an alternative, clean-looking "modern" syntax that, because it's actually the same language as C++, would have perfect interoperability.
Sounds like Herb Sutter's cpp2/cppfront: https://github.com/hsutter/cppfront
I mean, c'mon
> Show that in today's C++ guidance literature we already teach C++ programmers to make local variables const, and link to a few examples of such guidance (which will contain more details, including examples of problems that the advice avoids). That will start to make the case. If you can do that, feel free to open a "suggestion" issue about it!
That "C++ guidance literature" is called the Rust book. Const by default won.
You'd have an existing language with a new syntax; it can perfectly interact with existing C++ code, but you could make those suggested changes, and could also express things in the new syntax that couldn't be done in the old one.
EDIT: taking an example from elsewhere in this thread: passing the address of an uninitialised variable to a function. Today the compiler can't (without inter-procedural analysis) tell whether this is a use of uninitialised data, or whether the function is only going to write/initialise the variable.
A new syntax could allow you to express that distinction.
Perhaps a better language could even get some traction without major corporate sponsorship. I think (?) Rust and Zig are examples of that.
Rust might not count, depending on whether you consider Mozilla's sponsorship a major corporate sponsorship.
Also, things often just don't compose well. For example, if you have a nested class that you want to use in an unordered_set in its parent class, you just can't do it, because you can't put the std::hash specialization anywhere legal. It's two parts of the language which are totally valid on their own but don't work together. Stuff like this is such a common problem in C++ that it drives me nuts.
This is not true. From within your parent class you use an explicit hashing callable, and then from outside of the parent class you can go back to using the default std::hash.
The result looks like this:
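(The snippet itself wasn't preserved in the snapshot; below is a minimal reconstruction of the described workaround, assuming a nested Bar with a single int member.)

```cpp
#include <cstddef>
#include <functional>
#include <unordered_set>

class Foo {
public:
    struct Bar {
        int id;
        bool operator==(const Bar&) const = default;
    };

private:
    // std::hash<Bar> can't be specialized from inside Foo,
    // so use an explicit hashing callable here instead.
    struct BarHasher {
        std::size_t operator()(const Bar& b) const {
            return std::hash<int>{}(b.id);
        }
    };
    std::unordered_set<Bar, BarHasher> bars_;
};

// Outside the parent class the specialization is legal, so other
// users of Foo::Bar get the default std::hash behaviour back.
template <>
struct std::hash<Foo::Bar> {
    std::size_t operator()(const Foo::Bar& b) const {
        return std::hash<int>{}(b.id);
    }
};
```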
The std::hash specialization at the end is legal and allows other users of Foo::Bar to use std::unordered_set<Foo::Bar> without needing the explicit BarHasher.

This is an interesting perspective to me, because my view as someone who's been using Rust since close to 1.0 and hasn't done much more than dabble in C++ over the years is basically the opposite. My (admittedly limited) understanding is that breaking backwards compatibility has never really been a goal of the committee, because if someone is willing to sacrifice backwards compatibility, they could presumably just switch to one of those other languages at that point. Arguably the main selling point of C++ today is the fact that there's a massive set of existing codebases out there (both libraries that someone might want to use and applications that might still be worked on), and for the majority of them, being rewritten would be at best a huge effort and more realistically not something that would ever be seriously considered.
If the safety and ergonomics of C++ are a concern, I guess I'm not sure why someone would pick it over another language for a newly-started codebase. In terms of safety, Rust is an option that exists today without needing C++ to change. Ergonomics are a bit less clear-cut, but I'd argue that most of the significant divergences in ergonomics between languages are pretty subjective, and it's not obvious to me that there's a significant enough gap between Rust's and C++'s respective choices to warrant a new language that's not compatible with C++ but far enough from Rust for someone to refuse to use it on the basis of ergonomics alone. It seems to me like "close enough to C++ to attract the people who don't want to use Rust, but far enough from Rust to justify breaking C++'s backwards compatibility" is just too narrow a niche for it to be worth it for C++ to go after.
However I think C++ still has some things going for it which may make it a useful option, assuming the core issues were fixed. C++ gives ultimate control over memory and low level things (think pointers, manual stack vs heap, inline assembly). It has good compatibility with C ABIs. It's very general purpose and permissive. And there are many programmers with C++ (or C) knowledge out there already.
Further, I think C++ started on its current feature path before Rust really got a big foothold. Consider C++ has been around a really long time, plenty long enough to fix core features.
Finally, I reckon the whole backwards compatibility thing is a bit weird, because if the code is so ancient and unchangeable, why does it need the latest features? Like you desperately need implicit long-to-int conversion but also coroutines?? And for regular non-ancient code, we already try to avoid the problematic parts of C++, so fixing/removing/changing them wouldn't be so bad. IMO it's a far overdone obsession with backwards compatibility.
Of course without a significant overhaul to the language you'd probably say "screw it" and start from scratch with something nicer like Rust.
I think I'm most confused about the last part of what you're saying. A significant overhaul to the language in a breaking way feels pretty much the same as saying "screw it" and starting from scratch, just with specific ergonomic choices being closer to C++ than to Rust. Several of the parts that you cite as the strengths of the language, like inline assembly and pointers, are still available in Rust, just not outside of explicitly unsafe contexts, and I'd imagine that an overhaul of C++ to enhance memory safety would end up needing to make a fairly similar compromise for them. It just seems like the language you're wishing for would end up with a fairly narrow design space, even if it is objectively superior to the C++ we have today, because it would have to give up the largest advantage that C++ does have without enough unoccupied room to grow into.

The focus on backwards compatibility doesn't seem to be that it would necessarily be the best choice in a vacuum, but a reflection of the state of the ecosystem as it is today, and a perception that sacrificing it would mean giving up its position as the dominant language in a well-defined niche to try to compete in a new one. This is obviously a subjective viewpoint, but it doesn't seem implausible to me, and given that we can't really know how it would work out unless they try, sticking with compatibility feels like the safer option.
Headers would be a problem given their text inclusion in multiple translation units, but it's not insurmountable; you're currently limited to the oldest standard a header is included into, and under a new standard that breaks compatibility you'd be limited to a valid subset of the old & new standard.
EDIT: ironically modules (as a concept) would (could?) solve the header problem, but they've not exactly been a success story so far.
Because they are little different from precompiled headers. `import std;` may be nice, but in a large project you are likely to have your own defines.hpp file anyway (which is going to be precompiled for a double-digit compile-time reduction).
Ironically too, migrating every header in an executable project to modules might slow down build times, as dependency chains reduce the parallelism factor of the build.
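For reference, a minimal sketch of what the module story looks like (assuming a C++23 toolchain with module support; file naming varies by compiler):

```cpp
// math.ixx -- a named module unit
export module math;
export int add(int a, int b) { return a + b; }

// main.cpp -- a consumer
import std;   // the C++23 standard library module
import math;
int main() { std::println("{}", add(2, 3)); }
```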
So does Rust.
> It's very general purpose and permissive
I don't really understand this point.
> And there are many programmers with C++ (or C) knowledge out there already
Indeed, the only thing C++ has going for it is its legacy: lots of projects written in C++ and many programmers who know the language. And that's exactly why it is difficult to change.
When there are new features added to the language, it takes time to adjust because the old code isn't getting converted by itself, and the devs need to learn and get used to the new features.
(If you can afford to teach your team new features of C++, you can as well teach your team Rust.)
[0] https://github.com/emilk/egui
[1] https://www.egui.rs/
Like retained mode GUIs, games, intrusive containers or anything that can't be trivially represented by a tree of unique_/shared_ptr?
Modern games are almost never made in a native language, but rather in a language on top of it, be it Squirrel, Lua, Blueprints, C#, WASM, JavaScript, state trees, behaviour trees, decision trees...
> (hopefully) will do it better
So, not quite literally everything
Be honest.
You have been called out for dishonesty. Stop lying. I won't respond again.
I just don't see a reason to use C++ anymore (as a language).
There, happy? Point being, if I'm starting a project, it's going to be Rust, because it's just plain better - as long as it's something I actually care about, isn't related to breaking things or leveraging the fact that C++ is worse (in a good way), and I'm not already working with C++.
IIUC this is what Profiles are. It's an opt-in, per-source-file method to ban certain misfeatures and require certain other safe features.
But then the problem becomes where exactly do you opt-in to this feature? If you do it in a header file then this can result in a function being compiled with the safety profile turned on in one translation unit and then that exact same function is compiled without that safety profile in another translation unit... which ironically results in one of the most dangerous possible outcomes in C++, the so-called ODR violation.
If you don't allow safety-profiles to be turned on in header files, then you've now excluded a significant amount of code in the form of templates, constexpr and inline functions.
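A sketch of that hazard; the profile-enabling mechanism is left hypothetical here, since no syntax has been standardized:

```cpp
// common.hpp -- included from two translation units
#include <vector>

inline int get(const std::vector<int>& v, int i) {
    return v[i];  // bounds-checked under a (hypothetical) safety
                  // profile, unchecked without one
}

// tu1.cpp: includes common.hpp with the profile enabled
// (hypothetically via a pragma, module, or build flag)
// tu2.cpp: includes common.hpp with the profile disabled
//
// The program now contains two different definitions of the same
// inline function get(): an ODR violation. The linker silently keeps
// one of them, and which one any given caller gets is unspecified.
```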
Stroustrup, 2023 presentation: https://open-std.org/JTC1/SC22/WG21/docs/papers/2023/p2816r0...
Sutter, 2025 committee paper: https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p30...
Sutter, 2024 blog post: https://herbsutter.com/2024/03/11/safety-in-context/
https://safecpp.org/draft.html
What do you mean?
- The P3390R0 Safe C++ proposal and the related vote in the 2024-11 Wrocław meeting: https://github.com/cplusplus/papers/issues/2045
- The adoption of the paper P3466 R1 (Re)affirm design principles for future C++ evolution at the same meeting, which contains language which can be interpreted as preemptively foreclosing a Safe C++-style approach: https://github.com/cplusplus/papers/issues/2121
The idea in this document was to write down "principles" of the C++ language without regard to annoying facts which might put those principles in doubt.
That would at the very least be better than what we have now.
I don't think it would be that straightforward, since this is one of those changes which depends pretty heavily on compiler internals, unlike library-only reference implementations. It'd be like taking a reference implementation in Clang and "adding it" to GCC/MSVC.
Like any language that lasts (including Python and Rust) you subset it over time: you end up with linters and sanitizers and static analyzers and LSP servers, you have a build. But setting up a build is a one-time cost and maintaining a build is a fact of life, even JavaScript is often/usually the output of a build.
And with the build done right? Maybe you don't want C++ if you're both moving fast and doing safety- or security-critical stuff (as in, browser, sshd, avionics critical), but you shouldn't be moving fast on avionics software to begin with.
And for stuff outside of that "even one buffer overflow is too many" Venn?
C++ doesn't segfault more than either of those after it's cleared clang-tidy and ASAN. Python linking shoddy native stuff crashes way more.
https://github.com/Speykious/cve-rs
> That is why cve-rs uses #![deny(unsafe_code)] in the entire codebase. There is not a single block of unsafe code (except for some tests) in this project.
> cve-rs implements the following bugs in safe Rust:
* Use after free
* Buffer overflow
* Segmentation fault
[0] https://hal.science/hal-01633165v2/document
[1] https://github.com/rust-lang/rust/issues/25860
[2] https://github.com/rust-lang/rust/issues/107975
Caution: Lifetime may not live long enough
https://github.com/Speykious/cve-rs/issues/48
If you have test coverage appropriate to a low-defect target, ASAN approaches linear typing for the class of bugs that the borrow checker catches.
Python and Rust have their sweet spots just like C++ does; I use all three. The meme that either is a blanket replacement for all of C++'s assigned duties is both silly by observation and contradicted by the continued enthusiastic adoption of C/C++ in the highest-demand regimes: AI inference, HFT, extreme CERN-scale physics, your web browser.
Python is better for fast prototyping and other iteration-critical work; Rust is unmatched for "zero defect target with close to zero cost abstractions" (it was designed for writing a web browser, and in that wildly demanding domain I can think of nothing better).
C++ excels in extreme machine economics scenarios where you want a bit more flexibility in your memory management and a bit more license to overrule the compiler and a trivial FFI story. Zig to me looks likely to succeed it in this role someday.
All have their place, software engineers in these domains should know all of them well, and any "one true language" crap is zealotry or cynical marketing or both.
When it's for all the marbles today? It's C++. The future is an open question, but a lot of us are pushing hard for C++.
Backwards compatibility is what keeps C and C++ relevant, for better or worse.
All major languages value backwards compatibility to a certain extent.
Python was lucky that, after the Python 2 to Python 3 mess, becoming the AI scripting language turned things around.
C++ sucks for macOS, because it was never a first party language, unless using it via Objective-C++.
The only role of C++ in macOS is for drivers, LLVM and being the Metal Shading Language (C++14 with extensions).
In an homage to NeXTSTEP, the userspace version of IO Kit is named DriverKit.
There are some frameworks that might use C++ in their implementations, like Core Audio, however they are exposed to userspace as C APIs and Objective-C frameworks.
Hence why Apple nowadays mostly cares about LLVM, clang is good enough for their limited uses of C++.
https://developer.apple.com/xcode/cpp/
https://cppreference.com/w/cpp/compiler_support.html
Slint, Bevy and no idea what a GPU framework is (even search engines are stumped; they keep recommending Framework laptops). Is it just CUDA?
Not gonna lie, what exactly is this mishmash of stuff? How to be C++ in 3001 steps?
Ergonomic as C++? That sounds as oxymoronic as "musical as nails on a chalkboard or a cat drowning in porridge." Having had the displeasure of reading a relatively complex C++ codebase (simdjson), I'm left in deep appreciation of Java.
Followed by being integrated into RenderDoc, PIX, Instruments, and Nsight, for debugging purposes and instrumentation.
GPU frameworks means having feature parity across CUDA, oneAPI, Metal Shading Language, and ROCm: in IDE tooling, graphical debuggers, and industry and academic support.
Having a seat at the table when Khronos and its partners are discussing the next GPU standards; so far only C, C++, and most recently Python have a seat allocated.
It had to have a commercial game engine: so C++ or C#. It had to have cross platform GUI with Qt features - so C++, maybe Delphi. And it had to have seat at GPU standards so C, C++ and Python.
Forget Rust. C doesn't fit your criteria. Or Java. Or C#...
By cherry picking your criteria you can create an argument that pre selects one language.
Console devkits included.
Also the fact that C89 (minus one or two things) is a subset of C++ is still a reality that many studios take advantage of.
Java is indeed not used at all in the commercial games industry, unless the authors only focus on desktop or Android.
There is a reason why the very first thing Microsoft did after the Mojang acquisition was to rewrite Minecraft in C++ for mobile platforms and game consoles. The only reason they don't get rid of the Java version is the modding community.
C# is only used thanks to Unity, and even that is powered by a C++ engine and a compiler that translates MSIL bytecode into C++.
Yes, Rust still has a lot to catch up in this GUI and games industry.
On VFX, it isn't even part of the reference platform for the industry:
https://vfxplatform.com/
Where is its equivalent of QGraphicsView? Or QTextDocument?
And Slint is probably more mature than any other pure-Rust GUI. The rest, as of now, are little more than toys.
Look, I enjoy Rust as well, but be realistic.
I wasn't aware having feature parity with <INSERT FRAMEWORK FOR INSERT LANGUAGE> was necessary for a language to beat C++. I could have sworn Java did it without half of those.
Ultimately it depends on what you want to do: cross-platform GUI, or winning pointless debates?
If you really want cross platform GUI you pretty much have to use Web based UI. Because Apple is extremely hostile to anything that tries to be native on its platform.
IDEs are the only surviving Java desktop applications, and a few outliers like Bitwig DAW.
Well, by that measure neither did C/C++. Only a few survivors in the deluge of Electron apps.
Chrome and Firefox also use JavaScript/CSS for their GUI.
Most of the rest of my complaints could be addressed by jettisoning backward compatibility and switching to more sensible defaults. I realize this will never happen.
C++ still has some unique strengths, particularly around metaprogramming compared to other popular systems languages. It also is pretty good at allowing you to build safe and efficient abstractions around some ugly edge cases that are unavoidable in systems programming. Languages like Rust are a bit too restrictive to handle some of these cases gracefully.
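As a small illustration of the metaprogramming point (a sketch, not taken from the comment):

```cpp
#include <array>
#include <cstddef>

// A lookup table sized and filled entirely at compile time via a
// non-type template parameter and constexpr evaluation.
template <std::size_t N>
constexpr std::array<int, N> squares() {
    std::array<int, N> a{};
    for (std::size_t i = 0; i < N; ++i) {
        a[i] = static_cast<int>(i * i);
    }
    return a;
}

constexpr auto table = squares<16>();
static_assert(table[3] == 9);  // verified during compilation
```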
The difference in API shapes in their respective standard libraries is because of culture not technology. Easy to see example: In C++ the two fundamental sort functions are named sort and stable_sort. In Rust those same two sorts are named sort_unstable and sort, a subtle but crucial safer choice. Or compare C++ vector::pop_back against Rust Vec::pop
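A small sketch of the pop comparison (C++ side shown; the Rust behaviour is noted in comments):

```cpp
#include <vector>

int main() {
    std::vector<int> v{3, 1, 2};
    v.pop_back();  // returns void: the removed value is discarded
                   // (Rust's Vec::pop returns it as Option<T>)

    std::vector<int> empty;
    // empty.pop_back();  // undefined behaviour on an empty vector;
                          // Rust's pop would simply return None
}
```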
I would personally like a safe (and fast) subset that doesn't require me to be vigilant but catches everything I could do wrong, to the same level as Rust. Then, like Rust, you could remove that flag for a few low-level parts that for some reason need to be "unsafe" (maybe because they call into the OS).
There was a good talk from the WebKit team about stuff they did to get more safety.
https://www.youtube.com/watch?v=RLw13wLM5Ko
Some of it was AST level checks. IIRC, they have a pre-commit check that there is no pointer math being used. They went over how to change code with pointer math into safe code with zero change in performance.
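The talk's examples aren't reproduced here, but the kind of transformation described looks roughly like this:

```cpp
#include <cstddef>
#include <span>

// Raw pointer arithmetic -- the kind a pre-commit check would flag.
int sum_raw(const int* p, std::size_t n) {
    int s = 0;
    for (const int* e = p + n; p != e; ++p) s += *p;
    return s;
}

// Equivalent std::span version: the bounds travel with the data, and
// optimizers typically emit the same loop, so performance is unchanged.
int sum_span(std::span<const int> xs) {
    int s = 0;
    for (int x : xs) s += x;
    return s;
}
```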
A similar one was Ref usage checking, where they could effectively see a ref-counted object being passed as a raw pointer to a function that might free the ref and then still be used in the calling function. They could detect that with an AST-based checker.
That said, I have no idea how they (the C++ committee) are going to fix all the issues. --no-undefined-behavior would be a start. Can they get rid of the perf bombs with std::move? Why do I have to remember that shit?
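For anyone unfamiliar, a sketch of the classic std::move perf bomb (and footgun) being alluded to:

```cpp
#include <string>
#include <utility>

std::string make() {
    std::string s = "hello";
    return std::move(s);  // pessimization: std::move here defeats
                          // copy elision (NRVO), forcing a move that
                          // a plain `return s;` would have avoided
}

void consume(std::string s) { (void)s; }

void caller() {
    std::string name = "abc";
    consume(std::move(name));
    // name is now in a valid-but-unspecified moved-from state;
    // reusing it is the footgun you have to remember
}
```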
"no one goes there anymore it's too crowded"
It's widely used, and you can do so effectively if you need it and know what you're doing.
They love to say C++ is for everyone, but it is clearly not. Only wizards and nerds burdened by the sunk-cost fallacy are willingly writing this modern C++ code. I personally just use C++ as a "nicer" C.
Unless they were incumbent and inertia keeps them in. Or they're the only choice you have for a niche target. Or you have some other reason to keep them, such as (thinking?) the performance they bring is more important.
For this case, there is the [[indeterminate]] attribute.
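A minimal sketch of how it applies to the scenario above (the write-only helper is assumed):

```cpp
void init_only(int* out);  // known to only write through the pointer

int f() {
    int x [[indeterminate]];  // C++26: opt out of the erroneous-value
                              // initialization; reading x before a
                              // write remains undefined behaviour
    init_only(&x);            // so there is no forced-init cost here
    return x;
}
```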
Surely all good things come to an end, but where? I reckon there will be a C++29. What about C++38? C++43 sounds terrifying. Mid-century C++? There is no way in hell I will still be staying up to date with C++43. Personally, I've already cut the cord at C++11.
As long as there are people willing to put in the work to convince the standards committee that the proposals they champion are worth adding to C++ (and as long as there is a committee to convince, I suppose), then new versions of C++ will continue to be released.
> there is no way in hell i will still be staying up to date with C++43. Personally I've already cut the cord at C++11.
Sure, different developers will find different features compelling and so will be comfortable living with different standards. That one group is fine with existing features shouldn't automatically prevent another from continuing to improve the language if they can gain consensus, though.
#1 Require initialization (like many modern languages). Makes sense. But now your existing C++ doesn't even compile so that's a hard "No" from the committee.
#2 Status quo: evaluating uninitialized variables is Undefined Behaviour. We cannot diagnose this reliably; any attempt will be best-effort. Several vendors already supply such diagnostics, but when they don't catch you, arbitrary nonsense happens.
#3 Zero init. Now not initializing has defined behaviour, all the diagnostic tools we saw in #2 are invalidated and must be removed, but did you actually mean zero? Awful bugs still occur and now our best tools to solve them are crippled. Ouch.
#4 Erroneous Behaviour. Unlike #3 we do not invalidate those diagnostic tools from #2 because we've said the tool was correct. However, we do avoid Undefined Behaviour, something bad might happen but at least it's something you can reason about and it is clearly stated that it's your fault.
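In code, the #4 semantics look like this (a sketch of the C++26 behaviour):

```cpp
int main() {
    int x;      // default-initialized: x holds an "erroneous value"
    int y = x;  // C++26: erroneous behaviour, not undefined behaviour.
                // The program still gets some concrete value, tools may
                // (and should) diagnose it, and the standard says
                // plainly that the code is wrong -- but the compiler
                // may not treat it as a licence to do anything at all.
    return y;
}
```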
Because it is an actual security vulnerability if you cross privilege boundaries (infoleaks/(K)ASLR bypass, etc.), and one people often miss at that.
Say you write:
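(The original snippet wasn't preserved in the snapshot; here is a minimal reconstruction consistent with the "7 stack bytes" figure. The struct layout and the write() call are assumptions.)

```cpp
#include <unistd.h>  // write(), POSIX

struct Msg {
    char tag;      // 1 byte, followed by 7 bytes of alignment padding
    double value;  // 8 bytes
};

void send(int fd) {
    Msg m;
    m.tag = 1;
    m.value = 3.14;
    write(fd, &m, sizeof m);  // sizeof(Msg) is typically 16: the 7
                              // padding bytes were never written and
                              // still hold old stack contents
}
```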
You end up leaking 7 stack bytes here (due to padding).

GCC's `-ftrivial-auto-var-init=pattern` currently initializes all unknown-value stack variables with 0xFEFEFEFE(...). This is usually an invalid floating-point value, an invalid offset, and an invalid virtual address, allowing crashes to happen. This is a good thing.
Regarding performance, there is an attribute to opt out (both for the standard C++26 feature and for the GCC option, which is a subset of it).
`-ftrivial-auto-var-init=pattern` doesn't need "erroneous behaviour" in the standard at all. In fact, it may outright conflict with it, if for example the standard defines that the compiler must initialize variables to zero instead of your chosen pattern in case of "erroneous behaviour".
"Erroneous behaviour" is a superfluous concept that exists only to allow the committee to pat themselves on the back and say "See? We no longer have undefined behaviour!".
I guess it's my old theory that organisations turn into living beings and start living off of mere survival instinct even past the point of serving any purpose to society.