What's New in C# 14: Null-Conditional Assignments
Source: blog.ivankahl.com
The introduction of null-conditional assignments in C# 14 has sparked a heated debate among developers, with some praising its conciseness and others criticizing its potential for abuse and decreased readability.
Snapshot generated from the HN discussion
Story posted Sep 15, 2025; first comment 2 days later (Sep 17); peak activity of 78 comments in the 48-60h window; latest activity Sep 22, 2025. Snapshot based on 160 loaded comments.
I think it is for situations where the programmer wants to check a child property but the parent object may be null. If the parent is expected to be null sometimes, the syntax lets the programmer express "try to get this value, but if we can't then move on" without the boilerplate of explicitly checking null (which may be a better pattern in some cases).
It's sort of like saying:
- Get the value if you can, else move on. We know it might not be there and it's not a big deal.
vs.
- The value may not be there, explicitly check and handle it because it should not be null.
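The two attitudes above can be sketched side by side. The `Config`/`Settings`/`Timeout` names here are invented for illustration; the C# 14 one-liner and the pre-14 explicit check below have the same effect:

```csharp
class Settings { public int Timeout { get; set; } }
class Config { public Settings? Settings { get; set; } }

class Demo
{
    static void Apply(Config? config)
    {
        // C# 14: "set it if you can, else move on" in one line.
        // If config or config.Settings is null, nothing happens.
        config?.Settings?.Timeout = 30;

        // Pre-14 equivalent: explicit checks, same semantics,
        // but it reads as "this should not be null, handle it".
        if (config is not null && config.Settings is not null)
        {
            config.Settings.Timeout = 30;
        }
    }
}
```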
> If config?.Settings is null, the assignment is skipped.
If the right hand expression has side effects, are they run? I guess they do, and that would make the code more predictable.
> Side-Effect Prevention: when a null-conditional assignment is evaluated, the right-hand side of the expression is not executed unless the left-hand side is non-null.
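A sketch of that rule, reusing the GetNextId-style example discussed below (the `Config`/`Settings` types are invented):

```csharp
using System;

class Settings { public int Id { get; set; } }
class Config { public Settings? Settings { get; set; } }

class Demo
{
    static int _next;

    static int GetNextId()
    {
        // Side effect: consumes an id and logs.
        Console.WriteLine("GetNextId ran");
        return ++_next;
    }

    static void Main()
    {
        var config = new Config();          // Settings is null

        config.Settings?.Id = GetNextId();  // skipped: GetNextId() never runs

        config.Settings = new Settings();
        config.Settings?.Id = GetNextId();  // now the RHS runs and assigns
    }
}
```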
I really dislike that, because it hides the control flow too much. Perhaps I'm biased by Racket, where it's easy to define something weird using macros, but you should not do unexpected weird things.
For example, you can define vector-set/drop that writes a value to a position of a vector, but ignores the operation when the position is outside the vector.
With a macro it is possible to skip (print "banana") because -2 is clearly out of range, but if you do that everyone will hate you.
That looks like a good reason. Looking at https://learn.microsoft.com/en-us/dotnet/csharp/language-ref... , the magic is in the LHS or the RHS, and in ??= the magic is explicitly in the connection. Here, IMHO, the problem is that the magic jumps from one side to the other. Anyway, I don't use C#.
The motivation is that you don't want the side effects in some cases like GetNextId(), but I think it's still strange. I haven't thought deeply about it, but I _think_ I'd rather keep the intuitive right-hand-first evaluation and explicitly have to use if (..) in case I have a RHS whose side effects I need to avoid when discarded.
I feel like this is another step in the race to add every conceivable feature to a language, for the sake of it.
When you have an expression P which names a mutable place, and you execute P := X, the contract says that P now exhibits the value X, until it is assigned another value.
Conditional assignment fucks this up. When P doesn't exist, X is not stored. (Worse, it may even be that the expression X is not evaluated, depending on how deep the fuckery goes.)
Then when you access the same expression P, the conditional assignment becomes conditional access and you get back some default value like a nil.
Store X, get back nil.
That's like a hardware register, not a durable memory model.
It's okay for a config.connection?.retryPolicy to come up nil when there is no config.connection. It can be that the design makes nil a valid retry policy, representing some default. Or it could be that it is not the case, but the code which uses connection? handles the nil soon afterward.
But a store to config.connection?.retryPolicy not having an effect; that is dodgy.
What you need for config.connection? to do when the expression is being used to calculate a mutable place to assign to is to check whether config.connection is null, and in that case, instantiate a representative instance of something which is then assigned to config.connection, such that the config.connection.retryPolicy place then exists and the assignment can proceed.
This is recognizable as a variation on COW (copy-on-write): having some default instance for reading, but allocating something on writing.
In a virtual memory system, freshly allocated memory can appear to contain zero bytes on access due to all of its pages being mapped to a single all-zero frame that exists in the entire system. Conceptually, the hardware could do away with even that all-zero frame and just have a page table entry which says "this is a zero-filled page", so the processor then fakes out the zero values without accessing anything. When the nonexistent page is written, then it gets the backing storage.
In order to instantiate settings.connection? we need to know what that has to be. If we have a static type system, it can come from that: the connection member is of some declared type of which a representative instance can be produced with all constructor parameters defaulted. Under a dynamic paradigm, the settings object can have a handler for this: a request to materialize a field of the object that is required for an assignment.
If you don't want a representative config.connection to be created when config.connection?.retryPolicy is assigned, preferring instead that config.connection stays null, and the assignment is sent to the bit buckets, you have incredibly bad taste and a poor understanding of software engineering and programming language design --- and the design of your program is scatter-brained accordingly.
The config example isn't the best, but instead imagine if it were just connection?.retryPolicy. After you set connection?.retryPolicy it would be weird for reading it back to be null. But it would be just as weird for connection?.retryPolicy to not be null when we never established a connection in the first place.
The copy on write analogy is tempting but what you're describing only works when the default value is entirely made of nulls. If you need anything that isn't null, you need to actually make an object (either upfront or on first access). And if you do that, you don't need ?. anymore.
If it worked using "materialize-on-write" semantics, why wouldn't you, as an alternative to the verbose code which checks every path component that might not exist, and instantiates it before doing the assignment?
Obviously, you can't use it if you don't have materialize-on-write semantics in the assigned expression; it's not that you shouldn't.
But syntax error would be fine.
Definitely not acting the same as a question mark.
That's the mindset the feature is developed for (and by).
This isn't the case, though, is it? A normal member access (or indexer) expression may point to a mutable location (field, property). However, with conditional access expressions you get either a member access _or nothing_. And that nothing is not a mutable place.
When you use any of the conditional operators, you split the following code into two paths, and dropping the assignment (since there's nothing to assign to) seems pretty consistent to me, since you'd also drop an invocation, or a property evaluation in similar expressions.
If you require P to point to something that actually exists because you want the assignment to succeed, then write code to ensure that P exists because the assignment has no way of knowing what the intention was on the left side.
Once you introduce this misfeature, mutable places no longer have the property that they all record the assigned value and keep it until the next assignment.
Now, sure, you also don't have that property when you have operator overloading that allows assignment to be coded by the programmer; but that's a design decision under the programmer's control which affects only that code base, not the entire language and all its users.
And we all get to choose what we find ridiculous:
i = i + 1 ? No it does not. Never has, never will.
Connection is null? It's insane to type it as Connection then. null has type Null.
The syntactic sugar already exists before this change. That is to say, in a version of C# without the feature, you can write this: `a.b?.c = foo`.
It's not well-formed semantically. What the change does is allow the above to be well-formed semantically. If b isn't null, then it behaves like a.b.c = foo. Otherwise, the value of foo is discarded. (Perhaps foo isn't even evaluated?!)
The idea that there is no change in semantics, but only syntactic sugar is exactly backwards.
A particular meaning was assigned to combinations of syntactic sugar which were previously invalid.
That meaning involved making a design choice among multiple possible meanings.
Is there any public visibility to the design decision; what alternatives were considered for the semantics and rejected?
> The syntactic sugar already exists before this change. That is to say, in a version of C# without the feature, you can write this:
You cannot. The semantics is conditional assignment, which already exists in the language. Hence the article repeatedly bludgeoning you with pairs of code snippets whose semantics is the same, but with a new syntax. So maybe you could say "there's no syntax change" in the sense that "there is only additional syntax; none of the old syntax has been modified".
When you insisted this was a change in semantics, I double-checked, because this would be a huge fuckup (and would explain why you're making such a fuss about it). A difference in semantics (in this case) would mean backward-incompatibility, and would break all codebases already using the syntax a.b?.c = foo. But old codebases do not use that syntax because that syntax does not exist before 14. Old codebases already do conditional assignment, which has not changed. After 14 they will have new syntactic sugar to carry out the old semantics.
This is what the sibling comment meant semantically with "Apparently you do t use if in your code?" even if his syntax ("do t") was fucked up.
> Is there any public visibility to the design decision; what alternatives were considered for the semantics and rejected?
Asked about for years I guess: https://stackoverflow.com/questions/35887106/using-the-null-... (I.e. their semantics already works, but they want a better syntax (or sugar) to carry out the same semantics.)
And the discussion: https://github.com/dotnet/csharplang/discussions/6072
This error got replaced with code which allows that type of expression, and generates code for it rather than a diagnostic.
I can't say for sure that they did zero parser work for the feature, but it sure looks like in principle, you do not have to, for this case, given that it already parses.
(If they have a parser which handles semantic attributes in the grammar like "assignable expression", then of course that gets adjusted.)
For reference:
More readable? I'm less convinced on that one.
Some of those edge cases and their effects can get pretty nuanced. I fear this will get overused exactly as the article warns, and I'm going to see bloody question marks all over codebases. I hope in time the mental overhead to interpret exactly what they're doing will become muscle memory...
Why would you ever write an assignment, but not expect that it "sticks"? Assignments are pretty important.
What if someone doesn't notice the question marks and proceeds to read the rest of the code thinking that the assignment always takes effect? Is that still readable?
(Gets a lot better if you enable nullable references and upgrade the nullable reference warnings to errors.)
you didn't null check possibly.go.
Such a thing has been perpetrated in C. In C, you can repeat designated initializers, like `.a = x(), .a = y()` in the same initializer list.
The order in which the expressions are called is unspecified, just like function arguments (though the order of initialization /is/ specified; it follows initializer list order).
The implementation is allowed to realize that since .a is being initialized again by y(), the earlier initialization is discarded. And it is permitted not to emit a call to x().
That's just like permitting x() not to be called in x() * 0 because we know the answer is zero.
Only certain operators in C short-circuit. And they do so with a strict left-to-right evaluation discipline: like 0 && b will not evaluate b, but b && 0 will evaluate b.
The initializer expressions are not sequenced, yet they can be short-circuited-out in left-to-right order.
Consistency? What's that ...
It should be clear enough that this operator isn't going to run 'new' on your behalf. For layers you want to leave missing, use "?.". For layers you want to construct, use "??=".
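That distinction can be sketched directly; the `Config`/`Connection`/`RetryPolicy` shapes come from the example discussed earlier in the thread:

```csharp
class RetryPolicy { }
class Connection { public RetryPolicy? RetryPolicy { get; set; } }
class Config { public Connection? Connection { get; set; } }

class Demo
{
    static void Configure(Config config, RetryPolicy policy)
    {
        // Leave the missing layer missing: a silent no-op
        // when Connection is null (C# 14).
        config.Connection?.RetryPolicy = policy;

        // Or construct the layer first, then assign
        // unconditionally; the assignment always "sticks".
        config.Connection ??= new Connection();
        config.Connection.RetryPolicy = policy;
    }
}
```

The two spellings make the intent explicit at the call site, rather than relying on `?.` to materialize anything on your behalf.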
> Why would you ever write an assignment, but not expect that it "sticks"? Assignments are pretty important.
If you start with the assignment, then it's important and you want it to go somewhere.
If you start with the variable, then if that variable doesn't have a home you don't need to assign it anything.
So whether you want to skip it depends on the situation.
> What if someone doesn't notice the question marks and proceeds to read the rest of the code thinking that the assignment always takes effect? Is that still readable?
Do you have the same objection with the existing null-conditional operators? Looking at the operators is important and I don't think this makes the "I didn't notice that operator" problem worse in a significant way.
In all these examples I feel something must be very wrong with the data model if you're conditionally assigning 3 levels down.
At least with the previous syntax, the annoyance of writing it might prompt you to fix it, and it's clear when you're reading it that something ain't right. Now there's a cute syntax to cover it up and pretend everything is okay.
If you start seeing question marks all over the codebase most of us are going to stop transpiling them in our head and start subconsciously filtering them out and miss a lot of stupid mistakes too.
This is a fantastic way to make such nasty behavior easier.
And agreed on the question mark fatigue. This happened to a project in my last job. Because nullable types were disabled, everything had question marks because you can't just wish away null values. So we all became blind and several nullref exceptions persisted for far too long.
I'm not convinced this is any better.
> More concise? Yes.
Note: being more concise is not really the goal of the `?` features. The goal is actually to be more correct and clear. A core problem these features help avoid is the unfortunate situation people need to be in with null checks where they either do:
Or, the more correct, but much more unwieldy, form that reads the value into a local first.
`?` allows the collapsing of all the concepts together. The computation is only performed once, and the check and subsequent operation on it only happen when it is non-null. Note that this is not a speculative concern. Codebases have shipped with real bugs because people opted for the former form versus the latter.
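A sketch of the two styles being contrasted (the original snippets are missing from this snapshot, and `Provider`/`Widget` are invented names):

```csharp
class Widget { public void Refresh() { } }

// A property getter may return a different value (or null)
// on each read, e.g. behind a cache or touched by another thread.
class Provider { public Widget? Current { get; set; } }

class Demo
{
    static void Use(Provider p)
    {
        // Form 1: concise but subtly wrong. Current is read twice;
        // it could be non-null at the check and null at the call.
        if (p.Current != null)
            p.Current.Refresh();

        // Form 2: correct but unwieldy. Read once into a local,
        // then check and use that single snapshot.
        var current = p.Current;
        if (current != null)
            current.Refresh();

        // `?.` collapses the correct form: one read, one check.
        p.Current?.Refresh();
    }
}
```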
Our goal is to make it so that the most correct form should feel the nicest to write and maintain. Some languages opt for making the user continuously write out verbose patterns over and over again to do this, but we actually view that as a negative (you are welcome to disagree of course). We think forcing users into unwieldy patterns everywhere ends up increasing the noise of the program and decreasing the signal. Having common patterns fade away, and now be more correct (and often more performant), is what we view as a primary purpose of the language in the first place.
Thanks!
At this point, even though I've been doing .net since version 2, I get confused with what null checks I should be doing and what is the new "right" and best syntax. It's kind of becoming a huge fucking mess, in my opinion anyway.
If you want a kind of proof of this, see this documentation which requires 1000s of words to try and explain how to do null/nullable: https://learn.microsoft.com/en-us/dotnet/csharp/nullable-ref...
Do you think most C# devs really understand and follow this entire (complex and verbose) article?
C# grows because they add improvements but cannot remove older ways of doing things due to backwards compatibility. If you want a language without so much cruft, I recommend F#.
I'd love to see some good examples of those bugs you referred to, in order to get some more context.
Is the intent of the second form to evaluate only once, and cache that answer to avoid re-evaluating some_expr?
When some_expr is a simple variable, I didn't think there was any difference between the two forms, and always thought the first form was canonical. It's what I've seen in codebases forever, going all the way back to C, and it's always been very clear.
When some_expr is more complex, i.e. difficult to compute or mutable in my timeframe of interest, I'm naturally inclined to the second form. I've personally found that case less common (eg. how exactly are you using nulls such that you have to bury them so deep down, and is it possible you're over-using nullable types?).
I appreciate what you're saying about nudging developers to the most correct pattern and letting the noise fade away. I always felt C# struck a good balance with that, although as the language evolved it feels like there's been a growing risk of "too many different right ways" to do things.
Btw while you're here, I understand why prefix increment/decrement could get complicated and why it isn't supported, but being forced to do car.Wheel?.Skids += 1 instead of car.Wheel?.Skids++ also feels odd.
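Per that comment's claim, compound assignment composes with `?.` in C# 14 but increment/decrement does not; a small sketch with the comment's own `car.Wheel?.Skids` example (types invented):

```csharp
class Wheel { public int Skids { get; set; } }
class Car { public Wheel? Wheel { get; set; } }

class Demo
{
    static void Skid(Car car)
    {
        // Allowed in C# 14: compound assignment, skipped when Wheel is null.
        car.Wheel?.Skids += 1;

        // Not supported (per the discussion): increment/decrement
        // are excluded from the null-conditional assignment feature.
        // car.Wheel?.Skids++;
    }
}
```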
Isn't this over engineered? Why not allow the assignment but do nothing if any of the intermediate objects is null (that's how Kotlin does it).
Compare that with PHP where the "0" string is falsy and where even the array has no class (so no methods).
If it’s rarely used, people may misinterpret whether the RHS is evaluated or not when the LHS doesn’t exist (I don’t actually know which it is).
Optional operations and missing properties often require subtle consideration of how to handle them. You don’t want to make it too easy to say “whatever”.
I fully expect no RHS evaluation in that case. I think the fear is misplaced; it's one of those "why can't I do that when I can do this" IMO. If you're concerned, enable the analyzer to forbid it.
There are already some really overly paranoid analyzers in the full normal set that makes me wonder how masochistic one can be...
I'm a fan of this notation because it's consistent, but language designers should not add features just because they don't hurt.
The only gripe I have, though, is that I have to remember which version does and does not support such syntax changes. It's not a major issue by any means, but when dealing with legacy applications, I tend to often forget what is and is not syntactically allowed.
For example in my last gig, the original devs didn't understand typing, so they were forever writing typing code at low levels to check types (with marker interfaces) to basically implement classes outside of the classes. Then of course there was lots of setting of mutable state outside of constructors, so basically null was always in play at any moment at any time.
I would have loved this feature while working for them, but alas; they were still on 4.8.1 and refused to allow me to upgrade the codebase to .net core, so it wouldn't have helped anyway.
So no, C# devs are not constantly null-checking more than in Rust.
Think of it this way. We already supported these semantics in existing syntax through things like invocations (which are freely allowed to mutate/write). So `x?.SetSomething(e1)`. We want properties to feel and behave similarly to methods (after all, they're just methods under the covers), but these sorts of deviations end up making that not the case.
In this situation, we felt like we were actually reducing concept count by removing yet another way that properties don't compose as well with other language features as something like invocation calls do.
Note: when we make these features we do an examination of the ecosystem so we can see how useful the feature would be. We also communicate continuously with our community to see just how desirable such a feature is. This goes beyond just those who participate on the open source design site: it also includes tons of private partners, as well as tens of thousands of developers participating at our conferences and other events.
This feature had been a continued thorn for many, and we received continuous feedback in the decade since `?.` was introduced about this. We are very cautious on adding features. But in this case, given the continued feedback, positive reception from huge swaths of the ecosystem, minimal costs, lowered complexity, and increased consistency in the language, this felt like a very reasonable change to make.
Thanks!
But the code gets really hard to understand when you encounter code that uses a subset you aren't familiar with. I remember staring at C++ codebases for days trying to figure out what is going on there. There was nothing wrong with the code. I just wasn't too familiar with the particular features they were using.
* The above is just applying an existing (useful) feature to a different context. So there isn't really much learning needed, it now just 'works as expected' for assignments and I'd expect most C# engineers to start using this from the get go.
* As a C# and C++ developer, I am always excited to hear about new things coming in C++ that purportedly fix some old pain points. But in the last decade I'd say the vast majority of those have been implemented in awful ways that actually make the problem worse (e.g. modules, filesystem, ...). C#'s new features always seem pretty sound to me on the other hand.
I've never gotten to this point with any other language, no matter how hard I tried.
Static abstract methods are probably the feature I see used least (so far!) and they’re not nearly as hard to understand as half of the stuff in a recent C++ standard.
I just can't imagine Gen Z wanting to start a project in C#.
I realise there are still .NET shops, and I still talk to people who do it daily, but ours is a field driven by fashion whether we care to admit or not - and C# just does not feel as fashionable as it once did
(I'm a former C# dev, up until 2020)
It's portable, fast, productive and well supported by a massive corp. It's not just a "language du jour", it's here to stay.
There are plenty of jobs in dotnet where I live: old, new, startups...
I am the momentum!
JS has lost against TS which is basically C# for web (both designed by the same person) and Python is not really something you should build large applications with (execution speed + maintenance issues).
What do you believe is the current language du jour?
I think they would have done themselves a favor had they just rebranded as `dot`.
`dot build`
`dot run`
`dot publish`
`dot test`
Waaaaay better
I didn't ask about whether it was good.
I asked about whether it's past its peak.
I wouldn't say it's past its peak because it's still improving and there is no good alternative for a language of its class. Go isn't it (I doubt there will be a good desktop/mobile app/game engine etc story for Go in the future). Swift could have been a competitor in the allround space but Apple doesn't seem interested in conquering the world outside its own garden. I'm not sure who it would be that would make the "next" C# and .NET. Only Microsoft and Apple are making commercial desktop environments, for example.
I struggle to even see how anyone would prefer that over an explicit if before assigning.
Having that on the right side (attribute reference) is great, but that was already available as far as I understood the post...
Maybe my feeling is just rooted in the fact I've never used a language which allowed ?. on assignment
So as a casual observer, I'd say it brings more consistency.
But also as a casual observer, my opinion is low-value.
Most companies don't care about this kind of stuff.
I work across Java, C#, JS/TS, C++, SQL, and whatever else might be needed, even stuff like Go and C, that I routinely criticise, because there is my opinion, and then there is the job market, and I rather pay my bills.
It was a joke.
You can't banish the "absence of value" from any programming language. That wouldn't be a useful language. You can stop confusing "a string but perhaps not" as a single type "string" as C# did in the past though.
It feels like the C# designers have a hard time saying "no" to ideas coming their way. It's one of my biggest annoyances with this otherwise nice language. At this point, C# has over 120 keywords (incl. contextual ones) [0]. This is almost twice as much as Java (68) [1], and 5 times as much as Go (25) [2]. And for what? We're trading brevity for complexity.
[0]: https://learn.microsoft.com/en-us/dotnet/csharp/language-ref... keywords/
[1]: https://en.wikipedia.org/wiki/List_of_Java_keywords
[2]: https://go.dev/ref/spec#Keywords
https://learn.microsoft.com/en-us/dotnet/csharp/language-ref...
You can force it all the way down to ISO-1/2.
If this is still insufficient, then I question what your goals actually are. Other people using newer versions of C# on their projects shouldn't be a concern of yours.
Obviously you’re not alone to disagree, and there are even some good arguments you could potentially be making. But to say “I question what your motives really are” and tell someone what they should be concerned with is… odd?
It’s a very common position with ample practical examples. While there certainly are valid counter arguments, they are a little more involved than “nothing is stopping you.” There is. Collaborating with others, for example.
You can argue that C# gets a lot of new features that are hard to keep up with, but I wouldn't agree this is one of them. This actually _reduces_ the "mental size" of C#.
IDK, if you read
as "there is a now a ExponentialBackoffRetryPolicy" then you could be caught out when there isn't. That one ? char can be ignored .. unless it can't. It's another case where "just because it compiles and runs doesn't mean that it does the thing".This to me is another thing to keep track of. i.e. an increase in the size of the mental map needed to understand the code.
That's been part and parcel for C# for over 10 years at this point. When we added `?.` originally, it was its nature that it would not execute code that was now unnecessary due to the receiver being null. For example:
This would already not run anything on the RHS of the `?.` if `Settings` was null.
So this feature behaves consistently with how the language has always treated this space. Except now it doesn't have an artificial limitation on which 'expression' level it stops at.
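A sketch of that long-standing behavior: with `?.` on an invocation, the whole call, arguments included, is skipped when the receiver is null (type and method names here are invented):

```csharp
using System;

class Settings { public void Save(string path) { } }
class Config { public Settings? Settings { get; set; } }

class Demo
{
    static string GetPath()
    {
        // Side effect that never fires when Settings is null.
        Console.WriteLine("GetPath ran");
        return "app.cfg";
    }

    static void Main()
    {
        var config = new Config(); // Settings is null

        // Available since C# 6: the call AND the evaluation of
        // its GetPath() argument are both skipped here.
        config.Settings?.Save(GetPath());
    }
}
```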
But also, reading the code will mean keeping track of (slightly) more possible outcomes.
As I wrote in another comment, ignored side effects are perhaps the one questionable aspect of it. I usually assume RHS is evaluated first, regardless of what happens on the left - but I'm not sure that mental model was actually correct. But keeping that would mean having to simply do explicit if _when_ there are side effects. So
But for non-side effects it's fine. It's obviously hard to know when there are side effects, but that problem already existed. You could trip this up before too, and at least then it's obvious WHY it would. Something I have long sought in C# is a good way of tracking what is pure and what isn't.
Hi there! C# language designer here :-)
In this case, it's more that this feature made the language more uniform. We've had `?.` for more than 10 years now, and it worked properly for most expressions except assignment.
During that time we got a large amount of feedback from users asking for this, and we commonly ran into it ourselves. At a language and impl level, these were both very easy to add in, so this was a low cost Qol feature that just made things nicer and more consistent.
> It feels like the C# designers have a hard time saying "no" to ideas coming their way.
We say no to more than 99% of requests.
> We're trading brevity for complexity
There's no new keyword here. And this makes usage and processing of `?.` more uniform and consistent. Imo, that is a good thing. You have less complexity that way.
p.s. I will take the opportunity to say that I dream of the day when C# gets bounded sum types with compiler enforced exhaustive pattern matching. It feels like we are soooo close with records and switch expression, but just missing one or two pieces to make it work.
I should have been clearer in my message. This specific feature is nice, and the semantics are straightforward. My message is more about some of the myriad of other language features with questionable benefits. There's simply more and more "stuff" in the language and a growing number of ways to write the same logic, often with subtle semantic differences between each variant. There are just too many different, often overlapping, concepts. The number of keywords is a symptom of that.
C# is also much more flexible than languages you compared it to. In bunch of scenarios where you would need to add a second language to the stack, you could with C# still use just one language, which reduces complexity significantly.
As a polyglot I have the advantage that I don't have to sell myself as an XYZ Developer, and increasingly I don't think C# (the language itself) is going in the direction that I would like; for that complexity I'd rather keep using C++.
Just wait for extension everything, plus whatever design union types/ADTs end up having, and then what? Are they going to keep adding on top to justify the team size and yearly releases?
Despite my opinion on Go's design, I think the .NET team should take a lesson out of them, and focus on improving the AOT story, runtime performance, and leave the language alone, other than when needed to support those points.
Also bring VB, F# and C++/CLI along; this does not have to be the C# Language Runtime, where C# gets all the features of what was designed as a polyglot VM.
"I understand we sometimes need to address deficiencies in a language, but when we do stop? More syntax is leading to daily decision fatigue where it's difficult to justify one approach over another. I don't want C# to become C++."
It was interesting listening to the discussion that took over from that. The audience seemed in favour of what I said, and someone else in the audience proposed a rolling cut-off to deprecate older features after X years. It sounded very much like Mads had that discussion internally, but Microsoft weren't in favour. I understand why, but the increasing complexity of the language isn't going to help any of us long-term.
Property reads were fine before (returning null if a part of the chain was null), method invocations were fine (either returning null or just being a no-op if a part of the chain was null). But assignments were not, despite syntactically every ?. being basically an if statement, preventing the right side from executing if the left side is null (yes, that includes side-effects from nested expressions, like arguments to invocations).
So this is not exactly a new feature, it just removes a gotcha from an old one and ensures we can use ?. in more places where it previously may have been useful, but could not be used legally due to syntax reasons.
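The "every `?.` is basically an if statement" intuition can be made concrete. Below is a rough sketch of what the new form lowers to (not the exact compiler output; `A`/`B` are invented types):

```csharp
class B { public int C { get; set; } }
class A { public B? B { get; set; } }

class Demo
{
    static void Assign(A a, int value)
    {
        // C# 14 syntax:
        a.B?.C = value;

        // Roughly equivalent lowering: evaluate the receiver once,
        // then guard the whole assignment. The right-hand side is
        // only evaluated inside the guard, which is why its side
        // effects are skipped when the receiver is null.
        var receiver = a.B;
        if (receiver is not null)
        {
            receiver.C = value;
        }
    }
}
```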
Yes, actually. I did write it multiple times naturally only to realize it was not supported yet. The pattern is very intuitive.
Discriminated unions continue to be worked on, and you can see our latest designs here: https://github.com/dotnet/csharplang/blob/main/proposals/uni...
The space there is large and complex, and we have a large amount of resources devoted to it. There was no way that `a?.b = c` was going to change if/when unions come to the language.
For unions, nothing has actually been delayed. We continue working hard on it, and we'll release it when we think it's suitable and ready for the future of the lang.
The only feature that actually did get delayed was 'dictionary expressions' (one that I'm working on). But that will hopefully be in C# 15 to fill out the collection-expression space.
By delayed I mean that the committee has been discussing discriminated unions for a long time and it was never "the right time". You can see the discussions related to implementing discriminated unions on GitHub.
They also often need a lot of scaffolding to be built along the way. We like breaking hard problems into much smaller, composable, units that we can build into the language and then compose to a final full solution. We've been doing that for many years, with unions being a major goal we've been leading to. At this point, we think we have the right pieces in place to naturally add this in a way that feels right to the C# ecosystem.
Thanks for the feedback!
That is why although they are much requested, none of the proposals that I have seen are simple to understand or easy to implement, and thus are proceeding slowly.
I don't really see Discriminated union as being in "competition" with "a?.b = c" as that's a "quick win" extension to previous ?. and ?? syntax. It's not even close to being of the same magnitude.
I would settle for a good built-in Result<T, E> type, so that people don't roll their own crappy ones, or use various opinionated but incompatible libraries.
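A minimal sketch of the kind of `Result<T, E>` the comment wishes for, written as a user-defined type today; the names and API shape here are illustrative, not an actual BCL type:

```csharp
using System;

public readonly struct Result<T, E>
{
    private readonly T? _value;
    private readonly E? _error;
    public bool IsOk { get; }

    private Result(T? value, E? error, bool isOk)
    {
        _value = value;
        _error = error;
        IsOk = isOk;
    }

    public static Result<T, E> Ok(T value)  => new(value, default, true);
    public static Result<T, E> Err(E error) => new(default, error, false);

    // Force the caller to handle both cases instead of exposing the value directly.
    public R Match<R>(Func<T, R> ok, Func<E, R> err)
        => IsOk ? ok(_value!) : err(_error!);
}

class Demo
{
    public static Result<int, string> ParsePort(string s)
        => int.TryParse(s, out var p) && p is > 0 and < 65536
            ? Result<int, string>.Ok(p)
            : Result<int, string>.Err($"invalid port: {s}");

    static void Main()
    {
        Console.WriteLine(ParsePort("8080").Match(p => $"port {p}", e => e));
        Console.WriteLine(ParsePort("nope").Match(p => $"port {p}", e => e));
    }
}
```

The point of a built-in version would be exactly that every library agrees on one such shape, rather than each rolling its own.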
It feels like Microsoft just wants C# to be Python or whatever and the language is losing its value and identity. It's becoming bland, messy, and complicated for all the same reasons they keep increasing padding between UI elements. It's very "me too" and I'm becoming less and less interested in what I used to consider my native language.
I used to write any and all little throwaway programs and utilities in C#, but it just keeps getting more and more fussy. Like Python, or maybe java. Nowadays I reach for C++. It's more complicated, but at least it's stable and not trying to rip itself apart.
Oh, how happy would I be if Python had a sliver of C# features. There's nothing like null-conditionals in Python, and there are many times I miss them
Additionally, I think they have become hostage to the fact that C# gets a new release every year, so the team has to keep pushing features no matter what.
Imagine how C# will look a decade from now at this rhythm.
Slowly I am starting to think it is not that bad, that most of the .NET projects that our agency does are still stuck on Framework.
The `?.` operator now behaves on the LHS the same way it does on the RHS, making the language more consistent, which is always a good thing. In terms of readability, I would say that once you understand how the operator works (which is intuitive, because the language already supports it on the RHS), it becomes more readable than wrapping conditionals in `if` statements.
There are downsides, such as the overuse I mentioned. But this is true for many other language features: it requires experience to know when to use a feature appropriately, rather than applying it everywhere.
However, the great thing about this particular enhancement is that it's mostly cosmetic. Nothing prevents teams from not adopting it; the old syntax still works and can be enforced. C# and .NET are incredibly versatile, which means code can look dramatically different depending on its context and domain. For some projects, this feature might not be needed at all. But many codebases do end up with various conditional assignments, and in those cases, this can be useful.
For example, suppose we group conditional assignments under a single null check, and we then introduce another category, SpecialSettings. We need to split one code block into two and manually place each line in the correct block. With the new language feature the modification is easy and concise, and the same change can be made for any other special setting in place, without the need to group anything.
Furthermore, I find the "Don't Overuse It" section of the article somewhat misleading. All the issues it mentions would apply to the traditional `if`-guarded version as well. If customer being null really were a bug here, then it would of course make sense to guard the code as described in the article. However, this is not an issue specific to the new language feature. Or, to put it more bluntly: a null-conditional assignment is no replacement for code where we want an exception on null. BTW, with the new syntax we can also make the code clearer by placing each assignment on its own line.
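A sketch of the refactor this comment describes, with hypothetical `Config`/`Settings`/`SpecialSettings` types (the `?.` assignments require a C# 14 compiler):

```csharp
class Settings        { public string Theme = ""; public int Timeout; }
class SpecialSettings { public string Mode  = ""; }
class Config
{
    public Settings? Settings;
    public SpecialSettings? SpecialSettings;
}

class Demo
{
    static void Apply(Config config, string theme, int timeout, string mode)
    {
        // Before: assignments must be grouped by the object being null-checked,
        // so introducing SpecialSettings splits one block into two.
        if (config.Settings != null)
        {
            config.Settings.Theme   = theme;
            config.Settings.Timeout = timeout;
        }
        if (config.SpecialSettings != null)
        {
            config.SpecialSettings.Mode = mode;
        }

        // After (C# 14): each assignment stands alone, so adding another
        // special setting is a one-line change with no regrouping.
        config.Settings?.Theme       = theme;
        config.Settings?.Timeout     = timeout;
        config.SpecialSettings?.Mode = mode;
    }
}
```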
You've never seen something like this?
Let's say in the first instance, you write a proof of concept algorithm using basic concepts like List<T>, foreach, stream writing. Accessible to a beginner, safe code, but it'll churn memory (which is GC'd) and run using scalar CPU instructions.
Depending on your requirements you can then progressively enhance the memory churn, or the processing speed:
for(;;), async, LINQ, T[], ArrayPool<T>, Span<T>, NativeMemory.Alloc, Parallel.For, Vector<T>, Vector256<T>, System.Runtime.Intrinsics.
Eventually getting to a point where it's nearly the same as the best C code you could write, with no memory churn (or stop-the-world GC), and SIMD over all CPU cores for blisteringly fast performance, whilst keeping all or most of the safety.
I think these new language features have the same virtue - I can opt into them later, and intellisense/analysers will optionally make me aware that they exist.
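An illustrative sketch of that progressive-enhancement idea: the same sum written first as beginner-friendly code, then over a `Span<T>` with SIMD via `Vector<T>` (the method names here are made up for the example):

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

class Demo
{
    // Step 1: simple and safe; accessible to a beginner.
    public static int SumSimple(List<int> items)
    {
        int total = 0;
        foreach (var x in items) total += x;
        return total;
    }

    // Step 2: same result, but SIMD over a span, no intermediate allocations.
    public static int SumVectorized(ReadOnlySpan<int> items)
    {
        int i = 0;
        var acc = Vector<int>.Zero;
        // Process Vector<int>.Count elements per iteration.
        for (; i <= items.Length - Vector<int>.Count; i += Vector<int>.Count)
            acc += new Vector<int>(items.Slice(i, Vector<int>.Count));
        int total = Vector.Dot(acc, Vector<int>.One);     // horizontal sum
        for (; i < items.Length; i++) total += items[i];  // scalar remainder
        return total;
    }

    static void Main()
    {
        var data = new int[1000];
        for (int i = 0; i < data.Length; i++) data[i] = i;
        Console.WriteLine(SumSimple(new List<int>(data)));  // 499500
        Console.WriteLine(SumVectorized(data));             // 499500
    }
}
```

Both versions are straightforward C#; the second just opts into the lower-level APIs when profiling says it's worth it.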
C# is not the only language offering these kinds of capabilities, but still, big kudos to the team, and the .NET performance improvements blog posts are a pleasure to read.
Honestly I like the C# better, and I was initially not super thrilled about the choice to include the period in the TS case, but since there is some overlap between the language design teams I assumed they would be the same.
I have a feeling this is going to make debugging code written just a few months ago incrementally difficult. At least the explicit if statements are easier to follow the intent from months ago.
The syntax is clean though. I'll give it that.
If I see an assignment I expect the value to be evaluated.
I could have grasped a version of the expression in which all the null values would have been replaced with new instances, but then it would have been too invasive and magic to work, so - again - I understand why the designers might have been forced to settle for the actual behaviour...
But maybe then the half-solution is not worth it
I couldn't imagine what a "Null-Conditional Assignment" would do, and now I see but I don't want this.
Less seriously, I think there's plenty of April Fools opportunity in this space. "Null-Conditional Function Parameters" for example. Suppose we call foo(bar?, baz?) we can now decide that because bar was null, this is actually executing foo(baz) even though that's a completely unrelated overload. Hilarity ensues!
Or what about "Null-Conditional Syntax". If I write ???? inside a namespace block, C# just assumes that when we need stuff which doesn't exist from this namespace it's probably just null anyway, don't stress. Instead of actual work I can just open up a file, paste in ???? and by the time anybody realises none of my new "classes" actually exist or work I've collected my salary anyway.
If your C# code needs this, your "C# code" is 3 javascript files in a trenchcoat.. :-).
I’m glad it’s now a thing. It’s an easy win, helps readability and helps to reduce the verbosity of some functions. Love it. Now, make the runtime faster…