What .NET 10 GC Changes Mean for Developers
Mood: calm
Sentiment: mixed
Category: other
Key topics: The article discusses the changes to the .NET 10 Garbage Collector (GC) and their implications for developers, sparking a discussion on the benefits and potential drawbacks of these changes.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 4d after posting. Peak period: 155 comments in Week 1. Average per period: 53.3.
Based on 160 loaded comments.
Key moments
- Story posted: Oct 1, 2025 at 4:40 AM EDT (about 2 months ago)
- First comment: Oct 5, 2025 at 1:15 AM EDT (4d after posting)
- Peak activity: 155 comments in Week 1 (hottest window of the conversation)
- Latest activity: Oct 18, 2025 at 10:54 AM EDT (about 1 month ago)
Preparing for the .NET 10 GC - https://news.ycombinator.com/item?id=45358527 - Sept 2025 (60 comments)
Won't this potentially cause stack overflows in programs that ran fine in older versions though?
An ArrayList<Float> is a list of pointers though.
Eventually value classes might close the gap; they are finally available as an EA (early-access) build.
Doing a Python 3 would mean no one was going to adopt it.
Yes, it is a long process.
Some of the JEPs in the last versions are the initial baby steps for integration.
Free for some (most?) use cases these days.
Basically enterprise edition does not exist anymore as it became the "Oracle GraalVM" with a new license.
But 64 bits of virtual address space is large enough that you can keep the stacks far enough apart that even for pretty extreme numbers of threads you'll run out of physical memory before they start clashing. So you can always just allocate more physical pages to the stack as needed, similar to the heap.
I don't know if the .net runtime actually does this, though.
You set the (max) stack size once when you create the thread and you can’t increase the (max) size after that.
Processes see a virtual address space that is handled by the OS, so you would have to involve the OS if you needed to add to the stack size dynamically.
Many userspace apps already do custom stack handling, it's how things like green threads work. And many non-native runtimes like .net already have custom handling for their managed stacks, as they often have different requirements and limitations to the "native" stack, and often incompatible formats and trying to isolate from possible bugs means there's less benefit to sharing the same stack with "native" code.
That's certainly a possibility, and one that's come up before, even with .NET Framework code migrated to .NET Core. Though usually it's a sign that something is awry in the first place. Thankfully the default stack sizes can be overridden with config or environment variables.
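To make the fixed-stack-size point concrete, here is a minimal sketch (names mine) of the one per-thread knob .NET does expose: the `Thread(ThreadStart, int maxStackSize)` constructor overload, which sets the stack size once at creation.

```csharp
using System;
using System.Threading;

class StackSizeDemo
{
    // Deliberately deep recursion (no guaranteed tail calls in C#) that
    // could overflow a small stack.
    static int Depth(int n) => n >= 50_000 ? n : Depth(n + 1);

    static void Main()
    {
        // Request a 16 MB stack for this worker. The size is fixed at
        // creation and cannot be grown afterwards, which is exactly the
        // limitation discussed above.
        var worker = new Thread(() => Console.WriteLine(Depth(0)), 16 * 1024 * 1024);
        worker.Start();
        worker.Join();
    }
}
```

Process-wide defaults can also be changed through runtime configuration rather than per thread, as the comment above notes.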
It’s a missed opportunity and I can’t help but feel that if the .NET team had gotten more involved in the proposals early on then C# in the browser could have been much more viable.
There are a couple unicorns like Figma and that is it.
Performance-wise, WebGPU compute is a much better option, and not everyone hates JavaScript.
Whereas on the server it is basically a bunch of companies trying to replicate application servers, been there done that.
It has taken off in the browser. If you've ever used Google Sheets you've used WebAssembly.
Amazon switched their Prime Video app from JavaScript to WebAssembly for double the performance. Is streaming video a niche use case?
Lots of people are building Blazor applications:
https://dotnet.microsoft.com/en-us/apps/aspnet/web-apps/blaz...
> no, most people aren't using a high performance spreadsheet
A spreadsheet making use of WebAssembly couldn't be deployed to the browser if WebAssembly hadn't taken off in browsers.
Practical realities contradict pjmlp's preconceptions.
Microsoft would wish Blazor would take off like React and Angular, in reality it is seldom used outside .NET shops intranets in a way similar to WebForms.
WASM-GC will remove a lot of those barriers and make quite a few languages viable as almost first-class DOM-manipulating languages (there will still be kludges since the objects are opaque, but they'll be far less bad, since they can at least avoid external ID mappings and dual-GC systems that behave leakily, like old IE ref-counting did).
You still need to usually install plenty of moving pieces to produce a wasm file out of the "place language here", write boilerplate initialisation code, debugging is miserable, only for a few folks to avoid writing JavaScript.
Counted out over N languages, we should see something decent land before long.
1: JavaScript _interoperability_, i.e. same heap but incompatible objects (nobody is doing static JS).
2: Java, Schemes, and many other GC'd languages have more "pure" GC models; C# traded some of that for practicality, and supporting it would have required some complications to the regular JS GCs.
The .NET team appears to be aware of WasmGC [0], and they have provided their remarks when WasmGC was being designed [1].
For example, .NET has interior pointers, which WasmGC's MVP can't handle. This change doesn't affect that, so it's still a barrier to using WasmGC. At the same time, it isn't adding new language requirements that WasmGC doesn't handle - the changes are to the default GC implementation in .NET.
I agree it's disappointing that the .NET team wasn't able to get WasmGC's MVP to support what .NET needs. However, this change doesn't move .NET further away from WasmGC.
Edit: Looks like you are allowed to benchmark the runtime now. I was able to locate an ancient EULA which forbade this (see section 3.4): https://download.microsoft.com/documents/useterms/visual%20s...
> You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
It's because you aren't looking at 20-year-old EULAs.
>3.4 Benchmark Testing. The Software may contain the Microsoft .NET Framework. You may not disclose the results of any benchmark test of the .NET Framework component of the Software to any third party without Microsoft’s prior written approval.
This person is not likely familiar with the history of the .net framework and .net core because they decided a long time ago they were never going to use it.
It's fine if you stick to JetBrains and pay for their IDE (or do non-commercial projects only), and either work in a shop which isn't closely tied to VS (basically non-existent in my area), or work by yourself.
> The development and deployment tooling is so closely tied to VS that you can't really not use it.
Development tooling: It's 50-50. Some use Visual Studio, some use Rider. It's fine. The only drawback is that VS Live Share and the Jetbrains equivalent don't interoperate.
Deployment tooling: There is deployment tooling tied to the IDE? No one uses that; it seems like a poor idea. I see automated build/test/deploy pipelines in GitHub Actions, and in Octopus Deploy. TeamCity still gets used, I guess.
It's true though that the most common development OS is Windows by far (with Mac as second) and the most common deployment target by far is Linux.
However the fact that there is close to no friction in this dev vs deploy changeover means that the cross-platform stuff just works. At least for server-side things such as HTTP request and queue message processing. I know that the GUI toolkit story is more complex and difficult, but I don't have to deal with it at all so I don't have details or recommendations.
VS has the “Publish” functionality for direct deployment to targets. It works well for doing that and nothing else. As you said, CI/CD keeps deployment IDE agnostic and has far more capabilities (e.g. Azure DevOps, GitHub Actions).
The cross-platform version is mainstream, and this isn't new any more.
.NET on Linux works fine for services. Our .NET services are deployed to Linux hosts, and it's completely unremarkable.
I worked on a mud on linux right after high school for awhile. Spent most of the time on the school's bsdi server prior to that though.
Then I went java, and as they got less permissive and .net got more permissive I switched at some point. I've really loved the direction C# has gone merging in functional programming idioms and have stuck with it for most personal projects but I am currently learning gdscript for some reason even though godot has C# as an option.
The rest of the ecosystem is "more permissive" than .NET since there are far more FOSS libraries for every task under the sun (which don't routinely go commercial without warnings), and fully open / really cross-platform development tooling, including proper IDEs.
https://benchmarksgame-team.pages.debian.net/benchmarksgame/...
[1]: https://docs.oracle.com/en/industries/food-beverage/micros-w...
> Publishing SQL Server benchmarks without prior written approval from Microsoft is generally prohibited by the standard licensing agreements.
But also read these 400 articles to understand our GC. If you are lucky, we will let you change 3 settings.
https://learn.microsoft.com/en-us/dotnet/core/runtime-config...
https://github.com/dotnet/runtime/blob/main/src/coreclr/gc/g...
It pretty much never gets in your way for probably 98% of developers.
TieredCompilation on the other hand caused a bunch of esoteric errors.
How about F#? Isn't F# mostly C# with better ergonomics?
It’s pretty easy to stick to pure F# if what you want is the pure functional programming experience. But what I like about it is its pragmatism, and this is a big reason why it’s the language I chose for the course. It is by-value, eagerly evaluated by default, and has an easy-to-learn idiomatic syntax. It has a large and well-behaved standard library, and you can use C#’s excellent standard library if you need additional things (e.g., mutable data structures). I have used F# in many performance-sensitive applications, and the fact that I can say “you know, inside this function, I’m going to use mutability, raw pointers, and iteration” has been a lifesaver in some places. But because it is a functional language, I can also abstract all that away and pretend that it is functional.
I understand why other FP folks dislike this approach. But the insistence on purity makes many problems artificially difficult. Debugging a lazily evaluated program is a nightmare. There are lots of times I want a well-behaved language but I am not willing to do CS research just to solve common algorithmic problems. The generally pragmatic streak from the SML family makes them easy to love.
Passing or returning a function seems a foreign concept to many devs. They know how to use lambda expressions, but rarely write code that works this way.
We adopted ErrorOr[0] and have a rule that core code must return ErrorOr<T>. Devs have struggled with this and continue to misunderstand how to use the result type.
Agreed with getting developers to see the value. The most convincing argument I’ve been able to make thus far has been “isn’t it embarrassing when your code explodes in production? Imagine being able to find those errors at compile time.” The few who actually understand the distinction between “compile time” and “run time” can usually appreciate why you might want it.
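For readers unfamiliar with the pattern being debated here, a minimal hand-rolled sketch of "errors as values" follows. This is not the ErrorOr library itself (which has a richer API: `Match`, `Then`, `FirstError`, and so on); all names below are illustrative.

```csharp
using System;

// A tiny result type: failure is part of the signature, so callers must
// confront the error case instead of relying on exceptions.
readonly struct Result<T>
{
    public readonly T? Value;
    public readonly string? Error;
    private Result(T? value, string? error) { Value = value; Error = error; }
    public static Result<T> Ok(T value) => new(value, null);
    public static Result<T> Fail(string error) => new(default, error);
    public bool IsError => Error is not null;
}

class Demo
{
    // A hypothetical validation function returning a result, not throwing.
    static Result<int> ParsePort(string raw) =>
        int.TryParse(raw, out var p) && p is > 0 and < 65536
            ? Result<int>.Ok(p)
            : Result<int>.Fail($"'{raw}' is not a valid port");

    static void Main()
    {
        var ok = ParsePort("8080");
        var bad = ParsePort("99999");
        Console.WriteLine(ok.IsError ? ok.Error : ok.Value.ToString());   // 8080
        Console.WriteLine(bad.IsError ? bad.Error : bad.Value.ToString()); // '99999' is not a valid port
    }
}
```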
If FP was really better at “all the important things”, why is there such a wide range of opinions, good but also bad? Why is it still a niche paradigm?
Like other posters, I am not going to claim that it is better at all things. OOP’s approach to polymorphism and extensibility is brilliant. But I also know that nearly all of the mistakes I make have to do with not thinking carefully enough about mutability or side-effects, features that are (mostly) verboten in FP. It takes some effort to re-learn how to do things (recursion all the things!) but once you’ve done it, you realize how elegant your code can be. Many of my FP programs are also effectively proofs of their own correctness, which is not a property that many other language styles can offer.
Here’s an appropriate PG essay: https://paulgraham.com/avg.html
As much as I'd like to do more with it, the "just use F#" idea flaunted in this thread is a distant pipe dream for the vast majority of teams.
I suspect that the hidden indirection and runtime magic, may be part of why you love the language. In my experience, however, it leads to poor observability, opaque control flow, and difficult debugging sessions in every organisation and company I’ve ever worked for. It’s fair to argue that this is because the people working with C# are bad at software engineering. Similar to how Uncle Bob will always be correct when he calls teams out for getting his principles wrong. To me that means the language itself has a poor design fit for software development in 2025. Which is probably why we see more and more Go adoption, due to its explicit philosophies. Though to be fair, Python seems to be “winning” as far as adoption goes in the cross platform GC language space. Having worked with Django-Ninja I can certainly see why. It’s so productive, and with stuff like Pyrefly, UV and Ruff it’s very easy to make it a YAGNI experience with decent type control.
I am happy you enjoy C#, though, and it's great to see that it is evolving. If they did more to enhance the developer experience so that people were less inclined to do bad engineering on a Thursday afternoon after a long day of useless meetings, then I would probably agree with you. I'm not sure any of the changes going into .NET 10 are moving in that direction, though.
I hate the implicitness of Spring Boot, Quarkus etc. as much as the one in C# projects.
All these magic annotations that save you a few lines of code until they don't, because you get runtime errors due to incompatible annotations.
And then it takes digging through pages of docs or even reporting bugs on repos instead of just fixing a few explicit lines.
Explicitness and verbosity are mostly orthogonal concepts!
Isn't a GC also magic? Or anything above assembly? While I also understand the reluctance to use too much magic, in my experience it's not the magic itself, it's how well the magic is tested and developed.
I used to work with Play framework, a web framework built around Akka, an async bundle of libraries. Because it wasn't too popular, only the most common issues were well documented. I thought I hated magic.
Then, I started using Spring Boot, and I loved magic. Spring has so much documentation that you can also become the magician, if you need to.
- Code generators: I think I saw them only in regex. Logging can be done via `LoggerMessage.Define` too, so attributes are optional. Also, code generators have access to the full tokenized structure of the code, which means attributes are just a design choice of the particular generator you are using. And finally, code generators do not produce runtime errors unless the code they generate is invalid.
- JSON serialization: sure, but you can use your own converters. Attributes are not necessary.
- ASP.NET routing: yes, but those are in controllers. My impression is that minimal APIs are now the go-to solution, and with `app.MapGet(path)` there are no attributes; you can inject services into minimal APIs without attributes either. Most of the time, minimal APIs do not require attributes at all.
- Dependency injection: requires attributes when you inject services into controller endpoints, which I never liked nor understood why people do. What is the use case over injecting through the controller constructor? It's not as if the controller is a singleton, long-lived object: it is constructed during the ASP.NET HTTP pipeline and discarded when no longer needed.
So occasional usage may still occur from time to time, in endpoints and DTOs (`[JsonIgnore]`, for example), but you have other means to do the same things. It is done via attributes because it is easier and faster to develop.
Also, your team should invest some time into testing, in my opinion. Integration testing helps a lot with catching those runtime errors.
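As a sketch of the attribute-free style described above (requires an ASP.NET Core project using the `Microsoft.NET.Sdk.Web` SDK; `GreetingService` is a hypothetical service, not from the thread):

```csharp
// Program.cs sketch: routing and DI in minimal APIs without attributes.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<GreetingService>();

var app = builder.Build();

// The route parameter binds by name and the service by registered type:
// no [FromRoute] or [FromServices] needed for this common case.
app.MapGet("/hello/{name}", (string name, GreetingService greeter) => greeter.Greet(name));

app.Run();

record GreetingService
{
    public string Greet(string name) => $"Hello, {name}!";
}
```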
And going through converters is (was?) significantly slower for some reason than the built-in serialisation.
> my impression is that minimal APIs are now the go to solution and you have `app.MapGet(path)` so no attribute
Minimal APIs use attributes to explicitly configure how parameters are mapped to the path, query, header fields, body content or for DI dependencies. These can't always be implicit, which BTW means you're stuck in F# if you ever need them, because the codegen still doesn't match what the reflection code expects.
I haven't touched .NET during work hours in ages, these are mostly my pains from hobbyist use of modern .NET from F#. Although the changes I've seen in C#'s ecosystem the last decade don't make me eager to use .NET for web backends again, they somehow kept going with the worst aspects.
I'm fed up by the increasing use of reflection in C#, not the attributes themselves, as it requires testing to ensure even the simplest plumbing will attempt to work as written (same argument we make for static types against dynamic, isn't it?), and makes interop from F# much, much harder; and by the abuse of extension methods, which were the main driver for implicit usings in C#: no one knows which ASP.NET namespaces they need to open anymore.
Where did you see all of those attributes in minimal APIs? I'm honestly curious, because in my experience it is very forgiving and mostly works without them.
I am at a (YC, series C) startup that just recently made the switch from TS backend on Nest.js to C# .NET Web API[0]. It's been a progression from Express -> Nest.js -> C#.
What we find is that having attributes in both Nest.js (decorators) and C# allows one part of the team to move faster and another smaller part of the team to isolate complexity.
The indirection and abstraction are explicit decisions to reduce verbosity for 90% of the team for 90% of the use cases because otherwise, there's a lot of repetitive boilerplate.
The use of attributes, reflection, and source generation make the code more "templatized" (true both in our Nest.js codebase as well as the new C# codebase) so that 90% of the devs simply need to "follow the pattern" and 10% of the devs can focus on more complex logic backing those attributes and decorators.
Having the option to dip into source generation, for example, is really powerful in allowing the team to reduce boilerplate.
[0] We are hiring, BTW! Seeking experienced C# engineers; very, very competitive comp and all greenfield work with modern C# in a mixed Linux and macOS environment.
(Of course you are also free to write C# without any of the built in frameworks and write purely explicit handling and routing)
On the other hand, we write CRUD every day so anything that saves repetition with CRUD is a gain.
Yes, they make CRUD stuff very easy and convenient.
    await client.Write(
        new ClientWriteRequest(
            [
                // Avery is an editor of form 124
                new()
                {
                    Object = "form:124",
                    Relation = "editor",
                    User = "user:avery",
                },
            ]
        )
    );

    var checkResponse = await client.Check(
        new ClientCheckRequest
        {
            Object = "form:124",
            Relation = "editor",
            User = "user:avery",
        }
    );

    var checkResponse2 = await client.Check(
        new ClientCheckRequest
        {
            Object = "form:125",
            Relation = "editor",
            User = "user:avery",
        }
    );
This is an abstraction we wrote on top of it:

    await Permissions
        .WithClient(client)
        .ToMutate()
        .Add<User, Form>("alice", "editor", "226")
        .Add<User, Team>("alice", "member", "motion")
        .SaveChangesAsync();

    var allAllowed = await Permissions
        .WithClient(client)
        .ToValidate()
        .Can<User, Form>("alice", "edit", "226")
        .Has<User, Team>("alice", "member", "motion")
        .ValidateAllAsync();
You would make the case that the former is better than the latter?

Implicit and magic look nice at first, but sometimes they can be annoying. I remember the first time I tried Ruby on Rails, I was looking for a piece of config.
Yes, "convention over configuration". Namely, ungreppable and magic.
This kind of stuff must be used with a lot of care.
I usually favor explicit and, for config, plain data (usually toml).
This can be extended to hidden or non-obvious allocations and other stuff (when I work with C++).
It is better to know what is going on when you need to, and burying it under a couple of layers can make things unnecessarily difficult.
Better than no magic abstractions imo. In our large monorepo, LSP feedback can often be so slow that I can’t even rely on it to be productive. I just intuit and pattern match, and these magical abstractions do help. If I get stuck, then I’ll wade into the docs and code myself, and then ask the owning team if I need more help.
At least with macros I don't need to consider the whole of the codebase and every library when determining what is happening. Instead I can just... Go to the macro.
Maybe you're confusing `System.Reflection.Emit` and source generators? Source generators are just a source tree walker + string templates to write source files.
I agree the syntax is awkward, but all it boils down to is concatenating code in strings and adding it as a file to your codebase.
And the syntax will 100% get cleaner (it's already happening with things like ForAttributeWithMetadataName).
Works well until the 10% that understand the behind the scenes leave and you are left with a bunch of developers copy and pasting magic patterns that they don't understand.
I love express because things are very explicit. This is the JSON schema being added to this route. This route is taking in JSON parameters. This is the function that handles this POST request endpoint.
I joined a team using Spring Boot and the staff engineer there couldn't tell me if each request was handled by its own thread or not, he couldn't tell me what variables were shared across requests vs what was uniquely instantiated per request. Absolute insanity, not understanding the very basics of one's own runtime.
Meanwhile in Express, the threading model is stupid simple (there isn't one) and what is shared between requests is obvious (everything declared in an outer scope).
If I wanted explicitness for every little detail I would keep writing in Assembly like in the Z80, 80x86, 68000 days.
Unfortunately we never got Lisp or Smalltalk mainstream, so we got to metaprogramming with what is available, and it is quite powerful when taken advantage of.
Some people avoid wizard jobs, others avoid jobs where magic is looked down upon.
I would also add that in the age of LLM and AI generated applications, discussing programming languages explicitness is kind of irrelevant.
    (define-route METHOD PATH BODY)

You can then easily inspect the generated code. But in Java and others, you'll have something like @GET(path=PATH), and there's a whole system hidden behind it that you have to carefully understand, as every annotation implementation is different.

I tend to do obvious things when I use these kinds of tools. In fact, I try to avoid macros. Even if configurability is not important, I favor simplification over reuse. In case I need reuse, I go for higher-order functions if I can. Macros are the last bullet.

In some circumstances, like JSON or serialization, maybe they can be slightly abused to mark fields and such. But full code generation can take things so far into magic that it is not worth it in many circumstances IMHO, though every tool has its use cases, even macros and annotations.
Coding UX critically leans on familiarity and spread of knowledge. By definition, making a non-obvious macro not known by others makes the UI just worse for a definition of worse which means "less manageable by anyone that looks at it without previous knowledge".
That is also the reason why standard libraries always have an advantage in usability just because people know them or the language constructs themselves.
Using macros for DSLs has been common for decades, and is how frameworks like CLOS were initially implemented.
C# has increasingly become more terse (e.g. switch expressions, collection initializers, object initializers, etc) and, IMO, is a good balance between OOP and functional[0].
Functions are first class objects in C# and teams can write functional style C# if they want. But I suspect that this doesn't scale well in human terms as we've encountered a LOT of trouble trying to get TypeScript devs to adopt more functional programming techniques. We've found that many devs like to use functional code (e.g. Linq, `.filter()`, `.map()`), but dislike writing functional code because most devs are not wired this way and do not have any training in how to write functional code and understanding monads. Asking these devs to use a monad has been like asking kids to eat their carrots.
Across much of our backend TS codebase, there are very, very few cases where developers accept a function as input or return a function as output (almost all of it written by 1 dev out of a team of ~20).
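A small self-contained sketch of what "accepting and returning functions" looks like in C# (all names here are illustrative, not from any codebase in the thread):

```csharp
using System;
using System.Collections.Generic;

class HigherOrderDemo
{
    // Accepts a function and returns a new function: a memoizing wrapper.
    static Func<int, int> Memoize(Func<int, int> f)
    {
        var cache = new Dictionary<int, int>();
        // Assignment is an expression in C#, so the cached value is returned.
        return x => cache.TryGetValue(x, out var v) ? v : cache[x] = f(x);
    }

    static void Main()
    {
        int calls = 0;
        Func<int, int> slowSquare = x => { calls++; return x * x; };
        var fast = Memoize(slowSquare);

        Console.WriteLine(fast(7)); // 49, computed
        Console.WriteLine(fast(7)); // 49, served from the cache
        Console.WriteLine(calls);   // 1
    }
}
```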
> ...it is built upon OOP design principles and a bunch of "needless" abstraction
Having been working with Nest.js for a while, it's clear to me that most of these abstractions are not "needless" but actually necessary to manage the complexity of apps beyond a certain scale, and the reasons are less technical and more about scaling teams and communicating concepts.

Anyone that looks at Nest.js will immediately see the similarities to Spring Boot or .NET Web APIs, because it fills the same niche. Whether you call a `*Factory` a "factory" or something else, the core concept of what the thing does still exists whether you're writing C#, Java, Go, or JS: you need a thing that creates instances of things.
You can say "I never use a factory in Go", but if you have a function that creates other things or other functions, that's a factory...you're just not using the nomenclature. Good for you? Or maybe you misunderstand why there is standard nomenclature of common patterns in the first place and are associating these patterns with OOP when in reality, they are almost universal and are rather human language abstractions for programming patterns.
[0] https://medium.com/itnext/getting-functional-with-c-6c74bf27...
I think this is an excellent blog post by the way. My issues with C# (and this applies to a lot of other GC languages) is that most developers would learn a lot from your article. Because none of it is an intuitive part of the language philosophy.
I don't think you should never use OOP or abstractions, and I don't think there is a golden rule for when you should use either. I do think you need to understand why you are doing it, though, and C# sort of makes people reach for abstractions first, not last, in my experience. I don't think these changes to the GC are going to help people who write C# without understanding C#, which is frankly most C# developers around here. Because Go is opinionated and explicit, it's simply an issue I have to deal with less on Go teams. It's not an issue I have to deal with less on Python teams, but then, everyone who loves Python knows it sucks.
I would not have imagined this to be controversial nor difficult, but it turns out that developers really prefer and understand exceptions. That's because for a backend CRUD API, it's really easy to just throw and catch at a global HTTP pipeline exception filter and for 95% of cases, this is OK and good enough; you're not really going to be able to handle it nor is it worth it to to handle it.
We'll stick with ErrorOr, but developers aren't using it as a monad, simply unwrapping the value and the error, because, as it turns out, most devs just have a preference for (and greater familiarity with) imperative try-catch error handling. Practically, in an HTTP backend, there's nothing wrong in most cases with just having a global exception filter do the heavy lifting unless the code path has a clear recovery path.
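For concreteness, the "global exception filter" approach mentioned here can be sketched with standard ASP.NET Core middleware (the error payload shape is illustrative):

```csharp
// In Program.cs of an ASP.NET Core app: one handler at the pipeline edge
// turns any unhandled exception into a uniform 500 response, so most
// endpoints never catch anything themselves.
app.UseExceptionHandler(errorApp =>
    errorApp.Run(async context =>
    {
        context.Response.StatusCode = StatusCodes.Status500InternalServerError;
        await context.Response.WriteAsJsonAsync(new { error = "Unexpected failure" });
    }));
```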
> I don't think you should never use OOP or abstractions. I don't think there is a golden rule for when you should use either.
I do think there is a "silver rule": OOP when you need structural scaffolding, functional when you have "small contracts" over big ones. An interface or abstract class is a "big contract": to understand how to use it, you often have to understand a larger surface area. A function signature is still a contract, but a micro-contract.

Depending on what you're building, having structural scaffolding and "big contracts" makes more sense than having lots of micro-contracts (functions). Case in point: REST web APIs make a lot more sense with structural scaffolding. If you write one without the structural scaffolding of OOP, it ends up with a lot of repetition and even worse opaqueness, with functions wrapping other functions.
The silver rule for OOP vs FP for me: OOP for structural templating for otherwise repetitive code and "big contracts"; FP for "small contracts" and algorithmically complex code. I encourage devs on the team to write both styles depending on what they are building and the nature of complexity in their code. I think this is also why TS and C# are a sweet spot, IMO, because they straddle both OOP and have just enough FP when needed.
All the libraries you use and all methods from the standard library use exceptions. So you have to deal with exceptions in any case.
There are also a million or so libraries that implement types like this. There is no standard, so no interoperability, and people have to learn the peculiarities of the chosen library.
I like result types like this, but I'd never try to introduce them in C# (unless at some point they get more language support).
But what I takeaway from this is that Go's approach to error handling is "also not clearly better, it's different".
Even if C# had core language support for result types, you would be surprised how many developers would struggle with it (that is my takeaway from this experience).
I know how to use this pattern, but the C# version still feels weird and cumbersome. Usually you combine this with pattern matching and other functional features, and the whole thing becomes convenient in the end. That part is missing in C#. And I think it makes a difference in understanding, as you would usually build on your experience with pattern matching to understand how to handle this kind of Result|Error.
> Usually you combine this with pattern matching and other functional features and the whole thing makes it convenient in the end. That part is missing in C#
You mean like this?

    string foo = result.MatchFirst(
        value => value,
        firstError => firstError.Description);

Or this?

    ErrorOr<string> foo = result
        .Then(val => val * 2)
        .Then(val => $"The result is {val}");

Or this?

    ErrorOr<string> foo = await result
        .ThenDoAsync(val => Task.Delay(val))
        .ThenDo(val => Console.WriteLine($"Finished waiting {val} seconds."))
        .ThenDoAsync(val => Task.FromResult(val * 2))
        .ThenDo(val => $"The result is {val}");
With pattern matching like this?

    var holidays = new DateTime[] {...};
    var output = new Appointment(
        DayOfWeek.Friday,
        new DateTime(2021, 09, 10, 22, 15, 0),
        false
    ) switch
    {
        { SocialRate: true } => 5,
        { Day: DayOfWeek.Sunday } => 25,
        Appointment a when holidays.Contains(a.Time) => 25,
        { Day: DayOfWeek.Saturday } => 20,
        { Day: DayOfWeek.Friday, Time.Hour: > 12 } => 20,
        { Time.Hour: < 8 or >= 18 } => 15,
        _ => 10,
    };
C# pattern matching is pretty damn good [0] (seems you are not aware?).

[0] https://timdeschryver.dev/blog/pattern-matching-examples-in-...
Reflection is what makes DI feel like "magic". Type signatures don't mean much in reflection-heavy codes. Newcomers won't know many DI framework implicit behaviors & conventions until either they shoot themself in their foot or get RTFM'd.
My pet theory is this kind of "magic" is what makes some people like Golang, which favors explicit wiring over implicit DI framework magic.
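The contrast between explicit wiring and container magic can be sketched in a few lines of C# (the service names are hypothetical):

```csharp
using System;

// Hypothetical services used only for illustration.
interface IClock { DateTime Now(); }
class SystemClock : IClock { public DateTime Now() => DateTime.UtcNow; }

class ReportService
{
    private readonly IClock _clock;
    public ReportService(IClock clock) => _clock = clock;
    public string Header() => $"Report generated {_clock.Now():u}";
}

class Wiring
{
    static void Main()
    {
        // Explicit, Go-style wiring: the dependency graph is plain code,
        // greppable and checked by the compiler.
        var reports = new ReportService(new SystemClock());
        Console.WriteLine(reports.Header());

        // A container version (Microsoft.Extensions.DependencyInjection)
        // would instead register and resolve via reflection, e.g.:
        //   services.AddSingleton<IClock, SystemClock>();
        //   services.AddSingleton<ReportService>();
        //   provider.GetRequiredService<ReportService>();
    }
}
```

The trade-off the comment describes is visible here: the explicit version is more verbose but leaves nothing implicit for a newcomer to discover the hard way.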
> Just don't write bad code
Reminds me of the C advice: "Just don't write memory leaks & UAF!"

Yes, some programming languages have more landmines and footguns than others (looking at you, JS), and language designers should strive to avoid those as much as possible. But I actually think that C# does avoid those. That is: most of what people complain about are language features that are genuinely important and useful in a narrow scope, but are abused / applied too broadly. It would be impossible to design a language that knows whether you're using Reflection appropriately or not; the question is whether its inclusion of Reflection at all improves the language (it does). C# chose to be a general-purpose, multi-paradigmatic language, and I think they met that goal with flying colors.
> Newcomers won't know many DI framework implicit behaviors & conventions until either they shoot themself in their foot or get RTFM'd
The question is: does the DI framework reduce the overall complexity or not? Good DI frameworks are built on a very small number of (yes, "magic") conventions that are easy to learn. That being said, bad DI frameworks abound.
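As a point of reference for how small that convention set can be, here is a minimal sketch using Microsoft's built-in container (requires the Microsoft.Extensions.DependencyInjection package; the types `IGreeter`, `Greeter`, and `App` are made up for illustration):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// Registration: every type is wired explicitly, once...
var services = new ServiceCollection()
    .AddSingleton<IGreeter, Greeter>()
    .AddTransient<App>()
    .BuildServiceProvider();

// ...and resolution walks constructors recursively.
Console.WriteLine(services.GetRequiredService<App>().Run());

public interface IGreeter { string Greet(string name); }

public class Greeter : IGreeter
{
    public string Greet(string name) => $"Hello, {name}!";
}

// The only "magic": App never news up its dependency; the container
// matches the constructor parameter type to a registration.
public class App
{
    private readonly IGreeter _greeter;
    public App(IGreeter greeter) => _greeter = greeter;
    public string Run() => _greeter.Greet("world");
}
```

The convention to learn is essentially just "constructor parameters are resolved against registered types"; trouble starts when frameworks layer many more implicit rules on top of that.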
And can you imagine any other industry where having to read a few pages of documentation before you understood how to do engineering was looked upon with such derision? WTF is wrong with newcomers having to read a few pages of documentation!?
- Attributes can do a lot of magic that is not always obvious or well documented.
- ASP.NET pipeline.
- Source generators.
I love C#, but I have to admit we could have done with less “magic” in cases like these.
Yes, the ASP.NET pipeline is a bit of a mess. My strategy is to plug in a couple adapters that allow me to otherwise avoid it. I rolled my own DI framework, for instance.
Source generators are present in all languages and terrible in all languages, so that certainly is not a criticism of C#. It would be a valid criticism if a language required you to use source generators to work efficiently (e.g. limited languages like VB6/VBA). But I haven't used source generators in C# in at least 10 years, and I honestly don't know why anyone would at this point.
Maybe it sounds like I'm dodging by saying C# is great even though the big official frameworks Microsoft pushes (not to mention many of their tutorials) are kinda bad. I'd be more amenable to that argument if it took more than an afternoon to plug in the few adapters you need to escape their bonds and just do it all your own way with the full joy of pure, clean C#. You can write bad code in any language.
That's not to say there's nothing wrong with C#. There are some features I'd still like to see added (e.g. co-/contra-variance on classes & structs), some that will never be added but I miss sometimes (e.g. higher-kinded types), and some that are wonderful but lagging behind (e.g. Expressions supporting newer language features).
> But I haven't used source generators in C# in at least 10 years, and I honestly don't know why anyone would at this point.
A challenge with .NET web APIs is that when interacting with a payload deserialized from JSON, it's not possible to detect whether a property is `null` because it was set to `null` or `null` because it was not supplied. A common way to work around this is to provide an `IsSet` boolean:
private bool _isNameSet;
public string? Name { get; set { ...; _isNameSet = true; } }
Now you can check whether the value was set. However, you can see how tedious this gets without a source generator. With a source generator, we simply take nullable partial properties and generate the stub automatically.
public partial string? Name { get; set; }
Now a single marker attribute will generate as many `Is*Set` properties as needed. Of course, the other use case is AOT: avoiding runtime reflection by generating the source at compile time.
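Roughly, the generated half of such a partial property might look like this (a sketch only; the containing type `Payload` is hypothetical, the exact output depends on the generator, and partial properties require C# 13):

```csharp
// Hypothetical generator output pairing with the user-written declaration:
//   public partial class Payload { public partial string? Name { get; set; } }
public partial class Payload
{
    private string? _name;

    // Flipped only when the deserializer actually assigns the property,
    // distinguishing "absent from the JSON" from "explicitly null".
    public bool IsNameSet { get; private set; }

    public partial string? Name
    {
        get => _name;
        set { _name = value; IsNameSet = true; }
    }
}
```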
public Optional<string> Name;
With Optional being something like:

class Optional<T> {
    public T? Value;
    public bool IsSet;
}
I'm actually partial to using IEnumerable for this, and I'd reverse the boolean:

class Optional<T> {
    public IEnumerable<T> ValueOrEmpty;
    public bool IsExplicitNull;
}
With this approach (either one) you can easily define Map (or "Select", if you choose LINQ verbiage) on Optional and go delete 80% of your "if" statements that are checking that boolean. Why mess with source generators? They only make slightly easier something that was never that painful to begin with.
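A sketch of `Map` on such a hand-rolled `Optional<T>` (a hypothetical type, not a library API):

```csharp
using System;

public class Optional<T>
{
    public bool IsSet { get; init; }
    public T? Value { get; init; }

    public static Optional<T> Of(T value) => new() { IsSet = true, Value = value };
    public static Optional<T> Unset() => new() { IsSet = false };

    // Applies the mapping only when a value was supplied, so callers
    // never need to check IsSet themselves.
    public Optional<TOut> Map<TOut>(Func<T, TOut> map) =>
        IsSet ? Optional<TOut>.Of(map(Value!)) : Optional<TOut>.Unset();
}
```

With this in place, `payload.Name.Map(n => n.Trim())` does the right thing whether or not the field was present, which is the point about deleting the `IsSet` checks.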
I'd strongly recommend that if you find yourself wanting Null to represent two different ideas, then you actually just want those two different ideas represented explicitly, e.g. with an Enum. Which you can still do with a basic wrapper like this. The user didn't say "Null", they said "Unknown" or "Not Applicable" or something. Record that.
public OneOf<string, NotApplicable> Name
A good OneOf implementation is here (I have nothing to do with this library, I just like it): https://github.com/mcintyre321/OneOf
I wrote a JsonConverter for OneOf and just pass those over the wire.
Source generators didn't exist in C# 10 years ago. You probably had something else in mind?
It's just code that generates code. Some of the syntax is awkward, but it's not magic imo.
My experience of .NET, even from version 1, is that it has the best debugging experience of any modern language, from the Visual Studio debugger to debugging crash dumps with sos.dll.
https://learn.microsoft.com/en-us/dotnet/core/runtime-config...