Go's Sweet 16
Mood: supportive
Sentiment: positive
Category: tech
Key topics: Go programming language, software development, open-source
The Go programming language is celebrating its 16th anniversary, marking a major milestone in its development and adoption.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 41m
Peak period: 138 comments (Day 1)
Avg / period: 69.5
Based on 139 loaded comments
Key moments
- 01 Story posted: 11/14/2025, 10:33:15 PM (4d ago)
- 02 First comment: 11/14/2025, 11:14:06 PM (41m after posting)
- 03 Peak activity: 138 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: 11/17/2025, 6:04:01 PM (1d ago)
Go is amazing. Switched from Python to Go 7 years ago. It's the reason our startup did well.
It took a few more years before I actually got around to learning it and I have to say I've never picked up a language so quickly. (Which makes sense, it's got the smallest language spec of any of them)
I'm sure there are plenty of reasons this is wrong, but it feels like Go gets me 80% of the way to Rust with 20% of the effort.
I don't see it. Can you say what 80% you feel like you're getting?
The type systems don't feel anything alike. I guess the syntax is alike in the sense that Go is a semicolon language and Rust, though actually basically an ML, deliberately dresses as a semicolon language, but otherwise not really. They're both relatively modern, so you get decent tooling out of the box.
But this feels a bit like if somebody told me that this new pizza restaurant does a cheese pizza that's 80% similar to the Duck Ho Fun from that little place near the extremely tacky student bar. It's not that Duck Ho Fun has nothing in common with cheese pizza - they're both best (in my opinion) if cooked very quickly with high heat - but there's not a lot of commonality.
I read it as “80% of the way to Rust levels of reliability and performance.” That doesn’t mean that the type system or syntax is at all similar, but that you get some of the same benefits.
I might say that, “C gets you 80% of the way to assembly with 20% of the effort.” From context, you could make a reasonable guess that I’m talking about performance.
Rust beats Go in performance, but nothing like how far behind Java, C#, or scripting languages (Python, Ruby, TypeScript, etc.) are from all the work I've done with them. With Go I get most of the performance of Rust with very little effort, plus a fully contained stdlib, test suite, package manager, formatter, etc.
I can only think of two production bugs I've written in Rust this year. Minor bugs. And I write a lot of Rust.
The language has very intentional design around error handling: Result<T,E>, Option<T>, match, if let, functional predicates, mapping, `?`, etc.
Go, on the other hand, has nil and extremely exhausting boilerplate error checking.
Honestly, Go has been one of my worst languages outside of Python, Ruby, and JavaScript for error introduction. It's a total pain in the ass to handle errors and exceptional behavior. And this leads to making mistakes and stupid gotchas.
I'm so glad newer languages are picking up on and copying Rust's design choices from day one. It's a godsend to be done with null and exceptions.
I really want a fast, memory managed, statically typed scripting language somewhere between Rust and Go that's fast to compile like Go, but designed in a safe way like Rust. I need it for my smaller tasks and scripting. Swift is kind of nice, but it's too Apple centric and hard to use outside of Apple platforms.
I'm honestly totally content to keep using Rust in a wide variety of problem domains. It's an S-tier language.
> Go... extremely exhausting boilerplate error checking
This actually isn't correct. Go is the only language that makes you think about errors at every step. If you just ignore them and pass them up, like exceptions or a Maybe, you're basically just exchanging handling errors for assuming the whole thing passes or fails.
If you write actual error checking like Go's in Rust (or Java, or any other language), then Go is often less noisy.
It's just two very different approaches to error handling that the dev community is split on. Here's a pretty good explanation from a rust dev: https://www.youtube.com/watch?v=YZhwOWvoR3I
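For reference, a minimal sketch of the check-and-wrap pattern being discussed (the function name, file name, and error messages are made up for illustration):

package main

import (
    "fmt"
    "os"
)

// loadConfig checks the error where it occurs and wraps it with context
// before passing it up to the caller.
func loadConfig(path string) ([]byte, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return nil, fmt.Errorf("reading config %q: %w", path, err)
    }
    return data, nil
}

func main() {
    if _, err := loadConfig("missing.conf"); err != nil {
        fmt.Println(err) // reading config "missing.conf": open missing.conf: no such file or directory
    }
}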
Rust forces you to think about errors exactly as much, but in the common case of passing it on it’s more ergonomic.
It could as well be Haskell :) Only partly a joke: https://zignar.net/2021/07/09/why-haskell-became-my-favorite...
The thing people tend to overvalue is the little syntax differences, like how Scala wanted to be a nicer Java, or even ObjC vs Swift before the latter got async/await.
I'm convinced no more than a handful of humans understand all of C# or C++, and inevitably you'll come across some obscure thing and have to context switch out of reading code to learn whatever the fuck a "partial method" or "generic delegate" means, and then keep reading that codebase if you still have momentum left.
https://262.ecma-international.org/16.0/index.html
I don't agree. (And frankly don't like using JS without at least TypeScript.)
It has some strange or weirdly specified features (ASI? HTML-like Comments?) and unusual features (prototype-based inheritance? a dynamically-bounded this?), but IMO it's a small language.
By the time you understand all of typescript, your templating environment of choice, and especially the increasingly arcane build complexity of the npm world, you've put in hours comparable to what you'd have spent learning C# or Java for sure (probably more). Still easier than C++ or Rust though.
Modules were added in, like, 2016.
How would the proportion of humans that understand all of Rust compare?
C# is actually fairly complex. I'm not sure if it's quite at the same level as Rust, but I wouldn't say it's that far behind in difficulty for complete understanding.
In contrast writing C++ feels like solving an endless series of puzzles, and there is a constant temptation to do Something Really Clever.
The packaging story is better than c++ or python but that's not saying much, the way it handles private repos is a colossal pain, and the fact that originally you had to have everything under one particular blessed directory and modules were an afterthought sure speaks volumes about the critical thinking (or lack thereof) that went into the design.
Also I miss being able to use exceptions.
I'm not saying it's awful, it's just a pretty mid language, is all.
Alas there are plenty of people who do[0] - for some reason Go takes architecture astronaut brain and whacks it up to 11, and god help you if you have one or more of those on your team.
[0] flashbacks to the interface calling an interface calling an interface calling an interface I dealt with last year - NONE OF WHICH WERE NEEDED because it was a bloody hardcoded value in the end.
This always feels like one of those “taste” things that some programmers tend to like on a personal level but has almost no evidence that it leads to more real-world success vs any other language.
Like, people get real work done every day at scale with C# and C++. And Java, and Ruby, and Rust, and JavaScript. And every other language that programmers castigate as being huge and bloated.
I’m not saying it’s wrong to have a preference for smaller languages, I just haven’t seen anything in my career to indicate that smaller languages outperform when it comes to faster delivery or less bugs.
As an aside, I’d even go so far as to say that the main problem with C++ is not that it has so many features in number, but that its features interact with each other in unpredictable ways. Said another way, it’s not the number of nodes in the graph, but the number of edges and the manner of those edges.
I'm in academia doing ML research where, for all intents and purposes, we work exclusively in Python. We had a massive CSV dataset which required sorting, filtering, and other data transformations. Without getting into details, we had to rerun the entire process when new data came in roughly every week. Even using every trick to speed up the Python code, it took around 3 days.
I got so annoyed by it that I decided to rewrite it in a compiled language. Since it had been a few years since I'd written any C/C++ (which was only for a single class in undergrad, and I remember very little of it), I decided to give Go a try.
I was able to learn enough of the language and write up a simple program to do the data processing in less than a few hours, which reduced the time it took from 3+ days to less than 2 hours.
I unfortunately haven't had a chance or a need to write any more Go since then. I'm sure other compiled, GC languages (e.g., Nim) would've been just as productive or performant, but I know that C/C++ would've taken me much longer to figure out and would've been much harder to read/understand for the others that work with me who pretty much only know Python. I'm fairly certain that if any of them needed to add to the program, they'd be able to do so without wasting more than a day to do so.
I can imagine myself grappling with a language feature unobvious to me and eventually getting distracted. Sure, there are a lot of things unobvious to me, but Go is not one of them, and it influenced the whole environment.
Or, when choosing the right language feature, I could end up with weighing up excessively many choices and still failing to get it right, from the language correctness perspective (to make code scalable, look nice, uniform, play well with other features, etc).
An example not related to Go: bash and rc [1]. Understanding 16 pages of Duff’s rc manual was enough for me to start writing scripts faster than I did in bash. It did push me to ease my concerns about program correctness, though, which I welcomed. The whole process became more enjoyable without bashisms getting in the way.
Maybe it’s hard to measure the exact benefit but it should exist.
> As an aside, I’d even go so far as to say that the main problem with C++ is not that it has so many features in number, but that its features interact with each other in unpredictable ways. Said another way, it’s not the number of nodes in the graph, but the number of edges and the manner of those edges.
I think those problems are related. The more features you have, the more difficult it becomes to avoid strange, surprising interactions. It’s like a pharmacist working with a patient who is taking a whole cocktail of prescriptions; it becomes a combinatorial problem to avoid harmful reactions.
To add to the above comment, a lot of what Go does encourages readability... Yes it feels pedantic at moments (error handling), but those cultural and stylistic elements that seem painful to write make reading better.
Portable binaries are a blessing, fast compile times, and the choices made around 3rd party libraries and vendoring are all just icing on the cake.
That 80 percent feeling is more than just the language as written; it's all the things that come along with it...
I keep using the analogy that the tools are just nail guns for office workers, but some people remain sticks in the mud.
For non-trivial tasks, AI is neither of those. Anything you do with AI needs to be carefully reviewed to correct hallucinations and incorporate it into your mental model of the codebase. You point, you shoot, and that's just the first 10-20% of the effort you need to move past this piece of code. Some people like this tradeoff, and fair enough, but that's nothing like a nailgun.
For trivial tasks, AI is barely worth the effort of prompting. If I really hated typing `if err != nil { return nil, fmt.Errorf("doing x: %w", err) }` so much, I'd make it an editor snippet or macro.
This Go community that you speak of isn't bothered by writing the boilerplate themselves in the first place, though. For everyone else the LLMs provide.
From Rob Pike himself: "It must be familiar, roughly C-like. Programmers working at Google are early in their careers and are most familiar with procedural languages, particularly from the C family. The need to get programmers productive quickly in a new language means that the language cannot be too radical."
However, the main design goal was to reduce build times at Google. This is why unused dependencies are a compile time error.
From Russ Cox this time: "Q. What language do you think Go is trying to displace? ... One of the surprises for me has been the variety of languages that new Go programmers used to use. When we launched, we were trying to explain Go to C++ programmers, but many of the programmers Go has attracted have come from more dynamic languages like Python or Ruby."
I wonder if it's that Ruby/Python programmers were interested in using these kinds of languages but were being pushed away by C/C++.
https://go.dev/doc/faq#unused_variabl...
> There are two reasons for having no warnings. First, if it’s worth complaining about, it’s worth fixing in the code. (Conversely, if it’s not worth fixing, it’s not worth mentioning.) Second, having the compiler generate warnings encourages the implementation to warn about weak cases that can make compilation noisy, masking real errors that should be fixed.
I believe this was a mistake (one that sadly Zig also follows). In practice there are too many things that wouldn't make sense as compiler errors, so you need to run a linter anyway. When you need to comment out or remove some code temporarily, it won't even build, and then you have to remove a chain of unused vars/imports until it lets you; it's just annoying.
Meanwhile, unlinted Go programs are full of little bugs, e.g. unchecked errors or err-var misuse. If only there were warnings...
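For anyone who hasn't hit this, a minimal sketch of the behaviour in question (the blank identifier is the usual escape hatch while debugging):

package main

import "fmt"

func main() {
    x := 42 // without the next line this is "declared and not used", a hard compile error
    _ = x   // assigning to the blank identifier silences it
    fmt.Println("ok") // an unused import is likewise a compile error, not a warning
}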
I believe the correct approach is to offer two build modes: release and debug.
Debug compiles super fast and allows unused variables etc, but the resulting binary runs super slowly, maybe with extra safety checks too, like the race detector.
Release is the default, is strict and runs fast.
That way you can mess about in development all you want, but need to clean up before releasing. It would also take the pressure off having release builds compile fast, allowing for more optimisation passes.
At least in the golang / unused-vars at Google case, allowing unused vars is explicitly one of the things that makes compilation slower.
In that case it's not "faster compilation as in less optimization". It's "faster compilation as in don't have to chase down and potentially compile more parts of a 5,000,000,000 line codebase because an unused var isn't bringing in a dependency that gets immediately dropped on the floor".
So it's kinda an orthogonal concern.
I think go is fairly small, too, but “size of spec” is not always a good measure for that. Some specs are very tight, others fairly loose, and tightness makes specs larger (example: Swift’s language reference doesn’t even claim to define the full language. https://docs.swift.org/swift-book/documentation/the-swift-pr...: “The grammar described here is intended to help you understand the language in more detail, rather than to allow you to directly implement a parser or compiler.”)
(Also, browsing golang’s spec, I think I spotted an error in https://go.dev/ref/spec#Integer_literals. The grammar says:
decimal_lit = "0" | ( "1" … "9" ) [ [ "_" ] decimal_digits ] .
Given that, how can 0600 and 0_600 be valid integer literals in the examples?)

No, those are octal literals, and the o/O is optional (hence the square brackets in the grammar); only the leading zero is required:
octal_lit = "0" [ "o" | "O" ] [ "_" ] octal_digits .
All of these are valid octal literals in Go:
0600 (zero six zero zero)
0_600 (zero underscore six zero zero)
0o600 (zero lower-case-letter-o six zero zero)
0O600 (zero upper-case-letter-o six zero zero)
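A quick sketch to confirm it, if you want to see it compile (all four spellings denote the same value):

package main

import "fmt"

func main() {
    fmt.Println(0600, 0_600, 0o600, 0O600) // 384 384 384 384
}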
By 20% of the effort, do you mean learning curve or productivity?
Rust is great. One of the stupidest things in modern programming practice is the slapfight between these two language communities.
Writing microservices at $DAYJOB feels far easier and less guess-work, even if it requires more upfront code, because it’s clear what each piece does and why.
It really feels like a simpler language and ecosystem compared to Python. On top of that, it performs much better!
Recently I made the same assertions as to Go's advantage for LLM/AI orchestration.
https://news.ycombinator.com/item?id=45895897
It would not surprise me if Google (being the massive services company that it is) sent an internal memo instructing teams not to use the Python toolchain to produce production agents or tooling, and to use Golang instead.
At work we use Uber's NilAway, so that helps a bit. https://github.com/uber-go/nilaway Though actually having the type system handle it would be nicer.
If I had a magic wand, the only things I would add are better nullability checks, stack traces by default for errors, and exhaustiveness checks for sum types. Other than that, it does everything I want.
Linters such as https://golangci-lint.run will do this for you.
In development: https://github.com/uber-go/nilaway
Go codebases all look alike. Not only does the language have very few primitives, but the code conventions enforced by the standard library, gofmt, and golangci-lint also mean that codebases end up structured very similarly.
Many language communities can't even agree on the build tooling.
Consider adding a pre-commit hook if you are allowed to.
I would not use Golang for a big codebase with lots of business logic. Golang has not made a dent in Java usage at big companies; no large company is going to try replacing their Java codebases with Golang because there's no benefit. Java is almost as fast as Golang, has classes, and actually has a richer set of concurrency primitives.
I think go needs some more functional aspects, like iterators and result type/pattern matching.
There may be no honor amongst thieves but there is honor amongst langdevs, and when they did Go! dirty, Google made clear which one they are.
Status changed to Unfortunate
PL naming code is:
1. Whoever uses the name first has claim to the name. Using the name first is measured by when the spec was published or when the first repo commit was made.
2. A name can be reused IFF the author has abandoned the original project. Usually there's a grace period depending on how long the project was under development. But if a project is abandoned then there's nothing to stop someone from picking up the name.
3. Under no circumstances should a PL dev step on the name of a currently active PL project. If that happens, it's up to the most recently named project to change their name, not the older project even if the newer project has money behind it.
4. Language names with historical notoriety are essentially "retired" despite not being actively developed anymore.
All of this is reasonable, because the PL namespace is still largely unsaturated*. There are plenty of one syllable English words that are out there for grabs. All sorts of animals, verbs, nouns, and human names that are common for PLs are there for the taking. There's no reason to step on someone else's work just because there's some tie in with your company's branding.
So it's pretty bottom basement behavior for luminaries like Ken Thompson and Rob Pike to cosign Google throwing around their weight to step on another dev's work, and then say essentially "make me" when asked to stop.
* This of course does not apply to the single-letter langs, but even still, that namespace doesn't really have 25 langs under active development.
Moreover the author of Go! personally requested that Google not step on his life's work. The man had dedicated a decade and authored a book and several papers on the topic, so it wasn't a close call. Additionally C# built on C++ which built on C. Go had no relationship to Go! at all. Homage and extension are one thing, but Go was not that.
A policy of "do no evil" required Google to acquiesce. Instead they told him to pound sand.
Instead of “int x”
You have “var x int”
Which obscures the type, making it harder to read the code. The only justification is that 16 years ago, some guy thought he was being clever. For 99.99% of code, it's a worse syntax. Nobody does eight levels of pointer indirection in typical everyday code.
var
foo: char;
Go was developed by many of the minds behind C, and inertia would have led them to C-style declarations. I don't know if they've ever told anybody why they went with the Pascal style, but I would bet money on the fact that Pascal-style declarations are simply easier and faster for computers to parse. And it doesn't just help with compile speed, it also makes syntax highlighting far more reliable and speeds up tooling.

Sure, it's initially kind of annoying if you're used to the C style of type before identifier, but it's something you can quickly get to grips with. And as time has gone on, it seems like a style that a lot of modern languages have adopted. Off the top of my head, I think this style is in TypeScript, Python type hints, Go, Rust, Nim, Zig, and Odin. I asked Claude for a few more examples and apparently it's also used by Kotlin, Swift, and various flavors of ML and Haskell.
But hey, if you're still a fan of type before variable, PHP has your back.
class User {
public int $id;
public ?string $name;
public function __construct(int $id, ?string $name) {
$this->id = $id;
$this->name = $name;
}
}

I don't know if this is the reason, but Robert Griesemer, one of the three original guys, comes from a Pascal/Modula background.
You can write
var x = 5
how would that work if the type had to be first? Languages that added inference later tend to have “auto” as the type which looks terrible.
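To illustrate the point, a minimal sketch of how the type-last order accommodates inference without an auto keyword:

package main

import "fmt"

func main() {
    var a int // explicit type, initialised to the zero value
    var b = 5 // type inferred from the initialiser
    c := 3.14 // short form, only allowed inside functions
    fmt.Println(a, b, c) // 0 5 3.14
}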
Proponents say there's nothing under the hood. I see under-the-hood magic happen every time.
1) Array/slice append is one example. Try removing an element from an array: you must rely on some magic and awkward syntax, and there's no clear explanation of what actually happens under the hood (all the docs show you is that a slice is a pointer into a piece of a vector).
2) enums creation is just nonsense
3) To make matters worse, at work we have a linter that forbids merging a branch if you a) don't do if err != nil for every case b) have >20 for & if/else clauses. This makes you split functions in many pieces, turning your code into enterprise Java.
It feels like, to implement same things, Go is 2x slower than in Rust.
On the positive side,
* interfaces are simpler, without some of Rust's stricter limitations; the only problem with them is that in the calling code, you can't tell one from a struct
* it's really fast to pick up, I needed just a couple of days to see examples and start coding stuff.
I think Go would have been great with
* proper enums (I'll be fine if they have no wrapped data)
* sensible arrays & slices, without any magic and awkward syntax
* iterators
* result unwrapping shorthands
It has proper enums. Granted, it lacks an enum keyword, which seems to trip up many.
Perhaps what you are actually looking for is sum types? Given that you mentioned Rust, which weirdly[1] uses the enum keyword for sum types, this seems likely. Go does indeed lack that. Sum types are not enums, though.
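For concreteness, a minimal sketch of what Go offers here, the usual defined-type-plus-iota pattern (the Color type is made up for illustration):

package main

import "fmt"

// Color is the conventional Go "enum": a defined type with an iota const block.
type Color int

const (
    Red Color = iota
    Green
    Blue
)

// String makes the values print by name via the fmt.Stringer interface.
func (c Color) String() string {
    return [...]string{"Red", "Green", "Blue"}[c]
}

func main() {
    fmt.Println(Red, Blue) // Red Blue
}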
> sensible arrays & slices, without any magic and awkward syntax
Its arrays and slices are exactly the same as what you would find in C. So it is true that this confuses many coming from languages that wrap them in incredible amounts of magic, but the issue you point to here is actually a lack of magic. Any improvements to help those who are accustomed to magic would require adding magic, not taking it away.
> iterators
Is there something about them that you find lacking? They don't seem really any different than iterators in other languages that I can see, although I'll grant you that the anonymous function pattern is a bit unconventional. It is fine, though.
> result unwrapping shorthands
Go wants to add this, and has been trying to for years, but nobody has explained how to do it sensibly. There are all kinds of surface solutions that get 50% of the way there, but nobody wants to tackle the other 50%. You can't force someone to roll up their sleeves, I guess.
[1] Rust uses enums to generate the sum type tag as an implementation detail, so it's not quite as weird as it originally seems, but still rather strange that it would name it based on an effectively hidden implementation detail instead of naming it by what the user is actually trying to accomplish. Most likely it started with proper enums, then realized that sum types would be better instead, and never thought to change the keyword to go along with that change.
But then again Swift did the same thing, so who knows? To be fair, its "enums" can degrade to proper enums in order to be compatible with Objective-C, so while not a very good reason, at least you can maybe find some kind of understanding in their thinking in that case. Rust, though...
Well, then they look awkward and feel like syntax abuse.
> Its arrays and slices are exactly the same as what you would find in C. So while it is true that this trips up many coming from languages that wrap them in incredible amounts of magic, the issue you point to here is actually a lack of magic.
In Rust, I see exactly what I work with -- a proper vector, a material thing, or a slice, which is a view into a vector. Also, a slice in Rust is always contiguous: it starts at element a and finishes at element b. I can remove an arbitrary element from the middle of a vector, but a slice is read-only, and I simply can't. I can push (append) only to a vector. I can insert in the middle of a vector -- and the doc warns me that it'll need to shift every element after it forward. There's just zero magic.
In Go instead, how do I insert an element in the middle of an array? I see suggestions like `append(myarray[:123], append([]MyType{my_element}, myarray[123:]...)...)`. (Removing is like `append(myarray[:123], myarray[124:]...)`.)
What am I dealing with in this code, and what do I get afterwards? Is this some sophisticated slice that keeps 3 views, 2 into myarray and 1 into the anonymous one?
The docs on the internet suggest that slices in go are exactly like in Rust, a contiguous sequence of array's elements. If so, in my example of inserting (as well as when deleting), there must be a lot happening under the hood.
So nothing to worry about?
> how do I insert an element in the middle of an array?
Same as in C. If the array allocation is large enough, you can move the right hand side to the next memory location, and then replace the middle value.
Something like:
package main

import "fmt"

func main() {
    replaceWith := 3
    replaceAt := 2
    array := [5]int{1, 2, 4, 5} // fixed-size array with room for one more element
    size := 4
    // Shift everything after replaceAt one slot to the right, then insert.
    for i := size; i > replaceAt; i-- {
        array[i] = array[i-1]
    }
    array[replaceAt] = replaceWith
    fmt.Println(array) // Output: [1 2 3 4 5]
}
If the array is not large enough, well, you are out of luck. Just like C, arrays must be allocated with a fixed size defined at compile time.

> The docs on the internet suggest that slices in go are exactly like in Rust, a contiguous sequence of array's elements.
They're exactly like how you'd implement a slice in C:
struct slice {
void *ptr;
size_t len;
size_t cap;
};
The only thing Go really adds, aside from making slice a built-in type, that you wouldn't find in C is the [:] syntax.

Which isn't exactly the same as Rust. Technically, a Rust slice looks something like:
struct slice {
void *ptr;
size_t len;
};
There is some obvious overlap, of course. It still has to run on the same computer at the end of the day. But there is enough magic in Rust to hide the details that I think you lose the nuance in that description. Go, on the other hand, picks up the exact same patterns one uses in C. So if you understand how you'd do it in C, you understand how you'd do it in Go.

Of course, that does mean operating a bit lower level than some developers are used to. Go favours making expensive operations obvious, so that is a tradeoff it is willing to make. But regardless, making it more familiar to developers coming from the land of magic would require adding more magic, not taking it away.
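To make that concrete, a minimal sketch of two Go slice headers sharing one backing array (the variable names are made up for illustration):

package main

import "fmt"

func main() {
    backing := make([]int, 4, 8) // len 4, cap 8
    view := backing[1:3]         // a second header over the same backing array

    view[0] = 42
    fmt.Println(backing[1])           // 42: both headers point at the same memory
    fmt.Println(len(view), cap(view)) // 2 7: capacity runs to the end of the backing array
}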
I’m guessing the go language design went too far into “simplicity” at the expense of reasonableness.
For example, we can make a “simpler” language by not supporting multiplication, just use addition and write your own!
It has iterators - https://pkg.go.dev/iter.
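For anyone who hasn't seen them, a minimal sketch of the range-over-function form (requires Go 1.23+; the evens function is made up for illustration):

package main

import (
    "fmt"
    "iter"
)

// evens yields the first n even numbers as an iter.Seq[int].
func evens(n int) iter.Seq[int] {
    return func(yield func(int) bool) {
        for i := 0; i < n; i++ {
            if !yield(2 * i) {
                return
            }
        }
    }
}

func main() {
    for v := range evens(4) {
        fmt.Println(v) // 0, 2, 4, 6
    }
}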
> It lacks simple things like check if a key exists in a map.
What? `value, keyExists := myMap[someKey]`
> Try removing an element from an array - you must rely on some magic and awkward syntax, and there's no clear explanation what actually happens under the hood (all docs just show you that a slice is a pointer to a piece of vector).
First of all, if you're removing elements from the middle of an array, you're using the wrong data structure 99% of the time. If you're doing that in a loop, you're hitting degenerate performance.
Second, https://pkg.go.dev/slices#Delete
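For example, a minimal sketch using the standard library slices package (Go 1.21+):

package main

import (
    "fmt"
    "slices"
)

func main() {
    s := []int{1, 2, 4, 5}

    s = slices.Insert(s, 2, 3) // insert 3 at index 2
    fmt.Println(s)             // [1 2 3 4 5]

    s = slices.Delete(s, 2, 3) // remove the half-open range [2, 3)
    fmt.Println(s)             // [1 2 4 5]
}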
If I don't need the value, I have to do awkward tricks with this construct, like `if _, key_exists := my_map[key]; key_exists { ... }`.
Also, you can do `value := myMap[someKey]`, and it will just return a value or nil.
Also, if the map has arrays as elements, it will magically create one, like Python's defaultdict.
This construct (assigning from map subscript) is pure magic, despite all the claims, that there's none in Golang.
...And also: I guess the idea was to make the language minimal and easy to learn, hence primitives have no methods on them. But after all, OOP in limited form is there in Golang, exactly like in Rust. And I don't see why custom structs do have methods, which are easier to use, but the basic types don't, so you have to go import packages.
Not that it's wrong. But it's not easier at all, and learning curve just moves to another place.
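For reference, a minimal sketch of the two map behaviours being discussed; note that plain indexing returns the element type's zero value but, unlike Python's defaultdict, never stores a new entry:

package main

import "fmt"

func main() {
    m := map[string][]int{}

    // Comma-ok form: the second result reports whether the key is present.
    if _, ok := m["missing"]; !ok {
        fmt.Println("no such key")
    }

    // Plain indexing returns the zero value for the element type
    // (a nil slice here), which append accepts; nothing is stored
    // in the map until you assign back.
    m["xs"] = append(m["xs"], 1, 2)
    fmt.Println(m["xs"], len(m)) // [1 2] 1
}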
>3) To make matters worse, at work we have a linter that forbids merging a branch if you a) don't do if err != nil for every case b) have >20 for & if/else clauses. This makes you split functions in many pieces, turning your code into enterprise Java.
That is not a problem with Go.
It has enums (sum type), tuple, built-in Set[T], and good Iterator methods. It has very nice type inferred lambda function (heavily inspired by the swift syntax)... lots of good stuff!
This combined with the ease of building CLI programs has been an absolute godsend in the past when I've had to quickly spin up CLI tools which use business logic code to fix things.
It is so weird that they still claim this after they made the semantic change for the 3-clause for-loop in Go 1.22.
When a Go module is upgraded from 1.21- to 1.22+, there are some potential breaking cases which are hard to detect in time. https://go101.org/blog/2024-03-01-for-loop-semantic-changes-...
Go toolchain 1.22 broke compatibility for sure. Even the core team admit it. https://go101.org/bugs/go-build-directive-not-work.html
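For context, a minimal sketch of the kind of program whose behaviour depends on the go directive in go.mod:

package main

import "fmt"

func main() {
    var prints []func()
    for i := 0; i < 3; i++ {
        prints = append(prints, func() { fmt.Println(i) })
    }
    for _, p := range prints {
        p()
    }
    // With "go 1.21" (or older) in go.mod, all three closures share one i
    // and this prints 3 3 3; with "go 1.22" or later, each iteration gets
    // its own i and it prints 0 1 2.
}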
https://en.wikipedia.org/wiki/Go!_(programming_language)#Con...
> With gopls v0.18.0, we began exploring automatic code modernizers. As Go evolves, every release brings new capabilities and new idioms; new and better ways to do things that Go programmers have been finding other ways to do. Go stands by its compatibility promise—the old way will continue to work in perpetuity—but nevertheless this creates a bifurcation between old idioms and new idioms. Modernizers are static analysis tools that recognize old idioms and suggest faster, more readable, more secure, more modern replacements, and do so with push-button reliability. What gofmt did for stylistic consistency, we hope modernizers can do for idiomatic consistency.
Modernizers seem like a way to make Large-Scale Changes (LSCs) more available to the masses. Google has internal tooling to support them [1], but now Go users get a limited form of opt-in LSC support whenever modernizers make a suggestion.
Maybe by 18, or 21, the maturity finally settles in.
I remember making a little web app and seeing the type errors pop up magically in all the right places where I missed things in my structs. It was a life-changing experience.