Go Is Still Not Good
Original: Go is still not good
Key topics
Regulars are buzzing about a blog post lamenting Go's shortcomings, sparking a lively debate about the language's strengths and weaknesses. Commenters riff on their personal experiences with Go, with some praising its Zen-like simplicity and fast LSP, while others bemoan its lax typing and frustrating development experience. The discussion takes a humorous turn with Metallica references and comparisons to PHP, with some arguing that Go's criticisms are nothing new and others pointing out that it was initially touted as a C and C++ replacement. The thread feels relevant right now because it taps into the ongoing conversation about the trade-offs and limitations of popular programming languages.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 19m
Peak period: 145 comments (Day 1)
Avg / period: 32
Based on 160 loaded comments
Key moments
- Story posted: Aug 22, 2025 at 5:25 AM EDT (4 months ago)
- First comment: Aug 22, 2025 at 5:44 AM EDT (19m after posting)
- Peak activity: 145 comments in Day 1 (hottest window of the conversation)
- Latest activity: Sep 1, 2025 at 4:54 PM EDT (4 months ago)
I say switching to Go is like a different kind of Zen. It takes time to settle in and get into the flow of Go... Unlike the others, the LSP is fast; the developer, not so much. Once you've lost all will to live, you become quite proficient at it. /s
I can still check out the code to any of them, open it and it'll look the same as modern code. I can also compile all of them with the latest compiler (1.25?) and it'll just work.
No need to investigate 5 years of package manager changes and new frameworks.
ISTG if I get downvoted for sharing my opinion I will give up on life.
Google's networking services keep being written in Java/Kotlin, C++, and nowadays Rust.
People like Rob Pike and Ken Thompson certainly knew that you can't put in a GC and cover all systems programming use cases, but they knew that Go could cover their use cases.
Or are you suggesting that they were frustrated with C++ so they decided to write a language they couldn't use instead of C++ for their use case?
> Google's networking services keep being written in Java/Kotlin, C++, and nowadays Rust.
And? Google is a massive company that uses many languages across many teams. That doesn't mean that some people at Google, including Go's original creators, would not use Go nowadays to write what they would previously have used C++ for.
I would criticize Go from the point of view of more modern languages that have powerful type systems like the ML family, Erlang/Elixir or even the up and coming Gleam. These languages succeed in providing powerful primitives and models for creating good, encapsulating abstractions. ML languages can help one entirely avoid certain errors and understand exactly where a change to code affects other parts of the code — while languages like Erlang provided interesting patterns for handling runtime errors without extensive boilerplate like Go.
It’s a language that hobbles developers under the aegis of “simplicity.” Certainly, there are languages like Python which give too much freedom — and those that are too complex like Rust IMO, but Go is at best a step sideways from such languages. If people have fun or get mileage out of it, that’s fine, but we cannot pretend that it’s really this great tool.
Go 1.0 release: 2012
Standard ML '97: 1997 (ML itself dates to 1973)
". They are likely the two most difficult parts of any design for parametric polymorphism. In retrospect, we were biased too much by experience with C++ without concepts and Java generics. We would have been well-served to spend more time with CLU and C++ concepts earlier."
https://go.googlesource.com/proposal/+/master/design/go2draf...
Cargo is amazing, and you can do amazing things with it, I wish Go would invest in this area more.
Also funny you mention Python, a LOT of Go devs are former Python devs, especially in the early days.
The language sits in an awkward space between Rust and Python where one of them would almost always be a better choice.
But, google rose colored specs...
Rust simply doesn’t cut it for me. I’m hoping Roc might become this, but I’m not holding my breath.
Sure? Depends on use case.
> too much verbosity
Doesn't meaningfully affect anything.
> Too much fucking "if err != nil".
A surface level concern.
> The language sits in an awkward space between Rust and Python where one of them would almost always be a better choice.
Rust doesn't have a GC so it's stuck to its systems programming niche. If you want the ergonomics of a GC, Rust is out.
Python? Good, but slow, packaging is a joke, dynamic typing (didn't you mention type safety?), async instead of green threads, etc., etc.
You should see what package management was like for Golang in the beginning: "just pin a link to GitHub". That was probably one of the most embarrassing technical faux pas I've ever seen.
>dynamic typing
Type hinting works very well in python and the option to not use it when prototyping is useful.
>Rust doesn't have a GC so it's stuck to its systems programming niche.
The lack of GC makes it faster than golang. It has a better type system also.
If speed is really a concern, using Golang doesn't make much sense.
Go _excels_ at API glue. Get JSON as string, marshal it to a struct, apply business logic, send JSON to a different API.
Everything for that is built in to the standard library and by default performant up to levels where you really don't need to worry about it before your API glue SaaS is making actual money.
The other jarring example of this kind of deferring logical thinking to big corps was people defending Apple's soldering of memory and SSD, especially so on this site, until some Chinese lad proved that all the imagined issues for why Apple had to do such and such were BS post-hoc rationalisation.
The same goes for Go, but if you spend enough time, every little while you see the disillusionment of some hardcore fans, even from Go's core team, and they start asking questions, but always starting with things like "I know this is Go and holy reasons exist and I am doing a sin to question, but why X or Y?". It is comedy.
It is infuriating because it is close to being good, but it will never get there - now due to backwards compatibility.
Also Rob Pike quote about Go's origins is spot on.
These sorts of articles have been commonplace since even before Go released 1.0 in 2012. In fact, most (if not all) of these complaints could have been written identically back then. The only thing missing from this post that could make me believe it truly was written in 2012 would be a complaint about Go not having generics, which were added a few years ago.
People on HN have been complaining about Go since Go was a weird side-project tucked away at Google that even Google itself didn't care about and didn't bother to dedicate any resources to. Meanwhile, people still keep using it and finding it useful.
Go has a good-enough standard library, and Go can support a "pile-of-if-statements" architecture. This is all you need.
Most enterprise environments are not handled with enough care to move beyond "pile-of-if-statements". Sure, maybe when the code was new it had a decent architecture, but soon the original developers left and then the next wave came in and they had different ideas and dreamed of a "rewrite", which they sneakily started but never finished, then they left, and the 3rd wave of developers came in and by that point the code was a mess and so now they just throw if-statements onto the pile until the Jira tickets are closed, and the company chugs along with its shitty software, and if the company ever leaks the personal data of 100 million people, they aren't financially liable.
Every piece of code looks the same and can be automatically, neutrally, analysed for issues.
It just feels sloppy and I'm worried I'm going to make a mistake.
It's annoying to need to think about whether I'm working with an interface type or a concrete type.
And if you use pointers everywhere, why not make it the default?
I think you can take this official advice[1][2][3][4] and extrapolate to other cases
[1] https://go.dev/wiki/CodeReviewComments#receiver-type
[2] https://google.github.io/styleguide/go/decisions#receiver-ty...
[3] https://go.dev/doc/effective_go#pointers_vs_values
[4] https://go.dev/doc/faq#methods_on_values_or_pointers
And also when I want a value with stable identity I'd use a pointer.
Now I always use pointers consistently for the readability.
yeah no. you need an acyclic structure to maybe guarantee this, in CPython. other Python implementations are more normal in that you shouldn't rely on finalizers at all.
> It is possible (though not recommended!) for the __del__() method to postpone destruction of the instance by creating a new reference to it. This is called object resurrection.
[0]: https://docs.python.org/3/reference/datamodel.html#object.__...
Reading: cyclic GC, yes, the section I linked explicitly discusses that problem, and how it’s solved.
Yes, yes. Hence the words "almost" and "pretty much". For exactly this reason.
Some critique is definitely valid, but some of it just sounds like they didn't take the time to grasp the language. It's trade-offs all the way down. For example, there is a lot I like about Rust, but it's still not my favorite language.
I quite like Go and use it when I can. However, I wish there were something like Go, without these issues. It's worth talking about that. For instance, I think most of these critiques are fair but I would quibble with a few:
1. Error scope: yes, this causes code review to be more complex than it needs to be. It's a place for subtle, unnecessary bugs.
2. Two types of nil: yes, this is super confusing.
3. It's not portable: Go isn't as portable as C89, but it's pretty damn portable. It's plenty portable to write a general-purpose pre-built CLI tool in, for instance, which is about my bar for "pragmatic portability."
4. Append ownership & other slice weirdness: yes.
5. Unenforced `defer`: yes, similar to `err`, this introduces subtle bugs that can only be overcome via documentation, careful review, and boilerplate handling.
6. Exceptions on top of err returns: yes.
7. utf-8: Hasn't bitten me, but I don't know how valid this critique is or isn't.
8. Memory use: imo GC is a selling-point of the language, not a detriment.
I don't think the article sounds like someone didn't take the time to grasp the language. It sounds like it's talking about the kind of thing that really only grates on you after you've seriously used the language for a while.
That said, I really wish there were a revamp where they did things right in terms of nil, scoping rules, etc. However, they've committed to never breaking existing programs (honorable, understandable), so the design space is extremely limited. I prefer dealing with local awkwardness and even excessive verbosity over systemic issues any day.
I'm surprised people in these comments aren't focusing more on the append example.
But I can't help but agree with a lot of points in this article. Go was designed by some old-school folks that maybe stuck a bit too hard to their principles, losing sight of the practical conveniences. That said, it's a _feeling_ I have, and maybe Go would be much worse if it had solved all these quirks. To be fair, I see more leniency in fixing quirks in the last few years, like at some point I didn't think we'd ever see generics, or custom iterators, etc.
The points about RAM and portability seem mostly like personal grievances though. If it was better, that would be nice, of course. But the GC in Go is very unlikely to cause issues in most programs even at very large scale, and it's not that hard to debug. And Go runs on most platforms anyone could ever wish to ship their software on.
But yeah the whole error / nil situation still bothers me. I find myself wishing for Result[Ok, Err] and Optional[T] quite often.
In what universe?
Is it the best or most robust or can you do fancy shit with it? No
But it works well enough to release reliable software along with the massive linter framework that's built on top of Go.
I wonder why that ended up being necessary... ;)
I'd say that it's entirely the other way around: they stuck to the practical convenience of solving the problem that they had in front of them, quickly, instead of analyzing the problem from the first principles, and solving the problem correctly (or using a solution that was Not Invented Here).
Go's filesystem API is the perfect example. You need to open files? Great, we'll create a function; you can open files now, done. What if the file name is not valid UTF-8, though? Who cares, that hasn't happened to me in the first 5 years I used Go.

You should always be able to iterate the code points of a string, whether or not it's valid Unicode. The iterator can either silently replace any errors with replacement characters, or denote the errors by returning e.g. `Result<char, Utf8Error>`, depending on the use case.
All languages that have tried restricting Unicode afaik have ended up adding workarounds for the fact that real world "text" sometimes has encoding errors and it's often better to just preserve the errors instead of corrupting the data through replacement characters, or just refusing to accept some inputs and crashing the program.
In Rust there's bstr/ByteStr (currently being added to std), awkward having to decide which string type to use.
In Python there's PEP-383/"surrogateescape", which works because Python strings are not guaranteed valid (they're potentially ill-formed UTF-32 sequences, with a range restriction). Awkward figuring out when to actually use it.
In Raku there's UTF8-C8, which is probably the weirdest workaround of all (left as an exercise for the reader to try to understand .. oh, and it also interferes with valid Unicode that's not normalized, because that's another stupid restriction).
Meanwhile the Unicode standard itself specifies Unicode strings as being sequences of code units [0][1], so Go is one of the few modern languages that actually implements Unicode (8-bit) strings. Note that at least two out of the three inventors of Go also basically invented UTF-8.
[0] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...
> Unicode string: A code unit sequence containing code units of a particular Unicode encoding form.
[1] https://www.unicode.org/versions/Unicode16.0.0/core-spec/cha...
> Unicode strings need not contain well-formed code unit sequences under all conditions. This is equivalent to saying that a particular Unicode string need not be in a Unicode encoding form.
If you use 3) to create a &str/String from invalid bytes, you can't safely use that string as the standard library is unfortunately designed around the assumption that only valid UTF-8 is stored.
https://doc.rust-lang.org/std/primitive.str.html#invariant
> Constructing a non-UTF-8 string slice is not immediate undefined behavior, but any function called on a string slice may assume that it is valid UTF-8, which means that a non-UTF-8 string slice can lead to undefined behavior down the road.
Again, this is the same "simplistic" vs. "just the right abstraction" issue; this just smudges the complexity over a much larger surface area.
If you have a byte array that is not utf-8 encoded, then just... use a byte array.
Yes, and that's a good thing. It allows every code that gets &str/String to assume that the input is valid UTF-8. The alternative would be that every single time you write a function that takes a string as an argument, you have to analyze your code, consider what would happen if the argument was not valid UTF-8, and handle that appropriately. You'd also have to redo the whole analysis every time you modify the function. That's a horrible waste of time: it's much better to:
1) Convert things to String early, and assume validity later, and
2) Make functions that explicitly don't care about validity take &[u8] instead.
This is, of course, exactly what Rust does: I am not aware of a single thing that &str allows you to do that you cannot do with &[u8], except things that do require you to assume it's valid UTF-8.
Because 99.999% of the time you want it to be valid and would like an error if it isn't? If you want to work with invalid UTF-8, that should be a deliberate choice.
At the protocol (or disk, etc.) boundary. If I write code that consumes bytes that are intended to be UTF-8, I need to make a choice about what to do if they aren't UTF-8 somewhere. A strict UTF-8 string forces me to make that choice in a considered location. In a language where a "string" is just bytes, I can forget, or two pieces of code can disagree on what the contract is. And bugs result.
Check out MySQL for a fun example of getting this wildly, impressively wrong. At least a Rust or a type-checked Python 3 wrapper around some MySQL code enforces a degree of correctness, which is much better than having your transaction fail to commit, or commit incorrectly way down the stack, when you get bytes you didn't expect.
(MySQL can still reject strictly valid UTF-8 data for utterly pathetic historical reasons if you configure it incorrectly.)
Stuff like this matters a great deal on the standard library level.
It's far better to get some � when working with messy data instead of applications refusing to work and erroring out left and right.
So that means that for 99% of scenarios, the difference between char[] and a proper utf8 string is none. They have the same data representation and memory layout.
The problem comes in when people start using string like they use string in PHP. They just use it to store random bytes or other binary data.
This makes no sense with the string type. String is text, but now we don't have text. That's a problem.
We should use byte[] or something for this instead of string. That's an abuse of string. I don't think allowing strings to not be text is too constraining - that's what a string is!
We can try to shove it into objects that work on other text but this won't work in edge cases.
Like if I take text on Linux and try to write a Windows file with that text, it's broken. And vice versa.
Go lets you do the broken thing. In Rust, or even using libraries in most languages, you can't. You have to specifically convert between them.
That's what I mean when I say "storing random binary data as text". Sure, Windows' almost-UTF-16 abomination is kind of text, but not really. It's its own thing. That requires a different type of string, or converting it to a normal string.
It may be legacy cruft downstream of poorly-thought-out design decisions at the system/OS level, but we're stuck with it. And a language that doesn't provide the tooling necessary to muddle through this mess safely isn't a serious platform to build on, IMHO.
There is room for languages that explicitly make the tradeoff of being easy to use (e.g. a unified string type) at the cost of not handling many real world edge cases correctly. But these should not be used for serious things like backup systems where edge cases result in lost data. Go is making the tradeoff for language simplicity, while being marketed and positioned as a serious language for writing serious programs, which it is not.
Yes this is why all competent libraries don't actually use string for path. They have their own path data type because it's actually a different data type.
Again, you can do the Go thing and just use the broken string, but that's dumb and you shouldn't. They should look at C++'s std::filesystem; it's actually quite good in this regard.
> And a language that doesn't provide the tooling necessary to muddle through this mess safely isn't a serious platform to build on, IMHO.
I agree, even PHP does a better job at this than Go, which is really saying something.
> Go is making the tradeoff for language simplicity, while being marketed and positioned as a serious language for writing serious programs, which it is not.
I would agree.
One of the great advances of Unix was that you don't need separate handling for binary data and text data; they are stored in the same kind of file and can be contained in the same kinds of strings (except, sadly, in C). Occasionally you need to do some kind of text-specific processing where you care, but the rest of the time you can keep all your code 8-bit clean so that it can handle any data safely.
Languages that have adopted the approach you advocate, such as Python, frequently have bugs like exception tracebacks they can't print (because stdout is set to ASCII) or filenames they can't open when they're passed in on the command line (because they aren't valid UTF-8).
[]rune is for sequences of Unicode code points. rune is an alias for int32. string is, roughly, a read-only []byte (though not literally an alias for it).
Consider:
How many times does that loop over 6 bytes iterate? The answer is that it iterates twice, with i=0 and i=3.

There are also quite a few standard APIs that behave weirdly if a string is not valid UTF-8, which wouldn't be the case if it were just a bag of bytes.
A couple quotes from the Go Blog by Rob Pike:
> It’s important to state right up front that a string holds arbitrary bytes. It is not required to hold Unicode text, UTF-8 text, or any other predefined format. As far as the content of a string is concerned, it is exactly equivalent to a slice of bytes.
> Besides the axiomatic detail that Go source code is UTF-8, there’s really only one way that Go treats UTF-8 specially, and that is when using a for range loop on a string.
Both from https://go.dev/blog/strings
If you want UTF-8 in a guaranteed way, use the functions available in unicode/utf8 for that. Using `string` is not sufficient unless you make sure you only put UTF-8 into those strings.
If you put valid UTF-8 into a string, you can be sure that the string holds valid UTF-8, but if someone else puts data into a string, and you assume that it is valid UTF-8, you may have a problem because of that assumption.
Score another for Rust's Safety Culture. It would be convenient to just have &str as an alias for &[u8] but if that mistake had been allowed all the safety checking that Rust now does centrally has to be owned by every single user forever. Instead of a few dozen checks overseen by experts there'd be myriad sprinkled across every project and always ready to bite you.
However no &str is not "an alias for &&String" and I can't quite imagine how you'd think that. String doesn't exist in Rust's core, it's from alloc and thus wouldn't be available if you don't have an allocator.
Off the top of my head, in order of likely difficulty to calculate: byte length, number of code points, number of graphemes/characters, and height/width to display.
Maybe it would be best for Str not to have len at all. It could have bytes, code_points, graphemes. And every use would be precise.
FWIW the docs indicate that working with grapheme clusters will never end up in the standard library.
If your API takes &str, and tries to do byte-based indexing, it should almost certainly be taking &[u8] instead.
I mean, really neither should be the default. You should have to pick chars or bytes on use, but I don't think that's palatable; most languages have chosen one or the other as the preferred form. Or some have the joy of being forward thinking in the 90s and built around UCS-2 and later extended to UTF-16, so you've got 16-bit 'characters' with some code points that are two characters. Of course, dealing with operating systems means dealing with whatever they have as well as what the language prefers (or, as discussed elsewhere in this thread, pretending it doesn't exist to make easy things easier and hard things harder)
The answer here isn't to throw up your hands, pick one, and other cases be damned. It's to expose them all and let the engineer choose. To not beat the dead horse of Rust, I'll point that Ruby gets this right too.
Similarly, each of those "views" lets you slice, index, etc. across those concepts naturally. Golang's string is the worst of them all. They're nominally UTF-8, but nothing actually enforces it. But really they're just buckets of bytes, unless you send them to APIs that silently require them to be UTF-8 and drop them on the floor or misbehave if they're not.

Height/width to display is font-dependent, so it can't just be on a "string" but needs an object with additional context.
https://github.com/rust-lang/rfcs/issues/2692
It does though? Strings are internable, comparable, can be keys, etc.
Nothing? Neither Go nor the OS require file names to be UTF-8, I believe
You can do something like WTF-8 (not a misspelling, alas) to make it bidirectional. Rust does this under the hood but doesn’t expose the internal representation.
In Linux, they’re 8-bit almost-arbitrary strings like you noted, and usually UTF-8. So they always have a convenient 8-bit encoding (I.e. leave them alone). If you hated yourself and wanted to convert them to UTF-16, however, you’d have the same problem Windows does but in reverse.
In general, Windows filenames are Unicode and you can always express those filenames by using the -W APIs (like CreateFileW()).
The upshot is that since the values aren’t always UTF-16, there’s no canonical way to convert them to single byte strings such that valid UTF-16 gets turned into valid UTF-8 but the rest can still be roundtripped. That’s what bastardized encodings like WTF-8 solve. The Rust Path API is the best take on this I’ve seen that doesn’t choke on bad Unicode.
It breaks. Which is weird because you can create a string which isn't valid UTF-8 (eg "\xbd\xb2\x3d\xbc\x20\xe2\x8c\x98") and print it out with no trouble; you just can't pass it to e.g. `os.Create` or `os.Open`.
(Bash and a variety of other utils will also complain about it not being valid UTF-8; neovim won't save a file under that name; etc.)
If you stuff random binary data into a string, Go just steams along, as described in this post.
Over the decades I have lost data to tools skipping non-UTF-8 filenames. I should not be blamed for having files that were named before UTF-8 existed.
Windows doing something similar wouldn't surprise me at all. I believe NTFS internally stores filenames as UTF-16, so enforcing UTF-8 at the API boundary sounds likely.
Yes, that was my assumption when bash et al also had problems with it.
I've said this before, but much of Go's design looks like it's imitating the C++ style at Google. In the comments where I see people saying they like something about Go, it's often an idiom that showed up first in the C++ macros or tooling.
I used to check this before I left Google, and I'm sure it's becoming less true over time. But to me it looks like the idea of Go was basically "what if we created a Python-like compiled language that was easier to onboard than C++ but which still had our C++ ergonomics?"
But certainly, anyone will bring their previous experience to the project, so there must be some Plan 9 influence in there somewhere.
Go’s more chaotic approach to allow strings to have non-Unicode contents is IMO more ergonomic. You validate that strings are UTF-8 at the place where you care that they are UTF-8. (So I’m agreeing.)
The problem with this, as with any lack of static typing, is that you now have to validate at _every_ place that cares, or to carefully track whether a value has already been validated, instead of validating once and letting the compiler check that it happened.
Validation is nice but Rust’s principled approach leaves me high and dry sometimes. Maybe Rust will finish figuring out the OsString interface and at that point we can say Rust has “won” the conversation, but it’s not there yet, and it’s been years.
Except when it doesn’t and then you have to deal with fucking Cthulhu because everyone thought they could just make incorrect assumptions that aren’t actually enforced anywhere because “oh that never happens”.
That isn’t engineering. It’s programming by coincidence.
> Maybe Rust will finish figuring out the OsString interface
The entire reason OsString is painful to use is because those problems exist and are real. Golang drops them on the floor and forces you pick up the mess on the random day when an unlucky end user loses data. Rust forces you to confront them, as unfortunate as they are. It's painful once, and then the problem is solved for the indefinite future.
Rust also provides OsStrExt if you don’t care about portability, which greatly removes many of these issues.
I don’t know how that’s not ideal: mistakes are hard, but you can opt into better ergonomics if you don’t need the portability. If you end up needing portability later, the compiler will tell you that you can’t use the shortcuts you opted into.
It seems like there's some confusion in the GGGGGP post, since Go works correctly even if the filename is not valid UTF-8 .. maybe that's why they haven't noticed any issues.
677 more comments available on Hacker News