Go Subtleties
Posted 3 months ago · Active 3 months ago
harrisoncramer.me · Tech story · High profile
Key topics
Go Programming Language
Software Development
Programming Subtleties
The article 'Go subtleties' discusses various lesser-known aspects of the Go programming language, sparking a discussion on the language's design choices and potential pitfalls.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 9d after posting
Peak period: 149 comments (Day 10)
Avg / period: 53.3
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
- 01 Story posted: Oct 13, 2025 at 3:42 AM EDT (3 months ago)
- 02 First comment: Oct 22, 2025 at 4:27 AM EDT (9d after posting)
- 03 Peak activity: 149 comments in Day 10, the hottest window of the conversation
- 04 Latest activity: Oct 24, 2025 at 10:40 AM EDT (3 months ago)
ID: 45565793 · Type: story · Last synced: 11/20/2025, 8:00:11 PM
The lack of features means all the complexity is offloaded to the programmer, whereas other languages can take some of that complexity burden off the programmer.
Go isn't simple, it's basic.
Go has its warts for sure. But saying the simplicity of Go is "just virtue signaling" is so far beyond ignorant that I can only conclude this opinion of yours is nothing more than the typical pseudo-religious biases that less experienced developers smugly cling to.
Go has one of the easiest toolchains to get started with. There's no esconfig, virtualenv and other bullshit to deal with. You don't need a dozen `use` headers just to define the runtime version, nor to trust your luck with a thousand dependencies that are impossible to realistically audit because nobody bothered to bundle a useful standard library with it. You don't have multi-page indecipherable template errors, 50 different ways to accomplish the same simple problem, nor arguments about what subset of the language is allowed to be used when reviewing pull requests. There isn't undefined behaviour, nor subtle incompatibilities between different runtime implementations causing fragmentation of the language.
The problem with Go is that it is boring and that's boring for developers. But it's also the reason why it is simple.
So it's not virtue signaling at all. It's not flawless and it's definitely boring. But that doesn't mean it isn't also simple.
Edit: In case anyone accuses me of being a fanboy, I'm not. I much preferred the ALGOL lineage of languages to the B lineage. I definitely don't like a lot of the recent additions to Go, particularly around range iteration. But that's my personal preference.
No, I'm comparing to more than a dozen different languages that I've used commercially. And there were direct references there to Perl, Java, Pascal, procedural SQL, and many, many others too.
> There are languages out there that are easy to build, have a reasonable std lib
Sure. And the existence of them doesn't mean Go isn't also simple.
> and don't offload the complexity of the world onto the programmer.
I disagree. Every language makes tradeoffs, and those tradeoffs always end up being complexities that the programmer has to negotiate. This is something I've seen, without exception, in my 40 years as a language agnostic and part-time language designer.
Cue Rich Hickey's Simple made Easy: https://www.youtube-nocookie.com/embed/SxdOUGdseq4 / https://ghostarchive.org/varchive/SxdOUGdseq4
If you read & write Go regularly, the rather verbose error handling simply fades into the background.
That said, errors in Go don't really translate to exceptions as generally thought of; panic, however, maybe does.
Making changes to error handling wasn't for the lack of trying, though: https://news.ycombinator.com/item?id=44171677
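For reference, the pattern in question — a minimal sketch of the `if err != nil` idiom (the file name is illustrative):

```go
package main

import (
	"fmt"
	"os"
)

func loadConfig(path string) ([]byte, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		// Explicit and verbose, but every failure path is visible at the call site.
		return nil, fmt.Errorf("read config %q: %w", path, err)
	}
	return data, nil
}

func main() {
	if _, err := loadConfig("app.conf"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```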
> issue with nil pointers
This is why most APIs strive for a non-nil zero value, where possible, as methods (on structs) can still dictate if it will act on a pointer. Though, I get what you're saying with Go missing Optional / Maybe / ? operator, as the only other way to warn about nil types is through documentation; ex: https://github.com/tailscale/tailscale/blob/afaa23c3b4/syncs... (a recent example I stumbled upon).
Static code analysers like nilaway (https://news.ycombinator.com/item?id=38300425) help, but these aren't without false positives (annoying) & false negatives (fatal).
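As a sketch of the "methods can still dictate" point: a pointer-receiver method can explicitly tolerate a nil receiver, so a nil pointer behaves like the zero value (the Config type here is made up):

```go
package main

import "fmt"

type Config struct{ Verbose bool }

// IsVerbose handles a nil receiver, so callers never hit a nil-pointer panic here.
func (c *Config) IsVerbose() bool {
	if c == nil {
		return false
	}
	return c.Verbose
}

func main() {
	var c *Config              // nil pointer
	fmt.Println(c.IsVerbose()) // false, no panic
}
```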
Then again, that would mean the nil identifier would be coerced into a typed nil, and any(somepointer) == nil would be checking the nilness of whatever is inside the interface.
Wrt the current behavior, it also makes sense to have a nil value that remains untyped. But in many other cases we do have that automatic inference/coercion, for instance when we set a pointer to nil (p = nil).
That's quite subtle and that ship has sailed though.
It's not straightforward, but probably something that will be considered at some point, I reckon, when thinking about making union interfaces first class. That will require tracking a not-nil typestate/predicate in the backend, something like that I guess.
basically `if v.(nil) { ... }` creates two branches. In one of them (outside the if block) we know v is not nil, and it can therefore be assigned to non-nillable variables, so to speak...
In light of that fact, it would cause the interface rules to grow a unique wart that doesn't accomplish anything if interfaces tried to ban putting "nil" pointers into them. The correct answer is to not to create invalid values in the first place [1] and basically "don't do that", but that's not a "don't do that because it ought to do what you think and it just doesn't for some reason", it's a "don't do that because what you think should happen is in fact wrong and you need to learn to think the right thing".
Interfaces can not decide to not box nil values, because interfaces are not supposed to "know" what is and is not a legal value that implements them. It is the responsibility of the code that puts a value into the interface to ensure that the value correctly implements the interface. Note how you could not have io.Reader label itself as "not containing a nil" in my example above, because io.Reader has no way to "know" what my Repeater is. The job of an io.Reader value is to Read([]byte) (int, error), and if it can't do that, it is not io.Reader's "fault". It is the fault of the code that made a promise that some value fits into the io.Reader interface when it doesn't.
In Go, nil is not the same thing as invalid [2] and until you stop forcing that idea into the language from other previous languages you've used you're going to not just have a bad time here, but elsewhere as well, e.g., in the behavior of the various nil values for slice and map and such.
One can more justifiably make the complaint that there is often no easy way to make a clearly-invalid value in Go the way a sum type can clearly declare an "Invalid/None/Empty/NULL", or even declare multiple such values in a single type if the semantics call for it, but that's a separate issue and doesn't make "nil" be the invalid value in current Go. Go does not have a dedicated "invalid" value, nor does it have a value of a given type that methods can not be called on.
(You can also ask for Go to have more features that make it harder to stick invalid values into an interface, but if you try to follow that to the point where it is literally impossible, you end up in dependently-typed languages, which currently have no practical implementations. Nothing can prevent you, in any current popular language, from labelling a bit of code as implementing an interface/trait/set of methods and simply being wrong about that fact. So it's all a question of where the tradeoffs are in the end, since "totally accurately correct interfaces" are not currently known to even be possible.)
[1]: https://jerf.org/iri/post/2957/
[2]: https://jerf.org/iri/post/2023/value_validity/
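A minimal sketch of the boxing behavior described above (this Repeater is a stand-in, not the original example):

```go
package main

import (
	"fmt"
	"io"
)

type Repeater struct{}

func (r *Repeater) Read(p []byte) (int, error) { return 0, io.EOF }

func main() {
	var rp *Repeater      // nil pointer
	var rd io.Reader = rp // boxed as (type=*Repeater, value=nil)

	fmt.Println(rp == nil) // true
	fmt.Println(rd == nil) // false: the interface carries a type, so it is not the untyped nil
}
```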
What's frustrating is that 99.99% of written go code doesn't work this way and so people _do_ shoot themselves in the foot all the time, and so at some point you have to concede that what we have might be logical but it isn't intuitive. And that kinda sucks for a language that prides itself on simplicity.
I also get that there's no easy way to address this. The best I can imagine is a way to declare that a method Y on type x can't take nil so (*x)(nil) shouldn't be considered as satisfying that method on an interface.. and thus not boxed automatically into that interface type. But yeah I get that's gonna get messy. If I could think of a good solution I'd make a proposal.
If you understand that there isn't really a fix and just wish there was one anyhow, while I still disagree in some details it's in the range I wouldn't fuss about. I understand that sort of wishing perfectly; don't think there's ever been a language I've used for a long time that I've had similar sorts of "I just wish it could work this way even though I understand why it can't." Maybe someday we'll be "blessed" with some sort of LLM-based language that can do things like that... for better or for worse.
You are not wrong that it is a sharp edge. Completely removing nils from interfaces is not possible because: 1. not backward compatible
However I would nuance it a little. Having an empty interface, i.e. an untyped nil, is useful. Having typed nils in interfaces is arguable, because every value type that has methods can also produce a pointer; that means a potential nil deref if any such pointer is passed to an interface variable instead of the value itself.
Being able to keep nil from some interfaces would be useful.
You're not wrong. In general there is not much value in having working methods on a typed nil pointer.
If we think in terms of bottom wrt type theory, yes it is supposed to implement every type. But that would be closer to untyped nil and that's not how Go's type system works either. It is close though. We just don't have a language concept for a nillable int, because variables are auto-initialized to 0, and because it would be difficult to encode such information purely virtually. But that could be possible in theory, without mechanical sympathy. I digress. The takeaway is that I don't think a linter can do the trick easily, but there have been good attempts. And it is worth pondering, you're right.
I'm full up on tasks, though; even if I were writing a linter for Go that would not currently be my top goal, though it's definitely quite interesting overall.
But any(nil) == nil returns true like you'd expect.
The reason that any((*int)(nil)) == nil is false is the same reason that any(uint(2)) == 2 is false: interfaces compare values and types.
any(uint(2)) == int(2) should return false indeed however.
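A quick sketch of the comparisons being discussed:

```go
package main

import "fmt"

func main() {
	fmt.Println(any(nil) == nil)         // true: both sides are the untyped nil interface
	fmt.Println(any((*int)(nil)) == nil) // false: the left side holds (type=*int, value=nil)
	fmt.Println(any(uint(2)) == 2)       // false: the constant 2 defaults to int, and uint != int
	fmt.Println(any(uint(2)) == uint(2)) // true: same dynamic type and value
}
```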
Importantly, untyped constants don't exist at runtime, and non-primitive types like interfaces aren't constants, so any(uint(2)) == 2 can't behave the way you want without some pretty significant changes to the language's semantics. Either untyped constants would have to get a runtime representation--and equality comparisons would have to introduce some heavyweight reflection--or else interfaces would have to be hoisted into the constant part of the language--which is quite tricky to get right--and then you just end up in a situation where any(uint(2)) == 2 works but x == 2 doesn't when x turns out to be any(uint(2)) at runtime.
That means following the type pointer of LHS, switching on its underlying type (with 15 valid possibilities [1]) or similar, and then casting either RHS to LHS's type, or LHS to the untyped representation, and finally doing the equality check. Something like this (modulo choice of representation and possible optimizations):
[1]: Untyped integer constants can be compared with any of uint8..uint64, int8..int64, int, uint, uintptr, float32, float64, complex64, or complex128

Loudest arguments against returning concrete types were on the terraform core team and the excuse was it makes testing easier. I disagree.
That’s why Go added abstractions later like fs.FS and fs.File.
Whereas consider its counterpart net.Conn. net.Conn is one of the most successful interfaces in the Go standard library. It's the foundation of the net, net/http, tls, and net/rpc packages, and has been stable since Go 1.0. It didn't need an fs.FS-style replacement.

If you will always only ever have one implementation in absolute permanence and no mocking/fake/alternative implementation is ever required in eternity, return a concrete type. Otherwise, consider whether returning an interface makes more sense.
The advice of returning concrete types is paired with defining interfaces when you need them on the consumer side.
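Roughly what that pairing looks like in one file (names are illustrative):

```go
package main

import "fmt"

// Producer side: return the concrete type.
type Client struct{ key string }

func (c *Client) Token() (string, error) { return "tok-" + c.key, nil }

// Consumer side: declare only the methods this code actually needs.
type tokenSource interface {
	Token() (string, error)
}

func authorize(ts tokenSource) (string, error) {
	t, err := ts.Token()
	if err != nil {
		return "", err
	}
	return "Bearer " + t, nil
}

func main() {
	header, _ := authorize(&Client{key: "abc"}) // *Client satisfies tokenSource structurally
	fmt.Println(header)
}
```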
It's returning interfaces that prevents good evolution, since the standard library will not add methods to interfaces, it can only document things like: all current standard library implementations additionally satisfy XXX interfaces.
Due to the lack of native support for defaults for optional methods, many interfaces in Go use hacks for optional methods added by evolution.
The flag.Value interface has an `IsBoolFlag()` optional method that is not part of the interface signature.
The other way for evolution is just add sub-interfaces. Like `io.WriterTo` and `io.ReaderFrom` which are effectively just extensions of `io.Writer` and `io.Reader` with `WriteTo` and `ReadFrom` methods - which are checked for in consumers like `io.Copy`.
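A sketch of that check-for-the-richer-interface pattern, similar in spirit to what io.Copy does internally:

```go
package main

import (
	"bytes"
	"io"
	"os"
	"strings"
)

// copyAll probes src for the optional io.WriterTo extension and uses it when
// available, falling back to a plain io.Copy otherwise.
func copyAll(dst io.Writer, src io.Reader) (int64, error) {
	if wt, ok := src.(io.WriterTo); ok {
		return wt.WriteTo(dst)
	}
	return io.Copy(dst, src)
}

func main() {
	var buf bytes.Buffer
	copyAll(&buf, strings.NewReader("hello")) // strings.Reader implements io.WriterTo
	io.Copy(os.Stdout, &buf)
}
```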
Anyways, my point was specifically about generic interfaces and alternative implementations, so it appears you agree.
Go's standard library interfaces (like net.Conn) earned their place.
Premature interfaces calcify mistakes and that's what the guideline pushes back on.
That’s exactly the pattern I use for most Go development
I assume this is because one is an array of struct pointers and the other is an array of fat pointers, since Go has reified interfaces (unlike higher-level languages).
I find that people try to use interfaces like they’re using an OO language. Go is not OO.
This is fine for a lot of general purpose code that exits when running into problems. But when errors are an expected part of a long lived process, like an API, it’s painful to build logic around and conditionally handle them.
The ergonomics of errors.Is and As are pretty bad and there doesn’t seem to be a clear indication as when to expect a sentinel, concrete, or pointer to a concrete error.
All that to say, I think Go’s errors really illustrate the benefit of “return values, not interfaces”. Though for errors specifically, I’m not sure you could improve them without some pretty bad tradeoffs around flexibility.
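For concreteness, a sketch of the sentinel and concrete-pointer shapes and how errors.Is/As are used against them (names are made up):

```go
package main

import (
	"errors"
	"fmt"
)

var ErrNotFound = errors.New("not found") // sentinel error

type QueryError struct{ Query string } // concrete error type, used via a pointer

func (e *QueryError) Error() string { return "bad query: " + e.Query }

func classify(err error) string {
	if errors.Is(err, ErrNotFound) { // matches the sentinel anywhere in the wrap chain
		return "missing"
	}
	var qe *QueryError
	if errors.As(err, &qe) { // unwraps to the concrete pointer type
		return "query problem: " + qe.Query
	}
	return "unknown"
}

func main() {
	fmt.Println(classify(fmt.Errorf("lookup: %w", ErrNotFound)))
	fmt.Println(classify(&QueryError{Query: "SELECT *"}))
}
```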
My post https://news.ycombinator.com/item?id=44982491 got a lot of hate from people who defend Go by saying "so just don't do that!", and people trying to explain my own blog post to me.
Which is really handy when shit's on fire and you need to find the error yesterday. You can just follow what happens instead of trying to figure out the cool tricks the original programmer put in with their super-expressive language.
Yes, the bug is on line 42, but it does two dozen things on a single line...
I think people often get burnt by bad abstractions in expressive languages, but it's not a problem of the language, but the author's unfamiliarity with the tools at their disposal.
If someone starts being clever with abstractions before understanding the fundamentals, it can lead to badly designed abstractions.
So I guess if there are fewer things to master, you can start designing good abstractions sooner.
So, in my experience, if we invest time to truly understand the tools at our disposal, expressive languages tend to be a great boon to comprehension and maintenance.
But yes, there's definitely been times early in my career where I abstracted before I understood, or had to deal with other bad abstractions
But, on a serious note, I agree with you. Go lacks a lot of power, especially in its type system, that causes a ton of problems (and downtime) that in other languages is trivial to prevent statically.
But it is hardly ever the weak type system that is at fault, just good use of a stronger type system could have prevented the issue.
Once you start to make "invalid states unrepresentable" and enforce those states at the edges of your type system, suddenly a lot of bizarre errors don't happen anymore.
There is, perhaps, some segment of the developer community who believe that they are infallible and don't need to write tests, but then have the type system exclaim their preconceived notions are wrong, and then come to love the type system for steering them in a better direction, while still remaining oblivious to all the things the incomplete type system is unable to statically assert. But that's a rather bizarre place to be.
You still need tests for functionality (this function does what it should) but the type system removes many error cases automatically.
But it doesn't change the tests you need to write, and those tests are going to incidentally cover anything the type system is also going to catch, so the type system isn't going to somehow make your software more reliable.
A much more expressive type system can get you there, but you won't find that in any language anyone actually uses on a normal basis.
If you have to share a codebase with a large group of people with varying skill levels, limiting their ability to screw up can definitely be a feature, which a language can have or lack.
As always, it comes with tradeoffs. Would you rather have the ability to use good, expressive abstractions or remove the group’s ability to write bad ones? It probably depends on your situation and goals.
I've tried my best to make indecipherable go code and failed. Do you have any examples?
A mate of mine who did Comp Sci back in uni, when First Years were taught Turbo Pascal, showed me some when I was still doing stuff in ZX Spectrum BASIC and Z80 assembler in high school. It was immediately clear what was going on, even if the syntax was a bit unfamiliar.
By contrast I've had to sit and pick apart things with strings and strings of ternary operators in the same expression, as part of case structures that relied on fallthrough, because someone wanted to show how clever they were.
My Pascal-using mate called stuff like that "Yngwie Malmsteen Programming". It's a phrase that's stuck with me, over 30 years later.
Don't do that "WEEDLYWEEDLYWEEDLY" shit. You're just showing off.
On your own time.
When you're writing code for work, stuff that other people have to eventually read and understand, you be as boring as possible. Skip all the tricks and make code readable, not cute. Someone might have to understand and fix it at 3 in the morning while everything is on fire.
Anyway his argument was "but the code should be obvious! You shouldn't need comments to explain what the code does!"
Yes Robert, but you need comments to explain what the code expects to do stuff to, and why you want that.
Turns out that removing the write access of the "Development Manager", as he styled himself, to the Subversion repository causes ripples in the fabric of reality right up to the C suite, but I could back my decision up with solid evidence that he was causing more problems than he was solving.
Go is simple just like assembly is simple.
> The Map type is specialized. Most code should use a plain Go map instead, with separate locking or coordination, for better type safety and to make it easier to maintain other invariants along with the map content.
The documentation basically says that it's optimized for some cases that wouldn't affect the complaint above.
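The "plain Go map with separate locking" that the documentation recommends looks roughly like this:

```go
package main

import "sync"

// counter guards a plain typed map with its own mutex, as the sync.Map docs
// suggest for most code.
type counter struct {
	mu sync.Mutex
	m  map[string]int
}

func (c *counter) Inc(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.m == nil {
		c.m = make(map[string]int) // lazily initialize so the zero value is usable
	}
	c.m[key]++
}

func (c *counter) Get(key string) int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.m[key]
}

func main() {
	var c counter
	c.Inc("hits")
	println(c.Get("hits")) // 1
}
```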
What I wanted to point towards with my earlier comment is that sync.Map doesn't use resource based mutexes, it uses one mutex for everything which will always be the slowest case.
There's no real borrowing concept in Go, as there would be in Rust for that case, and if you argue that simplicity should be preferred then the normal map[] should be threadsafe by default, which, in turn, likely will require compile-time expansion because of generics.
The core value of Go's "we want simplicity" approach always feels like it's a facade, because there are so many exceptions and footguns along the way. These footguns almost always were conscious decisions where they could have chosen to break legacy behavior for better simplicity but decided that this is not what they want, which is weird, as it doesn't fit the initial language design promise.
Simplicity is hard. You may see it as dumb, others see it as a priceless attribute of the language.
An interface value is effectively two pointers:
P1: The type and its method vtable
P2: The value
Once I understood that, I could intuit how a nil Foo was not a nil Bar, and not an untyped nil either.
willem-dafoe-head-tap.gif
go install github.com/butuzov/ireturn/cmd/ireturn@latest
ireturn ./...
sync.WaitGroup(n) panics if Add(x) is called after a Done(-1) and n isn't now zero. Unsure if WaitGroups and easy belong in the same sentence. Maybe they do, but I'd rather reimplement Java's CountDownLatch & CyclicBarrier APIs in Go instead. #£@&+!

Genuinely asking, I'm relatively new to Golang and would love to have a better sense of what parts of the ecosystem are worth learning about.
That said, 2.5 years later there have been many improvements to the stdlib (like WaitGroup.Go) such that I no longer feel the need for it going forward.
My experience developing in it always gave me the impression that the designers of the language looked at C and thought "all this is missing is garbage collection and then we'll have the perfect language".
I feel like a large amount of the feeling of productivity developers get from writing Go code originates from their sheer LOC output due to having to reproduce what other languages can do in just a few lines thanks to proper language & standard library features.
You have to put thought into such things as:
- Did I add explicit checks for all the errors my function calls might return?
- Are all of my resources (e.g. file handles) cleaned up properly in all scenarios? Or did I forget a "defer file.Close()"? (A language like C++ solved this problem with RAII in the 1980s)
- Does my Go channel spaghetti properly implement a worker pool system with the right semaphores and error handling?
You can easily check this with a linter.
> Are all of my resources (e.g. file handles) cleaned up properly in all scenarios? Or did I forget a "defer file.Close()"? (A language like C++ solved this problem with RAII in the 1980s)
You can forget to use `with` in Python, I guess that's also C now too eh?
> Does my Go channel spaghetti properly implement a worker pool system with the right semaphores and error handling?
Then stop writing spaghetti and use a higher level abstraction like `x/sync/errgroup.Group`.
You can check anything with a linter, but it's better when the language disallows you from making the mistake in the first place.
>You can forget to use `with` in Python, I guess that's also C now too eh?
When using `with` in Python you don't have to think about what exactly needs to be cleaned up, and it'll happen automatically when there is any kind of error. Consider `http.Get` in Go:
resp, err := http.Get(url)
if err == nil {
    resp.Body.Close() // easy to forget, and only valid to call when err is nil
}
return err
Here you need to specifically remember to call `resp.Body.Close` and in which case to call it. Needlessly complicated.
>Then stop writing spaghetti and use a higher level abstraction like `x/sync/errgroup.Group`.
Why is this not part of the standard library? And why does it not implement basic functionality like collecting results?
You don't need to check if err was nil before calling resp.Body.Close()
https://pkg.go.dev/net/http#Get
> When err is nil, resp always contains a non-nil resp.Body. Caller should close resp.Body when done reading from it.
https://pkg.go.dev/net/http#Response
> The http Client and Transport guarantee that Body is always non-nil, even on responses without a body or responses with a zero-length body. It is the caller's responsibility to close Body.
Calling http.Get() returns an object that symbolises the response. The response body itself might be multiple terabytes, so http.Get() shouldn't read it for you, but give you a Reader of some sort.
The question then is, when does the Reader get closed? The answer should be "when the caller is done with it". This can't be automatically handled when the resp object goes out of scope, as it would preclude the caller e.g. passing the response to another goroutine for handling, or putting it in an array, or similar.
Go tooling is more than happy to tell you that there's an io.ReadCloser in one of the structs returned to you, and it can see that you didn't Close() it, store it, or pass it to somewhere else, before the struct it was in went out of scope.
I think the end result is code which is quite easy to understand and maintain, because it is quite plain stuff with a clear control flow at the end of the day. Go code is the most pleasant code to debug of all the languages I've worked with, and there is not a close second.
Given that I spend much more time in the maintenance phase, it's a trade-off I'm quite happy to make.
(This is of course all my experience; very IMO)
It's premature if I don't know the answer to that question with my current information, which is a common scenario for me when I'm initially writing a new set of use cases.
If I get a 3rd copy of a thing, then it's likely going to become an abstraction (and I'll probably have a better understanding of the thing at that time to do the abstraction). If I don't get a 3rd copy of that thing, then it's probably fine for the thing to be copied in 2 places, regardless of what the answer to my question is.
After doing a bit of frontend JS I was quickly dissuaded of that notion, all I was doing was writing really long boilerplate.
This was in the Java 6 days, so before a lot of nice features were added, for example a simple callback required the creation of a class that implements an interface with the method (so 3 unique names and a bunch of boilerplate to type out, you could get away with 2 names if you used an anonymous class).
C is so limited that you would try to avoid mutation and even complex datastructures.
Go is "powerful" enough to let you shoot yourself much harder.
Go with `const` and NonNull<ptr> (call it a reference if you need) would be a much nicer language
> Although we entertained occasional thoughts about implementing one of the major languages of the time like Fortran, PL/I, or Algol 68, such a project seemed hopelessly large for our resources: much simpler and smaller tools were called for. All these languages influenced our work, but it was more fun to do things on our own.
From https://www.nokia.com/bell-labs/about/dennis-m-ritchie/chist...
Go grew up from the failed design with Alef in Plan 9, which got a second chance with Limbo on Inferno.
https://en.wikipedia.org/wiki/Alef_(programming_language)
> Rob Pike later explained Alef's demise by pointing to its lack of automatic memory management, despite Pike's and other people's urging Winterbottom to add garbage collection to the language;
https://doc.cat-v.org/inferno/4th_edition/limbo_language/lim...
You will notice some of the similarities between Limbo and Go, with a little sprinkle of Oberon-2 method syntax, and SYSTEM replaced by unsafe.
https://ssw.jku.at/Research/Papers/Oberon2.pdf
An alternative is to introduce something like annotations, but I'm sure there will be resistance as it makes the language lean closer to e.g. Java.
But my take on that is that if you want stricter typing like that, you should actually go to Java or C# or whatever.
Support for the types of metaprogramming/metadata that annotations are used for is a useful attribute of languages in general
That and one or two other examples in the article smelled vaguely of PHP to me: features piled up in response to immediate needs instead of coherent design. For a language that famously refused to add generics for years (then did them badly, IMHO), it seems off-brand.
> Go 1.25 introduced a waitgroup.Go function that lets you add Go routines to a waitgroup more easily. It takes the place of using the go keyword, [...]
99% of the time, you don't want to use sync.WaitGroup, but rather errgroup.Group. This is basically sync.WaitGroup with error handling. It also has optional context/cancellation support. See https://pkg.go.dev/golang.org/x/sync/errgroup
I know it's not part of the standard library, but it's part of the http://golang.org/x/ packages. TBH, golang.org/x/ is stuff that should be in the standard library but isn't, for some reason.
I discovered it after I had already written my own utility to do exactly the same thing, and the code was almost line for line the same, which was pretty funny. But it was a great opportunity to delete some code from the repo without having to refactor anything!
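A sketch of what that utility tends to look like with errgroup (URLs and names are illustrative):

```go
package main

import (
	"context"
	"log"
	"net/http"

	"golang.org/x/sync/errgroup"
)

// fetchAll runs one goroutine per URL; the first error cancels the shared
// context, and Wait returns that error after all goroutines finish.
func fetchAll(ctx context.Context, urls []string) error {
	g, ctx := errgroup.WithContext(ctx)
	for _, url := range urls {
		url := url // capture for pre-1.22 Go; harmless on newer versions
		g.Go(func() error {
			req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
			if err != nil {
				return err
			}
			resp, err := http.DefaultClient.Do(req)
			if err != nil {
				return err
			}
			return resp.Body.Close()
		})
	}
	return g.Wait()
}

func main() {
	if err := fetchAll(context.Background(), []string{"https://example.com"}); err != nil {
		log.Fatal(err)
	}
}
```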
One of the core strengths of Go is that it fits the zen of Python's " There should be one-- and preferably only one --obvious way to do it" and it does this very nicely.
think of it as testing/staging before being merged into stable stdlib
How does it cancel in-progress goroutines when the provided context is cancelled?
Still, this is nicer than hand-rolling a WG every time.
With standard waitgroups I always model my state as a struct, with something like a nested *data struct and an err property, which is then pushed through the channel. But this way, my error handling is after the read instead of right at the Wait() call.
The one thing I wish Go had more than anything is read-only slices (like C#).
The one thing I wish more other languages had that Go has is structural typing (anything with a Foo() method can be used as an interface{ Foo() }).
ReadOnlySpan<T> in C# is great! In my opinion, Go essentially designed in “span” from the start.
Interesting approach regarding using strings as containers for raw bytes, but when you create one over a []byte I believe it makes a copy almost always (always?) so you can’t get a zero-cost read-only view of the data to pass to other functions.
One can use unsafe for a zero-copy conversion, but now you are breaking the semantics: a string becomes mutable, because its underlying bytes are mutable.
Or! One can often handle strings and bytes interchangeably with generics: https://github.com/clipperhouse/stringish
One way that you will find it is that they used to be called open arrays in some of them.
On the other hand, now that we have iterators in Go, you can create a wrapper for []byte that only allows reading, yet is iterable.
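A sketch of such a wrapper using the Go 1.23 iterator types (the type and method names are made up):

```go
package main

import (
	"fmt"
	"iter"
)

// Bytes is a read-only view over a []byte: nothing here exposes the
// underlying slice or allows writing through it.
type Bytes struct{ b []byte }

func View(b []byte) Bytes { return Bytes{b} }

func (r Bytes) Len() int      { return len(r.b) }
func (r Bytes) At(i int) byte { return r.b[i] }

// All returns an iterator usable with range-over-func.
func (r Bytes) All() iter.Seq2[int, byte] {
	return func(yield func(int, byte) bool) {
		for i, v := range r.b {
			if !yield(i, v) {
				return
			}
		}
	}
}

func main() {
	for i, v := range View([]byte("abc")).All() {
		fmt.Println(i, v)
	}
}
```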
But then we're abstracting away, which is a no-go in Go and also creates problems later on when you get custom types with custom logic.
My guess is that it is due to many developers bringing reference semantics with them from other languages to Go. It leads to thinking about data in terms of pointers instead of values.
> Runes correspond to code points in Go, which are between 1 and 4 bytes long.
That's the dumbest thing I've read this month. Why did they use the wrong word, sowing confusion¹, when every other programming language and the Unicode standard use the correct expression "code point"?
¹ https://codepoints.net/runic already exists
Actually no, these are Unicode scalars, not code points; they exclude the surrogate category.
I agree that rune is a very poor name for it. It both mistakes what runes actually are and clashes with the runic block. But C# has adopted the Rune name for some reason.
Rust simply calls these char, and OCaml uchar (unicode char), which are much better choices.
Your use of the fallacy falls short of the reasoning standard expected here on HN. I did not downvote you, because I'd rather engage with words and effect change, but it does not surprise me that someone else did.
A grapheme can be multiple codepoints, with modifiers, joiners, etc.
This is true in all languages, it’s a Unicode thing, not a Go thing. Shameless plug, here is a grapheme tokenizer for Go: https://github.com/clipperhouse/uax29/tree/master/graphemes
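A small illustration of the byte / rune (code point) / grapheme distinction:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

func main() {
	// "é" as 'e' + combining acute, then the 🇺🇦 flag as two regional indicators:
	// 2 graphemes, 4 runes (code points), 11 bytes.
	s := "e\u0301\U0001F1FA\U0001F1E6"

	fmt.Println(len(s))                    // 11 bytes
	fmt.Println(utf8.RuneCountInString(s)) // 4 runes
	for i, r := range s {
		fmt.Printf("byte offset %d: %U\n", i, r) // one iteration per rune, not per grapheme
	}
}
```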
I'm saving this one. Not exactly how I'd explain it, but it's simplified enough to share with my current co-workers without being misleading.
I do not use Go but ran into this when I had to write a Go wrapper for some Rust stuff the other day. I was baffled.
I was so surprised by the design choice to need to put recover in deferred function calls. It's crazy to smush together the error handling and normal execution code.
Assuming recover has to exist, I think forcing it to be in a deferred function is genius because it composes so well with how defers work in go. It's guaranteed to run "when the function returns" which is exactly the time to catch such truly catastrophic behaviors.
Until go1.23 [0], however, recover() came in handy for fault reports; ex: https://github.com/hashicorp/terraform/blob/325d18262e/inter... (a recent example I stumbled upon).
[0] which introduced debug.SetCrashOutput: https://pkg.go.dev/runtime/debug#SetCrashOutput
func Foo() {
    try {
        maybePanic()
    } catch (err any) {
        doSomething(err)
    }
}

vs

func Foo() {
    defer func() {
        if err := recover(); err != nil {
            doSomething(err)
        }
    }()
    maybePanic()
}
I also found this very confusing:
> When updating a map inside of a loop there’s no guarantee that the update will be made during that iteration. The only guarantee is that by the time the loop finishes, the map contains your updates.
That's totally wrong, right? It makes it sound magical. There's a light explainer but I think it would be a lot clearer to say that of course the update is made immediately, but the "range" iterator may not see it.
In Python, calling "{}".format(x) is string formatting, while string interpolation would be to use the language feature of "f-strings" such as f"{x}" to do the same thing. As far as I know, go doesn't have string interpolation, it only has convenient string formatting functions via the fmt package.
Basically, if you format strings with a language feature: interpolation. If you use a library to format strings: string formatting.
The difference is format strings are a string with indicators that say where to insert values, usually passed as additional arguments, which follow after the string. String interpolation has the arguments inside the string, which says how to pull the values out of the surrounding context.
Interpolation is where the value is placed directly in the string rather than appended as parameters.
Eg “I am $age years old”.
This does result in the side effect that interpolation is typically a language feature rather than a library feature. But there's nothing preventing someone from writing an interpolation library, albeit you'd need a language with decent reflection or one that's dynamic from the outset.
It's all just spelling. Your compiler just turns the interpolated string into a formatting call like fmt.Sprintf anyhow. It's not a huge transform. I think people get bizarrely hung up on the tiny details of this between languages... but then, I think that extensive use of string interpolation is generally a code smell at best anyhow, so I'm probably off the beaten path in more than one way here.
To write? Maybe so. Now try to modify it. Enjoy matching quotation marks and juggling commas. It's awful, which is why everybody uses fmt.Sprintf() instead.
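For comparison, the two options Go actually gives you today (values are made up):

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	name, count := "ada", 3

	// Concatenation: every non-string value needs an explicit conversion.
	msg1 := "user " + name + " has " + strconv.Itoa(count) + " items"

	// fmt.Sprintf: one format string, with the arguments trailing at the end.
	msg2 := fmt.Sprintf("user %s has %d items", name, count)

	fmt.Println(msg1)
	fmt.Println(msg2)
}
```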
String interpolation is a must have these days, I wish the Go devs would wise up to that fact.
But then, like I said, I consider extensive use of this a code smell at best. If you're doing this often enough that this is an actual problem for you, then you're probably doing something wrong. Most uses of string interpolation I see are wrong somehow, and that wrongness is often a security issue.
Format strings also have a history* of crashing or worse and have historically been a very legitimate security concern by themselves. At least Go didn't inherit that.
*Well, still-present if you still use the bad functions in C or C++.
Which I would consider one of the code smells in question, because logging should be structured anyhow.
I understand there are a lot of code bases in the world that already exist that lack structured logging. That may make it "all things considered the right engineering decision to not fix this architectural flaw today", but it doesn't make it not a code smell or an architectural flaw.
That makes it a lot clearer where the problem is, which is also where the solution is: get the list of keys you want to work on ahead of time and iterate over those while modifying the map.
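A sketch of that approach (transform stands in for whatever update you're doing):

```go
package main

import "fmt"

func transform(v int) int { return v * 10 }

func main() {
	m := map[string]int{"a": 1, "b": 2}

	// Snapshot the keys first, then mutate the map freely in the second loop;
	// new insertions can no longer affect what we iterate over.
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	for _, k := range keys {
		m[k+"-updated"] = transform(m[k])
	}
	fmt.Println(m)
}
```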
The part about changing a map while iterating is wrong though. The reason you may or may not get it is because go iterates in intentionally random order. It's nothing to do with speed. It's to prevent users from depending on the iteration order. It randomly chooses starting bucket and then goes in circular order, as well as randomly generates a perm of 0..7 inside each bucket. So if your edit goes into a bucket or a slot already visited then it won't be there.
Also, python is not an example to the contrary. Modifying python dicts while iterating is a `RuntimeError: dictionary changed size during iteration`
Is index-based string interpolation easier to follow? I would find it easier to understand a string interpolation when the variable name is right there, rather than having to count along the arguments to find the particular one it's referencing
In Python you'll actually get a RuntimeError here, because Python detects that you're modifying the dictionary while iterating over it.
The upshot of this dogmatism is that it's comparatively easy to dev on long-lived Go projects. If I join a new team with an old Go project, there's a very good chance that I'll be able to load it up in my IDE and get all of Go's excellent LSP, debug, linting, testing, etc. tooling going immediately. And when I start reading the code, it's likely not going to look very different from a new Go project I'd start up today.
(BTW Thanks OP for these subtleties, there were a few things I learned about).
No, it's just doing the usual "replace unprintable characters when printing" behavior. The data is unchanged, you have no guarantees of UTF-8 validity at all: https://go.dev/play/p/IpYjcMqtmP0
25 more comments available on Hacker News