Go's Escape Analysis and Why My Function Return Worked
Key topics
The eternal puzzle of Go's escape analysis has developers scratching their heads, particularly when it comes to understanding why certain function returns work as expected. Some argue that escape analysis is just an optimization that shouldn't affect one's understanding of Go code, suggesting that assuming everything allocates on the heap can simplify things. However, others point out that the semantics are clearly defined and similar to C, making it unnecessary to overcomplicate the issue. As the discussion unfolds, it becomes clear that inlining and the behavior of returning values (like slices) are crucial to grasping how Go handles memory allocation.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 43m after posting
- Peak period: Day 7 (56 comments)
- Avg / period: 12.2 comments
Based on 73 loaded comments
Key moments
- 01 Story posted: Dec 5, 2025 at 6:03 AM EST (29 days ago)
- 02 First comment: Dec 5, 2025 at 6:46 AM EST (43m after posting)
- 03 Peak activity: 56 comments in Day 7 (hottest window of the conversation)
- 04 Latest activity: Dec 14, 2025 at 5:18 PM EST (19 days ago)
Assuming that everything allocates on the heap will solve this specific confusion.
My understanding is that C will let you crash quite fast if the stack becomes too large, while Go will dynamically grow the stack as needed. So it's possible to think you're working on the heap while you're actually thrashing the runtime with expensive stack-grow calls. Go certainly tries to be smart about it with various strategies, but a rapid stack-growth rate will have its cost.
The initial stack size seems to be 2 KB, a bit more on a few systems. As far as I understand, you can allocate a large local, e.g. 8 KB, that doesn't escape, and it will grow the stack immediately. (Of course that adds up if you have a chain of calls with smaller allocations.) So recursion is certainly not the only concern.
I am pretty sure escape analysis doesn't affect the initial stack size. Escape analysis does determine where an allocation lives. So if your allocation is smaller than what escape analysis would send to the heap but bigger than the initial stack size, the stack needs to grow.
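For illustration, a minimal sketch (assuming the ~8 KB figure above) of a function whose local never escapes yet is bigger than the initial goroutine stack, so the runtime has to grow the stack to run it:

    package main

    // bigLocal keeps an 8 KiB array in its stack frame: the array never
    // escapes, but the frame is larger than the initial goroutine stack,
    // so the runtime has to grow the stack before the function can run.
    func bigLocal() byte {
        var buf [8 << 10]byte
        for i := range buf {
            buf[i] = byte(i)
        }
        return buf[len(buf)-1]
    }

    func main() {
        done := make(chan byte)
        // A fresh goroutine starts with the small initial stack, so the
        // first call forces a stack growth.
        go func() { done <- bigLocal() }()
        <-done
    }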
What I am certain about is that I have runtime.newstack calls accounting for +20% of my benchmark times (go testing). My code is quite shallow (3-4 calls deep), anything of size should be on the heap (global/preallocated), and the code has zero allocations. I don't use goroutines either. It might be that I'm still making a mistake, or it's overhead from the testing benchmark. But this obviously doesn't seem to be anything super unusual.
Depends what you mean by “large”. As of 1.24, Go will put slices of several KB into the stack frame: a slice goes on the stack if it does not escape (you can see Go request a large stack frame), otherwise it goes on the heap (Go calls runtime.makeslice). If you use indexed slice literals, gc does not even check the size, and you can create megabyte-sized slices on the stack.
And you can create an on-stack slice whose size is only limited by Go's 1 GB limit on individual stack frames: https://godbolt.org/z/rKzo8jre6 https://godbolt.org/z/don99e9cn
Interesting, [...] syntax works here as expected. So escape analysis simply doesn't look at the element list.
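A sketch of the indexed-literal shapes being discussed; whether a given compiler version keeps them in the stack frame is exactly what the godbolt links above check:

    package main

    import "fmt"

    func main() {
        // In a composite literal the highest index fixes the length, so
        // these are megabyte-sized without writing out every element.
        s := []byte{1 << 20: 0}    // slice literal, len == 1<<20 + 1
        a := [...]byte{1 << 20: 0} // array literal via [...], same size
        fmt.Println(len(s), len(a))
    }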
Why? It is the same as in C.
* Of course, in reality, Java also does escape analysis to allocate on the stack, though it's less likely to happen because of the lack of value types.
(Pedants: I'm aware that the official distinction in C is between automatic and non-automatic storage.)
What you wrote is not the same in C and Go, because of GC and escape analysis. But 9rx is also correct that what OP wrote is the same in C and Go.
So OP almost learned about escape analysis, but their example didn't actually do it. So double confusion on their side.
Escape analysis is the reason your `x` is on the heap. Because it escaped. Otherwise it'd be on the stack.[1]
Now if by "semantics of the code" you mean "just pretend everything is on the heap, and you won't need to think about escape analysis", then sure.
Now in terms of what actually happens, your code triggers escape analysis, and OP does not.
[1] Well, another way to say this I guess is that without escape analysis, a language would be forced to never use the stack.
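As a concrete version of the distinction being argued here, a small sketch (the exact diagnostics depend on your compiler version):

    package main

    import "fmt"

    // escapes: the address of x outlives the call, so the compiler moves x
    // to the heap (reported as "moved to heap: x" by -gcflags=-m).
    func escapes() *int {
        x := 42
        return &x
    }

    // staysLocal: x never leaves the function, so it can stay on the stack.
    func staysLocal() int {
        x := 42
        return x * 2
    }

    func main() {
        fmt.Println(*escapes(), staysLocal())
        // Inspect the decisions with: go build -gcflags=-m
    }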
Like any optimization, it makes sense to talk about what "will" happen, even if a language (or a specific compiler) makes no specific promises.
Escape analysis enables an optimization.
I think I understand you to be saying that "escape analysis" is not why returning a pointer to a local works in Go, but it's what allows some variables to be on the stack, despite the ability to return pointers to other "local" variables.
Or similar to how the compiler can allow "a * 6" to never use a mul instruction, but just two shifts and an add.
Which is probably a better way to think about it.
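To spell out the multiplication analogy, a tiny sketch (whether the backend actually uses shifts here is up to the compiler and target):

    package main

    import "fmt"

    // The compiler is free to lower a multiplication by a constant into
    // shifts and adds: a*6 == (a << 2) + (a << 1), i.e. 4a + 2a.
    func timesSix(a int) int {
        return a * 6
    }

    func main() {
        a := 7
        fmt.Println(timesSix(a), (a<<2)+(a<<1)) // both print 42
    }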
> So clearly you don’t need to think about the details of escape analysis to understand what your code does
Right. To circle back to the context: Yeah, OP thought this was due to escape analysis, and that's why it worked. No, it's just a detail about why other code does something else. (but not really, because OP returned the slice by value)
So I suppose it's more correct to say that we were never discussing escape analysis at all. An escape analysis post would be talking about allocation counts and memory fragmentation, not "why does this work?".
Claude (per OP's post) led them astray.
A straightforward reading of the code suggests that it should do what it does.
The confusion here is a property of C, not of Go. It's a property of C that you need to care about the difference between the stack and the heap, it's not a general fact about programming. I don't think Go is doing anything confusing.
But yeah, to your point, returning a slice in a GC language is not some exotic thing.
I commented elsewhere on this post that I rarely have to think about stacks and heaps when writing Go, so maybe this isn’t my issue to care about either.
Sure, Go has escape analysis, but is that really what's happening here?
Isn't this a better example of escape analysis: https://go.dev/play/p/qX4aWnnwQV2
Both arrays in this example seem to be on the heap.
If you want to confirm, you have to use the Go compiler directly. Take the following code:
    func main() {
        logs := readLogsFromPartition(1)
        fmt.Printf("%p\n", &logs[0])
    }
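The definition of readLogsFromPartition was cut off in this snapshot; a hypothetical stand-in is sketched below, just so the snippet compiles and the escape decision can be inspected with go build -gcflags=-m (the compiler prints its escape decisions, e.g. that the make([]string, ...) below escapes to the heap):

    package main

    import "fmt"

    // Hypothetical stand-in for the truncated function: it returns a slice
    // whose backing array must outlive the call, so escape analysis places
    // that array on the heap.
    func readLogsFromPartition(partition int) []string {
        logs := make([]string, 0, 16)
        logs = append(logs, fmt.Sprintf("partition %d: started", partition))
        return logs
    }

    func main() {
        logs := readLogsFromPartition(1)
        fmt.Printf("%p\n", &logs[0])
    }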
Since 1.17 it’s not impossible for escape analysis to come into play for slices but afaik that is only a consideration for slices with a statically known size under 64KiB.
In my experience, the average programmer isn’t even aware of the stack vs heap distinction these days. If you learned to write code in something like Python then coming at Go from “above” this will just work the way you expect.
If you come at Go from “below” then yeah it’s a bit weird.
That said, when it matters it matters a lot. In those times I wish it was more visible in Go code, but I would want it to not get in the way the rest of the time. But I’m ok with the status quo of hunting down my notes on escape analysis every few months and taking a few minutes to get reacquainted.
Side note: I love how you used “from above” and “from below”. It makes me feel angelic as somebody who came from above; even if Java and Ruby hardly seemed like heaven.
The stack is known at compile time, and it can also be thrown away wholesale when the function is done, making allocations on the stack relatively cheap.
This FOSDEM talk by Sümer Cip goes in to it a bit: https://archive.fosdem.org/2025/schedule/event/fosdem-2025-5...
Coming (back then) from C/C++ gamedev, I was puzzled, then I understood the mantra: it's better for the process to die fast than to be pegged by GC and not answering the client.
Then we started looking what made it use GC so much.
I guess it might be similar with Go - in the past I've seen some projects using a "balloon" to circumvent Go's GC heuristic - e.g. if you blow up a dummy balloon that takes half of your memory, the GC might not kick in so much... Something like that... Then again, obviously a bad solution long term.
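The "balloon" being described sounds like the GC-ballast trick; a minimal sketch of the idea follows (the 1 GiB size is made up, and since Go 1.19 the GOMEMLIMIT / debug.SetMemoryLimit knob is usually the more direct tool):

    package main

    import "runtime"

    func main() {
        // A large, never-touched allocation inflates the live heap, so the
        // GOGC-based pacer lets the real workload allocate much more between
        // collection cycles. The size here is illustrative only.
        ballast := make([]byte, 1<<30)

        // ... run the actual workload here ...

        // Keep the ballast reachable for the lifetime of the program.
        runtime.KeepAlive(ballast)
    }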
I also came "from above".
Even in C, the concept of returning a pointer to a stack-allocated variable is explicitly considered undefined behavior (not illegal, but explicitly undefined by the standard, and yes, that means unsafe to use). It would be one thing if the standard disallowed it.
But that's only because the memory location pointed to by the pointer will be unknown (perhaps even immediately). The returning of the variable's value itself worked fine. In fact, one can return a stack-allocated struct just fine.
TLDR: I don't see what the difference is, to a C programmer, between returning a stack-allocated struct in C and a stack-allocated slice in Go. (My guess is that the C programmer thinks a stack-allocated slice in Go is a pointer to a slice, when it isn't; it's a "struct" that wraps a pointer.)
The following Go code also works perfectly well, where it would obviously be UB in C:
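The code block that followed this comment isn't reproduced in the snapshot; the kind of snippet presumably meant looks like this sketch, where a pointer to a local is returned and stays valid:

    package main

    import "fmt"

    type logLine struct{ msg string }

    // In C, returning the address of an automatic variable and then using it
    // is undefined behavior. In Go it's fine: l escapes, so the compiler
    // heap-allocates it and the pointer remains valid after the return.
    func newLine(msg string) *logLine {
        l := logLine{msg: msg}
        return &l
    }

    func main() {
        p := newLine("hello")
        fmt.Println(p.msg)
    }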
As the author shows in their explanation, they thought that the backing array for the slice gets allocated on the stack, but then the slice (which contains a pointer to that stack-allocated array) gets returned. This is a somewhat weird set of assumptions to make (especially given that the actual array is allocated in a different function that we don't get to see, ReadFromFile), but apparently this is how the author thought through the code.
Of course the compiler could inline it or do something else, but semantically it's a copy.
gc could create i on the stack and then copy it to the heap, but if you plug that code into godbolt you can see that it is not that dumb: it creates a heap allocation and then writes the literal directly into it.
Back in Python 2.1 days, there was no guarantee that a locally scoped variable would continue to exist past the end of the method. It was not guaranteed to vanish or go fully out of scope, but you could not rely on it being available afterwards. I remember this changing from 2.3 onwards (because we relied on the behaviour at work) - from that point onwards you could reliably "catch" and reuse a variable after the scope it was declared in had ended, and the runtime would ensure that the "second use" maintained the reference count correctly. GC did not get in the way or concurrently disappear the variable from underneath you anymore.
Then from 2008 onwards the same stability was extended to more complex data types. Again, I remember this from having work code give me headaches by yanking a supposedly out-of-scope variable into thin air, with the only difference being a .1 version gap between the work laptop (where things worked as you'd expect) and the target SoC device (where they didn't).
I am so glad I never took up C. This sounds like a nightmare of a DX to me.
And these days, if you're bothering with C you probably care about these things. Accidentally promoting from the stack to the heap would be annoying.
The thing being returned is a slice (a fat pointer) that has a pointer, a length, and a capacity. In the code linked you'll see the fat pointer being returned from the function by value.
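To make the "fat pointer" point concrete, here is a sketch of roughly what a slice value is and what gets copied when one is returned by value (the sliceHeader type below mirrors the runtime's internal layout and is for illustration only):

    package main

    import (
        "fmt"
        "unsafe"
    )

    // sliceHeader mirrors (roughly) the runtime's internal slice layout:
    // three words, copied whenever a slice is passed or returned by value.
    type sliceHeader struct {
        data unsafe.Pointer
        len  int
        cap  int
    }

    func makeLogs() []string {
        // The backing array escapes (the returned header points into it),
        // so it is heap-allocated; only the three-word header is copied out.
        return []string{"a", "b", "c"}
    }

    func main() {
        logs := makeLogs()
        fmt.Println(len(logs), cap(logs), unsafe.Sizeof(sliceHeader{}))
    }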