Go Proposal: Secret Mode
Key topics
The Go Proposal: Secret Mode has sparked a lively debate, with commenters weighing in on the potential risks and benefits of silently downgrading on unsupported platforms. While some, like fsmv, express concerns that users might think they're safe when they're not, others, like fastest963, argue that there's no realistic alternative to generating keys on unsupported platforms. The discussion also veers into a tangent on Go's cryptography support, with some, like pants2, touting its well-supported standard libraries and major projects like Vault and K8s, while others, like adastra22 and int_19h, question whether it's truly the "best" and how it stacks up against other languages like C, C++, Java, or C#. As the conversation unfolds, it becomes clear that the proposal's goals, such as identifying existing code that would benefit from protection, are being scrutinized, with kbolino clarifying the purpose of the `secret.Enabled()` function.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 2h after posting
- Peak period: 44 comments in the 96-108h window
- Avg / period: 14.1
- Based on 113 loaded comments
Key moments
- Story posted: Dec 9, 2025 at 4:10 PM EST (24 days ago)
- First comment: Dec 9, 2025 at 5:42 PM EST (2h after posting)
- Peak activity: 44 comments in the 96-108h window, the hottest stretch of the conversation
- Latest activity: Dec 16, 2025 at 3:57 AM EST (18 days ago)
Go has the best support for cryptography of any language
1. Well-supported standard libraries generally written by Google
2. Major projects like Vault and K8s that use those implementations and publish new stuff
3. Primary client language for many blockchains, bringing cryptography contributions from the likes of the Ethereum Foundation, Tendermint, Algorand, ZK rollups, etc.
Because there is tremendous support for cryptography in, say, the C/C++ ecosystem, which has traditionally been the default language of cryptographers.
I'm a big fan of the Go standard library + /x/ packages.
The documentation in this article already explains what it actually does:
> This means it is just a way of checking that you are running inside the context which secret.Do provides, but doesn't guarantee that secret.Do is actually offering the protection you desire.

https://go-review.googlesource.com/c/go/+/729920
Windows and macOS?
Go is supposed to be cross-platform. I guess it's cross-platform until it isn't, and will silently change the semantics of security-critical operations (yes, every library builder will definitely remember to check if it's enabled.)
Many advanced Go features start on certain platforms and then expand to others once the kinks are worked out. It's a common pattern and has many benefits. Why port before it's stable?
I look forward to your PR.
Which is exactly why it should fail explicitly on unsupported platforms unless the developer says otherwise. I'm not sure how Go developers make things obvious, but presumably you have an ugly method or configuration option like:
...for when a developer understands the risk and doesn't want to panic.

Most of our agency projects that have .NET in them are brownfield ongoing projects that mostly focus on .NET Framework, so we end up only using modern .NET when given the opportunity to deliver new microservices and the customer happens to be a .NET shop.
The last time this happened, .NET 8 had just been released, and most devs I work with tend to be journeymen; they aren't chasing programming-language blogs or online communities to find out what changes in each release. They do the agency work and go home to friends, family, and hobbies unrelated to programming.
But, this is probably a net improvement over the current situation, and this is still experimental, so, changes can happen before it gets to GA.
This is likely done for platform performance; having a manual version likely hinders the GC in a way that's deemed too impactful. Beyond this, if SysV or others contribute specific patches that aren't brute-forced (such as RISC-V extensions), I would assume the Go maintainers would accept them.
Writing cryptography code in Go is incredibly annoying and cumbersome due to the lack of operator overloading, forcing you to write method calls like `foo.Add(bar.Mul(baz).Mod(modulus)).Mod(modulus)`. These also often end up having to be bignums instead of generic fixed-size field arithmetic types. Rust has incredibly extensive cryptographic libraries, the low-level ones taking advantage of operator overloading so the code can follow the notation in the literature more closely. The elliptic_curve crate in particular is very nice to work with.
Also fips140.WithoutEnforcement (also coming in 1.26): https://github.com/golang/go/issues/74630#issuecomment-32282...
And an in-progress proposal to make these various "bubble" functions have consistent semantics: https://github.com/golang/go/issues/76477
I mean, if you're worried about ensuring data gets zeroed out, you probably also don't want to leak it via side channels, either.
I don't understand. Why do you need it in a garbage-collected language?
My impression was that you are not able to access any register in these languages; that is handled by the compiler instead.
Since "safety" is an encompassing term, it's easy to find more rigorous definitions of the term that Go would flunk; for instance, it relies on explicit synchronization for shared memory variables. People aren't wrong for calling out that other languages have stronger correctness stories, especially regarding concurrency. But they are wrong for extending those claims to "Go isn't memory safe".
https://www.memorysafety.org/docs/memory-safety/
It is true that Go is only memory-unsafe in a specific scenario, but such things aren't possible in truly memory-safe languages like C# or Java. That it only occurs in multithreaded scenarios matters little, especially since concurrency is a huge selling point of the language and baked in.
Java can have data races, but those data races cannot be directly exploited into memory-safety issues the way they can in Go.
https://www.ralfj.de/blog/2025/07/24/memory-safety.html
Blatantly false. From Ralf’s post:
> panic: runtime error: invalid memory address or nil pointer dereference [signal SIGSEGV: segmentation violation code=0x1 addr=0x2a pc=0x468863]
The panic address is 42, a value being mutated, not a nil pointer. You could easily imagine this address pointing to a legal but unintended memory address resulting in a read or write of unintended memory.
> When that happens, we will run the Ptr version of get, which will dereference the Int’s val field as a pointer – and hence the program accesses address 42, and crashes.
If you don’t see an exploit gadget there based on a violation of memory safety I don’t know how to have a productive conversation.
The 42 is an explicit value in the example code. From what I understand the code repeatedly changes the value assigned to an interface variable from an object containing a pointer to an object containing an integer. Since interface variables store the type of the assigned value, but do not update both type and value atomically a different thread can interpret whatever integer you put into it as a valid pointer. Putting a large enough value into the integer should avoid the protected memory page around 0 and allow for some old fashioned memory corruption.
Of course, Go allows more than that, with data races it's possible to reach use after free or other kinds of memory unsafety, but just segfaults don't mark a language memory unsafe.
This stems from the fact that Go uses fat pointers for interfaces, so they can't be atomically assigned. Built-in maps and slices are also not corruption-safe.
In contrast, Java does provide this guarantee. You can mutate structures across threads, and you will NOT get data corruption. It can result in null pointer exceptions, infinite loops, but not in corruption.
Serious systems built in memory-unsafe languages yield continual streams of exploitable vulnerabilities; that remains true even when those systems are maintained by the best-resourced security teams in the world. Functionally no Go projects have this property.
> Serious systems built in memory-unsafe languages yield continual streams of exploitable vulnerabilities
I'm not saying that Go is as unsafe as C. But it definitely is NOT completely safe. I've seen memory corruptions from improper data sync in my own code.
Also, by your definition, e.g. Rust is not memory safe. And "It is true that Rust is only memory unsafe in a specific scenario, but [...]". I hope you agree.
If you are concerned about secrets being zeroed out in almost any language, you need some sort of support for it. Non-GC'd languages are prone to optimize away zeroing out of memory before deallocation, because under normal circumstances a write to a value just before deallocation that is never effectfully read can be dropped without visible consequence to the rest of the program. And as compilers get smarter it can be harder to fool them with code, like, simply reading afterwards with no further visible effect might have been enough to fool 20th century compilers but nowadays I wouldn't count on my compiler being that stupid.
There are also plenty of languages where you may want to use values that are immutable within the context of the language, so there isn't even a way to express "let's zero out this RAM".
Basically, if you don't build this in as a language feature, you have a whole lot of pressures constantly pushing you in the other direction, because why wouldn't you want to avoid the cost of zeroing memory if you can? All kinds of reasons to try to avoid that.
I used to not really see the need for this level of detail on things... then you see real, in-the-wild exploits for even complex factors like CPU branch prediction (and have for a while now), and the need starts to become much clearer.
And any language which can call C code that is resident in the same virtual memory space can have its own restrictions bypassed by said C code. This even applies to more restrictive runtimes like the JVM or Python.
In practice it provides a straightforward path to complying with government crypto certification requirements like FIPS 140 that were written with languages in mind where this is an issue.
It may sound naive, but for packages that hold data like said session secrets, or anything else that should not persist (until the next global GC), why don't you just scramble the value before ending your current action?

And don't get me wrong: yes, that implies extra computation, yada yada. But until a practical solution is built in, I'd just recommend scrambling such variables with new data, so that no matter how long they persist, a dump would just return your "random" scramble and nothing actually relevant.
Imagine three levels of nested calls where each function calls another three methods; we're talking about 28 functions, each with a couple of variables. Of course you can still clean them up, but imagine how much cleaner the code looks if you don't have to.

Just like with garbage collection you can free memory yourself, but someone forgets something and we get either a memory leak or a security issue.
If you had to prompt a user for a password, you’d read it in, use it, then thrash the value.
It's not pretty, but it's a similar concept.

As another response pointed out, it's also possible that said secret data is still in a register, which could persist no matter what we do to the current value.
Thanks for pointing it out!
This is essentially already the case whenever you use encryption, because there are tell-tale signs you can detect (e.g., RSA S-Box). But this will make it even easier and also tip you off to critical sections that are sensitive yet don't involve encryption (e.g., secure strings).
1) You are almost certainly going to be passing that key material to some core crypto functions, and those functions may allocate and copy your data around; while core crypto operations could probably be identified and given special protection in their own right, this still creates a hole for "helper" functions that sit in the middle
2) The compiler can always keep some data in registers, and most lines of Go code can be interrupted at any time, with the registers of the running goroutine copied somewhere temporarily; this is beyond your control and cannot be patched up after the fact by you even once control returns to your goroutine
So, even with your approach, (2) is a pretty serious and fundamental issue, and (1) is a pretty serious but mostly ergonomic issue. The two APIs also illustrate a basic difference in posture: secret.Do wipes everything except what intentionally escapes, while scramble wipes only what you think is important to wipe.
In my case I had a program in which I created an instance of such a secret, "used it", and then scrambled the variable; it never left that scope, so it worked.

Though I didn't think of (2), which is especially problematic.

I'd probably still scramble in the places where it's viable to implement, trying to reduce the surface even if I can't fully remove it.
First one being, it was -very- tricky to use properly in most cases: APIs to the outside world typically would give you a byte[], string, or char[], and then you fall into the problem space you mention. That is, if you used a byte[] or char[] array and the GC relocates the data, it may still be present in the old spot.
(Worth noting: the type itself doesn't do that; whatever you pass in gets copied to a non-GC buffer.)
The second issue is that there's no unified Unix memory-protection system like there is on Windows; the Windows implementation is able to use Crypt32 such that only the current process can read the memory it used for the buffer.
In a nutshell, if you have a function that does secret calculations and returns a struct by value, then even if you sanitize heap allocations and whatnot, there is still a possibility that those calculations will use the space allocated for the return value as a temporary location, and then you'll have secret data leaked in that type's padding.

More realistically, I'm assuming you're aware that optimizing compilers often simplify `memset(p, 0, size); free(p);` to `free(p);`. A compiler frontend can use things like `memset_s` to force the writes, but this will only affect the locals created by the frontend. It's entirely possible that the LLVM backend notices that the IR wants to erase some variable, and then decides to just copy the data to another location on the stack and work with that, say to exploit instruction-level parallelism.
I'm partially talking out of my ass here, I don't actually know if LLVM utilizes this. I'm sure it does for small types, but maybe not with aggregates? Either way, this is something that can break very easily as optimizing compilers improve, similarly to how cryptography library authors have found that their "constant-time" hacks are now optimized to conditional jumps.
Of course, this ignores the overall issue that Rust does not have a runtime. If you enter the secret mode, the stack frames of all nested invoked functions needs to be erased, but no information about the size of that stack is accessible. For all you know, memcpy might save some dangerous data to stack (say, spill the vector registers or something), but since it's implemented in libc and linked dynamically, there is simply no information available on the size of the stack allocation.
This is a long yap, but personally, I've found that trying to harden general-purpose languages simply doesn't work well enough. Hopefully everyone realizes by now that a borrow checker is a must if you want to prevent memory-unsoundness issues in a low-level language; similarly, I believe an entirely novel concept is needed for cryptographic applications. I don't buy that you can just bolt it onto an existing language.
You could definitely zero registers that way, and an allocator that zeros on drop should be easy. The only tricky thing would be zeroing the stack: how do you know how deep to go? I wonder what Go's solution to that is...
And stack's the main problem, yeah. It's kind of the main reason why zeroing registers is not enough. That and inter-procedural optimizations.
In general, though, getting to a fairly predictable place is possible, and the typical case of key material shouldn't involve highly arbitrary stacks; if yours does, you're losing (see the io comment above).
https://docs.rs/zeroize/1.8.1/zeroize/ has been effective for some users, it’s helped black box tests searching for key material no longer find it. There are also some docs there on how to avoid common pitfalls and links to ongoing language level discussions on the remaining and more complex register level issues.
https://pkg.go.dev/runtime#SetFinalizer
AddCleanup might be too heavy; it is cheaper to just set a bit in the header/info zone of memory blocks.
This is just my own speculation, I don't know the internals of Go beyond a few articles I've read over the years. Somewhat more familiar with C#, JS and Rust, and even then I'll sometimes confuse certain features between them.
Linux has memfd_secret (https://man7.org/linux/man-pages/man2/memfd_secret.2.html), which lets you create a memory region that is removed from the kernel's direct map, so it is visible only to the process holding the file descriptor.
That doesn't make sense to me. How can the "offset in an array itself" be "secret" if it's "always" 100? 100 isn't secret.
The example could probably have been better phrased.
The only thing that makes sense to me is a scenario with a lot of addresses. E.g. if there's an array of 256 integers, and those integers themselves aren't secret. Then there's a key composed of 32 of those integers, and the code picks which integers to use for the key by using pointers to them. If an attacker is able to know those 32 pointers, then the attacker can easily know what 32 integers the key is made of, and can thus know the key. Since the secret package doesn't erase pointers, it doesn't protect against this attack. The solution is to use 32 array indexes to choose the 32 integers, not 32 pointers to choose the 32 integers. The array indexes will be erased by the secret package.
Seems like this should raise a compiler error or panic at runtime.
Big thumbs down from me.
1) allocations via memguard bypass gc entirely
2) they are encrypted at all times when not unsealed
3) pages are mprotected to prevent leakage via swap
4) and so on...
Not as ergonomic as OP's proposal, of course.
[1] https://github.com/awnumar/memguard
https://github.com/conradkleinespel/rpassword/issues/100#iss...