Tracking Trust with Rust in the Kernel
Posted 4 months ago · Active 3 months ago
lwn.net · Tech story · High profile
Tone: calm, positive
Debate: 20/100
Key topics
Rust
Kernel Development
Type Systems
Security
The article discusses using Rust's type system to track trust in kernel development, and commenters explore the benefits and parallels of this approach in other areas, such as embedded systems and web development.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 4d after posting
Peak period: 25 comments (84-96h)
Avg / period: 8.5
Comment distribution: 51 data points
Based on 51 loaded comments
Key moments
1. Story posted: Sep 15, 2025 at 6:54 AM EDT (4 months ago)
2. First comment: Sep 18, 2025 at 11:49 PM EDT (4d after posting)
3. Peak activity: 25 comments in the 84-96h window, the hottest window of the conversation
4. Latest activity: Sep 22, 2025 at 5:36 AM EDT (3 months ago)
ID: 45248242 · Type: story · Last synced: 11/20/2025, 2:38:27 PM
Though it reminds me of Alexis's "Parse, don't validate", isn't `syscall :: u8 -> Untrusted<u8>` considered "validate"?
I hope kernel code that consumes it will transform it into an appropriate type as well, `Untrusted<u8> -> T`.
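For concreteness, here is a minimal userspace sketch of that distinction (the `Untrusted<T>`, `SyscallNr`, and `validate_with` names are illustrative, not the kernel patch's actual API): the wrapper itself does no validation, it only forces consumers through an explicit `Untrusted<u8> -> T` step.

```rust
// Sketch only: an opaque wrapper whose sole exit is a conversion that
// produces a different, "parsed" type.
struct Untrusted<T>(T);

impl<T> Untrusted<T> {
    fn new(value: T) -> Self {
        Untrusted(value)
    }

    // "Parse, don't validate": converting out requires naming the rules and
    // the target type, so the result documents what was checked.
    fn validate_with<U, E>(self, f: impl FnOnce(T) -> Result<U, E>) -> Result<U, E> {
        f(self.0)
    }
}

// Hypothetical parsed type: a syscall number known to be in range.
struct SyscallNr(u8);

fn main() {
    // Stand-in for `syscall :: u8 -> Untrusted<u8>`: raw user input.
    let raw = Untrusted::new(200u8);

    let parsed = raw.validate_with(|n| {
        if n < 64 { Ok(SyscallNr(n)) } else { Err("syscall number out of range") }
    });

    match parsed {
        Ok(SyscallNr(n)) => println!("dispatching syscall {n}"),
        Err(e) => println!("rejected: {e}"),
    }
}
```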
Untrusted doesn't validate - it just ensures you don't accidentally access the data until you've acknowledged that what you've ingested could potentially be attacking you.
Writing a web app? All user input is untrusted until you process it. And if Untrusted<String> can't be converted to String accidentally, then it forces the programmer to think about it.
Unfortunately in Java (my everyday language) this isn't feasible. I'd want to be able to join or process Untrusted<String> the same as a normal String. Really it would need to be built into the standard library.
Back to the article: it sounds like this could work really well for the kernel. I hope this kind of idea catches on outside of that.
Dynamically sized objects on the stack have always been difficult. You could argue that C's special-casing of fixed-length strings to allow that is the odd one out.
What makes Rust special is that you need to acknowledge the potential errors when unwrapping the type. With a standard library and culture built on exposing the edge cases at compile time.
That is where the guarantees and the feeling of certainty come from.
If you are iterating over the files in a directory the Rust standard library gives you paths. Not strings.
To convert a path to a regular string type, which must be valid unicode, you need to acknowledge that the path may not be: either unwrapping and panicking, or handling the error.
To do this the standard library gives two options:
https://doc.rust-lang.org/std/path/struct.PathBuf.html#metho...
Or accept that the conversion may be lossy:
https://doc.rust-lang.org/std/path/struct.PathBuf.html#metho...
You get to make a choice, and panicking/erroring is perfectly valid if you only expect valid unicode paths.
As the world and your software change, if you suddenly encounter a non-unicode path then you will immediately know where the error comes from and can fix the issue, instead of trying to pinpoint the root cause of an error that only exposes itself far downstream.
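The two options described are presumably Path::to_str (which returns an Option) and Path::to_string_lossy; a minimal sketch of the choice:

```rust
use std::path::Path;

fn main() {
    let path = Path::new("/var/log/syslog");

    // Option 1: acknowledge that the path may not be valid unicode.
    // `to_str()` returns Option<&str>, so you either expect/unwrap (and
    // accept a panic) or handle the None case explicitly.
    let strict: &str = path.to_str().expect("expected a valid-unicode path");

    // Option 2: accept that the conversion may be lossy.
    // `to_string_lossy()` replaces invalid sequences with U+FFFD.
    let lossy = path.to_string_lossy();

    println!("strict: {strict}, lossy: {lossy}");
}
```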
> With a standard library and culture built on exposing the edge cases at compile time.
So, how exactly is "you need to acknowledge that the path may be invalid" from your example a compile-time property? That's certainly not something you know at compile time, and if so, then you should be treating all files like that, so I see no difference here wrt other languages.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
We can't know if a generic path is valid at compile time. But we can at compile time ensure that we must acknowledge and make a choice for the invalid case.
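To make the compile-time part concrete, here is a small sketch of what "must acknowledge and make a choice" means in practice: Option<&str> simply is not &str, so ignoring the invalid case is a type error rather than a latent bug.

```rust
use std::path::Path;

fn main() {
    let path = Path::new("/tmp/data");

    // This does not compile: Option<&str> is not &str.
    // error[E0308]: mismatched types
    // let name: &str = path.to_str();

    // The only way to get a &str is to decide what None means here:
    let name: &str = match path.to_str() {
        Some(s) => s,
        None => "<non-unicode path>",
    };
    println!("{name}");
}
```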
This now-(in)famous blog post on Go shows what happens when you just pretend that everything works:
https://fasterthanli.me/articles/i-want-off-mr-golangs-wild-...
While a pragmatic choice, in reality it isn't very practical IMO. Most of the time you really don't know what to do with the error, so you end up turning the condition into an exception, because it really is an exceptional case, and/or asserting on it as a broken invariant that you believe is a programming (usage) error. In any case, you don't really know how to handle it well, and I believe this is often the case for kernel design too - they will try to eliminate as many such cases as possible with testing, because they can't afford to panic at runtime; that would actually be detrimental to QoS.
Even in C++ you have something not quite the same but similar with the [[nodiscard]] attribute, but in practice I haven't seen it used much, and when I did it was mostly annoying to deal with. It's like a hot potato thrown across API boundaries that nobody wants to deal with.
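For comparison (not something the comment claims), Rust's rough counterpart to [[nodiscard]] is the #[must_use] attribute, which Result already carries, so a silently dropped error is at least a compiler warning:

```rust
// Result is #[must_use]; the attribute can also be put on your own
// functions and types with a custom message.
#[must_use = "the division may have failed"]
fn checked_div(a: i32, b: i32) -> Result<i32, &'static str> {
    if b == 0 { Err("division by zero") } else { Ok(a / b) }
}

fn main() {
    checked_div(10, 0); // warning: unused return value that must be used

    // Explicitly opting out is at least visible in the code:
    let _ = checked_div(10, 0);

    match checked_div(10, 2) {
        Ok(v) => println!("{v}"),
        Err(e) => eprintln!("{e}"),
    }
}
```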
For instance, Java could do the same, but in practice conversion methods throw runtime exceptions that you won't be warned about and your code will randomly crash if conversions fail, except in some cases when it doesn't. For casting between numeric types the language does have some legacy cruft, but for web applications nothing is stopping you from doing this. Java's explicit exception system is great at forcing developers to deal with potential failures, but in practice Java developers chose not to deal with exceptions so often that exceptions now get hidden.
Rust does this stuff in the standard library and that gives you the advantage that you don't need to explain the concept and convince every developer that this is a good idea. The same way Java has optional nullability annotations that are rarely used in practice because not every developer feels like adding them, unlike languages where nullability is part of the type system.
The concept is not exclusive to Rust, but I also haven't found it very popular outside of Rust developer circles.
Every time I see someone bringing up the idea of adding checked exceptions to a language (usually in these endless exceptions-vs-returned-errors debates), it is met with "it won't work, look at Java", and it feels like a real shame. I'm sure the complications of additional syntax would pay off just as quickly as they do for regular type annotations.
It's more like "comptime safety feeling" => "language with visibility modifiers", but the converse is not necessarily true. Without language support, it's back to C conventions or workarounds again.
In practice, the JVM may still monomorphize it, but it is not guaranteed to, and this would be a good reason to avoid unnecessary uses of generics in a high performance codebase like a kernel, if you chose to write one in Java.
But GP described something they wanted from a type system and basically said a container with `Functor`-like behavior is not possible in Java. It is possible, albeit with a performance drawback, and a bit clunkier to work with compared to Rust, Haskell, or a language with native HKT support.
If you make UntrustedString a subclass of String (trusted) then you lose type safety. So TrustedString has to be the subclass. Easy enough.
But now string literals are "untrusted". So you have to do new TrustedString("this is trusted content") everywhere you need a trusted one, which is a pain.
And you can't concatenate trusted strings: the + operator will return a normal String (untrusted), so you have to cast it. Same with any method you call, like substring.
So you have to live with that, or make overrides for every single string function that fix the types where necessary.
It’s just really non-ergonomic. I think having it built in would likely make it far better.
I don't see anything wrong with `new Validated("literal")` (or functional friendly `Validated.of("literal")`). If you intend to create a `Validated`, then create it via constructor / static factory method that enforces necessary validations to create `Validated`.
Like `Optional<T>` and `Stream<T>`, you could define `Validated::map(Function<? super String,String>)` if you want. With `map()`, you could operate on a `Validated` with anything that accepts a `String`, as usual.
With that said, I don't recommend using `Validated` in your actual Spring project; use (OOP) value objects instead. I used value objects quite a lot on my legacy Spring project. They play nicely with a functional style, cover the "validation" stuff, and avoid primitive obsession. Putting a `String` where a `UserId` is expected results in a loud compiler error.
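The same value-object idea expressed in Rust's newtype style, for comparison (the names are illustrative, not from the article): validation lives in the constructor, and the wrapper is not interchangeable with a plain String.

```rust
// Newtype wrapper: a UserId is a distinct type, and the constructor is the
// single place where validation happens.
struct UserId(String);

impl UserId {
    fn new(raw: &str) -> Result<Self, &'static str> {
        if raw.is_empty() || !raw.chars().all(|c| c.is_ascii_alphanumeric()) {
            return Err("user ids must be non-empty and alphanumeric");
        }
        Ok(UserId(raw.to_string()))
    }
}

fn greet(user: &UserId) {
    println!("hello, {}", user.0);
}

fn main() {
    let id = UserId::new("alice42").expect("valid id");
    greet(&id);

    // Passing a plain String where a UserId is expected is a loud error:
    // greet(&"alice42".to_string()); // error[E0308]: expected `&UserId`, found `&String`
}
```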
A pointer handed in from user space is special in several ways:
- it might get swapped out or migrated, meaning your thread goes to sleep if you touch it.
- it can always change concurrently.
- the pointer is only valid within the current context (coz it's a pointer into the user address space).
- I've actually never thought about this before but also the user could free it concurrently I think?
So there are actually already C APIs in the kernel that force you to be aware of this, at least for strings (stuff where the user passes a pointer to their memory). There's also fancy compiler stuff for marking struct fields as pointing to user data, which IIUC can be used to detect if you're forgetting to use the right conversion APIs.
This isn't the case for values passed in registers though so I think Trusted<u64> would be totally new.
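As a toy, userspace-only illustration of why the copy matters (this is not the kernel's copy_from_user, just the shape of the idea): the handle can only be observed by taking a snapshot, and all validation then runs on that snapshot, which is what closes the TOCTOU window.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread;

// Toy stand-in for a pointer into user memory: the "user" side can keep
// mutating it concurrently while the "kernel" side looks at it.
struct UserPtr {
    slot: Arc<AtomicU64>,
}

impl UserPtr {
    // The only accessor: copy the value out once ("copy it from the user").
    fn copy_in(&self) -> u64 {
        self.slot.load(Ordering::Relaxed)
    }
}

fn main() {
    let shared = Arc::new(AtomicU64::new(16));
    let ptr = UserPtr { slot: Arc::clone(&shared) };

    // "User space" keeps changing the value under us.
    let attacker = thread::spawn(move || {
        for i in 0..1_000 {
            shared.store(i, Ordering::Relaxed);
        }
    });

    // Check and use the *same* snapshot; re-reading through `ptr` between the
    // check and the use would reintroduce the TOCTOU race.
    let len = ptr.copy_in();
    if len <= 64 {
        println!("allocating {len} bytes based on the snapshot");
    } else {
        println!("rejecting length {len}");
    }

    attacker.join().unwrap();
}
```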
ANYWAY, overall: this is cool and good but it's actually one of the lower-impact things from Rust in the kernel IMO. I don't think the bugs that this prevents are actually all that common in practice. TOCTOU would be the biggest one by far but they are rare compared to the daily deluge of incredibly basic UAF bugs that Rust also prevents.
Yup. But if you only send it back to the malloc heap, the kernel doesn't notice; it will happily read data that has been freed.
Unless the data is unmapped, then the kernel is going to get a page fault while trying to read your input. What goes up must come down, so that gets translated to SIGSEGV, and the program stops.
I think I was thinking of the mmap lock. But this was a dumb question; this lock isn't just randomly taken when threads enter syscalls. So yeah, this is indeed another reason why copy_from_user etc. are needed.
And then the audience will nod yes, accept the message of what is possible, and go back to whatever approach they were doing already.
It isn't the typing, but rather the culture.
At least in modern C++. Then again, modern C++ seems to play the same role as modern Java, in that some places use it but most are stuck on the version they picked when they started developing a piece of software decades ago.
Turns out updating software versions in most companies is really hard, and updating humans' attitudes toward specific programming practices is even harder, unless their job is on the line.
And even so, it doesn't matter if IT says no, or the customer doesn't pay for consulting services to upgrade existing projects.
As for incompatible changes, there aren’t any in Rust by design. That limits some of the changes they can make even in an edition but in practice it works ok. Even over a longer time span, I think the Rust community will figure out a way to trim irrelevant cruft away.
I am not so optimistic about what happens when Rust achieves a market share, and accumulates historical baggage, comparable to what C and C++ have today after growing since the 1970s.
It is of course better designed and has better tooling, but that doesn't spare it from market forces and the companies sponsoring the foundation.
See where the Linux Foundation is today for a comparable example.
Both Rust's OwnedFd and Rust's Mutex<T> have C++ analogues which are used, but lacking a borrow checker, C++ can't express the idea "a maximum of one person can have this".
What happens if I keep a mutable reference to the Goose despite unlocking the protective mutex? In C++ the answer is you lose mutual exclusion; in Rust it doesn't compile. C++ people will say "don't do that", but "not doing that" does not scale, whereas "it doesn't compile" scales just fine.
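A small sketch of the compile-time version of "don't do that" (the Goose is just a placeholder type): any reference into the protected data borrows from the MutexGuard, so it cannot outlive the unlock.

```rust
use std::sync::Mutex;

struct Goose {
    honks: u32,
}

fn main() {
    let m = Mutex::new(Goose { honks: 0 });

    let mut guard = m.lock().unwrap();   // take the lock
    let goose = &mut *guard;             // this reference borrows the guard
    goose.honks += 1;

    drop(guard);                         // unlock

    // Uncommenting the next line makes the borrow outlive the unlock, and the
    // program no longer compiles:
    // error[E0505]: cannot move out of `guard` because it is borrowed
    // goose.honks += 1;

    println!("honks: {}", m.lock().unwrap().honks);
}
```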
Static analysis in C++ is akin to Clippy, but without a --fix option and with many more false positives. Also, Clippy is more about stylistic consistency within the ecosystem and is completely optional, not about safety, which isn't optional.
Commercial tools like PVS-Studio and Coverity, or those used in high-integrity computing, also allow for rule customisation.
However, the first step is to use anything at all, a challenge ever since lint was born in 1979.
1 more comment available on Hacker News