Writing Memory-Safe JIT Compilers
Posted 3 months ago · Active 3 months ago · medium.com
Topics: JIT compilers, memory safety, compiler development
The article discusses how GraalVM's JIT compiler achieves memory safety, sparking a discussion on the trade-offs between performance, safety, and complexity in compiler design.
Snapshot generated from the HN discussion
Discussion activity: first comment 1h after posting; peak of 18 comments in the 0-12h window; average of 6.7 comments per period (based on 20 loaded comments).
Key moments
- Story posted: Sep 25, 2025 at 10:10 PM EDT (3 months ago)
- First comment: Sep 25, 2025 at 11:21 PM EDT (1h after posting)
- Peak activity: 18 comments in the first 12 hours
- Latest activity: Sep 30, 2025 at 10:36 AM EDT
ID: 45381813 · Type: story · Last synced: 11/20/2025, 1:32:57 PM
> Using a memory-safe language for these components and removing JIT compilers could work, but would significantly reduce the engine's performance (ranging, depending on the type of workload, from 1.5–10× or more for computationally intensive tasks)
I don't get it: why the dichotomy between "no JIT" and "sandboxed JIT"? Isn't there also the option of producing JITted code with safety guarantees, similar to how the Rust compiler generates code with safety guarantees?
The difference is that your Rust programs aren't usually written by malicious people. The goal there is to prevent accidental memory-safety issues, not intentional ones. In contrast, the JIT compiler is compiling code written by the adversary.
Rust only guarantees safety up to bugs in the analysis, though, which is usually okay for Rust but not for truly adversarial inputs (JavaScript).
The better comparison might be eBPF, where you take the output of one compiler, verify it with a second compiler, then compile it with a third, so there are that many more gates malicious input has to pass through to produce exploitable output, while you still get speed.
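A toy sketch of that verify-before-run pipeline in Python (the bytecode, opcode names, and checks are all invented for illustration; real eBPF verification is far more involved, including range tracking and loop bounding):

```python
# Toy "verify, then execute" pipeline: a separate verifier pass rejects
# programs with out-of-range jumps or memory accesses before the
# execution stage ever sees them. This is NOT real eBPF, just the shape
# of the idea.

MEM_SIZE = 16

def verify(program):
    """Statically check every instruction before any of them run."""
    for pc, (op, arg) in enumerate(program):
        if op == "JMP" and not (0 <= arg < len(program)):
            raise ValueError(f"jump target {arg} out of range at pc={pc}")
        if op in ("LOAD", "STORE") and not (0 <= arg < MEM_SIZE):
            raise ValueError(f"memory access {arg} out of bounds at pc={pc}")
    return program

def run(program):
    """Execute a program that has already passed verify()."""
    mem = [0] * MEM_SIZE
    acc, pc, steps = 0, 0, 0
    while pc < len(program) and steps < 1000:  # step budget: no unbounded loops
        op, arg = program[pc]
        steps += 1
        if op == "PUSH":
            acc = arg
        elif op == "LOAD":
            acc = mem[arg]
        elif op == "STORE":
            mem[arg] = acc
        elif op == "JMP":
            pc = arg
            continue
        pc += 1
    return mem
```

The point of the split is that `run()` can skip bounds checks on the hot path precisely because `verify()` already rejected anything that could go out of range.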
https://www.youtube.com/watch?v=pksRrON5XfU
The optimizations do have to be correct. However, there are some significant factors that make it a lot easier in the Truffle architecture:
1. The optimizations are themselves implemented in a language with memory safety and lots of static analysis tools, so they're just less likely to be buggy for the usual reasons.
2. Most of the complex optimizations that matter for a language like JavaScript or Python are implemented in the interpreter, which is then partially evaluated. In a classical language VM, complex optimizations like PICs (polymorphic inline caches), specialization, speculation, etc. are all hand-coded in C++ by manipulating compiler IR. It's a very unnatural way to work with a program. In the Truffle architecture they're implemented in normal Java as relatively straight-line logic, so there are just far fewer ways to mess up.
3. The intrinsics are also all written in a memory-safe language. Buggy intrinsics are a common source of compiler bugs.
It is nonetheless true that at some points the program is transformed, and those transforms can have bugs. Just like how languages with memory safety and strong standard libraries don't eliminate all bugs, just a lot of them. It's still a big upgrade in correctness, I think.
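To make point 2 concrete, here is a polymorphic inline cache written as ordinary straight-line logic, in Python rather than Java, with invented names; real Truffle nodes look different, but the contrast with hand-manipulating compiler IR is the same:

```python
# A polymorphic inline cache (PIC) expressed as plain interpreter logic:
# the call site remembers the receiver types it has seen and dispatches
# directly on a hit, falling back to a full method lookup on a miss.

class CallSiteCache:
    def __init__(self, max_entries=2):
        self.entries = []          # list of (receiver type, resolved method)
        self.max_entries = max_entries

    def call(self, receiver, name, *args):
        t = type(receiver)
        for cached_type, method in self.entries:   # fast path: cache hit
            if t is cached_type:
                return method(receiver, *args)
        method = getattr(t, name)                  # slow path: full lookup
        if len(self.entries) < self.max_entries:   # specialize for next time
            self.entries.append((t, method))
        return method(receiver, *args)
```

In a Truffle-style system, code like this is what gets partially evaluated into the compiled fast path, instead of someone hand-writing the equivalent IR transformation in C++.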
Essentially it exchanges the bugs of a V8 engine for the bugs of the GraalVM, right?
Which is not without real benefit, of course; it's a lot easier to build and secure one genericized VM runtime toolkit than it is to rebuild everything from scratch for every single language runtime like V8 that might need to exist. It's far better to competently solve the difficult problem once than to try to competently solve it over and over and over.
Basically it's easier to implement optimizations correctly if you're expressing them in regular logic instead of hand-written graph manipulations. The underlying approach is fundamentally different.
You get many of the guarantees of compiled code (strong correctness, reduced mismatch between interpreter vs JIT semantics, etc.), while still being very close to native performance.
In fact, Python is already moving in that direction: the new bytecode-based copy-and-patch JIT in Python 3.13 shows correct results even before heavy performance tuning.
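The copy-and-patch idea itself is simple enough to sketch; the following is an invented illustration (the stencil bytes, opcode name, and hole marker are all made up, and CPython's actual implementation works on real machine-code templates):

```python
# Toy sketch of copy-and-patch: each bytecode op has a precompiled
# "stencil" of machine code containing placeholder holes, and JIT
# compilation is just copying the stencil and patching the holes with
# the actual operands. No real machine code here, only the mechanism.
import struct

HOLE = b"\xef\xbe\xad\xde"  # marker bytes standing in for a 4-byte operand hole

STENCILS = {
    # fake "machine code": opcode bytes, a 4-byte hole, trailing bytes
    "LOAD_CONST": b"\x48\xb8" + HOLE + b"\x50",
}

def emit(op, operand):
    """Copy the stencil for `op` and patch its hole with `operand`."""
    stencil = STENCILS[op]
    hole_at = stencil.index(HOLE)
    return stencil[:hole_at] + struct.pack("<I", operand) + stencil[hole_at + 4:]
```

Because the templates are produced by an ordinary ahead-of-time compiler and only constants get patched in, there is much less hand-written codegen logic that can be wrong, which fits the correctness-first result mentioned above.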
So to me this seems like a very promising road: I wonder how practical this is if the base language is not C/C++ but Rust (or any kind of memory-safe language).
[0] https://arxiv.org/abs/2011.13127