A Basic Just-in-Time Compiler (2015)
Key topics
Diving into the nuances of just-in-time (JIT) compilation, commenters dissect the term's meaning and its implications for various programming languages. While some argue that JIT refers to compilation during execution, others point out that this definition could encompass a broad range of systems, including interactive programming environments like Lisp and GHC. A key point of contention revolves around dynamic typing, with some commenters suggesting that languages like JavaScript and Lua are more suited to interpretation or JIT compilation due to their inherent flexibility, making ahead-of-time (AOT) compilation less efficient. As the discussion unfolds, it becomes clear that the distinction between JIT and AOT compilation is not always clear-cut, and the choice between them often depends on the specific language and its characteristics.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 2h after posting. Peak period: 25 comments in Day 1. Average per period: 9. Based on 27 loaded comments.
Key moments
- Story posted: Jan 2, 2026 at 8:18 PM EST (9 days ago)
- First comment: Jan 2, 2026 at 10:15 PM EST (2h after posting)
- Peak activity: 25 comments in Day 1, the hottest window of the conversation
- Latest activity: Jan 11, 2026 at 11:07 AM EST (7h ago)
The canonical definition of JIT is "compilation during execution of a program". Usually, a program is being interpreted first, then switches to compiled code in the middle of execution. This is not what this article does.
What this article does is sometimes called on-the-fly AOT, or just on-the-fly compilation. I'd prefer not to overload the term "JIT".
EDIT: at least GHC seems to be a traditional AOT compiler.
Like the guesses above, I can understand difficulty with AOT compilation in conjunction with certain use cases; however, I cannot think of a language that, by its definition, would be less amenable to AOT compilation.
AOT situations where a lot of context is missing:
• Loosely typed languages. Code can be very general, much more general than how it is actually used in any given situation, but without knowing what the full situation is, all that generality must be compiled.
• Incremental AOT compilation. If modules have been compiled separately, useful cross-module context wasn't available during optimization.
• Code whose structure is very sensitive to data statistics or other runtime information. This is the prime advantage of JIT over AOT, unless the AOT compiler works in conjunction with representative data and a profiler.
Those are all cases where JIT has advantages.
A language where JIT is optimal, is by definition, less amenable to AOT compilation.
I agree what they have isn't JIT compilation, but not for that reason. Tiered execution was never a central part of JIT compilation either. It was a fairly new invention in comparison.
The reason what they describe isn't JIT compilation is IMO fairly boring: it's not compiling the input program in any meaningful way, but simply writing hard-coded logic into executable memory that it already knows the program intended to perform. Sure there's a small degree of freedom based on the particular arithmetic operations being mentioned, but that's... very little. When your compiler already knows the high-level source code logic before it's even read the source code, it's... not a compiler. It's just a dynamic code emitter.
The program reads the logic from stdin and translates it into machine instructions. I can agree that there is not a lot of freedom in what can be done, but I think that just means the source language is not Turing complete. I don't believe that a compiler needs to deal with a Turing complete language to claim the title "JIT compiler".
"Not Turing-complete" is quite the understatement.
A "compiler" is software that translates computer code from one programming language into another language. Not "any software that reads input and produces output."
The input language here is... not even a programming language to begin with. Literally all it can express is linear functions. My fixed-function calculator is more powerful than that! If this is a programming language then I guess everyone who ever typed on a calculator is a programmer too.
These terms are not related to the complexity of the problem. The first compilers could only translate formulas; hence FORTRAN (FORmula TRANslation).
Wiki's first sentence would be called "transpiler", I believe, or as Aho and Ullman call it in the section it almost quotes, a translator. A compiler is supposed to transpile to a lower target, closer to the hardware.
And even then you leave out the gist of the argument: the complexity of the source language is not relevant. None of the definitions require it, just that it's more symbolic than the output, which is true in the case of the OP. A string like "+2 *3 -5" isn't machine-executable but represents a recurrence relation, while the output of OP's code is.
Yes, of course.
> Literally all it can express is linear functions.
And why a language to express linear functions can't be a programming language?
> My fixed-function calculator is more powerful than that!
But it isn't compiling, it is interpreting, isn't it? So your fixed-function calculator is not a compiler, it is an interpreter. It is irrelevant how powerful it is. There are far more powerful interpreters and far less powerful compilers.
The example we see takes computer code in one language and translates it into machine instructions. Talking about 'understatement', you are adding more constraints to this definition, and these are very fuzzy constraints on what counts as a programming language.
Well, this includes what I refer to as "on-the-fly" AOT, like SBCL, CCL, Chez Scheme... Even ECL can be configured to work this way. As I mentioned in another comment, people in those circles do not refer to these as "JIT" at all, instead saying "I wish my implementation was JIT instead of on-the-fly AOT"!
I agree with your point. I think the point of JIT is that a program can be best sped up when there's a large amount of context - of which there's a lot when it's running. A powerful enough JIT compiler can even convert an "interpreter" into a "JIT compiler": See PyPy and Truffle/Graal.
Of course, there are edge cases like embedding libtcc, but I think it's a reasonable definition.
Discounting books, many other well written articles on JIT have been shared on HN over the years [0][1][2]; the one I particularly liked as it introduces the trinity in a concise way: Compiler, Interpreter, JIT https://nickdesaulniers.github.io/blog/2015/05/25/interprete... / https://archive.vn/HaFlQ (2015).
[0] How to JIT - an introduction, https://eli.thegreenplace.net/2013/11/05/how-to-jit-an-intro... (2013).
[1] Bytecode compilers and interpreters, https://bernsteinbear.com/blog/bytecode-interpreters/ (2019).
[2] Let's build a Simple Interp, https://ruslanspivak.com/lsbasi-part1/ (2015).
But I believe sysconf(_SC_PAGESIZE) will always be 4KB, because the “may” is at the user’s discretion, not the system’s. Except on Cosmopolitan where it will always be 64KB, because Windows NT for Alpha (yes, seriously).