Litex: the First Formal Language Learnable in 1-2 Hours
Posted 3 months ago · Active 3 months ago
Source: github.com · Story · High profile
Sentiment: skeptical/mixed · Debate score: 70/100
Key topics: Formal Languages, Proof Assistants, Programming Languages
The HN community discusses Litex, a new formal language for coding reasoning that claims to be learnable in 1-2 hours, with some users expressing skepticism and others finding it potentially interesting.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: N/A
Peak period: 37 comments (48-54h window)
Avg per period: 12.2
Comment distribution: 73 data points (based on 73 loaded comments)
Key moments
1. Story posted: Sep 25, 2025 at 1:43 AM EDT (3 months ago)
2. First comment: Sep 25, 2025 at 1:43 AM EDT (0s after posting)
3. Peak activity: 37 comments in 48-54h (hottest window of the conversation)
4. Latest activity: Sep 29, 2025 at 4:35 AM EDT (3 months ago)
ID: 45369629 · Type: story · Last synced: 11/20/2025, 3:50:08 PM
Making Litex intuitive to both humans and AI is the mission of Litex. That is how Litex scales formal reasoning: by making it accessible to more people, applicable to more complex problems, and usable by large-scale AI systems.
The comparison between Litex and Lean is on our website (https://litexlang.com). There is also a straightforward tutorial there that you do not want to miss.
Contact me if you are interested! I really hope we can scale formal reasoning in the AI era together!
Well, I propose an alternative proof in Lean 4:
---
One thing I like about the Lean proof, as opposed to the Litex proof, is that it specifies why the steps are correct. If Litex's strategy is "you describe the steps you want to take, and Litex will automatically figure out why they're correct", how are you supposed to do any nontrivial proofs?
I learned about the `linear_combination` tactic from your example. Other than that, I use `have` and `exact`, which (a) are not advanced, and (b) are also used in your example.
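For readers unfamiliar with these tactics, here is a minimal illustration of `have`, `exact`, and `linear_combination` in Lean 4 with Mathlib. This is not the proof from the thread (which isn't reproduced here); the 2x2 system is purely illustrative:

```lean
import Mathlib.Tactic

-- Illustrative system: from x + y = 3 and x - y = 1, conclude x = 2.
example (x y : ℚ) (h₁ : x + y = 3) (h₂ : x - y = 1) : x = 2 := by
  -- `have` introduces an intermediate fact, here justified by
  -- `linear_combination`, which checks that the goal follows from the
  -- stated linear combination of hypotheses (via `ring`):
  have hx : x = 2 := by linear_combination (h₁ + h₂) / 2
  -- `exact` closes the goal with an explicit proof term:
  exact hx
```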
Before that, my first attempt at the Lean proof looked like this:
> We are comparing Lean and Litex under conditions where they don't rely too much on external packages

This proof does have the advantage of not needing to import Mathlib.Tactic. Although again, that's something your proof does.
Haha, what I am saying here is: each statement you write is automatically checked by Litex, using its known specific facts and universal facts. Since any math proof, no matter how complex or simple, can be divided into many small steps, and each small step can be verified in that way, I guess it is fair to say "automatically figures out why they're correct" to a non-mathematical person when introducing Litex.
I took a look at the example and it's very intuitive. I'm trying to guess which heuristics it is using. Does it try to find exact matches of whatever is inside the parentheses?
@GP: Doesn't Lean have a "please fill in the blanks" mode for short proofs?
Yes, there are several depending on what you want to do. The catchall is `simp` (short for 'simplify').
Why do we need more than one "do what I mean" command? (Or more than zero "do what I say" commands?) Because it's impossible to know what you mean. If you try to use a lot of `simp`, you'll notice that you often have to tell it what to do.
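As a small illustration of that "do what I mean" flavor: `simp` closes goals by rewriting with a database of lemmas, but it often needs to be told which extra facts to use (examples illustrative, assuming Mathlib):

```lean
import Mathlib.Tactic

-- `simp` finds the needed rewrite (`List.append_nil`) on its own:
example (l : List ℕ) : (l ++ []).length = l.length := by simp

-- Here it needs the local hypothesis handed to it explicitly:
example (a b : ℕ) (h : a = b) : a + 0 = b := by simp [h]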
Note that what we really want for this problem is a linear algebra package that can do row reduction, not a formal prover. This problem is all arithmetic.
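The system under discussion isn't reproduced here, but the point can be seen with a plain row-reduction routine: no proof search, just arithmetic. A minimal sketch in Python, with an illustrative 2x2 system (not the one from the thread):

```python
def row_reduce_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting.

    Assumes A is square and nonsingular; pure arithmetic, no proving.
    """
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Partial pivoting: bring the largest-magnitude entry up.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate the entries below the pivot.
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    # Back-substitution.
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(row_reduce_solve([[1.0, 1.0], [1.0, -1.0]], [3.0, 1.0]))  # [2.0, 1.0]
```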
One should also take a look at Isabelle/HOL's AFP (Archive of Formal Proofs) here. You can get very far with Metis et al., but it is very inefficient computationally.
Especially when proofs get larger and/or layer on abstractions (proving something nontrivial likely involves building on existing algorithms etc.) the ability to make proofs efficient to verify is important.
We should try to be charitable (but with a healthy amount of skepticism!); it's possible they meant "Even a child [with a good understanding of Litex] could [mechanically] formalize this multivariate equation in Litex in 2 minutes [as opposed to remembering and writing Lean 4 syntax]"
The other claim is doubtful too:
> while it require an experienced expert hours of work in Lean 4.
No, it doesn't. If you have an actual expert, it only takes a few minutes.
And besides, isn't this exactly what an artificial intelligence would solve? Take some complex system and derive something from it. That's exactly what intelligence is about. But LLMs can't deal with the complex but very logical (by definition) and unambiguous system like Lean so we need to dumb it down.
Turns out, LLMs are not actually intelligent! We should stop calling them that. Unfortunately, there are too many folks in our industry following this hyped-up terminology. It's delusional.
Note that I'm not saying LLMs are useless. They are very useful for many applications. But they are not intelligent.
Even low-IQ humans can in principle learn how to use Lean to represent a multivariate system. It might take a while, but in principle their brain is capable of that feat. In contrast, no matter how long I sit down with ChatGPT or Gemini or whatnot, it won't be able to. Because they are not intelligent.
It's a great achievement of the AI hype that the burden of proof has been reversed. Here I am, having to defend my claim that they are not intelligent. But the burden of proof should be on those claiming intelligence. The claim that earth is a sphere was extraordinary and needed convincing evidence. The claim that species have evolved through evolution was. But the claim that LLMs are intelligent is so self-evident that rejecting the idea needs evidence? That's upside-down!
It's because all you've done is made a claim without any evidence. Someone pointed out a challenge that most claims about them not being intelligent can't submit any evidence that can't also be met by an LLM.
But instead of submitting any evidence to support your claim, you descended into hyperbole about how hard done by you are being expected to support your claims.
In science, it's okay to say we don't know. The amount of disagreement - even amongst smart people - about if LLMs are intelligent or not, suggest to me that we just don't have universally accepted research and definitions that are tight enough to decisively say.
But you're talking not only like you _know_ the answer for sure, so much that you don't need to support it with evidence or credentials, because those who disagree are obviously just poor victims of the AI hype machine.
Please make sure you pass your knowledge of your LLM discoveries onto the scientific community, you could change the world!
It's like a religion. "God exists" is the claim. Nobody needs to provide evidence that this is not the case. "LLMs are intelligent" is the claim. Nobody needs to provide evidence against that. In either case, the burden of proof is with the one making the claim.
> In science, it's okay to say we don't know
But that's not what's happening. LLMs are called "AI". You know what the I stands for, right? It's not "artificial we-don't-know-if-intelligent".
Like I suggested, like the person you responded to suggested: when science tries to prove or disprove LLM intelligence it generally descends into disagreements about definition or evidence (neither of which you provided).
The reason why no evidence was provided for the original LLMs are intelligent claim is because - if you read through this thread - you were the first to make that claim, and the counter claim.
> But that's not what's happening. LLMs are called "AI". You know what the I stands for, right? It's not "artificial we-don't-know-if-intelligent".
I don't know what you want me to say here - if you want to continue acting like there's some widely accepted and agreed on definition of intelligence that everyone who isn't you is an idiot for not knowing, then carry on.
I don't have a reliable definition for intelligence. Like I said, if you do, please share your finding with science, you could settle some fairly big debates and change the world in a meaningful way.
1. LLMs are widely called "intelligent". Evidence for my claim: The term "artificial intelligence" that is used everywhere. It has its own TLD.
2. There is no evidence that this terminology is applicable. Questioning it faces some variant of "well do you have evidence to the contrary?". Evidence for my claim: This thread.
You are welcome to disprove my claims, as in the scientific spirit that you say you uphold.
That's the country code TLD for Anguilla: https://en.wikipedia.org/wiki/.ai
some people claim this is artificial intelligence of a lower quality than humans, and these people expect that such mechanisms will eventually match and then potentially surpass humans.
then there's another crowd coming along and claiming no, this isn't intelligence at all, for example it can't tie its shoelaces.
my point was that every time you try to say that no this can't be what intelligence means, it needs to do X, I can find a human who can't do X, no matter how many years you might try to coach them. (for example, I will never be a musician/composer. I simply lack the gene.)
The retort is always "oh but in principle a human could do this". well, maybe next year's LLM will do it in practice, not just in principle, for all I know.
As they say, person who says it can't be done should not stop person doing it.
Heavier than air flight was once thought to be impossible. As long as you don't have a solid mathematical theorem that says only carbon replicators born from sexual intercourse can be intelligent, I expect some day silicon devices will do everything carbon creatures can do and more.
Indeed, the point you are making is reasonable. But I'm trying to say that the premise is wrong. Nobody should be expected to come up with a reason why it is not intelligence. We should expect to be presented with evidence that it is intelligence. Absent that, the null hypothesis is that it isn't, just like any other computer program before isn't, uncontroversially.
I'm sure you already got my point, apologies for repeating it, but some clarification to clearly carve out our points may not hurt.
That's an article of faith. In principle, elephants can fly at least once.
We also teach 2x2 systems to 18-year-olds in the first year of university, for architects, medics, and other degrees that don't need a huge amount of math. (Other degrees like engineering or physics get 4x4 or bigger systems that definitely need the Gauss method.)
[1]https://anniemueller.com/posts/how-i-a-non-developer-read-th...
>Making Litex intuitive to both humans and AI is Litex's core mission. This is how Litex scales formal reasoning: by making it accessible to more people, applicable to more complex problems, and usable by large-scale AI systems. These benefits stem from Litex's potential to lower the entrance barrier by 10x and reduce the cost of constructing formalized proofs by 10x, making formal reasoning as natural as writing.
>Even Kids can formalize the multivariate equation in Litex in 2 minutes, while it require an experienced expert hours of work in Lean 4. It is a typical example of how Litex lowers the entrance barrier by 10x, lowers the cost of constructing formalized proofs by 10x, making formalization as easy and fast as natural writing. No foreign keywords, no twisted syntax, or complex semantics. Just plain reasoning.
Boilerplate and constant repetition
> Simplicity is the ultimate sophistication. - Leonardo da Vinci
https://checkyourfact.com/2019/07/19/fact-check-leonardo-da-...
https://quoteinvestigator.com/2015/04/02/simple/
I'm not sure why people don't spend two minutes looking up a quote before sharing it, feels like most people have zero care about quality or polish today, everything is half-assed.
> Use `have` to declare an object with checking its existence.
an object with what?
Taking that as the definition, this definitely is not the first formal language learnable in 1-2 hours. I would think, for example, that the language consisting of just the empty string is older and learnable in 1-2 hours.
They probably mean something like “formal language used for writing mathematical proofs that is (about) as powerful as Lean”, though.
Besides pure maths you can also use that to verify the correctness of software. E.g. say you implemented a shortest path algorithm:
You could prove something like: for all `graph` and each `path` in `graph` from `start` to `end`: …

And in general I am very skeptical of the claim that Litex "can be learned by anyone in 1–2 hours". Even just the difference between `have`/`let`/`know` would take a while to master. The syntax for functions is not at all intuitive (and understandably so!). And so on. The trivial subset of the language used in the README may be easy to learn, but a) it would not get you very far, and b) most likely every other related toolbox (Lean, HOL, etc.) has a similar "trivial" fragment.
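The kind of correctness statement gestured at above could be sketched roughly as follows. This is Lean-style pseudocode only: `Graph`, `Path`, `cost`, and `shortestPath` are all hypothetical names, not a real library, and the proof body is deliberately left open:

```lean
-- Pseudocode sketch; every name here is hypothetical.
theorem shortestPath_minimal
    (g : Graph) (start goal : g.Vertex)
    (p : Path g start goal) :
    cost (shortestPath g start goal) ≤ cost p := by
  sorry
```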
But, always good to see effort in this problem space!
For instance, the tutorial says that "the daily properties" (whatever this means) of "+, -, *, /, ^, %" are "already in the Litex kernel". What about associativity of * and +, or distribution of * over +? Are these part of the "daily properties"? And if so, why did transitivity of >= not make the cut?
Just trying to understand the design choices here, this is very interesting.
Since transitivity of >= is not implemented, one has to invoke this self_defined_axiom_larger_equal_is_transitive to derive x >= 17 here, so

```
know forall x N: x >= 47 => x >= 17
```

is essential.
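For comparison, in Lean 4 this particular step needs no user-stated axiom; linear-arithmetic facts over Nat, including this transitivity step, are decided by the built-in `omega` tactic (no Mathlib required):

```lean
-- Core Lean 4: `omega` decides linear arithmetic goals over Nat/Int.
example (x : Nat) (h : x ≥ 47) : x ≥ 17 := by omega
```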
I feel really lucky that Litex has drawn so much attention from you guys! I always like the geek culture of HN, and have absolutely no idea why such a random guy from a random background can rush into the top 10 on Hacker News.
Litex gets its name from Lisp and LaTeX. I want to make Litex as elegant and deep as Lisp, and at the same time as pragmatic as LaTeX.
Many people have raised questions and suggestions about Litex, and I’m truly grateful. Since I’m developing the Litex core on my own, a lot of the documentation is still incomplete — I’ll try my best to improve it soon! All of your suggestions are really helpful. Thank you so much!
https://github.com/enjoy-digital/litex
The simplest formal language is the empty set, which I would argue doesn’t take hours to learn.
So “formal language” is almost certainly not what is meant here, but it’s not clear what else exactly is meant either.
9 more comments available on Hacker News