String Theory Inspires a Brilliant, Baffling New Math Proof
Key topics
The math world is abuzz with a new proof that's got everyone talking - and scratching their heads. As commenters dive into the complex topic, some poke fun at the esoteric nature of the subject, with one joking about "speedrunning a graduate course" just to understand the basics. Meanwhile, others appreciate the effort to explain the concept, even if it's tough to grasp, and some experts chime in to validate the terminology and concepts. The lighthearted debate about the accessibility of advanced math topics reveals a shared enthusiasm for exploring the intricacies of the field.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 2h after posting
- Peak period: 0-6h (69 comments)
- Avg / period: 16 comments
Based on 160 loaded comments
Key moments
- 01Story posted
Dec 12, 2025 at 11:23 AM EST
28 days ago
Step 01 - 02First comment
Dec 12, 2025 at 1:39 PM EST
2h after posting
Step 02 - 03Peak activity
69 comments in 0-6h
Hottest window of the conversation
Step 03 - 04Latest activity
Dec 16, 2025 at 10:12 AM EST
24 days ago
Step 04
(I more or less do have the background to read these things, but it's super off-putting to start the article about a crazy new proof from a Fields medallist with an introduction to manifolds.)
I think it's nice someone wrote about this, even if it's super technical and I cannot understand it completely.
I think Steve Jobs very much enjoyed life, and you know what kind of an attitude he had about things.
"You want many folds!" We gottem!
> "equivalent to many + -fold"
https://en.wiktionary.org/wiki/manifold#Etymology_1
Here's the explanation of the string theory idea from en.wiki, which I thought could have been handwaved better in TFA:
Mirror symmetry translates the Hodge number $h^{p,q}$ (the dimension of the space of $(p,q)$-differential forms) of the original manifold into $h^{n-p,q}$ of its mirror partner.
https://en.wikipedia.org/wiki/Homological_mirror_symmetry#Ho...
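To make the formula concrete: for a Calabi-Yau threefold ($n = 3$) the relation

$$ h^{p,q}(X) = h^{n-p,\,q}(X^\vee) $$

swaps the two interesting Hodge numbers. The textbook example (standard numbers, not from TFA) is the quintic threefold, with $h^{1,1} = 1$ and $h^{2,1} = 101$; its mirror has $h^{1,1} = 101$ and $h^{2,1} = 1$.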
Stop repeating garbage ideas from garbage youtubers.
Do you mean predictions that have been falsified? Of course no standing theory delivers falsified predictions; when that happens, you throw the theory in the garbage.
Do you mean predictions that can be falsified in principle? In that case String Theory has falsifiable predictions; I gave you one. In principle, we can run experiments that would falsify special relativity. In fact, we've run such experiments in the past, and those experiments have never seen special relativity being violated. The tests of special relativity are the most precise tests in science.
> but our other theories
This is not how scientific research is done. The way you do it is: you have a theory, the theory makes predictions, you run experiments, and if the predictions fail, you reject that theory. The fact that you might have other theories saying other things doesn't matter for that theory.
So string theories said "Lorentz invariance is not violated", we've run the experiments, and the prediction wasn't wrong, so you don't reject the theory. The logic is not unlike that of p-testing: you don't prove a theory correct if the experiments agree with it; instead, you prove it false if the experiments disagree with it.
This is not a new prediction... String theory makes no new predictions, I hear.
There are some reformulations of existing physics theories; Lagrangian mechanics and Hamiltonian mechanics, for example, are both reformulations of Newtonian mechanics. But these don't make new predictions - they're just better for calculating or understanding certain things.
You have no clue what you're talking about. Did you hear this in some youtube video and have been looking to try it on someone?
String theory doesn't work this way, whatever was measured will be explained as an afterthought by free parameter tuning.
Uh, iirc . I don’t remember what value specifically. Some ratio of masses or something? Idr. And I certainly don’t know the calculation.
This is what a theory is: assume XYZ is true, and see how much of the world you can explain. Why is XYZ? That theory doesn't explain it.
Theoretical physics is: what is the smallest set of XYZ assumptions that can explain other theories. So if you can come up with a theory that's internally self-consistent that _predicts_ something which is postulated by another successful theory, that's a very convincing result.
The Polyakov action is kinda by default manifestly Lorentz invariant, but in order to quantize it, one generally first picks the light cone gauge, where this gauge choice treats some of the coordinates differently, losing the manifest Lorentz invariance. The reason for making this gauge choice is in order to make unitarity clear (/sorta automatic).
An alternative route keeps manifest Lorentz invariance, but proceeding this way, unitarity is not clear.
And then, in the critical dimension (26 or 10, as appropriate; we have fermions, so presumably 10) it can be shown that a certain issue (chiral anomaly, I think it was) gets cancelled out, and therefore the two approaches agree.
But, I guess, if one imposes the light cone gauge, if not in a space of dimensionality the critical dimension, the issue doesn’t cancel out and Lorentz invariance is violated? (Previously I was under the impression that when the dimensionality is wrong, things just diverged, and I’m not particularly confident about the “actually it implies violations of Lorentz invariance” thing I just read.)
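For readers following along, the action under discussion, in its standard form (textbook formula, not quoted from the thread):

$$ S_P = -\frac{T}{2} \int d^2\sigma \, \sqrt{-h}\, h^{ab}\, \partial_a X^\mu \, \partial_b X^\nu \, \eta_{\mu\nu} $$

Lorentz transformations act on the spacetime index $\mu$, which is why invariance is manifest here; the light-cone gauge singles out two of the $X^\mu$ as $X^\pm$, which is what hides it.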
You understand that this has nothing to do with actual Lorentz invariance, yes? It sounds like you don't really understand the meaning of the terms you're using.
Do you understand what "manifest Lorentz invariance" means?
Let me stop you right now to inform you that you don't understand how scientific theories are structured. Special relativity is not a prediction of special relativity. Likewise, 1+1=2 isn't a prediction of arithmetic; it's the starting point.
[1] https://inspirehep.net/literature/262241
So A) the paper isn't actually about string theory and B) it's not clear that the claim it makes is actually correct for the field theory it supposedly applies to.
For example you can have string theories that lead to finsler spacetimes, which were used to explain the opera results.
We might get lucky that some ToE would generate low-energy predictions different from GR and QFT, but there's no reason to think that it must.
It's not like there's some great low-energy predictions that we're just ignoring. The difficulty of a beyond-Standard-Model theory is inherent to the domain of the question, and that's going to plague any alternative to String Theory just as much.
AFAIK a ToE is not required to design experiments to determine if it's a real physical phenomenon vs. a mathematical trick; people are trying to think up those experiments now (at least for hidden variable models of QM).
1. interactions at the event horizon of a black hole -- could the theory describe Hawking radiation?
2. large elements -- these are where special relativity influences the electrons [1]
It's also possible (and worth checking) that a unified theory would provide explanations for phenomena and observed data we are ascribing to Dark Matter and Dark Energy.
I wonder if there are other phenomena such as effects on electronics (i.e. QM electrons) in GR environments (such as geostationary satellites). Or possibly things like testing the double slit experiment in those conditions.
[1] https://physics.stackexchange.com/questions/646114/why-do-re...
re: "GR environments (such as geostationary satellites)" - a geostationary orbit (or any orbit) is not an environment to test the interaction of GR and QM - it is a place to test GR on its own, as geostationary satellites have done. In order to test a theory of everything, the gravity needs to be strong enough to not be negligible in comparison to quantum effects, i.e. black holes, neutron stars etc. your example (1) is therefore a much better answer than (2)
For geostationary orbits I was thinking of things like how you need to use both special and general relativity for GPS when accounting for the time dilation between the satellite and the Earth (ground). I was wondering if similar things would apply at a quantum level for something QM related so that you would have both QM and GR at play.
So it may be better to have e.g. entangled particles with them placed/interacting in a way that GR effects come into play and measuring that effect.
But yes, devising tests for this would be hard. However, Einstein thought that we wouldn't be able to detect gravitational waves, so who knows what would be possible.
In some ways saying that we don't have a theory of quantum gravity is overblown. It is perfectly possible to quantize gravity in QFT the same way we quantize the electromagnetic field. This approach is applicable in almost all circumstances. But unlike in the case of QED, the equations blow up at high energies which implies that the theory breaks down in that regime. But the only places we know of where the energies are high enough that the quantization of the gravitational field would be relevant would be near the singularity of a black hole or right at the beginning of the Big Bang.
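For scale, the regime where this effective quantization breaks down is set by the Planck energy (standard textbook value, not from the comment):

$$ E_P = \sqrt{\frac{\hbar c^5}{G}} \approx 1.22 \times 10^{19}\ \text{GeV} $$

versus roughly $10^4$ GeV at the LHC, which is why the effective theory is adequate almost everywhere we can look.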
Some physicists have been trying to build an updated model of the universe based on mathematical objects that can be described as little vibrating strings. They've not been successful in closing the loop and constructing a model that actually describes reality accurately, but they've done a lot of work that wasn't necessarily all wasted.
It's probably either just the wrong abstraction or missing some fundamental changes that would make it accurate.
It would also be tremendously helpful if we had some new physics where there was a significant difference between an experiment and either GR or the standard model. Unfortunately the standard model keeps being proven right.
About tests of quantum gravity, there have been proposals for feasible tests using gravitationally-induced entanglement protocols:
https://arxiv.org/abs/1707.06036
Like a book is a book because it's got pages with words on them glued to a spine with covers. It's not "not a book" because the plot makes no sense.
Scientists don't care about what "a theory" is, it's not philosophically important to them. It's just a vague term for a collection of ideas or a model or whatever.
lol the confidence.
I think half a billion isn't that expensive for a program that searches for a potential "theory of everything" that can profoundly change our understanding of the universe (even if it brings no results!)
Right?
Holography desperately needs its own Brian Greene-style ambassador to share the good news. In terms of momentum and taking center stage, it's now in the place where String Theory was 10-15 years ago: the frontier idea with all the excitement and momentum behind it, and it has been borne from the fruit of string theory. It's quite an amazing time we're living in, but I think there's been no energy in the post-COVID world to take a breath and appreciate it.
You can start with a single Moon base but generally it isn't worth the mission control investment once you start to build out Mars.
The Moon is interesting because it's there, it's fairly close, it's a test bed for off-world construction, manufacturing, and life support, and there are experiments you can do on the far side that aren't possible elsewhere.
Especially big telescopes.
It has many of the same life support issues as Mars, and any Moon solutions are likely to work on Mars and the asteroids, more quickly and successfully than trying to do the same R&D far, far away.
Will it pay for itself? Not for a long, long time. But frontier projects rarely do.
The benefit comes from the investment, the R&D, the new science and engineering, and the jobs created.
It's also handy if you need a remote off-site backup.
However, it is much easier to see us send robots to mine these asteroids, or send robots to the Moon to build a giant telescope on the far side (if that makes sense), than it is to see us build cities on the Moon to build said telescope, and to mine those asteroids.
You see, the difference here is that the end goal of mining asteroids is resources being sent to Earth and exploited, while the goal of space settlements is the settlements themselves; that is, some hypothetical space expansion is the goal, and that makes no sense. Nobody's life will improve from space expansion (except for the grifters' during the grift).
Aspiring to goals and accomplishing them makes life worth living to a lot of people. Furthermore, humanity seems to have an innate drive to explore and learn.
Even to those left at home, it's inspirational to think that there are people who are taking steps to explore the universe.
Maybe it won't help anyone live but it will give a lot of people something to live for.
The Moon has nothing to offer Mars explorers, as everything will be different and solutions for the unique lunar conditions (two weeks of darkness, temperature extremes, moon dust, vacuum) do not apply to Mars at all. It's like saying living under the ocean is good practice for living in the Arctic, but we should start under the ocean because it's closer.
Current tech, yes. Current economics, no. We're talking about the far future. If raw materials become more scarce, it's hard to say what economics (or needed resources) might support extra-planetary resource collection. What's for sure, is mining Mars will be harder than mining asteroids.
0/1
> (Mars) is a much easier environment to stay in, even over as short a period as a month
0/2
> Mars makes much more sense than the Moon, which has little of interest
Redundant
> (the moon) isn’t a stepping stone to anywhere.
0/3
Humanity has gotten there before Mars for the precise reason that it is a stepping stone.
None of what you posted is factually true.
Aside from that, this number is meaningless without context: how much do other fields of research get?
For a field repeatedly challenged for not bringing testable predictions to bear, the fact that so much of its rich theoretical framework has been able to be worked out with minimal infrastructure investment is a welcome blessing which, I would hope, critics and supporters alike can celebrate.
I'm not saying I fully agree with the position, but one way of looking at it is that thousands of incredibly smart people got nerd-sniped into working on a problem that actually has no solution. I sometimes wonder if there will ever be a point where people give up on it, as opposed to pursuing a field that bears some mathematical fruit, always with some future promise, but contributes nothing to physics.
For every professional string theorist, you get hundreds of people who were brought up in an academic system that values rigor and depth of scientific thinking.
That's literally what a modern technological economy is built on.
Getting useful novel results out of this is almost a lucky side effect.
You do get people who are happy for a few years, since they can live their childhood dream of being a physicist before they turn to actual jobs.
Most groundbreaking proofs these days aren’t just cross-discipline but usually involve one or several totally novel techniques.
All that to say: I think you’re dramatically underestimating the difficulty involved in this, EVEN if the author(s) were a(n) expert(s) in machine readable mathematics, which is highly UNlikely given that they are necessarily (a) deep expert(s) in at LEAST one other field.
I know HN can be volatile sometimes, but I sincerely want to hear more about these parts of math that are not pure deductive reasoning.
Do you just mean that we must assume something to get the ball rolling, or what?
Stuff that we can deduce in math with common sense, geometric intuition, etc. can be incredibly difficult to formalize so that a machine can do it.
"...etc. can be incredibly difficult to formalize so that a machine can do it." ?
1. do it = search for a proof
2. do it = verify a purported proof?
I don't understand what you're confused about.
Formalizing hot garbage supposedly describing a proof can be arbitrarily difficult.
The problem is not a missing library. The number of definitions and lemmas indirectly used is often not that much. Most of the time wasted when formalizing is discovering time and time again that prior authors are wasting your time, sometimes with verifiably false assumptions, but the community keeps sending you around to another gap-filling approach.
The problem is that such constructions were later found to be full of hidden assumptions. Like working in a plane vs on a spherical surface etc.
The advantages of systems like MetaMath are:
1. Prover and verifier are essentially separate code bases; indeed, the MM prover is essentially absent, and it's up to humans or other pieces of software to generate proofs. The database just contains explicit axioms, definitions, and theorem claims, with proofs for each theorem. The verifier is a minimalistic routine with a minimal amount of code (basically substitution maps, with strict conditions; a toy sketch follows below). The proof is a concrete object, a finite list of steps.
2. None of the axioms are hardcoded or optimized, like they tend to be in proof systems where proof search and verification are intermixed, forcing axioms upon the user.
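To make the "minimalistic verifier" point concrete, here is a toy sketch of the core substitution check in Python. This is illustrative only, not MetaMath's actual code; the real verifier also handles floating hypotheses, scopes, compressed proofs, and disjoint-variable conditions:

```python
# Toy sketch of a Metamath-style substitution check (illustrative only).

def substitute(tokens, subst):
    """Apply a variable -> token-list substitution to a statement."""
    out = []
    for tok in tokens:
        out.extend(subst.get(tok, [tok]))
    return out

def check_step(axiom, subst, stack):
    """One proof step: pop the axiom's substituted hypotheses off the
    stack, then push its substituted conclusion."""
    hyps, conclusion = axiom
    for hyp in reversed(hyps):
        if stack.pop() != substitute(hyp, subst):
            raise ValueError("hypothesis mismatch")
    stack.append(substitute(conclusion, subst))

# Example: modus ponens, with hypotheses "|- p" and "|- ( p -> q )".
mp = ([["|-", "p"], ["|-", "(", "p", "->", "q", ")"]], ["|-", "q"])
stack = [["|-", "A"], ["|-", "(", "A", "->", "B", ")"]]
check_step(mp, {"p": ["A"], "q": ["B"]}, stack)
print(stack)  # [['|-', 'B']]
```

The point of the design is that this checker stays tiny and auditable no matter how large the proof database grows.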
They're called "axioms"
I think the subject at question here is mathematical truth, not "mathematics" whatever that means.
people on hn love making these kinds of declarative statements (the one you responded to, not yours itself) - "for X just do Y" as a kind of dunk on the implied author they're responding to (as if anyone asked them to begin with). they absolutely always grossly exaggerate/underestimate/misrepresent the relevance/value/efficacy of Y for X. usually these declarative statements briskly follow some other post on the frontpage. i work on GPU/AI/compilers and the number of times i'm compelled to say to people on here "do you have any idea how painful/pointless/unnecessary it is to use Y for X?" is embarrassing (for hn).
i really don't even get it - no one can see your number of "likes". twitter i get - fb i get - etc but what are even the incentives for making shit up on here.
I wish we were a bit more self-critical about this, but it's a tough problem when what brings the community together in the first place is a sense of superiority: prestigious schools, high salaries, impressive employers, supposedly refined tastes. We're at the top of the world, right?
Being pompous and self obsessed requires none of those things.
Sufficient, but not necessary
That would save a lot of people a lot of time, and it's not random people's time saved, it's highly educated people's time being saved. That would allow much more novel research to happen with the same number of expert-years.
If the population of planet A used formal verification, and planet B refused to, which planet do you predict would evolve faster?
Currently, in 2025, it is not possible in most fields for a random expert to produce a machine-checked proof. The work of everyone in the field coming together to create a machine-checked proof is also more work than for the whole field to learn an important result in the traditional way.
This is a fixable problem. People are working hard on building up a big library of checked proofs, to serve as building blocks. We're working on having LLMs read a paper, and fill out a template for that machine checked proof, to greatly reduce the work. In fields where the libraries are built up, this is invaluable.
But as a general vision of how people should be expected to work? This is more 2035, or maybe 2045, than 2025. That future is visible, but isn't here.
So it's not really about the planets not being in the right positions yet.
The roman empire lasted for centuries. If they wanted to do rigorous science, they could have built cars, helicopters, ... But they didn't (in Rome, do as the Romans do).
This is not about the planets not being in the right position, but about Romans in Rome.
I could believe you, an internet stranger. And believe that this problem was effectively solved 20 years ago.
Or I could read Terry Tao's https://terrytao.wordpress.com/wp-content/uploads/2024/03/ma... and believe his experience that creating a machine checkable version of an informal proof currently takes something like 20x the work. And the machine checkable version can't just reference the existing literature, because most of that isn't in machine checkable form either.
I'm going with the guy who is considered one of the greatest living mathematicians. There is an awful lot that goes into creating a machine checkable proof ecosystem, and the file format isn't the hard bit.
If ultimate readership (over all future) were less than 20 per theorem, or whatever the vagueness factor would be, the current paradigm would be fine.
If ultimate readership (not citation count) were higher than 20 per theorem, it's a net societal loss to have the readers guess what the actual proof is: it's collectively less effort for the author to formalize the theorem than for the readers to reconstruct the actual proof. As mathematicians both read and author proofs, they would save themselves time, or would be able to move the frontier of mathematics faster. From a taxpayer perspective, we should precondition mathematics funding (not publication) on machine-readable proofs. This doesn't mean every mathematician would have to do it themselves: if some hypothetical person had crazy good intuitions and a high enough rate of results, they could hire people to formalize the work for them to meet the precondition. As long as the results are successfully formalized, this team could continue producing mathematics.
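To spell out the arithmetic behind that threshold (illustrative only, using the roughly 20x formalization overhead mentioned elsewhere in this thread): formalizing pays off for society when

$$ 20\,c \;<\; N \cdot c_r $$

where $c$ is the effort of writing the informal proof, $20\,c$ the one-time cost for the author to formalize it, and $c_r$ what each of $N$ eventual readers spends reconstructing the missing details; if $c_r \approx c$, formalization is a net win once readership exceeds 20.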
On HN, that might be as simple as display sort order -- highly engaging comments bubble up to the top, and being at the top, receive more attention in turn.
The highly fit extremes are -- I think -- always going to be hyper-specialized to exploit the environment. In a way, they tell you more about the environment than whatever their content ostensibly is.
Is it so hard to understand that after a few such events, you wish for authors to check their own work by formalizing it, saving countless hours for their readers, by selecting a paper WITH a machine-readable proof over another author's paper without one?
To demonstrate with another example: "Gee, dying sucks. It's 2025, have you considered just living forever?"
To this, one might attempt to justify: "Isn't it sufficient that dying sucks a lot? Is it so hard to understand that having seen people die, I really don't want to do that? It really really sucks!", to which could be replied: "It doesn't matter that it sucks, because that doesn't make it any easier to avoid."
- Locating water doesn't become more tractable because you are thirsty.
- Popping a balloon doesn't become more tractable because you like the sound.
- Readjusting my seat height doesn't become more tractable because it's uncomfortable.
The specific example I chose was for the purpose of being evocative, but is still precisely correct in providing an example of: presenting a wish for X as evidence of tractability of X is silly.
I object to any argument of the form: "Oh, but this wish is a medium wish and you're talking about a large wish. Totally different."
I hold that my position holds in the presence of small, medium, _or_ large wishes. For any kind of wish you'd like!
- Escaping death doesn't become more tractable because you don't want to die.
This is trivially 'willfully misunderstood', whereas my original framing is more difficult -- you'd need to ignore the parallel with the root level comment, the parallel with the conversation structure thus far, etc. Less clear, but more defensible. It's harder to plausibly say it is something it is not, and harder to plausibly take it to mean a position I don't hold (as I do basically think that requiring formalized proofs is a _practically_ impossible ask).
By your own reckoning, you understood it regardless. It did the job.
It does demonstrate my original original point though, which is that messages under optimization reflect environmental pressures in addition to their content.
What do Bitcoin etc. actually prove in each block? That a nonce was brute-forced until some hash had so many leading zeros? Comparatively speaking, which blockchain would be more convincing as a store of value: one that doesn't substantially attract mathematicians and cryptographers, or one that does?
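For reference, that really is essentially all the per-block "work" is. A minimal Python sketch (simplified: real Bitcoin double-hashes the block header and compares against a numeric target rather than a hex-zero prefix):

```python
import hashlib

def mine(header: bytes, difficulty: int = 4) -> int:
    """Brute-force a nonce until SHA-256(header + nonce) starts with
    `difficulty` leading hex zeros: expensive to find, trivial to verify."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine(b"toy block header")
print(nonce)  # the nonce is the entire content of the block's "proof"
```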
Investors would select the formal verification chain as it would actually attract the attention of mathematicians, and mathematicians would be rewarded for the formalization of existing or novel proofs.
We don't need to wait for the magic constellation of the planets 20 years from now, nor wait for LLMs etc. to do all the heavy lifting (although they will quickly be used by mathematics "miners"); a mere alignment of incentives can do it.
If one takes the time to read the free book accompanying the Metamath software, and reimplements it in about a weekend's time, you learn to understand how it works internally. Then, playing around a little with mmj2 or so, you quickly learn how to formalize a proof you understand. If you understand your own proof, it's easy to formalize. One doesn't need to be "an expert in machine readable mathematics".
If I find my own proofs, or if the proof of someone else is clearly written, formalization is not hard at all.
Let us assume for the sake of this discussion that Wiles' latest proof for FLT is in fact complete, while his earlier proof wasn't. It took Wiles and his helper more than a weekend to close the gap. Imagine no one had challenged the proof or pointed out this gap. Anyone tasked with formalizing it would face the challenge of trying to figure out which result (incorrectly presumed to be already known) was used in a certain step. The formalizer is in fact finishing an unfinished proof.
After succeeding in closing this gap, who else was willing to point at the next gap? There is always some prestige lost when pointing at a "gap" and then observing the original prover(s) close that gap; in a sense, they saw how to prove it while the challenger did not. This dynamic is unhealthy. To claim a proof we should expect a complete proof; the burden of proof should lie on the proving claimant, not on the verifier.
Is it really known to be the frontier as long as it's not verified? I would call the act of rigorous verification the acknowledgement of a frontier shift.
Consider your favorite dead end in science: perhaps alchemy, the search for the alkahest, the search for the philosopher's stone, etc. I think nobody today would pretend these ideas were at the frontier, because today they are collectively identified as pseudoscience, which failed to replicate / verify.
If I were the first to place a flag on some mountain, that claim may or may not be true in the eyes of others, but time will tell and others replicating the feat will be able to confirm observation of my flag.
As long as no one can verify my claims they are rightfully contentious, and as more and more people are able to verify or invalidate my claims it becomes clear if I did or did not move the frontier.
* there's way more than one person involved here
Some random definition of flippant: "not showing a serious or respectful attitude."
In my view, publishing proofs in 2025 without machine-readable proofs is quite flippant, yes. We have powerful computation systems with huge amounts of RAM and storage, yet the bulk of mathematics is done on a computer in the form of... high-tech calligraphy? People are wasting each other's time by not using formal verification, and to me that is disrespectful.
Is pointing out this disrespectful facet of collective behavior disrespectful?
For example, are people who point out the problems associated with GHG emissions really being flippant when they point out the flippant behavior of excess emitters?
Prompt: "Analyze the mathematics in the paper. Find mistakes, note instabilities (if present) and generally criticize as needed."
https://gemini.google.com/share/6860fdaea334
The mathematics here are far beyond me, but it's interesting that Gemini more or less concurs with chatgpt with respect to "load bearing reliance on other preprints".
Gemini's summary (there's much more in the link above that builds up to this):
The mathematics is largely robust but relies on heavy "black box" theorems (Iritani's blowup formula, Givental's equivariant mirror symmetry) to bypass difficult geometric identifications in the non-archimedean setting. The primary instability is the lack of a non-archimedean Riemann-Hilbert correspondence, which limits the "enhanced" atoms theory. The core results on cubic fourfolds hold because the specific spectral decomposition (eigenvalues $0, 9, 9\zeta, 9\zeta^2$) is robust and distinct, but the framework's extension to finer invariants (integral structures) is currently obstructed.
The article clearly states that there are multiple reading groups across the world attempting to get to grips with each small aspect of the ideas involved. That they even attempt this implies to me that the ideas are considered worth studying by some serious players in the field: the group (it's way more than just one Fields-toting bloke) has enough credibility for that.
It's memorable, at least :)
En.wiki has a quick explanation
Mirror symmetry translates the Hodge number $h^{p,q}$ (the dimension of the space of $(p,q)$-differential forms) of the original manifold into $h^{n-p,q}$ of its mirror partner.
(n=4 for the paper)
https://en.wikipedia.org/wiki/Homological_mirror_symmetry#Ho...
Don't miss the caption at the end of the previous page :)
Imho string theory shouldn't be taking the credit from Kontsevich.