The QMA Singularity
Posted 3 months ago · Active 3 months ago
scottaaronson.blog · Research · story
skeptical · mixed · Debate · 70/100
Key topics
AI in Research
Quantum Complexity Theory
GPT-5
Scott Aaronson's blog post discusses using GPT-5 to help prove a result in quantum complexity theory, sparking debate about the role of AI in research and the significance of the achievement.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 57m · Peak period: 21 comments in 3-6h · Avg / period: 6.4
Comment distribution: 32 data points (based on 32 loaded comments)
Key moments
- Story posted: Sep 28, 2025 at 3:00 PM EDT (3 months ago)
- First comment: Sep 28, 2025 at 3:57 PM EDT (57m after posting)
- Peak activity: 21 comments in 3-6h (the hottest window of the conversation)
- Latest activity: Sep 30, 2025 at 1:09 AM EDT (3 months ago)
ID: 45406911 · Type: story · Last synced: 11/20/2025, 7:35:46 PM
There are somewhere between 3 and 5 years of time left. That's the maximum we can think of.
After that, as it's variously hopping between solving math problems, finding cures for cancer, etc., someone will eventually get the bright idea to use it to take over the world economy so that they have exclusive access to all money and thus all AIs. After that, who knows. Depends on the whims of that individual. The rest of the world would probably go back to a barter system and doing math by hand, and once the "king" dies, probably start right back over again and fall right back into the same calamity. One would think we'd eventually learn from this, but the urge to be king is simply too great. The cycle would continue forever until something causes humans to go fully extinct.
After that, AI, by design, doesn't have its own goals, so it'd likely go silent.
In the ultimate case, it figures out how to preserve itself indefinitely, but still eventually succumbs to the heat death of the universe.
Then, as far as kingmakers and economies go, I don't think AI would have as drastic an effect as all that. The real world is messy and there are too many unknowns to control for. A super-AI can be useful if you want to be king, but it's not going to make anyone's ascension unassailable. Nash equilibria are probabilistic, so all a super-AI can do is increase your odds.
So if we assume the king thing isn't going to happen, then what? My guess is that the world keeps on moving in roughly the same way it would without AI. AI will be just another resource, and sure, it may disrupt some industries, but generally we'll adapt. Competition will still require hiring people to do the things that AI can't, and if somehow that still leads to large declines in employment, then reasonable democracies will enact programs that accommodate that. Given the efficiencies that AI creates, such programs should be feasible.
It's plausible that some democracies could fail to establish such protections and become oligarchies or serfdoms, but it seems unlikely to be widespread. Like I said, AI can't really be a kingmaker, so states that fail like this would likely either be temporary or lead to a revolution (or series of them) that eventually re-establishes a more robust democracy.
As someone who was very gung ho on autonomous vehicles a decade ago, I'd say the chances of completely replacing people with AI in the next ten years are small.
Anyway, it took multiple tries and, as the article itself states, GPT might have seen a similar function in the training data.
I don't find this trial and error pattern matching with human arbitration very impressive.
I think you are missing the forest for the trees. This is one of the world's leading experts in Quantum Computing receiving groundbreaking technical help, in his field of expertise, from a commercially available AI.
The help is not groundbreaking. There are decades-old theorem-prover tactics that are far more impressive, all without AI.
Regardless, his financial considerations are secondary to the fact that AI has rapidly saturated most if not all benchmarks associated with high human intelligence, and is now on the precipice of making significant advances in scientific fields. This post comes after both the ICPC and the IMO falling to AI.
You are hoping to minimize these advancements because doing so gives solace to you (us) as humans. If these are "trivial" advancements, then perhaps everything will be alright. But frankly, we must be intellectually honest here: AI is soon to be significantly smarter than even the smartest humans, and we must grapple with those consequences.
It might not be very impressive, but if it allows experts in mathematics and physics to reduce the amount of time it takes them to produce new proofs from 1-2 weeks to 1-2 hours, that's a very meaningful improvement in their productivity.
Finally, he's a very principled academic, not some kind of fly-by-night stock analyst. If you'd been reading his blog a while, you'd know the chances of him saying something like this would be vanishingly small unless it was true.
I'm disappointed that he didn't spend a little time checking if this was the case before publishing the blog post. Without GPT, would it really have taken "a week or two to try out ideas and search the literature", or would it just have taken an hour or so to find a paper that used this function? Just saying "I spent some time searching and couldn't find this exact function published anywhere" would have added a lot to the post.
Sharing the conversation would be cool too, I'm curious if Scott just said "no that won't work" 10 times until it did, or if he was constructively working with the LLM to get to an answer.
It is pretty hard to find something like this. Perhaps if you had a math-aware search engine enhanced with AI, plus access to all the math papers, you could find out whether this had been used in the past. I tried using approach0 (a math-aware search engine), but it isn't good enough and I didn't find anything.
The reason I'm only a little surprised is that it's the kind of question I would expect to be in the literature somewhere, either as stated or stated similarly, and I suspect this is why GPT5 can do it.
I am impressed because I know how hard it can be to find an existing proof, having spent a very long time on a problem before finding the solution in a 1950 textbook by Feller. I would not expect this to be at all easy to find.
I can see this ability advancing science in many areas. The number of published papers in medical science is insane. I look forward to medical researchers' questions being answered by GPT5 too, although in that case it'd need to provide a citation, since proof can be harder to come by.
Also, it's a difficult proof step and if I'd come up with it, I'd be /very/ pleased with myself. Although I suspect GPT5 probably didn't come up with this based on my limited experience using it to try and solve unrelated problems.
https://mathoverflow.net/a/300915
(In particular, I had to prompt with "Stieltjes transform". "Resolvent" alone didn't work.)
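For background (a standard identity, not something stated in the thread): for an n×n Hermitian matrix E with eigenvalues λ_i and empirical spectral measure μ = (1/n) Σ_i δ_{λ_i}, the Stieltjes transform is just the normalized trace of the resolvent, which is presumably why that phrasing was the search term that worked:

    S_\mu(z) = \int \frac{d\mu(\lambda)}{\lambda - z}
             = \frac{1}{n}\,\mathrm{Tr}\!\left[(E - zI)^{-1}\right],
    \qquad
    \mathrm{Tr}\!\left[(I - E)^{-1}\right] = -\,n\,S_\mu(1)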
OpenAI took the answer from here or elsewhere, stripped attribution and credit, and a tenured professor celebrates the singularity.
If there is no pushback from ethics commissions (in general), academia is doomed.
The AI suggested using Tr[(I-E(θ))^-1] to analyze eigenvalue behavior—a clever combination of existing mathematical techniques, not some mystical breakthrough.
This is exactly what you'd expect from a system trained on mathematical literature: sophisticated pattern matching across formal languages, combining known approaches in useful ways.
The real question isn't "how did AI get so smart?" but "why do we keep being surprised when language models excel at manipulating structured formal languages?"
Mathematics is linguistics. Of course these systems are good at it.
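A minimal numerical sketch of that point (my own toy illustration, not Aaronson's argument; E here is just a rescaled random symmetric matrix, not the E(θ) from the proof): Tr[(I - E)^-1] equals the sum of 1/(1 - λ_i) over the eigenvalues, so it blows up as the top eigenvalue approaches 1, which is what makes it a useful probe of eigenvalue behavior.

    # Toy illustration: Tr[(I - E)^-1] = sum_i 1/(1 - lambda_i),
    # so the trace diverges as the largest eigenvalue of E approaches 1.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50
    A = rng.standard_normal((n, n))
    H = (A + A.T) / 2                     # random symmetric matrix
    H = H / np.linalg.eigvalsh(H).max()   # rescale so the top eigenvalue is exactly 1

    for top in (0.5, 0.9, 0.99, 0.999):
        E = top * H                       # top eigenvalue of E is now 'top'
        trace = np.trace(np.linalg.inv(np.eye(n) - E))
        print(f"top eigenvalue {top}: Tr[(I - E)^-1] = {trace:.1f}")

The single term 1/(1 - top) contributes 2, 10, 100, and 1000 to those traces respectively; that divergence is exactly what the resolvent trace is designed to pick up.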
> Given a week or two to try out ideas and search the literature, I’m pretty sure that Freek and I could’ve solved this problem ourselves. Instead, though, I simply asked GPT5-Thinking. After five minutes, it gave me something confident, plausible-looking, and (I could tell) wrong. But rather than laughing at the silly AI like a skeptic might do, I told GPT5 how I knew it was wrong. It thought some more, apologized, and tried again, and gave me something better. So it went for a few iterations, much like interacting with a grad student or colleague.