More on Whether Useful Quantum Computing Is “Imminent”
Key topics
The debate around whether useful quantum computing is "imminent" rages on, with commenters dissecting the validity of using factoring large numbers as a benchmark for quantum computing progress. While some argue that this benchmark is silly, as it doesn't accurately reflect a quantum computer's capabilities, others counter that the difficulty in scaling quantum systems up supports its relevance. As one commenter notes, quantum systems become exponentially less stable as they grow, highlighting the challenges in achieving practical quantum computing. The discussion reveals a nuanced understanding of the field's progress and the complexities of measuring it.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 53m after posting
- Peak period: 51 comments (0-6h)
- Avg / period: 11.5
- Based on 127 loaded comments
Key moments
- 01 Story posted: Dec 21, 2025 at 3:53 PM EST (20 days ago)
- 02 First comment: Dec 21, 2025 at 4:46 PM EST (53m after posting)
- 03 Peak activity: 51 comments in 0-6h (hottest window of the conversation)
- 04 Latest activity: Dec 24, 2025 at 6:55 PM EST (16 days ago)
So the "useful quantum computing" that is "imminent" is not the kind of quantum computing that involves the factorization of nearly prime numbers?
[0] https://algassert.com/post/2500
Like if you were building one of the first normal computers, how big a number you can multiply would be a terrible benchmark, since once you have figured out how to multiply small numbers it's fairly trivial to multiply big numbers. The challenge is making the computer multiply numbers at all.
This isn't a perfect metaphor, as scaling is harder in a quantum setting, but we are mostly at the stage where we are trying to get the things to work at all. Once we reach the stage where we can factor small numbers reliably, the time to go from smaller numbers to bigger numbers will probably be relatively short.
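A back-of-envelope sketch of why that expectation is plausible, using common textbook figures rather than anything from the thread (Beauregard-style circuits for Shor's algorithm need roughly 2n+3 logical qubits and on the order of n^3 gates for an n-bit modulus), so the cost grows polynomially, not exponentially, in the size of the number:

```python
# Textbook-level Shor resource estimates (order of magnitude only):
# ~2n+3 logical qubits and ~n^3 gates for an n-bit modulus N.
def shor_estimates(n_bits: int) -> dict:
    return {
        "logical_qubits": 2 * n_bits + 3,  # Beauregard-style circuit
        "gates": n_bits ** 3,              # rough gate-count scaling
    }

# 15 and 21 are 4- and 5-bit numbers; RSA moduli are 2048+ bits.
for n in (4, 5, 10, 1024, 2048):
    print(f"{n:5d}-bit modulus -> {shor_estimates(n)}")
```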
In QC systems, the "difficulty" scales very badly with the number of gates or steps of the algorithm.
It's not like addition, where you can repeat a process in parallel and bam, ALU. From what I understand as a layperson, the size of the inputs is absolutely part of the scaling.
So it seems like it takes an exponentially bigger device to factor 21 than 15, then 35 than 21, and so on, but if I understand right, at some point this levels out, and it's only a little harder, relatively speaking, to factor, say, 10^30 than 10^29.
Why are we so confident this is true given all of the experience so far trying to scale up from factoring 15 to factoring 21?
In the case of quantum algorithms in BQP, though, one of those properties is SNR of analog calculations (which is assumed to be infinite). SNR, as a general principle, is known to scale really poorly.
As far as I understand, that isn't an assumption.
The assumption is that the SNR of logical (error-corrected) qubits is near infinite, and that such logical qubits can be constructed from noisy physical qubits.
This is an argument I've heard before and I don't really understand it[1]. I get that you can make a logical qubit out of physical qubits and build in error correction so the logical qubit has perfect SNR, but surely if, say, the number of physical qubits you need to get the nth logical qubit is O(n^2), then the SNR of the whole system isn't near infinite; it's really bad.
[1] Which may well be because I don't understand quantum mechanics ...
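A minimal sketch of the standard counterargument, using generic surface-code scaling with assumed constants (the threshold, prefactor, and physical error rate below are illustrative, not from the thread): below threshold, the logical error rate falls exponentially in the code distance d, while the physical qubit count grows only quadratically.

```python
# Generic surface-code scaling (all constants assumed for illustration):
#   p_L ~ A * (p / p_th) ** ((d + 1) / 2)   logical error rate
#   physical qubits per logical qubit ~ 2 * d**2
def logical_error_rate(p: float, d: int, p_th: float = 1e-2, A: float = 0.1) -> float:
    return A * (p / p_th) ** ((d + 1) / 2)

p = 1e-3  # assumed physical error rate, 10x below the assumed threshold
for d in (3, 5, 7, 11, 15, 25):
    print(f"d={d:2d}  ~{2 * d * d:5d} physical qubits  p_L ~ {logical_error_rate(p, d):.1e}")
```

The exponential suppression in d beats the quadratic qubit cost, which is why the field does not generally read the physical-qubit overhead as a fatal whole-system SNR problem.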
The hard problem then remains how to connect those qubits at scale. Using a coaxial cable for each qubit is impractical; some form of multiplexing is needed. This, in turn, causes qubits to decohere while waiting for their control signal.
I don't think we have any "real" experience scaling from 15 to 21. Or at least not in the way Shor's algorithm would be implemented in practice on fault-tolerant qubits.
If what you are saying is that error rates increase exponentially, such that quantum error correction can never correct more errors than it introduces, I don't think that is a widely accepted position in the field.
Google has been working on this for years
Don't ask me if they've got the top supercomputers beat; ask Gemini :)
Either this relation is not that strong, or factoring should "imminently" become a reasonable benchmark, or useful quantum computing cannot be "imminent". So which one is it?
I think you are the author of the blogpost I linked to? Did I maybe interpret it too negatively, and was it not meant to suggest that the second option is still quite some time away?
The other problem is that factoring 21 is so easy that it actually makes it harder to prove you've factored it with a functional quantum computer. For big numbers, your program can fail 99% of the time, because if you get the result once, you've proven the algorithm worked. 21 is small enough that it's hard not to factor, so demonstrating that you've factored it with a QC is fairly hard. As a result, I wouldn't be surprised if the first number publicly factored by a quantum computer (using error correction) were in the thousands instead of 21. By using a number that is not absolutely tiny, it becomes a lot easier to show that the system works.
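A sketch of the asymmetry being described, with a hypothetical stand-in sampler in place of real hardware: verifying a claimed factor is a one-line classical check, so even a device that succeeds rarely yields an unambiguous demonstration once N is large enough that lucky guesses are implausible.

```python
import random

N = 3 * 7  # swap in a thousands-scale semiprime for a convincing demo

def quantum_factor_attempt(n: int) -> int:
    """Stand-in for the hardware: returns a (mostly wrong) candidate factor."""
    return random.randrange(2, n)  # hypothetical noisy output

# A single verified success proves the run worked, no matter how often
# the device fails: divisibility is trivially checkable classically.
for trial in range(100):
    f = quantum_factor_attempt(N)
    if N % f == 0 and 1 < f < N:
        print(f"verified: {f} * {N // f} == {N} (trial {trial})")
        break
```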
Surely if someone had managed to factorize a 3- or 4-digit number, they would have published it, as that's far enough from weaponization to be worth publishing.
The reality is that quantum computing is still very, very hard, and very, very far from achieving what is theoretically possible.
If results in quantum computing would start to "go dark", unpublished in scientific literature and only communicated to the government/ military, shouldn't he be one of the first to know or at least notice?
If you were to guess what reasons there might be that it WON’T happen, what would some of those reasons be?
- Too few researchers, as in my area of quantum computing. I would say there is only one other group that has any academic rigour and is actually making significant and important progress. The two other groups are using non-reproducible results to get credit and funding for private companies. You have FAANG-style companies also doing research, and the research that comes out is still clearly for funding purposes. It doesn't stand up to scrutiny of method (there usually isn't one, although that will soon change, as I am in the process of producing a recipe to get to the point we are currently at, which is as far as anyone has got) or repeatability.
- Too little progress. This is due to the research focus being spread too thin. We currently have the classic digital (qubit) vs. analogue (photonic) quantum computing fight, and even within each camp there are broad variations in where to focus. Each category is therefore still really just at the start, because we are going in so many different directions: we aren't pooling our resources and trying to make progress together. A lack of openness regarding results and methods harms us here, as does a lack of automation. Most significant research is done by human hand, which means building on it at a different research facility often requires learning from the person who developed the method in person, or at worst developing the method all over again, which is a waste of time. If we don't see the results, the funding won't be there. Classical computing eventually found a use case and then became useful to the public, but I fear we may not get to that stage because we may take too long.
As an aside, we may also get to a stage where it is useful, but only in a military/security setting. I have worked on a security project (I was not bound by any NDA, surprisingly, but I'm still wary) featuring a quantum setup that could be loosely compared to a single-board computer (say, an ESP32), although much larger. There is some value to it, and that particular project could be implemented in security right now (I do not believe it has been or will be; I believe it was a viability study) and isn't that far off. But that particular project has no uses outside of the military/security setting.
I agree there's a lot of poorly written papers and unrigorous research. I'm at the beginning of my PhD, so I don't quite have every group vetted yet. Could you share your area, and which groups to follow (yours and the other good one)?
Quantum computing is currently stuck somewhere in the 1800s, when a lot of the theory was still being worked out and few functional devices had even been constructed.
Yet, for sure we should keep funding both quantum computing and nuclear fusion research.
> This is the clearest warning that I can offer in public right now about the urgency of migrating to post-quantum cryptosystems...
That has a clear implication that he knows something that he doesn't want to say publicly.
Still, if that's true, it's an example of the very thing Scott's talking about: there are advances in the field that aren't being made public.
It doesn't need to be imminent for people to start moving to post-quantum now.
If he thinks we are 10 years away from QC, we need to start moving now.
Either way, he must have known people would read it like you did when he wrote that, so we can safely assume it's boasting at the very least.
Nevertheless, Planck did not understand well enough the requirements for a good system of fundamental units of measurement, so he did not find any good way to integrate Planck's constant into a system of fundamental units. He made the same mistake Stoney had made 25 years before him (after computing the value of the elementary electric charge): he chose the wrong method for defining the unit of mass among the two variants previously proposed by Maxwell (deriving the unit of mass from the mass of some atom or molecule, or deriving it from the Newtonian constant of gravitation).
All dimensionless systems of fundamental units are worthless in practice (because they cause huge uncertainties in all values of absolute measurements) and they do not have any special theoretical significance.
For the number of independently chosen fundamental units of measurement there exists an optimum value and the systems with either more or fewer fundamental units lead to greater uncertainties in the values of the physical quantities and to superfluous computations in the mathematical models.
The dimensionless systems of units are not simpler, but more complicated, so attempting to eliminate the independently chosen fundamental units is the wrong goal when searching for the best system of units of measurement.
My point is that the values of the so-called "Planck units" have absolutely no physical significance, therefore it is extremely wrong to use them in any reasoning about what is possible or impossible or about anything else.
In a useful system of fundamental units, for all units there are "natural" choices, except for one, which is the scale factor of the spatio-temporal units. For this scale factor of space-time, in the current state of knowledge there is no special value, so it is chosen solely based on the practical ease of building standards of frequency and wave-number that have adequate reproducibility and stability.
The theoretical entropy of a Schwarzschild black hole is nicely expressed using the Planck area.
So…
No. Your assertion that they have no value in theory, is wrong.
(Also, like, angular momentum is quantized in multiples of hbar or hbar/2 or something like that.)
It may be true that they aren’t a good system of units for actual measurements, on account of the high uncertainty (especially for G).
But, there is a reason why it is common to use units where G=c=hbar=1 : it is quite convenient.
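For reference, the standard Bekenstein-Hawking formula being alluded to, in which the Planck area appears naturally (textbook physics, not from the thread):

S_BH = k_B · A / (4 · ℓ_P²),  where ℓ_P² = ħG / c³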
More realistically it breaking down would hopefully give us a new physics frontier.
Yes, that is exactly the point. The example statevector you guys are talking about can (tautologically) be written in a basis in which only one of its amplitudes is nonzero.
Let's call |ψ⟩ the initial state of the Shor algorithm, i.e. the superposition of all classical bitstrings.
|ψ⟩ = (|00..00⟩ + |00..01⟩ + |00..10⟩ + .. + |11..11⟩) / √(2^n)
That state is factorizable, i.e. it is *completely* unentangled. In the X basis (a.k.a. the Hadamard basis) it can be written as
|ψ⟩ = |++..++⟩, where |+⟩ = (|0⟩ + |1⟩) / √2
You can see that even from the preparation circuit of the Shor algorithm. It is just single-qubit Hadamard gates -- there are no entangling gates. Preparing this state is a triviality and in optical systems we have been able to prepare it for decades. Shining a wide laser pulse on a CD basically prepares exactly that state.
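A minimal numpy sketch of that claim (no quantum library assumed): H applied to every qubit of |0..0⟩ gives the uniform superposition, built purely from single-qubit operations.

```python
import numpy as np

n = 4  # number of qubits
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard

# H on every qubit, assembled via Kronecker products (no entangling gates).
H_all = H
for _ in range(n - 1):
    H_all = np.kron(H_all, H)

state = np.zeros(2 ** n)
state[0] = 1.0          # |00..00>
state = H_all @ state   # |++..++>

# Every amplitude equals 2^(-n/2): a product state, completely unentangled.
print(np.allclose(state, np.full(2 ** n, 2 ** (-n / 2))))  # True
```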
> Changing basis does not affect the number of basis functions.
I do not know what "number of basis functions" means. If you are referring to "non-zero entries in the column-vector representation of the state in a given basis", then of course it changes. Here is a trivial example: take the x-y plane and the unit vector along x. It has one non-zero coefficient. Now express the same vector in a basis rotated by 45 degrees. It has two non-zero coefficients in that basis.
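That toy example in a few lines of numpy (a sketch):

```python
import numpy as np

v = np.array([1.0, 0.0])  # unit vector along x: one non-zero coefficient
theta = np.pi / 4
R = np.array([[ np.cos(theta), np.sin(theta)],   # rows = the rotated basis
              [-np.sin(theta), np.cos(theta)]])

print(R @ v)  # [ 0.7071 -0.7071]: two non-zero coefficients, same vector
```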
---
Generally speaking, any physical argument that is valid only in a single basis is automatically a weak argument, because physics is not basis dependent. It is just that some bases make deriving results easier.
Preparing a state that is a superposition of all possible states of the "computational basis" is something we have been able to do since before people started talking seriously about quantum computers.
- I am not saying that you have to find a basis in which your amplitudes are not small; I am saying that such a basis always exists. So any argument that "small amplitudes would potentially cause problems" probably does not hold, because there is no physical reality to "an amplitude" or "a basis" -- these are all arbitrary choices, and the laws of physics do not change if you pick a different basis.
- In classical probability we are not worried about vanishingly small probabilities in probability distributions that we create all the time. Take a one-time pad of n bits. Its stochastic state vector in the natural basis is filled with exponentially small entries, 1/2^n. We create one-time pads all the time and nature does not seem to mind (see the sketch after this list).
- Most textbooks that include Shor's algorithm also include a proof that you do not need precise gates. Shor's algorithm (or, more specifically, the quantum Fourier transform) converges even if you only have finite absolute precision in the various gates.
- Preparing the initial state to extremely high precision in an optical quantum computer is trivial and it has been trivial for decades. There isn't really anything quantum to it.
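A minimal sketch of the one-time pad point above: any particular n-bit pad occurs with probability 2^-n, an exponentially small number, yet producing one is a single library call.

```python
import secrets

n = 256
pad = secrets.randbits(n)  # each specific pad occurs with probability 2**-n

# The classical "state vector" of this distribution has 2**256 entries,
# each ~8.6e-78 -- vanishingly small, and nobody loses sleep over it.
print(f"P(this exact pad) = 2**-{n} ~ {2.0 ** -n:.1e}")
print(f"pad = {pad:064x}")
```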
Still, if we add 10^12 complex amplitudes and each one is off by one part in 10^6, we could easily have serious problems with the accuracy of the sum. And 10^12 amplitudes is "only" around 40 qubits.
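That worry can be checked numerically under an assumed error model (independent per-term relative errors, with N scaled down from 10^12 to 10^6 so the sketch runs quickly): independent errors largely cancel, shrinking the relative error of the sum by a further factor of ~√N, whereas a common systematic bias adds up linearly.

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps = 10 ** 6, 1e-6  # stand-in for 1e12 terms, each off by ~1 part in 1e6

terms = rng.random(N)
noisy = terms * (1 + eps * rng.standard_normal(N))  # independent errors

rel = abs(noisy.sum() - terms.sum()) / terms.sum()
print(f"independent errors: {rel:.1e}")   # ~eps/sqrt(N), far below eps

biased = terms * (1 + eps)                # a common systematic bias
rel = abs(biased.sum() - terms.sum()) / terms.sum()
print(f"systematic bias:    {rel:.1e}")   # ~eps, adds linearly
```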
Shor's algorithm starts with the qubits in a superposition of all possible bitstrings. That is the only place we have exponentially small amplitudes at the start (in a particular choice of a basis), and there is no entanglement in that state to begin with.
We do get interesting entangled states after the oracle step, that is true. And it is fair to have a vague sense that entanglement is weird. I just want to be clear that your last point (forgetting about amplitudes, and focusing on the weirdness of entangled qubits) is a gut feeling, not something based in the mathematics that has proven to be a correct description of nature over many orders of magnitude.
Of course, it would be great if it turns out that quantum mechanics is wrong in some parameter regime -- that would be the most exciting thing in Physics in a century. There is just not much hope it is wrong in this particular way.
2^256 states are comfortably distinct in that many dimensions with amplitude ~1. Their distinctness is entirely in direction.
The obvious parallels to vector embeddings and high-dimensional tensor properties have some groups working out how to combine them in "quantum AI", and because that doesn't require the same precision (trained neural nets still work usefully after heavy quantization and noise), quantum AI might arrive before regular quantum computation, and might be feasible even if the latter is not.
https://en.wikipedia.org/wiki/Quantum_computational_chemistr...
The video is essentially an argument from the software side (ironically she thinks the hardware side is going pretty well). Even if the hardware wasn't so hard to build or scale, there are surprisingly few problems where quantum algorithms have turned out to be useful.
It's a good reason to implement post-quantum cryptography.
They would hack random, long-unused dead addresses holding 5-figure amounts and slowly convert those to money. They would eventually start to significantly lower the value, and would eventually crash bitcoin if too greedy, but they could get filthy rich.
> I’m going to close this post with a warning. When Frisch and Peierls wrote their now-famous memo in March 1940, estimating the mass of Uranium-235 that would be needed for a fission bomb, they didn’t publish it in a journal, but communicated the result through military channels only. As recently as February 1939, Frisch and Meitner had published in Nature their theoretical explanation of recent experiments, showing that the uranium nucleus could fission when bombarded by neutrons. But by 1940, Frisch and Peierls realized that the time for open publication of these matters had passed.
> Similarly, at some point, the people doing detailed estimates of how many physical qubits and gates it’ll take to break actually deployed cryptosystems using Shor’s algorithm are going to stop publishing those estimates, if for no other reason than the risk of giving too much information to adversaries. Indeed, for all we know, that point may have been passed already. This is the clearest warning that I can offer in public right now about the urgency of migrating to post-quantum cryptosystems, a process that I’m grateful is already underway.
Does anyone know how much underway it is? Do we need to worry that the switch away from RSA won't be broadly deployed before quantum decryption becomes available?
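It is underway in mainstream stacks: major browsers and TLS libraries have begun deploying hybrid key exchange combining X25519 with ML-KEM. As a sketch of what the post-quantum primitive looks like in code, assuming the liboqs-python bindings (the `oqs` package) are installed:

```python
import oqs  # liboqs-python bindings (assumed installed)

kem_alg = "ML-KEM-768"  # NIST-standardized lattice KEM (FIPS 203)

# Receiver generates a keypair; sender encapsulates a shared secret to it.
with oqs.KeyEncapsulation(kem_alg) as receiver:
    public_key = receiver.generate_keypair()
    with oqs.KeyEncapsulation(kem_alg) as sender:
        ciphertext, secret_sender = sender.encap_secret(public_key)
    secret_receiver = receiver.decap_secret(ciphertext)

print(secret_sender == secret_receiver)  # True: both sides share a key
```

In deployed TLS this typically runs alongside X25519, so a connection stays safe even if one of the two schemes falls.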
This estimate, however, assumes that an interaction can be turned on between any two qubits. In practice, we can only do nearest-neighbour interactions on a square lattice, and we need to simulate the interaction between two arbitrary qubits by repeated application of SWAP gates, shuttling qubits around as in the 15-puzzle. This simulation adds about `n` SWAP gates, which multiplies the noise by the same factor; hence we now need an error rate for logical qubits on a square lattice of around ~n^(-4/3).
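A sketch of that routing overhead, assuming a simple L x L nearest-neighbour grid and naive SWAP-chain routing (no smarter compiler assumed): the cost of one arbitrary two-qubit gate grows with the Manhattan distance between the qubits.

```python
import random

# SWAPs needed to make two qubits on an L x L grid adjacent, routing
# naively along the Manhattan path (each SWAP is itself ~3 CNOTs of noise).
def swap_cost(a: tuple, b: tuple) -> int:
    (r1, c1), (r2, c2) = a, b
    return max(abs(r1 - r2) + abs(c1 - c2) - 1, 0)

L = 32  # a 1024-physical-qubit grid
pairs = [((random.randrange(L), random.randrange(L)),
          (random.randrange(L), random.randrange(L))) for _ in range(10_000)]

avg = sum(swap_cost(a, b) for a, b in pairs) / len(pairs)
print(f"average SWAP overhead per arbitrary two-qubit gate: {avg:.1f}")
```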
Now comes the error correction. The estimates are somewhat hard to make here, as they depend on the sensitivity of the readout mechanism, but for example let's say a 10-bit number can be factored with a logical-qubit error rate of 10^{-5}. Then we apply a surface code that scales exponentially, reducing the error rate by a factor of 10 for every 10 physical qubits, which we could express as ~1/10^{m/10}, where m is the number of physical qubits (which is rather optimistic). Putting in the numbers, it would follow that we need about 40 physical qubits per logical qubit, hence roughly 400k physical qubits in total.
That may sound reasonable, but we have also assumed that, while the individual physical qubits are being manipulated, the qubits waiting for their turn do not decohere. This, in fact, scales poorly with the number of qubits on the chip, because physical constraints limit the number of coaxial cables that can be attached; hence multiplexing of control signals, and hence qubit waiting, is unavoidable. This waiting is even more pronounced in the quantum computer cluster proposals that surface from time to time.
[1]: https://link.springer.com/article/10.1007/s11432-023-3961-3
The error correction milestone matters because it's the gate to scaling. Previous quantum systems had error rates that increased faster than you could add qubits, making large-scale quantum computing impossible. If Willow actually demonstrates below-threshold error rates at scale (I'd want independent verification), that unblocks the path to 1000+ logical qubit systems. But we're still probably 5-7 years from "useful quantum advantage" on problems like drug discovery or materials simulation.
The economic argument is underrated. Even if quantum computers achieve theoretical advantage, they need to beat rapidly improving classical algorithms running on cheaper hardware. Every year we delay, classical GPUs get faster and quantum algorithms get optimized for near-term noisy hardware. The crossover point might be narrower than people expect.
What I find fascinating is the potential for hybrid classical-quantum algorithms where quantum computers handle specific subroutines (like sampling from complex distributions or solving linear algebra problems) while classical computers do pre/post-processing. That's probably the first commercial application - not replacing classical computers entirely but augmenting them for specific bottlenecks. Imagine a drug discovery pipeline where the 3D protein folding simulation runs on quantum hardware but everything else is classical.
QC is not a panacea. There are a handful of algorithms for problems in BQP \ P, and most of those aren't really used in tasks I would imagine the average person frequently engaging in. Simultaneously, quantum computers necessarily have complications that classical computers lack. Combined, I doubt people will ever be using purely quantum computers.
Once someone makes a widget that extracts an RSA payload, their government will seize, spend & scale.
They will try to keep it quiet, but they will start a spending spree that will be visible from space.
I have a degree in chemistry from that institution, and don't have a clue what this means beyond the $1,000,000,000 economic impact this facility is supposed to make upon our fair city, over the next decade.
[•] <https://quantumzeitgeist.com/vanderbilt-university-quantum-q...>
[0] In partnership with our government-subsidized fiber network, EPB
The fact that error correction seems to be struggling implies unaccounted-for noise that is not heat. Who knows, maybe gravitational waves wreck your setup no matter what you do!