Quantum Error Correction Goes Foom
Key topics
The provocative title "Quantum Error Correction Goes FOOM" sparked a lively debate about the implications of a breakthrough in quantum computing, with commenters scrambling to decipher the enigmatic "FOOM" reference, variously interpreting it as a nod to AI singularity, a Pentium bug, or even explosive ignition. As the discussion unfolded, the author clarified that the title alluded to a pattern of rapid progress in quantum error correction, with some commenters expressing skepticism about the potential for a huge leap in logical qubits. The conversation also touched on the limitations of quantum computing, with some commenters highlighting that quantum speed still matters and that even an unbounded array of qubits can't solve every problem instantly. Amidst the discussion, a consensus emerged that quantum computers are not a panacea, and their capabilities are still narrowly defined.
Snapshot generated from the HN discussion
Discussion Activity
- Engagement: moderate
- First comment: 2h after posting
- Peak period: 7 comments in 0-6h
- Avg / period: 4 comments

Based on 24 loaded comments
Key moments
- Story posted: Dec 25, 2025 at 4:18 AM EST (17 days ago)
- First comment: Dec 25, 2025 at 6:46 AM EST (2h after posting)
- Peak activity: 7 comments in 0-6h (hottest window of the conversation)
- Latest activity: Dec 27, 2025 at 4:47 PM EST (14 days ago)
https://en.wikipedia.org/wiki/Pentium_F00F_bug
https://www.lesswrong.com/posts/tjH8XPxAnr6JRbh7k/hard-takeo...
Initialize the coded bit, using a 59-qubit repetition code that corrects bit flips but not phase errors, in IPython:
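The original IPython snippet isn't preserved in this snapshot; a minimal sketch of the encoding step it describes might look like this (the variable names are assumptions, not the commenter's code):

    # Repetition-code "encoding": copy the logical bit into 59 physical bits,
    # one per DRAM cell. Majority vote can then correct bit flips, but this
    # says nothing about phase errors.
    logical_bit = 1
    coded_bit = [logical_bit] * 59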
Write a decoder: Wait two hours [0]. I'll be lazy and only decode at the end of the two hours, but if I wanted error correction to get the full advantage, I would periodically run the error correction algorithm and fix detected errors. Here's the decoding! Holy cow, it worked!

I'm being rather tongue-in-cheek here, of course. But it's genuinely impressive that my laptop can stick 59 bits into DRAM cells containing a handful of electrons each, and all of them are just fine after several hours. And it's really really impressive that this research group got their superconducting qubits to store classical states well enough that their rather fancy error correcting device could keep up and preserve the logical state for two hours. [1]
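The decode step, again as a sketch rather than the comment's actual code (a simple majority vote over the 59 cells, reusing the names from the sketch above):

    # Majority-vote "decoder": the logical bit is whatever value most of the
    # 59 physical bits hold, so up to 29 bit flips are tolerated.
    decoded = int(sum(coded_bit) > len(coded_bit) // 2)
    assert decoded == logical_bit  # "Holy cow, it worked!"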
But this isn't quantum error correction going FOOM, per se. It's classical. A bit-flip-corrected but not phase-flip-corrected qubit is precisely a classical bit, no more, no less.
The authors did also demonstrate that they could do the same trick correcting phase flips and not bit flips, but that's a tiny bit like turning the experiment on its side and getting the same result. Combining both demonstrations is impressive, though -- regardless of whether you look at the DRAM cells in my laptop as though the level is the Z basis or the X basis, they only work in one single basis. You cannot swap the role of level and phase in DRAM and get it to still work. But the researchers did pull that off on their two-hour-half-life device, and I find that quite impressive, and the fact that it worked strongly suggests that their device is genuinely 59 qubits, whereas no one could credibly argue that my laptop contains giga-qubits of DRAM. Fundamentally, you can do classical error correction with a repetition code, but you cannot do quantum computation with it. You need fancier, and more sensitive-to-errors, codes for that, and that's what the second half of the article is about.
[0] I didn't actually wait two hours. But I could have waited a week and gotten the same result.
[1] The researchers' qubits are nowhere near as good as my DRAM. They had to run their error correction a billion times or so during the course of their two hours. (My DRAM refreshes quite a few times over the course of two hours, and one can look at DRAM refreshes as correcting something a bit like a repetition code.)
This is perhaps not clear enough, but the title refers to a pattern. For classical bits on a quantum computer this pattern is already playing out (as shown in the cited experiments), and for quantum bits I think it's about to play out.
I should maybe also mention that arbitrarily good qubits are a step on the road, not the end. I've seen a few twitter takes making that incorrect extrapolation. We'll still need hundreds of these logical qubits. It's conceivable that quantity also jumps suddenly... but that'd require even more complex block codes to start working (not just surface codes). I'm way less sure if that will happen in the next five years.
A factor of 1000 may well be the difference between destroying Shor’s-algorithm-prone cryptography and destroying it later, though.
Quantum computers are different. It seems quite unlikely that anyone will build the equivalent of, say, an XOR gate that takes two single physical qubits in and spits out two physical qubits (this is quantum land -- the standard gates neither create nor destroy qubits, so the number of inputs and outputs is the same) that works well enough to actually represent that particular operation in whatever software is being run. Instead, each logical operation will turn into multiple physical operations that work like an error correcting code. The easy classical trick where your transistor is janky at 0.9V so you run it at 1.0V amounts to moving more electrons around per operation; that approach is analogous to correcting bit flips but not phase errors, and it makes your quantum computer stop being quantum.
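A toy classical sketch of "each logical operation turns into multiple physical operations" (the 3-bit repetition code and the bitwise logical XOR below are illustrative assumptions, not the constructions discussed in the article):

    def encode(bit, n=3):
        # Repetition-code encoding: copy the logical bit into n physical bits.
        return [bit] * n

    def decode(physical):
        # Majority-vote decoding.
        return int(sum(physical) > len(physical) // 2)

    def logical_xor(a, b):
        # One logical XOR becomes n physical XORs, one per copy.
        return [x ^ y for x, y in zip(a, b)]

    a, b = encode(1), encode(0)
    a[1] ^= 1                                # a single physical bit flip on one copy of a
    assert decode(logical_xor(a, b)) == 1    # majority vote still recovers 1 ^ 0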
And here's where it gets messy. The physical qubit technologies that are best for longish-term data storage may not be the same as the technologies that are good for computation, and those may not be the same technologies that are good for communication at a distance. (For example, photons are pretty good for transmitting quantum states, but transferring a qubit from a different technology to a photon state and back is not so easy, and demonstrations of computation with photons have been pretty limited.) As an extreme example, one can, in principle, store quantum states quite robustly and even compute with them if one can find the correct kind of unobtanium (materials with the appropriate type of non-Abelian anyon surface states), but, last I heard, no one had much of an idea how to get the qubit states off the chip even if such a chip existed.
So it is possible that we'll end up with a quantum computer that doesn't scale, at least for a while. There might be 20k physical qubits, and some code rate, and some number of logical quantum operations you can do on the logical qubits before they decay or you get bored, and very little ability to scale to more than one computer that can split up a computation between them. In that case, the code rate is a big deal.
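A quick illustration of why the code rate dominates at that scale (the overheads below are made-up round numbers, not figures from the article or the thread):

    physical_qubits = 20_000                    # the hypothetical machine above
    for physical_per_logical in (100, 500, 1_000):
        logical = physical_qubits // physical_per_logical
        print(f"{physical_per_logical} physical per logical -> {logical} logical qubits")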
Not GP but yes. I'm reasonably confident that we will have quantum computers that are large and stable enough to have a real quantum advantage, but that's mostly because I believe Moore's law is truly dead and we will see a plateau in 'classical' CPU advancement and memory densities.
> I feel like it’s going to get exponentially harder and expensive to get very small incremental gains and that actually beating a classical computer isn’t necessarily feasible (because of all the error correction involved and difficulty in manufacturing a computer with large number of qbits)
I don't think people appreciate or realize that a good chunk of the innovations necessary to "get there" with quantum are traditional (albeit specialized) engineering problems, not new research (though breakthroughs can speed it up). I'm a much bigger fan of the "poking lasers at atoms" style of quantum computer than the superconducting ones for this reason: the engineering is more like building cleaner lasers and better AOMs [0] than trying to figure out how to super-cool vats of silicon and copper. It's outside my area of expertise, but I would expect innovations that support better lithography to also benefit these types of systems, though less directly than the superconducting ones.
[0]: https://en.wikipedia.org/wiki/Acousto-optic_modulator
Classical CPUs have slowed but not stopped. More importantly, quantum machines haven't even been built yet, let alone been proven possible to scale up arbitrarily. They haven't even demonstrated that they can factor 17 faster than a classical computer.
I would have thought a wide enough array of qubits could functionally do "anything" in one shot
A well studied example is that it's impossible to parallelize the steps in Grover's algorithm. To find a preimage amongst N possibilities, with only black box access, you need Ω(sqrt(N)) sequential steps on the quantum computer [1].
Another well known case is that there's no known way to execute a fault tolerant quantum circuit faster than its reaction depth (other than finding a rewrite that reduces the depth, such as replacing a ripple carry adder with a carry lookahead adder) [2]. There's no known way to make the reaction depth small in general.
Another example is GCD (greatest common divisor). It's conjectured to be an inherently sequential problem (no polylog depth classical circuit) and there's no known quantum circuit for GCD with lower depth than the classical circuits.
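As a rough numerical illustration of the Grover bound mentioned above (the problem size and machine counts are arbitrary; the ~(π/4)·√N iteration count is the standard estimate):

    import math

    def grover_iterations(n_candidates):
        # Optimal number of *sequential* Grover iterations for an unstructured
        # search over n_candidates items: roughly (pi/4) * sqrt(N).
        return max(1, round(math.pi / 4 * math.sqrt(n_candidates)))

    N = 2**40                       # e.g. a 40-bit preimage search
    baseline = grover_iterations(N)
    # Splitting the search over k machines only shrinks each machine's search
    # space to N/k, so each one still needs ~sqrt(N/k) sequential iterations:
    for k in (1, 100, 10_000):
        per_machine = grover_iterations(N // k)
        print(f"k={k:>6}: {per_machine:>8} sequential iterations "
              f"(~{baseline / per_machine:.0f}x speedup, not {k}x)")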
[1]: https://arxiv.org/abs/quant-ph/9711070
[2]: https://arxiv.org/abs/1210.4626