Quantum Bayes' Rule and Petz Transpose Map from the Minimum Change Principle
Original: Quantum Bayes' rule and Petz transpose map from the minimum change principle (arxiv.org)
Topics: Physics, Bayesian Inference, Mathematical Physics
Posted Aug 30, 2025 at 8:41 AM EDT; first comment 9 minutes after posting; peak of 4 comments in the first hour; latest activity 9:29 AM EDT the same day.
I feel like Bayes' rule is oversold though.
Is Bayes' rule alone good enough for fighting spam email, for example?
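For reference, the classic "just Bayes' rule" spam approach is a naive Bayes filter. A minimal sketch, assuming a toy corpus, whitespace tokenization, and Laplace smoothing (all illustrative choices, not any particular production filter):

    import math
    from collections import Counter

    # Toy training corpora (assumptions for illustration).
    spam = ["win cash now", "cheap pills win big", "free cash offer"]
    ham = ["meeting at noon", "project status update", "lunch at noon"]

    def word_counts(docs):
        counts = Counter()
        for doc in docs:
            counts.update(doc.lower().split())
        return counts

    spam_counts, ham_counts = word_counts(spam), word_counts(ham)
    vocab = set(spam_counts) | set(ham_counts)

    def log_likelihood(msg, counts):
        total = sum(counts.values())
        # Laplace smoothing: every vocabulary word gets a pseudo-count of 1.
        return sum(math.log((counts[w] + 1) / (total + len(vocab)))
                   for w in msg.lower().split())

    def p_spam(msg, prior=0.5):
        # Bayes' rule in log space: combine prior and per-class likelihoods.
        log_spam = math.log(prior) + log_likelihood(msg, spam_counts)
        log_ham = math.log(1 - prior) + log_likelihood(msg, ham_counts)
        return 1 / (1 + math.exp(log_ham - log_spam))

    print(p_spam("win free cash"))    # close to 1
    print(p_spam("status at noon"))   # close to 0

The word-independence assumption baked into that product of likelihoods is exactly the kind of simplification the questions below poke at.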
How large a Bayesian belief network would be necessary to infer the equations of n-body gravity in a fluid, without other fields?
How large a Bayesian belief network would be necessary to extrapolate the motions of the planets?
Then also predict, accounting for resource costs, the perihelion precession of Mercury: the deviation in Mercury's orbit predicted by General Relativity, and also the Gross-Pitaevskii equation, which describes turbulent vortical fluids.
Then, with Bayesian methods or a Bayesian belief network, also predict the outcomes of (fluidic, nonlinear) n-body gravity experiments; a toy sketch of the smallest version of such an inference follows.
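To give the question a floor, here is a toy sketch of Bayesian inference of a single gravitational parameter GM from simulated noisy (period, semi-major-axis) observations via Kepler's third law T² = 4π²a³/GM; the data, 2% noise level, flat prior, and grid are all illustrative assumptions, and a belief network for n-body gravity in a fluid would be enormously larger:

    import numpy as np

    rng = np.random.default_rng(0)
    GM_true = 1.0                       # code units (assumption)
    a = rng.uniform(0.5, 5.0, size=8)   # simulated semi-major axes
    T = 2 * np.pi * np.sqrt(a**3 / GM_true)                 # Kepler's third law
    T_obs = T * (1 + 0.02 * rng.standard_normal(T.shape))   # 2% observation noise

    # Grid posterior over GM with a flat prior and Gaussian likelihood.
    GM_grid = np.linspace(0.5, 1.5, 1001)
    T_pred = 2 * np.pi * np.sqrt(a[None, :]**3 / GM_grid[:, None])
    log_like = -0.5 * np.sum(((T_obs - T_pred) / (0.02 * T))**2, axis=1)
    post = np.exp(log_like - log_like.max())
    post /= post.sum()

    print("posterior mean GM:", np.sum(GM_grid * post))  # close to 1.0

One scalar parameter takes a 1001-point grid; how that scales to a network over full nonlinear dynamics is the open question being asked.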
Do Bayesian models converge at lowest cost given randomly initialized, arbitrary priors? Do Bayesian models converge at lowest cost when describing nonlinear complex adaptive systems?
How do Bayesian methods compare to other methods for function approximation and nonlinear function approximation?
How do quantum Bayesian methods compare to other methods for function approximation and nonlinear function approximation?
Self-attention in transformer networks is decidedly not Bayesian, but self-attention doesn't model truth or truthiness either.
Transformer self-attention only models frequency of observation in the sample, not truthiness.
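To make that concrete, here is a minimal numpy sketch of scaled dot-product self-attention (the dimensions and random weights are arbitrary assumptions): the output is a softmax-weighted average of value vectors, i.e., similarity weighting over the sample, with no prior, no likelihood, and no posterior update over hypotheses:

    import numpy as np

    def self_attention(X, Wq, Wk, Wv):
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        scores = Q @ K.T / np.sqrt(K.shape[-1])          # pairwise similarities
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
        return weights @ V                               # weighted average, not Bayes

    rng = np.random.default_rng(0)
    X = rng.standard_normal((4, 8))                      # 4 tokens, 8-dim embeddings
    Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
    print(self_attention(X, Wq, Wk, Wv).shape)           # (4, 8)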
"LightGBM Predict on Pandas DataFrame – Column Order Matters" (2025) https://news.ycombinator.com/item?id=43088854 :
> [ LightGBM,] does not converge regardless of feature order.
> From https://news.ycombinator.com/item?id=41873650 :
>> Do algorithmic outputs diverge or converge given variance in sequence order of all orthogonal axes? Does it matter which order the dimensions are stated in; is the output sensitive to feature order, but does it converge regardless?
> Also, current LLMs suggest that statistical independence is entirely distinct from orthogonality, which we typically assume with high-dimensional problems. And, many statistical models do not work with non-independent features.
> Does this model work with non-independence or nonlinearity?
> Does the order of the columns in the training data CSV change the alpha of the model; does model output converge regardless of variance in the order of training data?
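One way to operationalize that last question is a permutation test: train the same model on column-permuted data and check whether predictions agree once the permutation is applied consistently. A minimal sketch using ordinary least squares as a stand-in model (the data and the choice of OLS are assumptions; OLS is exactly permutation-invariant, which is the convergence property the quoted thread asks of models like LightGBM):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(200)

    # Fit once on the original column order, once on a permuted order.
    perm = rng.permutation(5)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    wp, *_ = np.linalg.lstsq(X[:, perm], y, rcond=None)

    # Predictions must match when test columns are permuted the same way.
    X_test = rng.standard_normal((10, 5))
    print(np.allclose(X_test @ w, X_test[:, perm] @ wp))  # True: order-invariant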
From https://news.ycombinator.com/item?id=37462132 :
> [ quantum discord ]
> TIL the separable states problem is considered NP-hard, and many models specify independence of observation as necessary.
How does (NP-hard) quantum separability relate to statistical independence as necessary for statistical models to be appropriate?
If it is so hard to determine which particles are and aren't entangled, when should we assume statistical independence of observation?
If we cannot assume statistical independence of observations, then Bayesian models that rely on that assumption aren't appropriate.
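On the classical side, one practical check before assuming independence is to estimate the mutual information between observed variables; zero MI is necessary for independence, and clearly nonzero MI rules it out. A minimal histogram-based sketch with deliberately dependent simulated data (the data, bin count, and plug-in estimator are all illustrative assumptions):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(10_000)
    y = 0.8 * x + 0.6 * rng.standard_normal(10_000)   # deliberately dependent

    # Empirical joint and marginal distributions from a 2-D histogram.
    pxy, _, _ = np.histogram2d(x, y, bins=20)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)

    # Plug-in mutual information estimate in nats; ~0 is consistent with independence.
    mask = pxy > 0
    mi = np.sum(pxy[mask] * np.log(pxy[mask] / (px[:, None] * py[None, :])[mask]))
    print(f"estimated MI: {mi:.3f} nats")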
> In quantum information theory, a mix of quantum mechanics and information theory, the Petz recovery map can be thought of as a quantum analog of Bayes' theorem.
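For concreteness, here is a numerical sketch of the Petz transpose (recovery) map for a single qubit under an amplitude-damping channel; both the channel and the prior state σ are illustrative assumptions, not the paper's worked example. The map is R(ρ) = σ^{1/2} N†(N(σ)^{-1/2} ρ N(σ)^{-1/2}) σ^{1/2}, and the check below verifies its defining reverse-inference property R(N(σ)) = σ:

    import numpy as np

    def herm_pow(A, p):
        # Hermitian matrix power via eigendecomposition.
        w, v = np.linalg.eigh(A)
        return (v * w**p) @ v.conj().T

    gamma = 0.3  # damping probability (assumption)
    kraus = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),
             np.array([[0, np.sqrt(gamma)], [0, 0]])]

    def channel(rho):    # N(rho) = sum_i K_i rho K_i^dag
        return sum(K @ rho @ K.conj().T for K in kraus)

    def adjoint(X):      # N^dag(X) = sum_i K_i^dag X K_i
        return sum(K.conj().T @ X @ K for K in kraus)

    sigma = np.array([[0.7, 0.1], [0.1, 0.3]])   # full-rank prior state
    s_half = herm_pow(sigma, 0.5)
    ns_inv_half = herm_pow(channel(sigma), -0.5)

    def petz(rho):
        return s_half @ adjoint(ns_inv_half @ rho @ ns_inv_half) @ s_half

    print(np.allclose(petz(channel(sigma)), sigma))  # True: the prior is recovered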