Generalized Orders of Magnitude
Posted 3 months ago · Active 3 months ago
Key topics
- Mathematics
- Scientific Notation
- Orders of Magnitude
A GitHub repository and accompanying arXiv paper introduce a generalized concept of orders of magnitude, sparking discussion on its potential applications and implications for scientific notation.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
- First comment: 51m after posting
- Peak period: 9 comments (Day 8)
- Avg / period: 3 comments
Comment distribution: 12 data points (based on 12 loaded comments)
Key moments
- 01 Story posted: Oct 9, 2025 at 9:08 AM EDT (3 months ago)
- 02 First comment: Oct 9, 2025 at 9:59 AM EDT (51m after posting)
- 03 Peak activity: 9 comments in Day 8 (hottest window of the conversation)
- 04 Latest activity: Oct 18, 2025 at 3:56 PM EDT (3 months ago)
ID: 45527191 · Type: story · Last synced: 11/20/2025, 2:38:27 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
https://github.com/glassroom/generalized_orders_of_magnitude
I can see how it could be useful when you really need it. Thank you for sharing it on HN.
I tried the sample code for estimating Lyapunov exponents in parallel. It worked on the first try, and it was much faster than existing methods, as advertised. It's nice to come across something that works as advertised on the first try!
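For readers who haven't seen the underlying trick, here is a minimal, self-contained sketch — plain NumPy, not the repository's API, and a serial loop rather than the paper's parallel scan. A Lyapunov exponent is an average of log-magnitudes of derivatives along a trajectory, so accumulating in log space avoids the overflow and underflow that a naive running product of derivatives would hit almost immediately:

```python
import numpy as np

def lyapunov_logistic(r=4.0, x0=0.2, n=100_000, burn=1_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).

    Multiplying |f'(x_t)| directly would overflow or underflow quickly;
    accumulating log|f'(x_t)| instead is the log-space trick that
    generalized orders of magnitude extend.
    """
    x = x0
    for _ in range(burn):                       # discard the transient
        x = r * x * (1.0 - x)
    logs = np.empty(n)
    for t in range(n):
        logs[t] = np.log(abs(r - 2.0 * r * x))  # log|f'(x_t)|, f'(x) = r - 2*r*x
        x = r * x * (1.0 - x)
    # The cumulative sum of `logs` is an associative prefix scan, which is
    # what makes parallel-in-time estimation possible.
    return logs.mean()

print(lyapunov_logistic())  # should approach ln 2 ≈ 0.693 for r = 4
```

For r = 4 the exact value is ln 2, which a run of this length should approximate to a couple of decimal places.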
The high-dynamic-range RNN stuff may be interesting to others, but it's not for me. In my book, Transformers have won. Nowadays it's easy to whip up a small Transformer with a few lines of Python, and it will work well on anything you throw at it.
https://github.com/cjdoris/LogarithmicNumbers.jl
or
https://github.com/cjdoris/HugeNumbers.jl
(Apart from the PyTorch impl)
In particular, storing the complete complex number feels a bit silly: we know, a priori, that the imaginary part exponentiates to ±1, so wouldn't that mean we have wasted 31 bits? (= 32 − 1, since only one bit is needed for the sign.)
That said, this representation is of course very useful in certain scenarios, when you know the dynamic range of your numbers is very large; but as far as I can tell, it's not exactly novel, unless I'm missing something!
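A quick NumPy check makes the point concrete: for a real input, the imaginary part of the complex logarithm is always 0 or π — exactly one bit of information — while the real part carries the log-magnitude:

```python
import numpy as np

x = np.array([-3.0, 2.0, -0.5])
clog = np.log(x.astype(complex))  # real part: log|x|; imag part: 0 or pi
print(clog.imag / np.pi)          # one bit per number: 1, 0, 1
print(np.exp(clog).real)          # exponentiating recovers the signed values
```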
The formal definition stops short of inducing an isomorphism between GOOMs and R, to allow for the possibility of transformations that leverage the structure of the complex plane, e.g., deep learning models that process data in C and apply a final transformation from C to GOOMs, thereby allowing the data to be exponentiated to R. The library in this repository makes implementing such a model trivial, because it ensures that backpropagation works seamlessly over C, over GOOMs, and across mappings between C, GOOMs, and floats.
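As a concrete illustration of that last mapping — a NumPy-only sketch under my own naming, not this library's API, and with no autograd — one can accumulate a long signed product in complex log space, where float64 would underflow to zero, and then read off the sign and the order of magnitude when exponentiating back toward R:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=2_000) * 1e-3  # 2,000 signed, tiny factors

naive = np.prod(x)                     # float64 underflows: the product is 0.0
g = np.sum(np.log(x.astype(complex)))  # accumulate in complex log space instead
log10_mag = g.real / np.log(10.0)      # order of magnitude of |prod(x)|
sign = np.sign(np.cos(g.imag))         # Im(g) is a multiple of pi, so exp(i*Im) = +/-1
print(naive, sign, log10_mag)
```

Here the product's magnitude sits thousands of orders of magnitude below float64's range, so only its logarithm and its sign, not the product itself, survive the trip back to floats.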
Take a look at the selective-resetting algorithm in the manuscript too. To the best of our knowledge, it's a new algorithm, but we opted not to claim as much, out of an abundance of caution. You will appreciate reading about it.
I'll take a peek at the algorithm, which I admittedly skipped, but I'm curious whether it gains us something!