Comparing Integers and Doubles
Posted about 2 months ago · Active about 1 month ago
databasearchitects.blogspot.com · Tech · story
Tone: calm, neutral · Debate intensity: 20/100
Key topics
- Floating Point Comparison
- Programming Best Practices
- Data Types
The post compares integers and doubles, sparking a discussion on proper comparison techniques for floating point numbers.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement · First comment: 5h after posting · Peak period: 10 comments in 168-180h · Avg per period: 4
[Comment distribution chart: 12 data points, based on 12 loaded comments]
Key moments
- Story posted: Nov 12, 2025 at 1:16 PM EST (about 2 months ago)
- First comment: Nov 12, 2025 at 6:30 PM EST (5h after posting)
- Peak activity: 10 comments in the 168-180h window, the hottest stretch of the conversation
- Latest activity: Nov 20, 2025 at 1:21 AM EST (about 1 month ago)
ID: 45903668 · Type: story · Last synced: 11/20/2025, 2:43:42 PM
I had the impression that the usual way to compare floats is to define a precision and check for -p < (a - b) < p. In this case 0.99997 - 1.0002 = -0.00023, which correctly tells us that the two numbers are equal at 0.001 precision and unequal at 0.0001.
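A minimal Java sketch of that absolute-tolerance test (the helper name and signature are mine, not from the thread):

    // Absolute-tolerance comparison: equivalent to -eps < (a - b) < eps.
    static boolean approxEquals(double a, double b, double eps) {
        return Math.abs(a - b) < eps;
    }

    // approxEquals(0.99997, 1.0002, 0.001)  -> true   (|a - b| = 0.00023)
    // approxEquals(0.99997, 1.0002, 0.0001) -> false

One caveat: an absolute epsilon only behaves well when the inputs share a known magnitude; for values of wildly different scales a relative tolerance is the more common choice.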
The only reasonable way to compare rationals is the decimal expansion of the string.
https://en.wikipedia.org/wiki/Floating-point_arithmetic#Accu...
Why decimal? I don't see why any other integer base wouldn't work, and, on just about any system, base 2^n for any n > 0 will be both easier to implement and faster to run.
And that, more or less, is what the suggested solution does. It first compares at 53-bit (double) precision and, if that's not conclusive, it compares all 64 bits.
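A hedged Java sketch of that two-step comparison as I understand it from the comment (the method name, NaN policy, and exact structure are my own; the blog's actual code may differ):

    // Exact ordering of a long against a double: first at 53-bit
    // (double) precision, then, on a tie, at full 64-bit precision.
    static int compareLongDouble(long x, double y) {
        if (Double.isNaN(y)) return -1;   // arbitrary policy: NaN sorts above all longs
        double xd = (double) x;           // rounds x to the nearest double (53 bits)
        if (xd < y) return -1;
        if (xd > y) return 1;
        // Tie at double precision: y == (double) x, so y is integral and
        // lies in [-2^63, 2^63]. Resolve the remaining bits as integers.
        if (y >= 0x1p63) return -1;       // y == 2^63, larger than any long
        return Long.compare(x, (long) y); // exact: y is integral and in range
    }

The first two branches settle almost all cases; the integer fallback only runs when the double lands exactly on the rounded value of x.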
Also, of course, if your number has more than n bits, you’d only generate digits until you know the answer.
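To make the digit-generation idea concrete, here is a sketch of a base-2^16 comparison (entirely my own construction, not code from the article or the thread). The up-front cross-multiplication matters: equal rationals with non-terminating expansions would never produce a differing digit, so without it the loop would not terminate.

    import java.math.BigInteger;

    // Compare p1/q1 with p2/q2 (all non-negative, q1, q2 > 0) by emitting
    // base-2^16 digits; distinct rationals differ after finitely many digits.
    static int compareDigitwise(BigInteger p1, BigInteger q1,
                                BigInteger p2, BigInteger q2) {
        // p1/q1 == p2/q2  iff  p1*q2 == p2*q1; settle equality first.
        if (p1.multiply(q2).equals(p2.multiply(q1))) return 0;
        BigInteger base = BigInteger.valueOf(1L << 16);
        while (true) {
            BigInteger[] a = p1.divideAndRemainder(q1); // {digit, remainder}
            BigInteger[] b = p2.divideAndRemainder(q2);
            int cmp = a[0].compareTo(b[0]);
            if (cmp != 0) return cmp;
            p1 = a[1].multiply(base); // shift remainder up one digit
            p2 = b[1].multiply(base);
        }
    }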
Careful, someone is liable to throw this in an LLM prompt and get back code expanding the ASCII characters for string values like "1/346".
This was one of the bigger hidden performance issues when I was working on Hive: the default coercion goes to Double, which has a bad hash code implementation [1] and causes joins to cluster and chain, so every miss on the hashtable had to probe that many slots away from the original index.
The hashCode itself was smeared so that values within machine epsilon of each other would hash to the same bucket, letting .equals do its join, but all of this really messed things up for the folks who needed 22-digit numeric keys (eventually the Decimal implementation handled it by adding a big fixed integer).
In databases, Double join keys were one of the red flags in a SQL query: if you see one, someone probably messed something up.
[1] - https://issues.apache.org/jira/browse/HADOOP-12217
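The clustering described above is easy to reproduce in plain Java: Double.hashCode() is specified as (int)(bits ^ (bits >>> 32)) over the IEEE-754 bit pattern, and small integral doubles have all-zero low-order bits, so a power-of-two bucket count sends every one of them to bucket 0 (the table size of 1024 below is my own pick for illustration):

    public class DoubleHashDemo {
        public static void main(String[] args) {
            for (double d = 1.0; d <= 8.0; d += 1.0) {
                long bits = Double.doubleToLongBits(d);
                int h = (int) (bits ^ (bits >>> 32)); // == Double.hashCode(d)
                // The low 32 bits of `bits` are zero for small integral
                // doubles, so h keeps its low bits zero and h & (size - 1) == 0.
                System.out.printf("%4.1f  hash=%08x  bucket=%d%n", d, h, h & 1023);
            }
        }
    }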
1 more comment available on Hacker News