Hyb Error: a Hybrid Metric Combining Absolute and Relative Errors (2024)
Posted 4 months ago · Active 4 months ago
arxiv.org · Research · story
Tone: calm, mixed · Debate: 20/100
Key topics
Error Metrics
Machine Learning
Statistics
The paper 'Hyb Error: A Hybrid Metric Combining Absolute and Relative Errors' proposes a new error metric, sparking discussion on its limitations and interpretability compared to traditional absolute and relative error metrics.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 3h after posting
Peak period: 1 comment in the 2-3h window
Avg per period: 1
Key moments
1. Story posted: Sep 23, 2025 at 6:30 AM EDT (4 months ago)
2. First comment: Sep 23, 2025 at 9:16 AM EDT (3h after posting)
3. Peak activity: 1 comment in the 2-3h window, the hottest window of the conversation
4. Latest activity: Sep 23, 2025 at 9:35 AM EDT (4 months ago)
ID: 45345148 · Type: story · Last synced: 11/20/2025, 6:24:41 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
In any case, what I usually want from an error metric is a clear interpretation of what it means, apart from just looking nice. Absolute error is good for measurements where the major error sources are independent of the value, while relative error is good for accuracy loss in floating-point arithmetic (though it gets a bit involved with catastrophic cancellation, where you want to take everything relative to the original input scale). Without a principled reason to do so, I wouldn't want to clump together absolute and relative thresholds and distort their meaning like this.
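The distinction the commenter draws can be made concrete with a small sketch. The hybrid acceptance test below follows the familiar `max(rel_tol * |ref|, abs_tol)` pattern (the one used by Python's `math.isclose`); it is shown only to illustrate how an absolute and a relative threshold get blended, not as the metric proposed in the paper, and the tolerance values are arbitrary placeholders:

```python
def abs_err(x, ref):
    """Absolute error: meaningful when the noise is independent of magnitude."""
    return abs(x - ref)

def rel_err(x, ref):
    """Relative error: meaningful for floating-point rounding, which scales with |ref|."""
    return abs(x - ref) / abs(ref)

def hybrid_close(x, ref, rel_tol=1e-9, abs_tol=1e-12):
    """Blended threshold: passes if EITHER the relative or the absolute
    tolerance is satisfied. Near zero the absolute term decides; at large
    magnitudes the relative term does, so a single pass/fail loses the
    clear interpretation each threshold has on its own."""
    return abs(x - ref) <= max(rel_tol * abs(ref), abs_tol)

# Near zero, rel_tol * |ref| vanishes, so only abs_tol matters:
print(hybrid_close(1e-13, 0.0))        # absolute term decides: True
# At magnitude ~1, rel_tol dominates abs_tol by three orders:
print(hybrid_close(1.0 + 1e-10, 1.0))  # relative term decides: True
```

The crossover point between the two regimes sits where `rel_tol * |ref| = abs_tol`, i.e. at `|ref| = abs_tol / rel_tol`, which is exactly the kind of implicit scale choice the comment argues needs a principled justification.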