Hierarchical Reasoning Model Outperforms LLMs at Reasoning Tasks
Posted: Aug 27, 2025 at 9:00 AM EDT (4 months ago)
Source: livescience.com (news story)
Key topics: Artificial Intelligence, Machine Learning, Cognitive Architectures
Discussion activity: light; a single comment arrived 10 minutes after posting (Aug 27, 2025 at 9:10 AM EDT) and remains the latest activity.
From the primary analysis at https://arcprize.org/blog/hrm-analysis:
...we made some surprising findings that call into question the prevailing narrative around HRM:
1. The "hierarchical" architecture had minimal performance impact when compared to a similarly sized transformer.
2. However, the relatively under-documented "outer loop" refinement process drove substantial performance, especially at training time.
3. Cross-task transfer learning has limited benefits; most of the performance comes from memorizing solutions to the specific tasks used at evaluation time.
4. Pre-training task augmentation is critical, though only 300 augmentations are needed (not 1K augmentations as reported in the paper). Inference-time task augmentation had limited impact.
Findings 2 & 3 suggest that the paper's approach is fundamentally similar to Liao and Gu's "ARC-AGI without pretraining".
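Finding 2 refers to HRM's outer refinement loop, in which the model is run for several short segments, supervised after each one, and carries its state forward between segments. The sketch below is a minimal illustration of that training pattern under assumed interfaces (an `init_state` helper and an `(input, state) -> (state, logits)` forward signature); it is not the authors' code.

```python
# Minimal sketch of outer-loop refinement with per-segment supervision.
# Illustrative only: init_state and the forward signature are assumptions,
# not the HRM authors' actual interfaces.
import torch
import torch.nn as nn


def train_step(model: nn.Module,
               x: torch.Tensor,
               y: torch.Tensor,
               optimizer: torch.optim.Optimizer,
               num_segments: int = 4) -> float:
    state = model.init_state(x)               # assumed helper: initial recurrent state
    total_loss = 0.0
    for _ in range(num_segments):
        state, logits = model(x, state)       # assumed: (input, state) -> (new state, logits)
        loss = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), y.reshape(-1))
        optimizer.zero_grad()
        loss.backward()                       # supervise every segment, not just the last
        optimizer.step()
        state = state.detach()                # refinement continues from the improved state,
                                              # but gradients stop at the segment boundary
        total_loss += loss.item()
    return total_loss / num_segments
```

The point of the pattern is that each extra segment refines the previous answer, which is consistent with the finding that the refinement loop, rather than the two-module hierarchy, carries most of the benefit.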
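Finding 4 concerns augmenting each task before training. For ARC-style colour grids, a common recipe is the eight rotations/reflections combined with random colour permutations; the sketch below shows one way to generate a fixed number of augmented copies per task. The figure of 300 comes from the analysis above, while the helper names and exact recipe here are hypothetical.

```python
# Illustrative ARC-style task augmentation: dihedral transforms (8 rotations/
# reflections) combined with random colour permutations. The helper names and
# exact recipe are assumptions for illustration, not the paper's code.
import numpy as np


def dihedral(grid: np.ndarray, k: int) -> np.ndarray:
    """Return the k-th (0..7) rotation/reflection of a 2-D grid."""
    g = np.rot90(grid, k % 4)
    return np.fliplr(g) if k >= 4 else g


def augment_task(grids, num_augmentations: int = 300,
                 num_colors: int = 10, seed: int = 0):
    """Produce augmented copies of a task's grids (same transform applied to all)."""
    rng = np.random.default_rng(seed)
    augmented = []
    for _ in range(num_augmentations):
        perm = rng.permutation(num_colors)    # random relabelling of the colour palette
        k = int(rng.integers(8))              # one of the 8 dihedral transforms
        augmented.append([perm[dihedral(g, k)] for g in grids])
    return augmented
```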