Solving the Problems of HBM-on-Logic
Posted 23 days ago · Active 17 days ago
Source: morethanmoore.substack.com (story)
Key topics: HBM-on-Logic, Semiconductor Manufacturing, 3D Stacked Integration
Discussion activity: light. First comment 6 days after posting; peak of 2 comments in the 132-144h window; average of 2 comments per period.
Key moments
- Story posted: Dec 17, 2025 at 11:54 AM EST (23 days ago)
- First comment: Dec 23, 2025 at 10:55 AM EST (6 days after posting)
- Peak activity: 2 comments in the 132-144h window, the hottest stretch of the conversation
- Latest activity: Dec 23, 2025 at 3:03 PM EST (17 days ago)
Hacker News story ID: 46302002 · Last synced: 12/23/2025, 4:55:33 PM
From what I understand, in a typical GPU die you put the logic and connectors on one side and inert silicon on the other, so unless you drill through the silicon you don't get shorter routing.
Why not put the GPU on one side of the PCB and the HBM on the other side? Wouldn't that fix the cooling problem?
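A rough back-of-envelope on the routing part of that question (all numbers below are my own ballpark assumptions, not from the article): putting the HBM on the far side of the board still forces every signal through the package substrate, the BGA, and the PCB itself, which is far longer than a through-silicon via in a thinned, stacked die, and the board also cannot fan out the roughly thousand fine-pitch signals each HBM stack needs.

    # Ballpark path lengths (assumed, illustrative only) for three ways of
    # connecting an HBM stack to a GPU.
    um = 1e-6
    mm = 1e-3

    # A: 3D stacking with through-silicon vias in a thinned die.
    tsv_path = 50 * um          # ~50 um of thinned silicon, essentially vertical

    # B: today's 2.5D approach, lateral routing across a silicon interposer.
    interposer_path = 3 * mm    # a few mm from GPU shoreline to HBM shoreline

    # C: GPU on top of the PCB, HBM directly underneath on the bottom side.
    pcb_path = (1.0 * mm        # GPU package substrate
                + 0.5 * mm      # BGA ball height
                + 1.6 * mm      # typical PCB thickness
                + 0.5 * mm)     # HBM package substrate

    for name, length in [("TSV stack", tsv_path),
                         ("2.5D interposer", interposer_path),
                         ("through PCB", pcb_path)]:
        print(f"{name:16s}: {length / um:7.0f} um")

    # Length aside, an HBM stack exposes on the order of 1000 signals at a
    # tens-of-microns bump pitch; BGA/PCB pitches are ~0.4-1.0 mm, so a board
    # cannot escape that many wires under the die shadow anyway.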
Hyperscalers are dealing with a pretty complex Pareto envelope that includes total power, power density, available volume, token throughput, and token latency.
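To make that trade-off concrete, here is a minimal Pareto-front sketch in Python. The design points are entirely made up and the axes are just the ones listed above; the takeaway is that a conventional part and a dense stacked part can both sit on the front without either dominating the other.

    # Minimal Pareto filter over the axes named above (hypothetical numbers).
    from dataclasses import dataclass

    @dataclass
    class Config:
        name: str
        total_power_kw: float          # lower is better
        power_density_kw_per_l: float  # lower is better
        volume_l: float                # lower is better
        tokens_per_s: float            # higher is better
        latency_ms: float              # lower is better

    def dominates(a: Config, b: Config) -> bool:
        """True if a is at least as good as b on every axis and strictly better on at least one."""
        at_least_as_good = (a.total_power_kw <= b.total_power_kw
                            and a.power_density_kw_per_l <= b.power_density_kw_per_l
                            and a.volume_l <= b.volume_l
                            and a.tokens_per_s >= b.tokens_per_s
                            and a.latency_ms <= b.latency_ms)
        strictly_better = (a.total_power_kw < b.total_power_kw
                           or a.power_density_kw_per_l < b.power_density_kw_per_l
                           or a.volume_l < b.volume_l
                           or a.tokens_per_s > b.tokens_per_s
                           or a.latency_ms < b.latency_ms)
        return at_least_as_good and strictly_better

    def pareto_front(configs):
        return [c for c in configs
                if not any(dominates(o, c) for o in configs if o is not c)]

    # Hypothetical design points: a conventional 2.5D part, an underclocked
    # HBM-on-logic stack, and one that is worse on every axis.
    candidates = [
        Config("conventional 2.5D", 1.0, 0.5, 2.0, 100.0, 20.0),
        Config("stacked, underclocked", 0.7, 0.9, 0.8, 80.0, 12.0),
        Config("strictly worse", 1.2, 1.0, 2.5, 70.0, 25.0),
    ]
    print([c.name for c in pareto_front(candidates)])
    # Neither real design dominates the other, which is one reason to expect
    # a heterogeneous mix rather than a single winner.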
My guess is that there's going to be some heterogeneous compute deployed possibly forever, but likely for at least the next six to ten years, and the exotic, fragile, underclocked, highly dense compute imagined in the paper is likely to be part of that. But probably not all of it.
Either way, as a society we'll get the benefits of at least a trillion dollars of R&D and production on silicon, which is great.
Still, it's a good article, and it's nice to see the old AnandTech crew together. The random grammatical errors are still there, but these days they're a reassuring sign that the article was written by hand.