Kioxia's 5TB, 64 GB/s Flash Module Puts NAND Toward the Memory Bus for AI GPUs
Posted 4 months ago · Active 4 months ago
tomshardware.com · Tech · story
Sentiment: calm, positive · Debate: 20/100
Key topics
AI Hardware
Flash Storage
GPU Technology
Kioxia has developed a 5TB, 64 GB/s flash module that could be used to enhance AI GPU performance, sparking discussion about its potential applications beyond GPU usage, such as in databases.
Snapshot generated from the HN discussion
Discussion Activity: light discussion
- First comment: 4m after posting
- Peak period: 1 comment in 0-1h
- Avg per period: 1
Key moments
1. Story posted: Aug 24, 2025 at 12:11 AM EDT (4 months ago)
2. First comment: Aug 24, 2025 at 12:15 AM EDT (4m after posting)
3. Peak activity: 1 comment in 0-1h, the hottest window of the conversation
4. Latest activity: Aug 24, 2025 at 2:36 AM EDT (4 months ago)
ID: 45001322 · Type: story · Last synced: 11/18/2025, 12:03:57 AM
And 80TB with 1TB/s? Thanks to AI, hardware is getting interesting again.
Consolidation has gone from a data center with separate servers for every function, to a couple of racks, and will eventually reach a single server plus redundancy for most workloads you can imagine. Unless AI manages to convince us that we need the performance and cooling of tens of kW per rack.
Sometimes I imagine that the IT of most companies, the part that is not "in the cloud" that is, could already run on a single server. And maybe it could even host the cloud functions, if the admin know-how hadn't been lost to time.
Scaling to PCIe 6 means more bandwidth, yes. But the perception is that HBF is supposed to go far faster.
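To put the comment's numbers in context, a back-of-the-envelope sketch of per-direction x16 link bandwidth using the public PCIe transfer rates. The module's 64 GB/s figure and the 1 TB/s aspiration come from the thread, not from a datasheet, and the PCIe 6.0 figure here is the raw rate before FLIT/FEC overhead:

```python
def pcie_x16_gbps(gt_per_s: float, encoding_efficiency: float) -> float:
    """Theoretical x16 throughput in GB/s for a given per-lane transfer rate."""
    lanes = 16
    return gt_per_s * lanes * encoding_efficiency / 8  # bits -> bytes

# PCIe 5.0: 32 GT/s with 128b/130b encoding -> ~63 GB/s,
# roughly matching the module's quoted 64 GB/s.
gen5 = pcie_x16_gbps(32, 128 / 130)

# PCIe 6.0: 64 GT/s PAM4, 1b/1b signaling -> 128 GB/s raw,
# i.e. about a 2x step, still far from the 1 TB/s ambitions in the thread.
gen6 = pcie_x16_gbps(64, 1.0)

print(f"PCIe 5.0 x16: {gen5:.0f} GB/s")
print(f"PCIe 6.0 x16: {gen6:.0f} GB/s")
```

The gap illustrates the commenter's point: doubling the bus each PCIe generation is slow progress compared with an interface like HBF that stacks flash closer to the memory bus.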