AMD Officially Confirms Fresh Next-Gen Zen 6 CPU Details
Key topics
The tech world is abuzz with AMD's latest confirmation of their next-gen Zen 6 CPU details, sparking a lively debate about the future of CPU upgrades and RAM prices. As commenters weigh in, it becomes clear that the ongoing RAM price hikes are a major concern, with some users considering skipping Zen 6 upgrades altogether, while others see it as an opportunity to upgrade their existing CPUs. The discussion takes an interesting turn with the revelation that Zen 7 will reportedly also be on the AM5 socket, potentially extending its lifespan. With some users questioning whether AMD can double memory lanes without switching sockets, the conversation highlights the complex trade-offs between upgradeability, performance, and repairability.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 21m after posting
- Peak period: 72 comments (Day 1)
- Avg / period: 17.2
Based on 86 loaded comments
Key moments
- Story posted: Dec 19, 2025 at 9:51 AM EST (15 days ago)
- First comment: Dec 19, 2025 at 10:11 AM EST (21m after posting)
- Peak activity: 72 comments in Day 1 (the hottest window of the conversation)
- Latest activity: Dec 31, 2025 at 12:53 AM EST (3d ago)
(Ignore my AM5 workstation with 192GB RAM in the corner)
Considering PC desktops: DDR4 tops out at 3200 MT/s under JEDEC. DDR5 has been available on AMD for three years and sits at 5600 MT/s. The DDR6 specification is almost finished. It looks like DDR5 will only double DDR4's performance right before the first DDR6 DIMMs appear, so I'd expect DDR6, in turn, to only double the bandwidth just as late, when its own successor arrives.
Strange, I bought 64GB DDR5 6400MHz last year and apparently my motherboard can handle up to 7200MHz (or more with overclocking).
Intel's desktop CPUs from last year support up to 5600 MT/s with regular DDR5 DIMMs, or 6400 MT/s for CUDIMMs. Speeds higher than this are achievable, but count as overclocking.
If your memory modules are rated for 6400 MT/s, they are most likely advertising the speed when using an Intel XMP or AMD EXPO profile to overclock the memory (and the CPU's memory controller). The JEDEC standard profile is likely no faster than 5600 MT/s. It's also possible that last year you bought a kit of CUDIMMs rated for 6400 MT/s without overclocking, brand new to the market at that time, and of no help whatsoever with any CPU that isn't an Intel Arrow Lake.
The signaling doubles the transfer rate, but the bandwidth does not double.
We've seen what ... 5 ... iterations now and people still get this wrong.
The doubling of transfers always comes at the cost of latency and processing overhead, until the new standard matures and may eventually match the predecessor's latency.
Did dual channel DDR5 double dual channel DDR4 bandwidth? Short answer ... nope, not even close, and DDR4 4000 remains viable and trades blows with DDR5 to this day in application performance despite "losing" in synthetics.
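A rough way to see why: theoretical peak bandwidth is just transfer rate times bus width times channel count, and by that measure dual-channel DDR5-5600 is only about 1.4x dual-channel DDR4-4000 before latency even enters the picture. A minimal back-of-the-envelope sketch in Python (nominal JEDEC-style peaks, not measured application throughput):

    # Peak DRAM bandwidth: transfer rate (MT/s) x 8 bytes per 64-bit channel x channels.
    # Theoretical ceilings only; latency and real application behaviour are not captured.
    def peak_bandwidth_gbs(mt_per_s, channels=2, bytes_per_transfer=8):
        return mt_per_s * bytes_per_transfer * channels / 1000

    ddr4_4000 = peak_bandwidth_gbs(4000)  # 64.0 GB/s dual channel
    ddr5_5600 = peak_bandwidth_gbs(5600)  # 89.6 GB/s dual channel
    print(f"DDR5-5600 vs DDR4-4000 peak: {ddr5_5600 / ddr4_4000:.2f}x")  # ~1.40x, not 2x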
In fact my wife is still rocking that machine, although her gaming needs are much less equipment-intensive than mine. After a small refurb I gave it (new case, new air cooler, new PSU), I expect it to last another 5 years for her.
My new one is a 9700X. Didn't feel the need to spring for higher power budget for a marginal gaming performance bump. But I suppose that also means it's much more practical for me to jump to a newer CPU later.
It's faster than the prior machine, but it sure does not feel like it does things the previous one didn't.
I think it's very telling that so many people upgrading now are coming from Haswell chips; they are a legendary chip generation, and arguably the last time anyone needed a CPU upgrade short of operating system support or warranty concerns.
Notably, Haswell makes the cut for Win11.
Not that I'd use that over Linux. (I run Arch, btw)
6000 C28 at 256GB using 4x64 is not at all on the bandwidth high end these days, but it's way more than DDR4 could provide.
Most boards support 4x64 at 5600+ now, and some go to 6400 with it if you tune voltages and terminations.
I'm a gamer, often playing games that need a BEEFY CPU, like MS Flight Simulator. My upgrade from an i9-9900K to a Ryzen 9800X3D was noticeable.
Only if they overestimate demand and overproduce CPUs. Otherwise it will lead to higher prices because there's less economy of scale.
https://overclock3d.net/news/cpu_mainboard/amd-extends-am5-l...
They stumbled in the right direction with Strix Halo, but I have a feeling they won't recognize the win or follow up on it.
They could in theory do on-package DRAM as a faster first level of memory, but I doubt we'll see that anytime soon on desktop, and it probably wouldn't fit under the heat spreader.
You won't be able to add RAM to the die itself; there's really no room on the interposer.
When you go to the desktop, it becomes harder to justify beefed-up memory controllers just for the CPU versus putting that budget towards some other part of the CPU that has more of an impact on cost or performance.
Even when feeding all cores, the max bandwidth used by the CPU is less than 200 GB/s; in fact it is quite comparable to Intel/AMD CPUs and even less than their high-end ones (x86 still rules on the multi-core front in any case).
I actually see this as a weakness of Apple Silicon, because it doesn't scale that well. It's basically the problem of their Ultra chip: it doesn't allow doubling of the compute and doesn't allow faster RAM bandwidth; you only get higher RAM capacity in exchange for slower GPU compute.
They just scaled up their mobile architecture and it has its limit.
Sure. Keep the DIMM sockets and add HBM to the CPU package.
Actually probably the best possible architecture. You can choose to have both or only one, backward compatible and future proof.
Yes, it adds another level to the memory hierarchy but that can be fine tuned.
You are also overestimating how much room there is on the interposer.
As someone with a 9950x3d with direct die cooling setup I can tell you there is no room.
So saying that Zen 6/7 supports AM5 on desktop doesn't necessarily exclude the Zen 6/7 product family in general from supporting other new/interesting sockets on desktop (or mobile) as well. Maybe there will be products for both AM6 and AM5 from the same Zen family.
Medusa Halo and the Zen 7-based 'Grimlock Halo' version might be the interesting ones to watch (if you like efficient Apple-style big APUs with all the memory bandwidth).
I'd love to build a new desktop soon, but I couldn't justify the cost and am instead building out a used desktop that's still on DDR4 / LGA1151.
I just checked how much the 64 GB of DDR4 in my desktop would cost now... it starts at 2.5 times what I paid in 2022.
Sorry AMD, I would maybe like a new desktop but not now.
Something like a 5900X on 2nm or 4nm.
The PCI-Express bus is actually rather slow. Only ~63 GB/s, even with PCIe 5 x16!
PCIe is simply not a bottleneck for gaming. All the textures and models are loaded into the GPU once, when the game loads, then re-used from VRAM for every frame. Otherwise, a scene with a lowly 2 GB of assets would cap out at only ~30 fps.
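That ~30 fps figure is just bus bandwidth divided by per-frame asset traffic. A quick sanity check of the arithmetic, using the round numbers from the comments above:

    # If assets had to be re-streamed over PCIe every frame, frame rate would be
    # capped at bus bandwidth divided by data moved per frame.
    pcie5_x16_gbs = 63.0       # approx. PCIe 5.0 x16 bandwidth, one direction
    assets_per_frame_gb = 2.0  # hypothetical scene with 2 GB of textures/models
    print(f"Streaming cap: ~{pcie5_x16_gbs / assets_per_frame_gb:.0f} fps")  # ~31-32 fps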
Which is funny to think about historically. I remember when AGP first came out, and it was advertised as making it so GPUs wouldn't need tons of memory, only enough for the frame buffers, and that they would stream texture data across AGP. Well, the demands for bandwidth couldn't keep up. And now, even if the port itself was fast enough, the system RAM wouldn't be. DDR5-6400 running in dual-channel mode is only ~102 GB/s. On the flip side the RTX 5050, a current-gen budget card, has over 3x that at 320 GB/s, and on the top end, the RTX 5090 is 1.8 TB/s.
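For the system-RAM side of that comparison, the ~102 GB/s figure is simply the dual-channel peak; a small sketch of the ratios, using the bandwidth numbers quoted in the comment above:

    # Dual-channel DDR5-6400: 6400 MT/s x 8 bytes per channel x 2 channels = 102.4 GB/s.
    system_ram_gbs = 6400 * 8 * 2 / 1000
    rtx_5050_gbs = 320.0    # VRAM bandwidth figure quoted above
    rtx_5090_gbs = 1800.0   # likewise (1.8 TB/s)
    print(f"System RAM: {system_ram_gbs:.1f} GB/s; "
          f"RTX 5050 is {rtx_5050_gbs / system_ram_gbs:.1f}x that, "
          f"RTX 5090 is {rtx_5090_gbs / system_ram_gbs:.1f}x")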
Ah, not really these days; textures are loaded in and out on demand, at multiple different mipmap levels, and the same goes for model geometry and LODs. Texture and mesh data is frequently cached in and out during gameplay.
Not arguing with your points around bus speeds, and I suspect you knew the above and were simplifying anyway.
I wish it was possible to put several M.2 drives in a system and RAID them all up, like you can with SATA drives on any above-average motherboard. Even a single lane of PCIe 5.0 would be more than enough for each of those drives, because each drive won't need to work as hard. Less overheating, more redundancy, and cheaper than getting a small number of super fast high capacity drives. Alas, most mobos only seem to hand out lanes in multiples of 4.
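The single-lane argument checks out on paper: PCIe 5.0 is roughly 4 GB/s per lane per direction, which already exceeds what most consumer drives sustain for long. A rough sketch (link rates are nominal, and the four-drive array is purely hypothetical):

    # PCIe 5.0: 32 GT/s per lane with 128b/130b encoding -> ~3.94 GB/s per lane, per direction.
    per_lane_gbs = 32 * 128 / 130 / 8
    drives = 4  # hypothetical array of four x1-attached NVMe drives
    print(f"x1 link: {per_lane_gbs:.2f} GB/s; {drives} drives striped: ~{per_lane_gbs * drives:.1f} GB/s")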
SATA was a cabling nightmare, sure, but cables let you relocate bulk somewhere else in the case, so you can bunch all the connectors up on the board.
Frankly, given that most advertised M.2 speeds are not sustained or even hit most of the time, I could deal with some slower speeds due to cable length if it meant I could mount my SSDs anywhere but underneath my triple slot GPU.
Looking at server mainboards, you see many PCIe 5.0 connectors, similar-looking to SATA ones, for cabling up PCIe SSDs.
AFAIK, the cpu lanes can't be broken up beyond x4; it's a limitation of the pci-e root complex. The Promontory 21 chipset that is mainstream for AM5 does two more x4 and four choose sata or pci-e x1. I don't think you can bifurcate those x4s, but you might be able to aggregate two or four of the x1s. And you can daisy chain a second Prom21 chipset to net one more x4 and another 4 x1.
Of course, it's pretty typical for a motherboard to use some of those lanes for onboard network and whatnot. Nobody sells a bare minimum board with an x16 slot, two CPU-based x4 slots, two chipset x4 slots, and four chipset x1 slots and no onboard peripherals, only the USB from the CPU and chipset. Or if they do, it's not sold in US stores anyway.
If pci-e switches weren't so expensive, you might see boards with more slots behind a switch (which the chipsets kind of are, but...)
There are some exceptions, but I haven't seen one with, for example, four x16 slots that support PCIe 5.0 x4 with bifurcation.
E.g. https://www.ebay.co.uk/itm/126656188922
Most motherboards don’t go beyond 2x8 with 2x16 physical slots because there is little actual use for it and it costs quite a bit of money.
The Asus 5.0 Hyper M.2 card will yield 4x4x4x4 in a board that is set up for bifurcation of PCIe slot 1 like that.
I have run 4 GPUs this way and it works very well.
Lossless Scaling across two x8 slots rocks.
When did the GHz race start again?
Now, it's either a fancy term for "announcement", or people use it synonymously with "rumor".
Just takes backwards steps from time to time with major architectural innovations that deliver better performance at significantly lower clock speeds. Intel's last backwards step was from Pentium 4 to Core all the way back in ~2005. AMD's last backwards step was from Bulldozer (and friends) to Zen in 2017.
7GHz is ridiculous and probably just a false rumour, but IMO Intel and AMD are probably due for another backwards step; they are exceeding the peak speeds from the P4/Bulldozer eras. And Apple has proved that you can get better performance at lower clock speeds.
You can really see where the industry hit the wall with Dennard scaling.
https://chipsandcheese.com/p/telum-ii-at-hot-chips-2024-main...
https://www.eecg.utoronto.ca/~moshovos/ACA07/projectsuggesti...
(if you do ML things you might recognize Doug Burger's name on the authors line of the second one)
I'd say the amount of L3 is not increased but adapted/scaled to the increased core count, since per core there is still the same amount of cache available as before.
We get faster cores, so we need to get from 5600 to e.g. 6000 DDR5. Since core count is increased by 50%, we'd need 9000... DDR5^W, well yes, we'd actually need, as planned before, AM6 and DDR6!
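The arithmetic behind that jump, as a rough sketch (the 50% core increase and the 6000 MT/s baseline are the comment's illustrative figures, not a product spec):

    # Holding per-core memory bandwidth constant while adding 50% more cores needs
    # ~50% more transfer rate at the same channel count and bus width.
    base_rate_mts = 6000   # DDR5 rate assumed above for the faster cores
    core_scaling = 1.5     # 50% more cores
    print(f"Needed: {base_rate_mts * core_scaling:.0f} MT/s")  # 9000 MT/s -> DDR6 / AM6 territory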
x86 releases will never again be as interesting.
Makes a massive difference at high density and utilisation; with the standard cache per core, performance can really degrade under load.
[0] https://www.amd.com/en/products/processors/technologies/3d-v...
[1] https://www.amd.com/en/products/processors/server/epyc/4th-g...