I Am Giving Up on Intel and Have Bought an AMD Ryzen 9950X3D
Posted 4 months ago · Active 4 months ago
michael.stapelberg.ch · Tech · story · High profile
Heated, mixed debate · 80/100
Key topics: CPU Stability · AMD vs Intel · Hardware Issues
The author shares their experience switching from Intel to AMD Ryzen 9950X3D, sparking a discussion on CPU stability, power consumption, and the pros and cons of each brand.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment after 44m · Peak period: 87 comments in 0-12h · Avg per period: 22.9
Comment distribution: 160 data points · Based on 160 loaded comments
Key moments
- Story posted: Sep 7, 2025 at 2:54 AM EDT (4 months ago)
- First comment: Sep 7, 2025 at 3:38 AM EDT (44m after posting)
- Peak activity: 87 comments in 0-12h (hottest window of the conversation)
- Latest activity: Sep 14, 2025 at 3:23 PM EDT (4 months ago)
ID: 45155986 · Type: story · Last synced: 11/20/2025, 8:14:16 PM
When you do not have a bunch of components ready to swap out it is also really hard to debug these issues. Sometimes it’s something completely different like the PSU. After the last issues, I decided to buy a prebuilt (ThinkStation) with on-site service. The cooling is a bit worse, etc., but if issues come up, I don’t have to spend a lot of time debugging them.
Random other comment: when comparing CPUs, a sad observation was that even a passively cooled M4 is faster than a lot of desktop CPUs (typically single-threaded, sometimes also multi-threaded).
On what metric ought I to buy a CPU these days? Should I care about reviews? I am fine with a mid-range CPU, for what it is worth, and I thought of an AMD Ryzen 7 5700 or AMD Ryzen 5 5600GT or anything with a similar price tag. They might even be lower-end by now?
Intel is just bad at the moment and not even worth touching.
https://news.ycombinator.com/item?id=45043269
https://youtu.be/OVdmK1UGzGs
https://youtu.be/oAE4NWoyMZk
https://www.cpubenchmark.net/cpu_value_alltime.html
CPUs like Intel Core Ultra 7 265K are pretty close to top Ryzens
If your workload is pointer-chasing, Intel's new CPUs aren't great though, and the X3D chips are possibly a good pick (if the workload fits in cache), which is why they get a lot of hype from reviewers who benchmark games and judge the score 90% based on that performance.
And it's not bad power quality on the mains, as someone suggested (it's excellent here), or 'in the air' (whatever that means), if it happens very quickly after buying.
I would guess that a lot of it comes from bad firmware/mainboards, etc., like the recent issue with ASRock mainboards destroying Ryzen 9000-series CPUs: https://www.techspot.com/news/108120-asrock-confirms-ryzen-9... Anyone who uses Linux and has dealt with bad ACPI bugs, etc. knows that a lot of these mainboards probably have crap firmware.
I should also say that I had a Ryzen 3700X and 5900X many years back and two laptops with a Ryzen CPU and they have been awesome.
My belief is that it is in the memory controllers and the XMP profiles provided with RAM. It’s very easy for the XMP profiles to be overly optimistic or for the RAM to degrade over time and fall out of spec.
Meanwhile, my Intel systems are solid. Even the 9900K hand-me-down I gave to my partner. There is an advantage to using very old tech. And they’re not even slower for gaming: everything is single-core bottlenecked anyway. Only in the past year or so has AMD surpassed Intel in single-core performance, and we are talking single-digit percentage differences for gaming.
I’m glad AMD has risen, but the dialogue about AMD vs intel in the consumer segment is tainted by people who can’t disconnect their stock ownership from reality.
The only issues are with an intel Bluetooth chipset, and bios auto detection bugs. Under Linux, the hardware is bug for bug compatible with Windows, and I’m down to zero known issues after doing a bit of hardware debugging.
[1] Well, most non-servers are probably laptops today, but the same reasoning applies.
My home server is on a 5600G. I turned it on, installed Home Assistant, Jellyfin, etc., and since then it has not been off. It's been chugging along completely unattended, no worries.
Yes, it's in a basement where temperature is never above 21C, and it's almost never pushed to 100%, and certainly never for extended periods of time.
But it's the stock cooler, cheap motherboard, cheap RAM and cheap SSD (with expensive NAS grade mechanical hard drives).
Definitely not that one if you plan to pair it with a dedicated GPU! The 5700X has twice the L3 cache. All Ryzen 5000 chips with an integrated GPU have only 16 MB; the 5700 has the integrated GPU deactivated.
But see, this is why it is so difficult. I would have never guessed. I would have to research this A LOT, which I am fine with, but you know.
Yea, but unfortunately it comes attached to a Mac.
An issue I've encountered often with motherboards is that they have brain-damaged default settings that run CPUs out of spec. You really have to go through it all with a fine-toothed comb and make sure everything is set to conservative stock manufacturer-recommended settings. And my stupid MSI board resets everything (every single BIOS setting) to MSI defaults when you upgrade its BIOS.
It looks completely bonkers to me. I overclocked my system to ~95% of what it is able to do with almost default voltages, using bumps of 1-3% over stock, which (AFAIK) is within acceptable tolerances, but it requires hours and hours of tinkering and stability testing.
Most users just set automatic overclocking, have their motherboards push voltages to insane levels, and then act surprised when their CPUs start bugging out within a couple of years.
Shocking!
I'd rather run everything at 90% and get very big power savings and still have pretty stellar performance. I do this with my ThinkStation with Core Ultra 265K now - I set the P-State maximum performance percentage to 90%. Under load it runs almost 20 degrees Celsius cooler. Single core is 8% slower, multicore 4.9%. Well worth the trade-off for me.
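On Linux this cap is a single sysfs write; the following is a minimal sketch, assuming the intel_pstate driver is active (an AMD box would instead use amd_pstate or the generic cpufreq scaling_max_freq knobs) and the script runs as root:

```python
#!/usr/bin/env python3
"""Cap the CPU at ~90% of its maximum performance state via intel_pstate.

Minimal sketch under the assumptions stated in the lead-in; the value does not
survive a reboot, so it would normally live in a small startup unit.
"""
from pathlib import Path

PSTATE_DIR = Path("/sys/devices/system/cpu/intel_pstate")

def set_max_perf_pct(pct: int) -> None:
    # max_perf_pct limits the highest P-state the driver will request,
    # expressed as a percentage of the CPU's maximum frequency.
    (PSTATE_DIR / "max_perf_pct").write_text(str(pct))

if __name__ == "__main__":
    set_max_perf_pct(90)
    print("max_perf_pct =", (PSTATE_DIR / "max_perf_pct").read_text().strip())
```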
(Yes, I know that there are exceptions.)
You can always play with the CPU governor / disable high power states. That should be well-tested.
I think you are confusing with undervolting.
It turned out during the shitcoin craze and then the AI craze that hardcore gamers, aka boomers with a lot of time and retirement money on their hands and early millennials working in big tech building giant-ass man caves, are a sizeable demographic with very deep pockets.
The wide masses however, they gotta live with the scraps that remain after the AI bros and hardcore gamers have had their pick.
https://www.pugetsystems.com/blog/2024/08/02/puget-systems-p...
tl;dr: they heavily customize BIOS settings, since many BIOSes run CPUs out of spec by default. With these customizations there was not much of a difference in failure rate between AMD and Intel at that point in time (even when including Intel 13th and 14th gen).
I had the same issue with my MSI board, next one won't be a MSI.
Yeah. If Asahi worked on newer Macs and Apple Silicon Macs supported eGPU (yes I know, big ifs), the choice would be simple. I had NixOS on my Mac Studio M1 Ultra for a while and it was pretty glorious.
I think a lot of it boils down to load profile and power delivery. My 2500VA double conversion UPS seems to have difficulty keeping up with the volatility in load when running that console app. I can tell because its fans ramp up and my lights on the same circuit begin to flicker very perceptibly. It also creates audible PWM noise in the PC which is crazy to me because up til recently I've only ever heard that from a heavily loaded GPU.
For a long time, my Achilles' heel was my bride's vacuum. Her Dyson pulled enough amps that the UPS would start singing and trigger the auto-shutdown sequence for the half rack. It took way too long to figure out, as I was usually not around when she did it.
But if your UPS (or just the electrical outlet you're plugged into) can't cope - dunno if I'd describe that as cratering your CPU.
You said the right words but with the wrong meaning! On Gigabyte mobo you want to increase the "CPU Vcore Loadline Calibration" and the "PWM Phase Control" settings, [see screenshot here](https://forum.level1techs.com/t/ddr4-ram-load-line-calibrati...).
When I first got my Ryzen 3900X cpu and X570 mobo in 2019, I had many issues for a long time (freezes at idle, not waking from sleep, bios loops, etc). Eventually I found that bumping up those settings to ~High (maybe even Extreme) was what was required, and things worked for 2 years or so until I got a 5950X on clearance last year.
I slotted that in to the same mobo and it worked fine, but when I was looking at HWMon etc, I noticed some strange things with the power/voltage. After some mucking about and theorising with ChatGPT (it's way quicker than googling for uncommon problems), it became apparent that the ~High LLC/power settings I was still using were no good. ChatGPT explained that my 3900X was probably a bit "crude" in relative quality, and so it needed the "stronger" power settings to keep itself in order. Then when I've swapped to 5950X, it happens to be more "refined" and thus doesn't need to be "manhandled" — and in fact, didn't like being manhandled at all!
I have an M1 Max, a few revisions old, and the only thing I can do to spin up the fans is run local LLMs or play Minecraft with the kids on a giant ultra wide monitor at full frame rate. Giant Rust builds and similar will barely turn on the fan. Normal stuff like browsing and using apps doesn’t even get it warm.
I’ve read people here and there arguing that instruction sets don’t matter, that it’s all the same past the decoder anyway. I don’t buy it. The superior energy efficiency of ARM chips is so obvious I find it impossible to believe it’s not due to the ISA since not much else is that different and now they’re often made on the same TSMC fabs.
One of the many reasons why Snapdragon Windows laptops failed was that both AMD and Intel (Lunar Lake) were able to reach the claimed efficiency of those chips. I still think modern x86 can match ARM in efficiency if someone bothered to tune the OS and scheduler for the most common activities. The M series was based on Apple's phone chips, which were designed from the ground up to run on a battery all these years. AMD and Intel just don't see an incentive to do that, nor does Microsoft.
There is one exception: If I run an idle Windows 11 ARM edition VM on the mac, then the fans run pretty much all the time. Idle Linux ARM VMs don’t cause this issue on the mac.
I’ve never used windows 11 for x86. It’s probably also an energy hog.
This anecdote perfectly describes my few-generations-old Intel laptop too. The fans turn on maybe once a month. I don't think it's as power-efficient as an M-series Apple CPU, but total system power is definitely under 10 W during normal usage (including screen, wifi, etc.).
This isn't really true. On the same process node the difference is negligible. It's just that Intel's process in particular has efficiency problems and Apple buys out the early capacity for TSMC's new process nodes. Then when you compare e.g. the first chips to use 3nm to existing chips which are still using 4 or 5nm, the newer process has somewhat better efficiency. But even then the difference isn't very large.
And the processors made on the same node often make for inconvenient comparisons, e.g. the M4 uses TSMC N3E but the only x86 processor currently using that is Epyc. And then you're obviously not comparing like with like, but as a ballpark estimate, the M4 Pro has a TDP of ~3.2W/core whereas Epyc 9845 is ~2.4W/core. The M4 can mitigate this by having somewhat better performance per core but this is nothing like an unambiguous victory for Apple; it's basically a tie.
> I have an M1 Max, a few revisions old, and the only thing I can do to spin up the fans is run local LLMs or play Minecraft with the kids on a giant ultra wide monitor at full frame rate. Giant Rust builds and similar will barely turn on the fan. Normal stuff like browsing and using apps doesn’t even get it warm.
One of the reasons for this is that Apple has always been willing to run components right up to their temperature spec before turning on the fan. And then even though that's technically in spec, it's right on the line, which is bad for longevity.
In consumer devices it usually doesn't matter because most people rarely put any real load on their machines anyway, but it's something to be aware of if you actually intend to, e.g. there used to be a Mac Mini Server product and then people would put significant load on them and then they would eat the internal hard drives because the fan controller was tuned for acoustics over operating temperature.
My modern CPU problems are DDR5 and the pre-boot memory training never completing. So a 9700X build that WAS supposed to be located remotely from me has to sit in my office and have its hand held through every reboot, because you never quite know when it's going to decide it needs to retime and randomly never come back. It requires pulling the plug from the back, waiting a few minutes, powering back on, then waiting 30 minutes for 64 GB of DDR5 to do its timing thing.
I also have this issue.
A common approach is to go into the BIOS/UEFI settings and check that C6 is disabled. To verify and/or temporarily turn C6 off, see https://github.com/r4m0n/ZenStates-Linux
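If you would rather not touch MSRs, the kernel's generic cpuidle sysfs interface exposes the same idle states per core. Below is a rough sketch (run as root; which state actually corresponds to C6 varies by platform, so treat the names as an assumption) that lists them and can disable the deepest one — an alternative to the ZenStates approach above, not the same tool:

```python
#!/usr/bin/env python3
"""List cpuidle states and optionally disable the deepest one on every core.

Rough sketch using the generic Linux cpuidle sysfs interface; run as root.
"""
import sys
from pathlib import Path

def idle_states(cpu: Path):
    # States are numbered state0, state1, ...; sort numerically, deepest last.
    return sorted((cpu / "cpuidle").glob("state*"), key=lambda p: int(p.name[5:]))

def main(disable_deepest: bool) -> None:
    for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
        states = idle_states(cpu)
        if not states:
            continue
        for st in states:
            name = (st / "name").read_text().strip()
            disabled = (st / "disable").read_text().strip()
            print(f"{cpu.name}: {st.name} {name} disabled={disabled}")
        if disable_deepest:
            # Writing 1 to 'disable' keeps the kernel out of that idle state.
            (states[-1] / "disable").write_text("1")

if __name__ == "__main__":
    main(disable_deepest="--disable-deepest" in sys.argv)
```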
If I enable virtualisation, the issue can be replicated within 15 minutes of boot.
But with basically half the CPU set to do nothing, and all features disabled its once a week max.
Which sucks because I basically live in WSL.
Twice the memory bandwidth, twice the CPU core count... It's really wacky how they've decided to name things
The Ultra is a pair of Max chips. While the core counts didn't increase from M3 to M4 Max, overall performance is in the neighborhood of 5-25% better. Which still puts the M3 Ultra as Apple's top end chip, and the M5 Max might not dethrone it either.
The uplift in IPC and core counts means that my M1 Max MBP has a similar amount of CPU performance as my M3 iPad Air.
Of course, each generation has some single-core improvements and eventually that could catch up, but it can take a while to catch up to… twice as much silicon.
I have always run B series because I've never needed the overclocking or additional peripherals. In my server builds I usually disable peripherals in the UEFI like Bluetooth and audio as well.
It is cheaper and more stable. The performance difference doesn’t matter that much either.
My system would randomly freeze for ~5 seconds, usually while gaming and having a video running in the browser at the same time. Then, it would reliably happen in Titanfall 2, and I noticed there were always AHCI errors in the Windows logs at the same time, so I switched to an NVMe drive.
The system would also shut down occasionally (~ once every few hours) in certain games only. Then, I managed to reproduce it 100% of the time by casting lightning magic in Oblivion Remastered. I had to switch out my PSU, the old one probably couldn't handle some transient load spike, even though it was a Seasonic Prime Ultra Titanium.
And if we are talking about a passively cooled M4 (a MacBook Air, basically), it will throttle quite heavily relatively quickly; you lose at the very least 30%.
So, let's not misrepresent things: Apple CPUs are very power-efficient, but they are not magic; if you hit them hard, they still need good cooling. Plenty of people have had that experience with their M4 Max, discovering that if they actually use the laptop as a workstation, it generates a good amount of fan noise; there is no way around it.
Apple stuff is good because most people actually have bursty workloads (especially graphic design, video editing, and some audio work), but if you hammer it for hours on end it's not that good, and the power-efficiency point becomes a bit moot.
For example, various brands of motherboards are/were known to basically blow up AMD CPUs when using AMP/XMP, with the root cause being that they jacked an uncore rail way up. Many people claimed they did this to improve stability, but overclockers know that that rail has a sweet spot for stability and they went way beyond it (so much so that the actual silicon failed and burned a hole in itself with some low-ish probability).
That sounds terrible.
I've never overclocked anything and I've never felt I've missed out in any way. I really can't imagine spending even one minute trying to squeeze 5% or whatnot tweaking voltages and dealing with plumbing and roaring fans. I want to use the machine, not hotrod it.
I would rather Intel et al. leave a few percent "on the table" and sell things that work, for years on end without failure and without a lot of care and feeding. Lately it looks like a crapshoot trying to identify components that don't kill themselves.
Well, that's the issue, isn't it? Both Intel and AMD (or rather their board partners) had issues in recent times stemming from the increasingly aggressive push to the limit for those last few %.
This is about sane, stable defaults. If you want the extra performance far beyond the CPU's sweet spot, it should be made explicit that you're forfeiting the stability headroom.
Actually, almost everything you wrote is not true, and the commenter above already sent you some links.
7800X3D is the GOAT, very power efficient and cool.
And even if you could push it higher, they run very hot compared to other CPUs at the same power usage, thanks to a combination of AMD's very thick IHS, the compute chiplets being small and power-dense, and the 7000-series X3D cache sitting on top of the compute chiplet, unlike the 9000 series, which has it on the bottom.
A 9800X3D limited in the same way will be both mildly more power-efficient thanks to faster cores and run cooler because of the cache location. The only reason it's hotter is that it's allowed to use significantly more power, usually up to 150 W stock, for which you'd have to remove the IHS on the 7800X3D if you didn't want to see magic smoke.
https://www.computerbase.de/artikel/prozessoren/amd-ryzen-79...
If anyone thinks competition isn't good for the market or that also-rans don't have enough of an effect, just take note. Intel is a cautionary tale. I do agree we would have gotten where we are faster with more viable competitors.
M4 is neat. I won't be shocked if x86 finally gives up the ghost as Intel decides playing in the RISC-V or ARM space is their only hope to get back into an up-cycle. AMD has wanted to do heterogeneous stuff for years. RISC-V might be the way.
One thing I'm finding is that compilers are actually leaving a ton on the table for AMD chips, so I think this is an area where AMD and all of the users, from SMEs on down, can benefit tremendously from cooperatively financing the necessary software to make it happen.
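One concrete illustration: a binary compiled for generic x86-64 will not use AVX-512 even though Zen 4/Zen 5 expose it, so the capability sits unused unless the build targets it (e.g. -march=native). A quick hedged check of what the chip actually advertises, reading /proc/cpuinfo on Linux:

```python
#!/usr/bin/env python3
"""Print whether the CPU advertises vector extensions a generic build may skip.

A rough illustration only: whether a given binary uses these depends on how it
was compiled (e.g. -march=x86-64 vs -march=native), not just on the hardware.
"""

INTERESTING = ["avx2", "fma", "avx512f", "avx512vl", "avx512_bf16"]

def cpu_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

if __name__ == "__main__":
    flags = cpu_flags()
    for feature in INTERESTING:
        print(f"{feature:12s} {'yes' if feature in flags else 'no'}")
```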
An ideal ambient (room) temperature for running a computer is 15-25 °C (59-77 °F).
Source: https://www.techtarget.com/searchdatacenter/definition/ambie...
Maybe today's CPUs would not be able to handle it, I am not sure. One would expect these things to only improve, but seems like this is not the case.
Edit: I misread it, oops! Disregard this comment.
Using too much air conditioning is also not comfortable. I used to live in Singapore. We used to joke that Singapore has two seasons: indoors and outdoors, because the air conditioning is turned up so high that you had to bring a jacket to wear inside. I'd frequently freeze after entering a building. I don't know why they do it, because it doesn't make sense. When I did turn on the air conditioning at home I'd go barely below 30, just a few degrees cooler than outside, so it feels more comfortable without making the transition too hard.
Seattle was like this a couple of decades ago when I moved there. People sneered at me when I talked about having air conditioning installed at my house. Having moved from a warmer part of the country, I ignored their smug comments and did it anyway. The next few years I basked in the comfort of my climate-controlled home while my coworkers complained about not being able to sleep due to the heat.
It is actually 2.9999, precisely.
The NUC 9 averaged 65-70 W of power usage, while the M4 is averaging 6.6 W.
The Mac is vastly more performant.
The hardware is impressive: a tiny metal box, always silent, with a basic built-in speaker, and it can be left always on with minimal power consumption.
Drive size for the basic models is limited (512 GB); I solved it by moving photos to a NAS. I don't use it for gaming, except Hello Kitty Island Adventure. I would say it's a very competitive choice for a desktop PC in 2025 overall.
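For the Linux/x86 side of such a comparison, package power can be sampled from the kernel's powercap (RAPL) interface; here is a minimal sketch, assuming /sys/class/powercap/intel-rapl:0 exists (Intel, and many recent AMD CPUs expose the same interface) and the script runs as root:

```python
#!/usr/bin/env python3
"""Estimate average package power over a few seconds via the RAPL energy counter.

Minimal sketch under the assumptions stated above; it ignores counter wraparound
and only covers the CPU package, not total system power at the wall.
"""
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_uj() -> int:
    with open(RAPL) as f:
        return int(f.read())

if __name__ == "__main__":
    interval = 5.0
    before = read_uj()
    time.sleep(interval)
    after = read_uj()
    watts = (after - before) / 1e6 / interval  # microjoules -> joules -> watts
    print(f"average package power over {interval:.0f}s: {watts:.1f} W")
```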
Pass -fuse-ld=mold when building.
Yet I also use a 7840U in a gaming handheld running Windows, and haven't had any issues there at all. So I think this is related to AMD Linux drivers and/or Wayland. In contrast, my old laptop with an NVIDIA GPU and Xorg has given me zero issues for about a decade now.
So I've decided to just avoid AMD on Linux on my next machine. Intel's upcoming Panther Lake and Nova Lake CPUs seem promising, and their integrated graphics have consistently been improving. I don't think AMD's dominance will continue for much longer.
Make sure it matches the minimum of the actual spec of the RAM that you bought and what the CPU can do.
I used to get crashes like you are describing on a similar machine. The crashes are in the GPU firmware, making debugging a bit of a crap shoot. If you can run windows with the crashing workload on it, you’ll probably find it crashes the same ways as Linux.
For me, it was a bios bug that underclocked the ram. Memory tests, etc passed.
I suspect there are hard performance deadlines in the GPU stack, and the underclocked memory was causing it to miss them, and assume a hang.
If the ram frequency looks OK, check all the hardware configuration knobs you can think of. Something probably auto-detected wrong.
That gave me solid ground for debugging.
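One hedged way to sanity-check the configured RAM speed against the DIMMs' rated speed from a running Linux system is dmidecode; a small sketch follows (needs root, and the exact field names can vary with the SMBIOS/BIOS version):

```python
#!/usr/bin/env python3
"""Print rated vs. configured DIMM speeds as reported by the SMBIOS tables.

Rough sketch that shells out to `dmidecode -t memory`; if the configured speed
is well below the rated speed, the BIOS may have fallen back to a conservative
JEDEC profile instead of the XMP/EXPO profile.
"""
import subprocess

def main() -> None:
    out = subprocess.run(
        ["dmidecode", "-t", "memory"], capture_output=True, text=True, check=True
    ).stdout
    for line in out.splitlines():
        line = line.strip()
        if line.startswith(("Locator:", "Speed:", "Configured Memory Speed:")):
            print(line)

if __name__ == "__main__":
    main()
```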
But I'll play around with this and the timings, and check if there's a BIOS update that addresses this. Though I still think that AMD's drivers and firmware should be robust enough to support any RAM configuration (within reason), so it would be a problem for them to resolve regardless.
Thanks for the suggestion!
Don't know about transcoding though.
Besides AMD CPUs of the early 2000s going up in smoke without working cooling, they all throttle before they become temporarily or permanently unstable. Otherwise they are bad.
I've never had a desktop part fail due to max temperatures, but I don't think I've owned one that advertises or allows itself to reach or remain at 100 °C or higher.
If someone sells a CPU that's specified to work at 100 or 110 degrees and it doesn't then it's either defective or fraudulent, no excuses.
Max Operating Temperature: 105 °C
14900k: https://www.intel.com/content/www/us/en/products/sku/236773/...
Max Operating Temperature: 100 °C
Different CPUs, different specs.
And any CPU from the last decade will just throttle down if it gets too hot. That's how the entire "Turbo" thing works: go as fast as we can until it gets too hot, after which it throttles down.
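You can watch that behaviour directly from sysfs; here is a minimal sketch that samples frequency and temperature once a second, assuming the usual cpufreq interface and that thermal_zone0 happens to be the CPU (check its type file, since the mapping differs between boards):

```python
#!/usr/bin/env python3
"""Sample CPU frequency and temperature once a second to observe throttling.

Minimal sketch using generic sysfs interfaces; which thermal_zone corresponds
to the CPU package differs between machines, so the zone below is an assumption.
"""
import time
from pathlib import Path

FREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq")
ZONE = Path("/sys/class/thermal/thermal_zone0")  # assumption: zone0 is the CPU

if __name__ == "__main__":
    print("zone type:", (ZONE / "type").read_text().strip())
    for _ in range(10):
        mhz = int(FREQ.read_text()) / 1000
        temp_c = int((ZONE / "temp").read_text()) / 1000
        print(f"cpu0: {mhz:7.0f} MHz  {temp_c:5.1f} °C")
        time.sleep(1)
```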
Threadripper is built for this. But I am talking about the consumer options if you are on a budget. Intel has significantly more memory bandwidth than AMD in the consumer end. I don't have the numbers on hand, but someone at /r/localllama did a comparison a while ago.
I can't see how that supports your conclusion.
> AMD 7900X - 68.9 GB/sec
> Intel 13900K - 93.4 GB/sec
That's 35% better.
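For a rough sense of where a given box lands, a large single-threaded array copy in NumPy gives a crude effective-bandwidth number (one core will not saturate all memory channels, so it understates aggregate figures like those quoted above):

```python
#!/usr/bin/env python3
"""Crude single-threaded memory-bandwidth estimate via a large NumPy copy.

A rough sketch only: a single copy stream from one core will not saturate all
memory channels, so STREAM-style aggregate numbers will be noticeably higher.
"""
import time
import numpy as np

def copy_bandwidth_gb_s(size_mb: int = 1024, repeats: int = 5) -> float:
    src = np.ones(size_mb * 2**20, dtype=np.uint8)
    dst = np.empty_like(src)
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        np.copyto(dst, src)
        best = min(best, time.perf_counter() - t0)
    # Each copy reads and writes `size_mb` MB, hence the factor of two.
    return 2 * size_mb / 1024 / best

if __name__ == "__main__":
    print(f"~{copy_bandwidth_gb_s():.1f} GB/s effective copy bandwidth")
```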
Smartphones have no active cooling and are fully dependent on thermal throttling for survival, but they can easily start throttling at as low as 50 °C. Laptops with underspecced cooling systems generally try their best to avoid crossing into triple digits; a lot of them max out at 85 °C to 95 °C, even under extreme loads.
If nothing else, it very clearly indicates that you can boost your performance significantly by sorting out your cooling, because your CPU will otherwise be stuck permanently emergency-throttling.
200 more comments available on Hacker News