AMD's Chiplet APU: An Overview of Strix Halo
Posted 3 months ago · Active 2 months ago
chipsandcheese.com · Tech story · High profile
Calm · Positive
Debate: 40/100
Key topics
AMD Strix Halo
APU
Chiplet Architecture
AI Hardware
Gaming Performance
The discussion revolves around AMD's new Strix Halo chiplet APU, its performance, and potential applications, with users expressing interest and some concerns about its availability and capabilities.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 3h after posting
Peak period: 67 comments (Day 1)
Avg / period: 19.8
Comment distribution: 79 data points
Key moments
- Story posted: Oct 18, 2025 at 12:26 AM EDT (3 months ago)
- First comment: Oct 18, 2025 at 3:53 AM EDT (3h after posting)
- Peak activity: 67 comments in Day 1, the hottest window of the conversation
- Latest activity: Oct 31, 2025 at 5:28 AM EDT (2 months ago)
ID: 45624888 · Type: story · Last synced: 11/20/2025, 4:53:34 PM
Can someone confirm/refute that?
[1]: https://www.youtube.com/watch?v=v7HUud7IvAo
The Hardware Canucks video didn't seem to do any such investigation; where did you get that number from?
Meanwhile, under the heavy loads that actually tax the processor, the M4 somehow has worse battery life, even with the larger battery and a nominally lower TDP.
Is the vaunted efficiency not down to the processor at all, with Apple just winning on the basis of choosing more efficient displays and wireless chips?
The M4 mac mini at $599 comes with 16GB RAM and 256GB SSD - see https://www.apple.com/shop/buy-mac/mac-mini/apple-m4-chip-wi...
The M4 Pro mac mini starts at $1399 with 24GB RAM and 512GB SSD - see https://www.apple.com/shop/buy-mac/mac-mini/apple-m4-pro-chi...
I've also seen quite a few mini PCs with OCuLink ports and Strix Halo CPUs.
https://www.youtube.com/watch?v=TvNYpyA1ZGk
There are ways to manage the BAR better in Linux, or with UEFI pre-boot environments for Windows, as hobbyists have been doing for ages to work around bad BIOS support: https://github.com/xCuri0/ReBarUEFI
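For anyone who wants to check what their GPU's BARs currently look like under Linux before reaching for something like ReBarUEFI, here is a minimal sketch that parses the sysfs resource file. The device address is a placeholder, so substitute your own from lspci; if the VRAM aperture BAR reads as only 256 MiB, resizable BAR isn't in effect.

    # Minimal sketch: list the BAR sizes a PCI device currently exposes by
    # parsing /sys/bus/pci/devices/<addr>/resource (one "start end flags"
    # line per region, in hex). The address below is a placeholder.
    from pathlib import Path

    DEVICE = "0000:03:00.0"  # substitute your GPU's address from `lspci`

    def bar_sizes(addr: str) -> list[tuple[int, int]]:
        """Return (bar_index, size_in_bytes) for every populated BAR0-BAR5."""
        lines = Path(f"/sys/bus/pci/devices/{addr}/resource").read_text().splitlines()
        sizes = []
        for i, line in enumerate(lines[:6]):  # first six entries are BAR0-BAR5
            start, end, _flags = (int(x, 16) for x in line.split())
            if end > start:  # unpopulated BARs read back as all zeros
                sizes.append((i, end - start + 1))
        return sizes

    if __name__ == "__main__":
        for idx, size in bar_sizes(DEVICE):
            print(f"BAR{idx}: {size / (1 << 20):.0f} MiB")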
The chip itself should accept higher power draws, and ASUS usually isn't shy about feeding 130+W to a laptop, so the 75W figure was quite a surprise to me.
Compared to Strix Halo, the advantage of a dGPU (mobile or not) is memory bandwidth; the disadvantages are power draw and memory capacity. That's setting aside CUDA, which I grant is a HUGE thing to just "set aside".
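To put rough numbers on that trade-off, here is a back-of-the-envelope sketch. The bandwidth and capacity figures are ballpark assumptions of mine, not measurements, and it only models the bandwidth-bound decode phase of an LLM.

    # Rough intuition only: LLM token generation is roughly memory-bandwidth
    # bound, so tokens/s is about bandwidth divided by the bytes read per
    # token (~ the resident model size). Figures below are ballpark guesses.

    CONFIGS = {
        "Strix Halo (unified LPDDR5X, ~256 GB/s, ~128 GB)": (256, 128),
        "High-end dGPU (GDDR, ~1000 GB/s, 24 GB)": (1000, 24),
    }
    MODEL_GB = 40  # e.g. a ~70B-parameter model at ~4.5 bits/weight

    for name, (bandwidth_gbs, capacity_gb) in CONFIGS.items():
        if MODEL_GB > capacity_gb:
            print(f"{name}: model doesn't fit ({MODEL_GB} GB > {capacity_gb} GB)")
        else:
            print(f"{name}: ~{bandwidth_gbs / MODEL_GB:.0f} tokens/s upper bound")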
If we mix in the small DGX Spark desktops, then those have an additional advantage in the dual 200Gb network ports that allow for RDMA across multiple boxes. One could get more out of a small stack (2, 3 or 4) of those than from the same number of Strix Halo 395 boxes. However, as sexy as my homelab-brain finds a small stack of DGX Spark boxes with RDMA, I would think that for professional use I would rather have a GPU server (or Threadripper GPU workstation) than four DGX Spark boxes?
Because the DGX Spark isn't being sold in a laptop (AFAIK, CMIIW), that is another differentiator in favor of the Strix Halo. Once again, it points to this being a weird, emerging market segment, and I expect the next generation or two will iterate towards how these capabilities really ought to be packaged.
Strix Halo is also being marketed for gaming, but the performance profile is all wrong for that. The CPU is too fast and the iGPU is still not strong enough.
I am sure it’s amazing at matmul though.
https://www.techspot.com/news/106835-amd-ryzen-strix-halo-la...
Especially if the different SKUs have different power budgets. Laptop GPU naming and performance are a bit of a mess, as in the example shown (the 4060 in the Asus TUF Gaming A16 has a limit of 140W GPU+CPU, while the 4070 in the Asus ProArt PX13 has 115W GPU+CPU, and even that is a "custom" non-default mode, with 95W being the actual out-of-the-box limit).
With wildly varying power profiles laptop graphics need to be compared by chassis (and the cooling/power supply that implies) as much as by GPU SKU.
It's fine for 1440p gaming. I don't use it for that, but it would not be a bother if that was all I had.
The CPU power of it and the high bandwidth integrated RAM aren’t the right performance trade offs for a gaming workload. Does it work for it? Sure. But you also have a bunch of extra hardware you don’t really need for it.
I also agree that they aren't for gaming (something I know little about). My comment was with respect to compute workloads, but I never specified that. Apologies.
Edit: it does feel like an RTX 4060 performance-wise, so it's not far from some discrete GPUs.
I have a laptop with an Nvidia GPU. Ruins battery life and makes it run very hot. I'd pay a lot for a powerful iGPU.
Everything I've seen says it's 2x 200GbE.
One of many examples: https://www.storagereview.com/review/nvidia-dgx-spark-review...
> ConnectX-7 Smart NIC – 2x 200G QSFP
and:
> what makes this unit interesting is the dual 200 GbE QSFP56 interfaces driven by an integrated NVIDIA ConnectX-7 SmartNIC.
---
Let's try a manufacturer's page then for confirmation:
https://www.dell.com/en-us/shop/desktop-computers/dell-pro-m...
In the parts labelling diagram, it has this:
> ConnectX-7 Smart NIC (2x 200G QSFP ...
---
That being said, the Storage Review one does point out PCI-E bandwidth being a limiter anyway:
> At first glance, you might deduce that the Spark allows for 400G of connectivity; unfortunately, due to PCIe limitations, the Spark is only able to provide 200G of connectivity.
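The arithmetic behind that PCIe ceiling is easy to sanity-check. The link width and generation below are assumptions for illustration, not confirmed Spark specs.

    # Why a single PCIe link can cap a dual-200GbE NIC well below 400 Gb/s.
    # Assumes a Gen5 x8 link purely for illustration.
    GT_PER_LANE = 32e9      # PCIe Gen5 raw rate per lane: 32 GT/s
    ENCODING = 128 / 130    # 128b/130b line coding
    PROTOCOL_EFF = 0.90     # rough allowance for TLP/DLLP framing overhead
    LANES = 8               # assumed link width

    usable_gbps = GT_PER_LANE * ENCODING * PROTOCOL_EFF * LANES / 1e9
    print(f"~{usable_gbps:.0f} Gb/s usable")  # ~227 Gb/s: enough for one
                                              # 200GbE port, nowhere near 400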
For me, the Strix Halo is the first nail in the coffin of discrete GPUs inside laptops for amd64. I think Nvidia knows this, which is why they're partnering with Intel to make an iGPU setup.
It feels like Nvidia spent a ton of money here on a piece of infrastructure (the big network pipes) that very few people will ever leverage, and that the rest of the hardware somewhat constrains anyway.
I think with the success of Strix Halo as an inference platform, this market segment is here to stay.
And that's half a year after the first machines came to market.
I love the Z13, but it's clearly a niche machine, so I'm assuming they are having a really hard time manufacturing the chips? All the capacity is getting eaten by Apple?
For instance, they went for the standard, lower-resolution display (1920x1440 for the 14" vs 2560x1600 for the 13" on the Z13). The thermals also looked better on the Z13, which comes partly from the form factor, partly from Asus having optimized for it for so many years.
Of course the Z13 keyboard is meh; I expect most owners to have it detached 90% of the time and to handle the machine more like a standalone screen/touch/pen device.
The vapor chamber cooling on the HP seems efficient, but the back-side venting on the Flow is clearly better.
https://h20195.www2.hp.com/v2/getpdf.aspx/c09119722.pdf
Outside of laptops, Beelink and co. are making NUCs with them, which are relatively affordable!
I do agree; the scarcity has limited their ability to assess the growth opportunity.
I’m annoyed that I’ll probably have to pick a Framework 13 with less CPU and much less GPU merely because of availability.
My main PC is a 7950X3D, which has the same core/thread count as the Strix unit, and the Strix benches within the margin of error of the 7950X3D. Which is to say the performance is the same.
That you can get the same compute power in a laptop is crazy.
Still, I think it’ll be quite equivalent soon.
I think one of the best AI systems in terms of price/performance is still just to build a desktop with dual RTX 3090s (of course you'll need a board that supports dual cards) and toss it in a closet.
is the progress in the room with us?
[1]: https://www.corsair.com/eu/en/c/ai-workstations
A mini PC with that much compute power mostly has an enthusiast home-lab audience.
Any enterprise or regular homelab wouldn't even need it, hence why it's hard to find one available.
Was it on sale or something?
https://www.bosgamepc.com/products/bosgame-m5-ai-mini-deskto...
And get pretty reasonable local LLM performance on some of the larger models for hobbyist use?
Edit: I don’t have a good grasp on this but I’m thinking I can only do shared memory when I’m using an APU and not a discrete GPU. Is this correct?
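As a rough sizing sketch (my own numbers, not from the thread): with a unified-memory APU the question is mostly whether the quantized weights plus runtime overhead fit in RAM, which a few lines of arithmetic can estimate. The bits/weight and overhead factor below are assumptions for illustration.

    # Rough estimate of whether a quantized model fits in unified memory.
    # The overhead factor and bits/weight are assumptions for illustration.

    def model_footprint_gb(params_billion: float, bits_per_weight: float,
                           overhead: float = 1.2) -> float:
        """Weights at the given quantization plus ~20% for KV cache,
        activations and runtime buffers."""
        weight_bytes = params_billion * 1e9 * bits_per_weight / 8
        return weight_bytes * overhead / 1e9

    UNIFIED_MEMORY_GB = 128  # a maxed-out Strix Halo configuration

    for params in (8, 32, 70, 120):
        need = model_footprint_gb(params, bits_per_weight=4.5)  # ~Q4-ish quant
        verdict = "fits" if need < UNIFIED_MEMORY_GB * 0.9 else "too big"
        print(f"{params:>4}B params: ~{need:5.1f} GB -> {verdict}")

For comparison, a 24 GB discrete card only holds the smallest of those entirely in VRAM; bigger models have to spill over PCIe to system RAM, which is where the unified-memory approach has the edge.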
And it's worth noting that AMD has matched up with Nvidia hardware-wise for decades, plus or minus. They are an interesting company in that they took on both Nvidia and Intel, and they are still doing so.