AMD Could Enter ARM Market with Sound Wave APU Built on TSMC 3nm Process
Posted: 2 months ago · Active: about 2 months ago
Source: guru3d.com · Tech story · High profile
Sentiment: excited, mixed · Debate: 70/100
Key topics: AMD, ARM, APU, CPU Architecture
AMD is rumored to be entering the ARM market with a new APU called Sound Wave, sparking discussion about its potential applications, advantages, and AMD's strategy in the CPU market.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 1h after posting · Peak period: 149 comments (Day 1) · Avg per period: 53.3
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- Story posted: Oct 30, 2025 at 11:07 PM EDT (2 months ago)
- First comment: Oct 31, 2025 at 12:24 AM EDT (1h after posting)
- Peak activity: 149 comments in Day 1 (hottest window of the conversation)
- Latest activity: Nov 8, 2025 at 3:05 PM EST (about 2 months ago)
ID: 45767916 · Type: story · Last synced: 11/20/2025, 8:28:07 PM
ARM isn't nearly as interesting given the strides both Intel and AMD have made with low power cores.
In any scenario where Sound Wave makes sense, Zen-LP cores would align better with AMD's strategy.
Apple isn’t going to switch back to AMD64 any time soon. Cloud providers will switch faster if X64 chips become really competitive again.
I've always heard it's cooling capacity. I'm also pretty confident that's true.
Clearly, they want them, because there's demonstrated power savings.
The limit is power capacity and quite often thermal. Newer DCs might be designed with larger thermal envelopes; however, rack space is nearly meaningless once you exhaust the thermal capacity of the rack/aisle.
Performance within thermal envelope is a very important consideration in datacenters. If a new server offers double performance at double power it is a viable upgrade path only for DCs that have that power reserve in the first place.
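To make that trade-off concrete, here is a minimal sketch of the check a DC operator would run before a swap. All numbers are made-up placeholders, not real server specs, and the per-rack budget is a hypothetical figure.

```python
# Rough sketch: does a "2x performance at 2x power" upgrade fit the rack?
# Every number below is an illustrative placeholder, not a real server spec.

RACK_POWER_BUDGET_W = 12_000   # hypothetical per-rack power/thermal budget
OLD_SERVER_POWER_W = 400       # hypothetical current server draw
NEW_SERVER_POWER_W = 800       # hypothetical replacement (2x power, 2x perf)
SERVERS_PER_RACK = 30

def rack_fits(server_power_w: float, servers: int, budget_w: float) -> bool:
    """True if the fully populated rack stays within its power/thermal budget."""
    return server_power_w * servers <= budget_w

print("old config fits:", rack_fits(OLD_SERVER_POWER_W, SERVERS_PER_RACK, RACK_POWER_BUDGET_W))
print("new config fits:", rack_fits(NEW_SERVER_POWER_W, SERVERS_PER_RACK, RACK_POWER_BUDGET_W))
```

With these placeholder numbers the old rack draws exactly 12 kW, while the upgraded rack would need 24 kW: the doubled per-server performance is only reachable if the facility has that power reserve; otherwise the operator can populate only half the rack and performance per rack stays flat.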
EDIT: Haha, I was going off our workloads but hilariously there are some HPC-like workloads where benchmarks show the Graviton 4 smoking a 9654 https://www.phoronix.com/review/graviton4-96-core/4
I suppose ours must have been more like the rest of the benchmarks (which show the 9654 faster than the Graviton).
“IT Home News on October 13, @Olrak29_ found that the AMD processor code-named "Sound Wave" has appeared in the customs data list, confirming the company's processor development plan beyond the x86 architecture”
I think that means they are planning to export parts.
I think there still is some speculation involved as to what those parts are, and they might export them only for their own use, but is that likely?
AMD does not have any product that can compete with Intel's N-series or industrial Atom CPUs, which are designed for power consumptions of 6 W or 10 W, and AMD has never had any Zen CPU in this power range.
If the rumors about this "Sound Wave" are true, then AMD will finally begin to compete again in this range of TDP, a market that they abandoned many years ago (since the AMD Jaguar and Puma CPUs), because all their resources were focused on designing Zen CPUs for higher TDPs.
For cheap and low-power CPUs, the expensive x86-64 instruction decoder may matter, unlike for bigger CPUs, so choosing the Aarch64 ISA may be the right decision.
Zen compact cores provide the best energy efficiency for laptops and servers, especially for computation-intensive tasks, but they are not appropriate for cheap low-power devices whose computational throughput is less important than other features. Zen compact cores are big in comparison with ARM Cortex-X4, Intel Darkmont or Qualcomm cores and their higher performance is not important for cheap low-power devices.
A cursory search shows that the AMD APU used in the Valve Steam Deck draws 3-15W. Limiting the TDP to 6W on a Steam Deck is fine for Linux in desktop mode.
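On the Steam Deck that cap is a built-in slider, but for experimenting on a generic AMD mobile APU under Linux, something like the third-party RyzenAdj tool can be used. A minimal sketch follows; the flag names are taken from RyzenAdj's documentation (values in milliwatts), the 6 W target is just an illustrative figure matching the Intel N-series range, and whether a given APU actually honors these limits varies by chip.

```python
# Sketch: cap an AMD mobile APU to ~6 W sustained power with the third-party
# RyzenAdj tool (https://github.com/FlyGoat/RyzenAdj). Illustrative only;
# needs root and a supported Ryzen APU, and firmware may override the limits.
import subprocess

TARGET_MILLIWATTS = 6_000  # hypothetical 6 W cap

def cap_apu_power(mw: int) -> None:
    subprocess.run(
        [
            "ryzenadj",
            f"--stapm-limit={mw}",   # sustained (long-term) power limit
            f"--slow-limit={mw}",    # average power limit over the slow window
            f"--fast-limit={mw}",    # short-burst power limit
        ],
        check=True,
    )

if __name__ == "__main__":
    cap_apu_power(TARGET_MILLIWATTS)
```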
It is not a device that AMD sells on the open market, so it does not compete with the ubiquitous Intel N-series CPUs or with the Arm-based CPUs from various vendors.
Like I have said, since Jaguar and Puma, which are older than the first Zen, AMD has never sold on the open market any CPU/APU designed for a TDP of 10 W or less.
For some AMD APUs, like the Ryzen Z1, which are designed for a TDP of 15 W, the specification says the TDP is configurable down to 9 W. But when such CPUs are configured for a lower TDP than they are optimized for, they become inefficient: they have a bigger die area (i.e. a higher cost) and lower energy efficiency than CPUs that have been specifically designed for that lower power.
>> AMD does not have any product that can compete with Intel's N-series or industrial Atom CPUs, which are designed for power consumptions of 6 W or of 10 W and AMD never had any Zen CPU for this power range
Any pointers regarding that? How does the computing power to watts ratio look these days across major CPU architectures?
[1]: https://learn.microsoft.com/en-us/windows/arm/arm64ec-abi
IMHO (!), I think this would be great!!
Personally, I totally understood why AMD gave up on its last attempt, the A1100 Opterons, about 10 years ago in favor of the then-new Ryzen architecture:
* https://en.wikipedia.org/wiki/List_of_AMD_Opteron_processors...
But what I would really like to see: an ARM SoC/APU on an "open"* (!) hardware platform similar to the existing amd64 PC hardware.
* "open" as in: I'm able to boot whatever (vanilla) arm64 Linux distribution or other OS I want ...
I have to add: I'm personally offended by the amount of firmware/boot-process tinkering that is necessary to get, for example, the Raspberry Pi 5 (or 4) to boot vanilla Debian/arm64 ... ;)
br, a..z
PS: even if it's a bit off-topic in this context, as a reminder, a link to a slightly older article about an interview with Jim Keller on how ISA no longer matters that much ...
"ARM or x86? ISA Doesn’t Matter"
* https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter
> * https://chipsandcheese.com/p/arm-or-x86-isa-doesnt-matter
Some people, for some strange reason, want to endlessly relitigate the old 1980s RISC vs CISC flamewars. Jim Keller's interview above is a good antidote for that. Yes, RISC vs CISC matters for something like a simple in-order core you might see in embedded systems. For a big OoO core, much less so.
That doesn't mean you'd end up with x86 if you'd design a clean sheet 'best practices' ISA today. Probably it would indeed look something like aarch64 or RISC-V. So certainly in that sense RISC won. But the win isn't so overwhelming that it overcomes the value of the x86 software ecosystem in the markets where x86 plays.
Could be a revival but for different purposes
[0] https://web.archive.org/web/20210622032535/https://www.anand...
I mean Keller is talking about a decision to not pursue an ARM chip that he’d apparently been working on after(?) Zen 2 (or maybe in parallel). So AMD was already back on a good path at that point.
Would make much more sense to compare with Qualcomm trajectory here as they dominate the high end ARM SoC market.
Basically, AMD missed the opportunity to be first mover in a market which is now huge, with a project Apple proved to be viable three years after the planned AMD release. Any way you look at it, it seems like a major miss.
The fact that other good decisions in other segments were made at the same time doesn’t change that.
AMD cannot go and tell its customers "hey, we are changing ISA, go adjust." Their customers would run to Intel.
Apple could do that and forced its laptops to use it. Developers couldn't afford to lose those users, so they adjusted.
Nobody supports the new ISA because there is no SoC and nobody makes the new SoC because there is no support. But in this case, that’s not really true because Linux support was ready.
More than forcing volumes, Apple proved it was worth it because the efficiency gains were huge. If AMD had released a SoC with numbers close to the M1 before Apple, targeting the server market, they would have had a very good shot at it being a success and leveraging that into success in the laptop market, where Microsoft would have loved to have a partner ready to fight Apple and instead had to wait ages for Qualcomm.
Anyway, I stand by the point that looking at how the stock moved tells us nothing about whether the cancellation was a good or a bad decision.
Apple proved that creating your own high-end consumer SoC was a doable and viable idea thanks to TSMC, and could result in better chips by designing them around your needs.
And which ISA could they use? x86? Hard to say, probably not. So they had RISC-V and ARM.
Also about Windows...
If Panther Lake on 18A actually performs as well as expected, then why would anyone move to ARM on Windows when viable energy-efficient CPUs like Lunar Lake and Panther Lake are available?
Well yes, exactly, that's the issue with arriving 10 years later instead of being first mover. The rest of the world doesn't stand still.
Thing is, those efficiency gains are both in hardware and software.
Will a Linux laptop running the new AMD SoC use 5 W while browsing HN like this M3 pro does?
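One way to actually answer that on a Linux laptop is to sample the CPU package energy counter exposed through the RAPL powercap interface and convert it to average watts. A rough sketch follows; it assumes the kernel exposes `/sys/class/powercap/intel-rapl:0/energy_uj` (most recent Intel and AMD chips do, but the path, permissions, and availability vary by platform), it may need root, and it only covers the CPU package, not the display or the rest of the board.

```python
# Rough check of CPU package power on Linux via the RAPL powercap sysfs
# interface. Ignores counter wraparound for brevity; run as root if needed.
import time

ENERGY_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"  # microjoules

def read_energy_uj() -> int:
    with open(ENERGY_FILE) as f:
        return int(f.read().strip())

def average_package_watts(seconds: float = 10.0) -> float:
    start = read_energy_uj()
    time.sleep(seconds)
    end = read_energy_uj()
    return (end - start) / 1e6 / seconds  # uJ -> J, then J/s = W

if __name__ == "__main__":
    print(f"average package power: {average_package_watts():.2f} W")
```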
A huge amount of Apple's competitive edge is in the "other 90%", but they don't seem to get the headlines.
Does Windows have working sleep now? I hear it's dangerous to throw a wintelmd laptop in a backpack without shutting it down.
Data centers and hosting companies are probably the biggest customers buying AMD CPUs, no?
If those companies could lower their energy and cooling costs that could be a strong incentive to offer ARM servers.
1% 3% 6% 10% 30%?
But the newer ones use even less and they're faster.
But all of this is a decade before what we are discussing here. I didn’t even remember XScale existed at Intel while writing my first comment.
From 2:03:30 he points out that the only purpose of the DEC lawsuit was to facilitate the sale to Compaq without the microelectronics group.
I don't think this is a fair position. It could just as well be that focusing on K12 would have delayed Zen, maybe delaying it enough that it would have become irrelevant by the time it got to market.
Remember that while Zen was a good CPU, the only reason it made as much impact as it did was because it also was released in a good time (when Intel was stumbling with 10nm and releasing Skylake refresh after Skylake refresh).
Agree. AMD stock was under $2 prior to Zen. Buying was a bet that Zen would be competitive with Intel, in which case the stock would come back; otherwise they were doomed. The first Zen chips were in fact competitive: they beat Intel in some benchmarks and lost in others. That would have brought back competition, but who knew Intel would flounder for many more years while Zen got a nice uplift with each generation! Delaying Zen would have been bad for AMD, but in hindsight that wouldn't have mattered so long as they could stay afloat until it launched.
The thing about being broke is you may know about good opportunities but not have the resources to actually make use of them.
The SoC market is McDonald's: it's huge in the same way the soybean industry is huge. A zero-margin commodity.
But don't get me wrong, I wouldn't spit on McDonald's 6 billion either, and the soybean market is one of the fastest growing in the agrifood business, with huge volume traded; probably one of the most profitable commodities at the moment.
How much of Qualcomm's profit comes from providing yet another ARM chip vs. all the value-added parts they provide in the ARM SoC's, like all the radio modem stuff necessary for mobile phones?
Now that's kind of a rhetorical question, not sure a clear answer exists, at least not outside Qualcomm internal finance figures. Food for thought, though.
(That's sort of the logic behind RISC-V as well. The basic ISA and the chip that implements it is a commodity, the value comes from all the application specific extra stuff tacked on to the SoC.)
The SoC is the SoC.
You can't magically say "Qualcomm doesn't make money from SoCs, which are commodities" and then argue "but actually they make money with the non-commodity part", somehow magically splitting in two something which isn't splittable.
There is no real food for thought here. It is just a profitable market.
I think Apple would have switched anyway though. They designed Apple Silicon for their mobile devices first (iPhone, iPad) which I doubt they would have made x86. The laptops and desktops are the same ISA as the iPhone (strategically).
Sure, Apple and Arm worked together, but it wasn't developed by Apple and given to Arm.
No man, Apple basically had the power to frog-march its app devs to a new CPU arch. That absolutely would not have happened in the Windows ecosystem given the amount of legacy apps and (arguably more importantly) games. For proof of this you need look no further than Itanium and Windows on ARM.
Microsoft's ARM transition execution has been poor.
Apple's Rosetta worked on day one.
Microsoft's Prism still has some issues, but at release its compatibility with legacy x86 software was abysmal.
Apple's first party apps and developer IDE had ARM versions ready to go on day one.
Not so for Microsoft.
Apple released early Dev Kit hardware before the retail hardware was ready to go (at very low cost).
Microsoft did not.
https://en.wikipedia.org/wiki/FX!32
If most Intel hardware makers had gone full ARM, they would simply have lost market share. Apple customers are going to buy Apple hardware—whatever it has inside.
But of course Apple controls not just the hardware but the OS. So ya, if only Apple hardware will run your application, you are going to port to that hardware.
Apple has a massive advantage in these transitions for sure.
Apple had already switched CPUs in Macs twice, so it's not surprising that they could do it again, but would they have switched from Intel x86 to AMD ARM when they never used any AMD x86? Seems unlikely.
Focusing on a product that would sell on day one rather than one that would need years to build sales makes sense for a company that was struggling for relevance and continued operations.
Today? Sure, they could probably sell some arm cpus; in 2017, doesn't seem likely.
I think you can get 95% compatibility, but the 5% of apps not running (even though they might be used once every full moon and there are alternatives) might be seen as a major blocker by a potential customer who can still buy another computer with 100% compatibility.
I don't think AMD should be following Intel in markets outside x86. I want to see them go RISC-V with a wide vector unit. I'd like to see Intel try that too, but they're kind of busy fixing fabs right now.
Maybe the folks at Intel just didn't want to StrongARM their competitors?
https://en.wikipedia.org/wiki/Jim_Keller_(engineer)
Look at Intel's various ARM or embedded offerings it keeps canceling. It can't find buyers. Qualcomm, Samsung, and other vendors just keep eating up ARM sales.
Now I imagine AMD sees ARM servers as the future and wants to make sure not to be left behind, on top of ARM desktop/laptop and further embedded.
I think this is mostly a sign the world is now moving away from the old x86-64 system that ruled technology for so long. AMD needs to stay competitive here.
I believe Jim Keller is now working on RISC-V which could take the server market by storm in the next 5 years or so.
There are already RISC-V server offerings:
https://labs.scaleway.com/en/em-rv1/
So I went out looking for an ARM-based server of equivalent strength to a Mac Mini, and there's really not that much out there. There's the Qualcomm Snapdragon X Elite, which is in really only one actually buyable thing (the Lenovo IdeaCentre) and some vaporware Geekom or something product. But this thing doesn't have very good Linux support (it's built for ARM Windows apparently) and it's much costlier than some Apple Silicon running Asahi Linux.
So I'm eventually going to end up with some M1 Ultra Studio or an M4 Mini running Asahi Linux, which seems like such a complete inversion of the days when people would make Hackintoshes.
- Low power when only idling through events from the radio networks
- Low power and reasonable performance when classifying objects in a few video feeds.
- Higher power and performance when occasionally doing STT/TTS and inference on a small local LLM
It looks like it is intended to run Windows on Arm.
But wouldn't it make more sense for AMD to go into RISC-V at this point in time?
But... why? Of all things, I would have expected the webcam to not be CPU-related...
Cameras used on x86-64 usually just work using that USB webcam standard driver (what is that called again? uvcvideo?). But these smartphone-land cameras don't adhere to that standard; they probably don't connect using USB. They are designed to be used with the SoC vendor's downstream fork of Android or whatever, using proprietary blobs.
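A rough way to check which case you're in is to see which kernel driver each video node is bound to. The sketch below just walks /sys/class/video4linux and prints the driver; on typical x86 laptops you'd expect "uvcvideo", while smartphone-derived ARM boards often show an ISP/CSI driver instead. The path layout follows the usual sysfs conventions, but exact entries vary by kernel and platform.

```python
# List V4L2 video nodes and the kernel driver each one is bound to.
import glob
import os

for node in sorted(glob.glob("/sys/class/video4linux/video*")):
    name_path = os.path.join(node, "name")
    driver_link = os.path.join(node, "device", "driver")
    try:
        with open(name_path) as f:
            name = f.read().strip()
    except OSError:
        name = "?"
    driver = (
        os.path.basename(os.readlink(driver_link))
        if os.path.islink(driver_link)
        else "unknown"
    )
    print(f"{os.path.basename(node)}: {name!r} (driver: {driver})")
```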
The documentation says audio doesn't work either. Did you find a way to solve it, or did it work out of the box when you installed?
See here: https://github.com/velvet-os/imagebuilder/discussions/240#di...
Performance per watt is increasing due to the lithography.
Also, Jevons paradox.
... the rest is history.
Acorn won the bid to make the original BBC home computer, with a 6502-based design.
Acorn later designed their own 32-bit chip, the ARM, to try to leapfrog their competitors who were moving to the 68000 or 386, and later spun off ARM as a separate company.
Traditionally, x86 has been built powerful and power-hungry and then designers scaled the chips down, whereas it's the opposite for ARM.
For whatever reason, this also makes it possible to get much bigger YoY performance gains in ARM. The Apple M4 is a mature design[0], and yet a year later the M5 is CPU +15%, GPU +30%, memory bandwidth +28%.
The Snapdragon Elite X series is showing a similar trajectory.
So Jim Keller ended up being wrong that ISA doesn't matter. It's just that it's the people behind the ISA that matter, not the silicon.
[0] its design traces all the way back to the A12 from 2018, and in some fundamental ways even to the A10 from 2016.
I would need some strong evidence to make me think it isn't the ISA that makes the difference.
We will see how big an improvement its successor, Panther Lake, is in January on the 18A node.
>I would need some strong evidence to make me think it isn't the ISA that makes the difference.
It is like saying that Java syntax is faster than C# syntax.
Everything is about the implementation: compiler, JIT, runtime, stdlib, etc.
If you spent decades of effort on performance and GHz, then don't be shocked that someone who spent decades on energy efficiency is better in that category.
I love the saying "I don't trust benchmarks that I didn't fake myself."
It might be the same with x86 and power-efficiency (semantics being the issue), but there doesn’t seem to be a consensus on that.
Not by a long shot.
Over a decade ago, one of my college professors was an ex-intel engineer who worked on Intel's mobile chips. He was even involved in an Intel ARM chip that ultimately never launched (At least I think it never launched. It's been over a decade :D).
The old Conroe processors were based on Intel's mobile chips (Yonah). NetBurst explicitly didn't focus on power efficiency, and that drove Intel into a corner.
Power efficiency is core to CPU design and always has been. It's easy to create a chip that consumes 300 W idle. The question is really how far that efficiency is driven. And that may be your point. Lunar Lake certainly looks like Intel deciding to really put a lot of resources into improving power efficiency. But it's not the first time they did that. The Intel Atom is another decades-long series which was specifically created with power in mind (the N150 is the current iteration of it).
Java and C# are very similar so that analogy might make sense if you were comparing e.g. RISC-V and MIPS. But ARM and x86 are very different, so it's more like saying that Go is faster than Javascript. Which... surprise surprise it is (usually)! That's despite the investment into Javascript implementation dwarfing the investment into Go.
Basically, x86 uses op caches and micro-ops, which reduces instruction decoder use; the decoder itself doesn't use significant power; and ARM also uses op caches and micro-ops to improve performance. So there is little effective difference. Micro-ops and branch prediction are where the big wins are, and both ISAs use them extensively.
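For anyone who wants to sanity-check the "decoder barely runs" claim on their own machine, one approach is to count how many micro-ops are delivered from the uop cache (DSB) versus the legacy decoders (MITE) while a workload runs. A sketch follows; the event names idq.dsb_uops and idq.mite_uops are Intel-specific and vary by microarchitecture, and AMD exposes different counters, so treat this as an illustration of the method rather than a portable tool.

```python
# Count uop-cache vs legacy-decoder uop delivery with perf (Intel-specific
# event names; availability depends on the microarchitecture and on perf
# having access to the PMU).
import subprocess
import sys

def measure_uop_sources(cmd: list[str]) -> None:
    subprocess.run(
        ["perf", "stat", "-e", "idq.dsb_uops,idq.mite_uops", "--", *cmd],
        check=True,
    )

if __name__ == "__main__":
    # e.g. python measure_uops.py ./my_benchmark --args
    measure_uop_sources(sys.argv[1:] or ["sleep", "1"])
```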
If the hardware is equal and the designers are equally skilled, yet one ISA consistently pulls ahead, that leads to the likely conclusion that the way the chips get designed must be different for teams using the winning ISA.
For what it's worth, the same is happening in GPU land. Infamously, the M1 Ultra GPU at 120W equals the performance of the RTX 3090 at 320W (!).
That same M1 also smoked an Intel i9.
I'm not saying the skill of the design team makes zero difference, but it's ludicrous to say that the ISA makes no difference at all.
The claims about the M1 Ultra appear to be marketing nonsense:
https://www.reddit.com/r/MachineLearning/comments/tbj4lf/d_a...
That's not true.
ISA is just ISA
Had ARM been as weighed down by backwards compatibility, I doubt it would be as good as it is.
I really think Intel/AMD should draw a line somewhere around late 2000 and drop compatibility with stuff that slows down their processors.
That’s a blast from the past; native Java bytecode! Did anyone actually use that? Some J2ME phones maybe? Is there a more relevant example?
AFAIK you needed to pay a license fee to write programs using Jazelle instructions (so you needed to weigh whether the speedup of Jazelle was cheaper than just buying a more powerful CPU), and the instruction set itself was also secret, requiring an NDA to get any documentation (so no open source software could use it, and no open toolchains supported it).
I remember being very disappointed when I found out about that
Nvidia can design a super clean solution from scratch; I can bet $50 that it's going to be more efficient in MIPS/watt.
The efficiency came solely from the frontend, which is a lot heavier on x86 and stays up longer because decoding is way more complex. The execution units were the same (at least mostly, I think; I might be misremembering), so once you are past the frontend there's barely any difference in power efficiency.
https://chipsandcheese.com/p/evaluating-the-infinity-cache-i...
76 more comments available on Hacker News