Leaked Apple M5 9 Core Geekbench Scores
Posted 3 months ago | Active 3 months ago
browser.geekbench.com | Tech story | High profile
excited / mixed
Debate: 70/100
Key topics
Apple M5 Chip
iPad Pro
ARM Processors
The leaked Geekbench scores of Apple's M5 chip show significant performance improvements over the M4, sparking discussions about its potential applications and limitations in iPadOS and macOS.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2m after posting
Peak period: 68 comments (12-24h)
Avg / period: 16
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
1. Story posted: Sep 30, 2025 at 12:00 PM EDT (3 months ago)
2. First comment: Sep 30, 2025 at 12:02 PM EDT (2m after posting)
3. Peak activity: 68 comments in the 12-24h window (hottest window of the conversation)
4. Latest activity: Oct 7, 2025 at 9:02 AM EDT (3 months ago)
ID: 45427197 | Type: story | Last synced: 11/20/2025, 8:09:59 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
Leaked unboxing video reveals unannounced M5 iPad Pro in full - https://9to5mac.com/2025/09/30/leaked-unboxing-video-reveals...
https://x.com/markgurman/status/1973048229932507518 | https://xcancel.com/markgurman/status/1973048229932507518
Exclusive! Unboxing the iPad Pro with the M5 before Apple! - https://www.youtube.com/watch?v=XnzkC2q-iGI
Big boy is bitching about a meager 10% increase in CPU and 30% increase in GPU as a nothing burger. "Who would upgrade from M4 to M5?" Exactly. The difference shows when you upgrade from an older generation to the latest; most people do not upgrade annually. I'm looking to replace my 6th-gen tablet, but now I might just get an M4 after the M5 is official and get a nice discount on what will be a helluva upgrade for me.
Some of the comments in the threads you linked also suggest Russia has infiltrated Apple, but my guess would be somewhere on the Chinese side of the supply chain.
[edit] typo
Single-thread MacBook progression on Geekbench:
M1: 2350
M2: 2600
M3: 3100
M4: 3850
M5: 4400 (estimated)
https://browser.geekbench.com/mac-benchmarks
The actual IPC and perf/clock increase of these chips, excluding SME-specific acceleration, is MUCH smaller.
AMX has been present in every M series chip and the A series chips starting with the A13. If you are comparing M series chip scores in Geekbench 6 they are all using it, not just the latest ones.
Any app using Apple's Accelerate framework will take advantage of it.
But ultimately with a benchmark like Geekbench, you're trusting them to pick a weighting. Geekbench 6 is not any different in that regard to Geekbench 5 – it's not going to directly reflect every app you run.
I was really just pointing out that the idea that "no" apps use SME is wrong and therefore including it does not invalidate anything – it very well could speed up your apps, depending on what you use.
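To make that point concrete, here is a minimal Swift sketch of the kind of call that goes through Apple's Accelerate framework, as the comments above describe. The data and sizes are arbitrary, and whether a particular vDSP routine is dispatched to AMX or SME on a given chip is an internal detail of Accelerate, so treat this as an illustration of the code shape that can benefit, not a claim about specific hardware paths.

```swift
import Accelerate

// Illustrative only: arbitrary input vectors.
let a = [Double](repeating: 1.25, count: 1_000_000)
let b = [Double](repeating: 0.80, count: 1_000_000)

// A dot product via vDSP. Code written against Accelerate (vDSP, BLAS, BNNS, ...)
// picks up whatever matrix/vector hardware the chip exposes, without source changes.
var dot = 0.0
vDSP_dotprD(a, 1, b, 1, &dot, vDSP_Length(a.count))
print(dot) // 1.25 * 0.80 * 1_000_000 = 1_000_000
```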
Multi-core progression:
M1: 8350
M2: 9700
M3: 11650
M4: 14600
M5: 16650 (estimated)
This is assuming an 8% uplift as mentioned. Also nice.
The above summary also excludes the GPU, which seems to have gotten the most attention this generation (~+30%, even more in AI workloads).
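For readers who want the generation-over-generation deltas implied by the figures quoted above, here is a small Swift sketch that computes them. The scores are the thread's approximate numbers and the M5 entries are estimates from the leak, not official results.

```swift
import Foundation

// Approximate Geekbench 6 scores quoted in the thread; M5 values are estimates.
let singleCore: [(chip: String, score: Double)] = [
    ("M1", 2350), ("M2", 2600), ("M3", 3100), ("M4", 3850), ("M5", 4400)
]
let multiCore: [(chip: String, score: Double)] = [
    ("M1", 8350), ("M2", 9700), ("M3", 11650), ("M4", 14600), ("M5", 16650)
]

// Percentage uplift from each generation to the next.
func uplifts(_ scores: [(chip: String, score: Double)]) -> [String] {
    return zip(scores, scores.dropFirst()).map { (prev, next) in
        let gain = (next.score / prev.score - 1) * 100
        return "\(prev.chip) -> \(next.chip): " + String(format: "%+.1f%%", gain)
    }
}

print("Single-core:", uplifts(singleCore))
// ["M1 -> M2: +10.6%", "M2 -> M3: +19.2%", "M3 -> M4: +24.2%", "M4 -> M5: +14.3%"]
print("Multi-core: ", uplifts(multiCore))
// ["M1 -> M2: +16.2%", "M2 -> M3: +20.1%", "M3 -> M4: +25.3%", "M4 -> M5: +14.0%"]
```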
Also, the size numbers are lies and aren't the actual size of anything.
Suppose I'm trying to decide whether to buy a 32-core system with a lower base clock or a 24-core system with a higher base clock. What good is it to tell me that both of them are the same speed as the 8-core system because they have the same boost clock and the "multi-core" benchmark doesn't actually use most of the cores?
Suppose I run many different kinds of applications and am just looking for an overall score to provide a general idea of how two machines compare with one another. That's supposed to be the purpose of these benchmarks, isn't it? But this one seems to be unusually useless at distinguishing between various machines with more than a small number of cores.
Your analysis is also incorrect for many of these systems. Each core may have its own L2 cache and each core complex may have its own L3, so systems with more core complexes don't inherently have more contention for caches because they also have more caches. Likewise, systems with more cores often also have more memory bandwidth, so the amount of bandwidth per core isn't inherently less than it is in systems with fewer cores, and in some cases it's actually more, e.g. a HEDT processor may have twice as many cores but four times as many memory channels.
But in your example, deciding between 24 cores at a somewhat higher frequency or 32 cores at a somewhat lower frequency based on some general-purpose benchmark is essentially pointless. The difference will be small enough that only a real application benchmark can tell you what you need to know. A general-purpose benchmark will be no better than a coin toss, because the exact workings of the benchmark, the weightings of its components into a score, and the exact hardware you are running on will have interactions that determine the decision to a far greater degree. You are right that there could be shared or separate caches, shared or separate memory channels. The benchmark might exercise those, or it might not. It might heat certain parts of the die more than others. It might just be the epitome of embarrassingly parallel benchmarks, BogoMIPS, which is a loop executing NOPs. The predictive value of the general-purpose benchmark is nil in those cases. The variability from the benchmark maker's choices will always introduce a bias and therefore a measurement uncertainty, and what you are trying to measure is usually smaller than that uncertainty. Therefore: no better than a coin toss.
And a benchmark can then provide a reasonable cross-section of different applications. Or it can yield scores that don't reflect real-world performance differences, implying that it's poorly designed.
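Since much of this argument turns on how subtest weightings shape a single aggregate number, here is a small, purely illustrative Swift sketch. The subtest names, scores, and weights are made up (not Geekbench's actual methodology); the point is only that the weighting alone can flip which machine "wins".

```swift
import Foundation

// Weighted geometric mean of subtest scores - the general shape of how
// benchmark suites collapse many subtests into one number.
// All names and numbers below are hypothetical.
func aggregate(_ scores: [String: Double], weights: [String: Double]) -> Double {
    let weightedLogSum = scores.reduce(0.0) { sum, entry in
        sum + (weights[entry.key] ?? 0) * log(entry.value)
    }
    return exp(weightedLogSum)
}

let machineA = ["integer": 2000.0, "floatingPoint": 1000.0]
let machineB = ["integer": 1000.0, "floatingPoint": 2000.0]

let integerHeavy = ["integer": 0.7, "floatingPoint": 0.3]
let floatHeavy   = ["integer": 0.3, "floatingPoint": 0.7]

// Same machines, different weighting, opposite ranking.
print(aggregate(machineA, weights: integerHeavy), aggregate(machineB, weights: integerHeavy))
// ~1625 vs ~1231: machine A "wins"
print(aggregate(machineA, weights: floatHeavy), aggregate(machineB, weights: floatHeavy))
// ~1231 vs ~1625: machine B "wins"
```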
Many of the systems claiming to have that CPU were actually VMs assigned random numbers of cores less than all of them. Moreover, VMs can list any CPU they want as long as the underlying hardware supports the same set of instructions, so unknown numbers of them could have been running on different physical hardware, including on systems that e.g. use Zen4c instead of Zen4 since they provide the same set of instructions.
If they're just taking all of those submissions and averaging them to get a combined score, it's no wonder the results are nonsense. And VMs can claim to be non-server CPUs too:
https://browser.geekbench.com/v6/cpu/search?utf8=%E2%9C%93&q...
Are they actually averaging these into the results they show everyone?
https://browser.geekbench.com/v6/cpu/6807094
https://browser.geekbench.com/v6/cpu/9507365
The ones on actual hardware with lower scores typically have comments like "Core Performance Boost Off":
https://browser.geekbench.com/v6/cpu/1809232
And that's still a higher score than the one listed on the main page.
The only real distinction is between high-end systems and low-end systems, but that's exactly what a benchmark should be able to usefully compare, because people want to know what a higher price tag would buy them.
Most people looking to optimize Epyc compile or render performance care about running inside VMs, with all I/O going to SANs, assuming there is enough work that you can yield to other jobs to increase throughput, and ideally near thermal equilibrium.
Faster hardware doesn’t exclusively make developers lazy; it also opens up capability.
Like, if I were buying a new workstation right now, I’d want to be shelling out $2000 so that I could get something like a Ryzen AI 395+ with 128GB of fast RAM for local AI, or an equivalent Mac Studio.
That’s definitely not because I’m “lazy,” it’s because I can’t run a decent model on a Raspberry Pi.
I mention this because I'm guessing you might be using Docker Desktop, which is kinda slow.
If you fail at these, you can even trash your SSD and need to replace the whole laptop, since the SSD is soldered in.
I will say this - and most will not like it - I'd go out and buy an M* MacBook if they still kept Boot Camp around and let me install Windows 11 ARM on it. I've heard Linux is pretty OK nowadays, but I have some... ideological differences with the staff behind Asahi, and it's still a wonky hack that Apple can put their foot down on any day.
Benchmarking how long people could sit with a laptop on their lap while running Geekbench could be an interesting metric though.
Yes, they've done some nice things to get the performance and energy efficiency up, but it's not like they've got some magic bullet either. From what I've seen in reviews, Intel is not so far off with things like the Ultra 7 258V. If they caught up to TSMC on the process node, they would probably match Apple too.
This was a reply to "never having had a computer that felt fast for so long".
For some tasks, this CPU with the GTX 970 still feels faster than an M2 MacBook or a recent Ryzen laptop APU.
Which is not to say that the Air is a bad device; it's an amazing laptop (especially for the price - I have not seen a single Windows laptop with this build quality even at 2x the price) and the performance is good. If I were doing only something like VS Code and Node/frontend work, it would be more than enough.
But people here also oversell its capabilities: if you need anything more CPU/memory intensive, the Pro is a must, and the "Apple needs less RAM because of the fast IO/memory" argument is a myth.
But even when I kill all processes and just run a build, you can see the lack of cores slow the build down noticeably. Investing in a 48GB RAM Pro version will definitely be worth it for the improved experience; I can get by in the meantime by working more on my desktop workstation.
A car that does this in 4 seconds is still fast (though twice as slow)
>>In the context of cars:
>>"Fast" refers to top speed. A fast car has a high maximum velocity. It can cover a great distance in a sustained manner once it reaches its peak speed. Think of the Bugatti Chiron or a Koenigsegg, which are famous for their incredibly high top speeds.
>>"Quick" refers to acceleration. A quick car can get from a standstill to a certain speed (often 0 to 60 mph or 0 to 100 km/h) in a very short amount of time. This is about how rapidly the car can change its velocity. Modern electric vehicles, like the Tesla Model S Plaid or the Lucid Air Sapphire, are prime examples of exceptionally quick cars due to the instant torque of their electric motors.
Moore's law was never about single-threaded performance; it was about transistor count and transistor cost, but people misunderstood it while single-threaded performance was increasing exponentially.
https://news.ycombinator.com/item?id=45434910
Your reply here is neither directly related to nor an answer to the comment you replied to.
So I guess we've caught up with the desktop now.
Actually, I assume we caught up a while ago if you count the beefy multi-core Mx Ultra variants they released; it's really just the base model that has caught up now. On the other hand, I could have spent four times as much for twice as many cores on my desktop as well.
On the move, laptops will always be a bit slow, because all the tricks to save power at idle don't help much when you're actually putting them to work.
Compared to every Intel MBP I went through - each showing its age after about 2-3 years, with every action/compile bringing more and more fan noise and throttling - the M1 is still a magical processor.
I've now switched to desktop Linux, using an 8C/16T AMD Ryzen 7 9700X with 64GB. It's like night and day, but it is software-related: Apple just slows everything down with their animations and UI patterns, probably to nudge people to acquire faster, newer hardware.
The change to Linux is a change in lifestyle, but it comes with a lot of freedom and options.
The only place I feel it is when I am running a local LLM - I do get appreciably more tokens per second.
https://browser.geekbench.com/v6/cpu/compare/14173685?baseli...
About 10% faster for single-core and 16% faster for multi-core compared to the M4. The iPad M5 has the same number of cores and the same clock speed as the M4, but has increased the RAM from 8GB to 12GB.
M1 (16GB)
M1 Pro (16GB)
M2 Pro (16GB)
M3 Pro (32GB)
M4 Air (24GB)
I currently switch between the M2 Pro and the M4 Air, and the Air is noticeably snappier in everyday tasks. The 17' M3 Pro is the faster machine, but I prefer not to lug it around all day, so it gets left home and occasionally used by the wife.
Not the 17 inch tall one they built and put on stage for the MacWorld keynote speech - that was in danger of being trod upon by a dwarf.
TL;DR: I expect a smaller M4-to-M5 jump for the MBP than for the iPads, because the latter benefit from new cooling tech.
For $800 the M4 Air just seems like one of the best tech deals around.
Only if you don't mind macOS.
Still better than all the alternatives for someone like me who has to straddle clients expecting MS Office; it gives me a *nix out of the box and can run Logic, Reaper, and MainStage.
Reaper has a native Linux client. Logic and MainStage... are you serious? :D
Windows on ARM performance is near-native when run under macOS. `virtiofs` mounts aren't nearly as fast as native Linux filesystem access, but Colima/Lima can be very fast if you (for example) move build artifacts and dependency caches inside the VM.
Except when you need something like UDP ports, for example. I tried it for 2-3 weeks, but I kept encountering similar issues. In the end I just started to use custom Alpine VMs with UTM and run Docker inside them, with all networking configured with pf.
See, that's where the macOS shitshow begins: Parallels costs €189.99, and it looks like they are pushing towards subscriptions. I am not in the ecosystem, but Parallels is the only hypervisor I've ever seen recommended.
Another example is Little Snitch, a beloved and widely recommended firewall. Just €59! (IIRC, macOS doesn't even respect user network configuration when it comes to Apple services, e.g. bypassing VPN setups...)
Now, don't get me wrong, I am certain there are ways around it, but Apple people really need to reflect on what it commonly means to run a frictionless macOS setup. It's pretty ridiculous, especially coming from Linux.
I mean, c'mon... paying for a firewall and a hypervisor? Even running proprietary binaries for these kinds of OS-level features seems moderately insane.
Don't get me wrong, I really admire what Apple has done with the M CPUs, but I personally prefer the freedom of being able to install Linux, BSD, Windows, and even weirder OSes like Haiku.
394 more comments available on Hacker News