Read This but Think of Mac GPU Unified Memory Running LLMs
Posted 3 months ago · Active 3 months ago
apple.com · Tech · story
calm · positive
Debate: 0/100
Key topics
Apple Silicon
LLM
GPU Architecture
The post links to an Apple newsroom article about the M5 chip, sparking discussion about its potential for running large language models (LLMs) in the Mac GPU's unified memory.
Snapshot generated from the HN discussion
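For readers curious what "running an LLM in unified memory" means in practice, here is a minimal sketch (not from the post or the thread) assuming PyTorch with the MPS backend on Apple Silicon; the tensor shapes are hypothetical stand-ins for model weights.

    import torch

    # On Apple Silicon the GPU shares one unified memory pool with the CPU,
    # so placing tensors on the GPU does not copy them into discrete VRAM.
    device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

    # Hypothetical stand-in for a slice of LLM weights, held in unified memory.
    weights = torch.randn(8192, 8192, dtype=torch.float16, device=device)
    activations = torch.randn(1, 8192, dtype=torch.float16, device=device)

    # One transformer-style matmul; the GPU reads the weights in place.
    output = activations @ weights
    print(output.shape, output.device)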
Discussion Activity
Light discussion
First comment: 2m
Peak period: 3 comments in 0-1h
Avg / period: 3
Key moments
- Story posted: Oct 17, 2025 at 10:19 PM EDT (3 months ago)
- First comment: Oct 17, 2025 at 10:21 PM EDT (2m after posting)
- Peak activity: 3 comments in 0-1h (hottest window of the conversation)
- Latest activity: Oct 17, 2025 at 11:01 PM EDT (3 months ago)
ID: 45624355 · Type: story · Last synced: 11/17/2025, 9:03:40 AM
Read the primary article or dive into the live Hacker News thread when you're ready.
Now is like the IBM mainframe vs. Apple Mac / Microsoft DOS / Windows era. The Nvidia era is peaking and we are going back.
Just who is going to be the OS or LLM provider(s) in the new era, I wonder.
As in the past, we are seeing the chasm. One has to remember there are two different market types: the infra and the app.
Not THINK … but “Think Different” …
We've already seen that question answered. TSMC has two customers for cutting-edge nodes: Apple and Nvidia. Apple refuses to address the datacenter market, so they're missing out on the trillions of dollars to be made in high-margin compute.
Keep in mind, Apple could just drop their pride and support Nvidia drivers again. macOS can run CUDA if Apple just signs the installer; they could be selling rackmount Macs as high-performance ARM clusters. But nope! Apple insists on losing.