Nvidia Sells Tiny New Computer That Puts Big AI on Your Desktop
Posted 3 months ago · Active 3 months ago
arstechnica.com · Tech · story
Key topics
Artificial Intelligence
Nvidia
Edge Computing
Nvidia has released a tiny computer that brings powerful AI capabilities to desktop devices, sparking discussion about its potential applications and implications for local AI processing.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
First comment: 4m after posting
Peak period: 6 comments in the first 6 hours
Average per period: 3.3 comments
Comment distribution: 13 data points (based on 13 loaded comments)
Key moments
- Story posted: Oct 15, 2025 at 7:43 AM EDT (3 months ago)
- First comment: Oct 15, 2025 at 7:47 AM EDT (4m after posting)
- Peak activity: 6 comments in the first 6 hours, the hottest window of the conversation
- Latest activity: Oct 18, 2025 at 10:14 AM EDT (3 months ago)
ID: 45590926 · Type: story · Last synced: 11/20/2025, 2:18:13 PM
This is why, in the long run, I believe we should all aspire to do LLM inference locally. But unfortunately, local models are nowhere near par with the SoTA cloud models available. Something like DGX Spark would be a decent step in this direction, but this platform appears to be mostly for prototyping and training models that are meant to eventually run on Nvidia data-center hardware.
Personally, I think I'll spec out an M5 Max/Ultra Mac Studio once that's a thing, and start trying to do this more seriously. The tools are getting better every day, and "this is the worst it'll ever be" is much more applicable to locally run models.
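For what it's worth, getting started locally is already fairly simple. Here's a minimal sketch using llama-cpp-python, assuming a quantized GGUF checkpoint has already been downloaded (the model path below is hypothetical):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Hypothetical model file; any quantized GGUF checkpoint works.
llm = Llama(
    model_path="models/llama-3.1-8b-instruct-q4_k_m.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm("Q: Why run LLM inference locally? A:", max_tokens=64)
print(out["choices"][0]["text"])
```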
https://nvdam.widen.net/s/tlzm8smqjx/workstation-datasheet-d...
At the lower price points you have the AMD machines, which are significantly cheaper even though they're slower and have worse software support. Then there's Apple's hardware with higher memory bandwidth, and even the Nvidia AGX Thor, which is faster in GPU compute at the cost of a weaker CPU and networking. And at the $3-4K price point, even a Threadripper system becomes viable, which can take significantly more memory.
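To make that tradeoff concrete, here's a rough cost-per-bandwidth comparison of the class of machines named above. The prices and bandwidth figures are approximate list numbers from around the time of the discussion and are meant as illustration, not quotes:

```python
# Approximate, illustrative figures for 128 GB configurations.
machines = {
    # name: (approx. USD, memory bandwidth in GB/s)
    "Nvidia DGX Spark":       (4000, 273),
    "AMD Strix Halo mini-PC": (2000, 256),
    "Mac Studio (M4 Max)":    (3700, 546),
}

for name, (usd, bw) in machines.items():
    print(f"{name:24s} ${usd:5,d}  {bw:3d} GB/s  ${usd / bw:6.2f} per GB/s")
```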
But (non-batched) LLM decoding is usually limited by memory bandwidth, isn't it? Any extra compute the GPU has goes unused in current-day LLM inference.
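That intuition is easy to sanity-check with back-of-envelope arithmetic: during non-batched decoding, each generated token has to stream essentially all active weights through memory once, so bandwidth, not FLOPS, sets the ceiling. A rough sketch (the 70% efficiency factor is an assumption):

```python
def peak_bandwidth_gbps(bus_width_bits: int, transfer_rate_mtps: int) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    return bus_width_bits / 8 * transfer_rate_mtps / 1000

def decode_tokens_per_sec(params_billions: float, bits_per_weight: int,
                          bandwidth_gbps: float, efficiency: float = 0.7) -> float:
    """Upper bound on single-stream decode speed: bandwidth / weight bytes."""
    weight_gb = params_billions * bits_per_weight / 8
    return bandwidth_gbps * efficiency / weight_gb

# LPDDR5x-8000 on a 256-bit bus (Strix Halo-class): 256 GB/s peak.
bw = peak_bandwidth_gbps(256, 8000)
print(f"peak bandwidth: {bw:.0f} GB/s")

# A 70B model quantized to 4 bits is ~35 GB of weights -> roughly 5 tok/s.
print(f"70B @ 4-bit: ~{decode_tokens_per_sec(70, 4, bw):.1f} tok/s")
```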
Framework's AMD AI Max PCs also come with LPDDR5x-8000 memory: https://frame.work/desktop?tab=specs
A bit expensive for 128 GB of RAM. What can the CPU do? Can it flawlessly run all the svchost.exe instances in Windows 11? For this money, does it at least have a headphone output?