Nvidia is gearing up to sell servers instead of just GPUs and components
Mood
excited
Sentiment
positive
Category
tech
Key topics
Nvidia
AI Servers
Vertical Integration
Nvidia is reportedly planning to sell entire AI servers instead of just GPUs and components, marking a shift towards vertical integration.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment
41m
Peak period
75
Day 1
Avg per period
38
Based on 76 loaded comments
Key moments
- Story posted: 11/14/2025, 1:18:09 PM (4d ago)
- First comment: 11/14/2025, 1:59:01 PM (41m after posting)
- Peak activity: 75 comments in Day 1 (hottest window of the conversation)
- Latest activity: 11/17/2025, 6:58:49 PM (1d ago)
I wouldn't be surprised if we see some major acquisitions or mergers in the next few years involving one of the independent AI vendors like OpenAI or Nvidia.
What is changing?
> Further limiting supply
Even if they don't increase their GPU production capacity, that's not "limiting" supply. It's keeping it the same. Only now they can sell each unit for a larger profit margin.
Are they creating their own software stack or working with one or more partners?
What I don't really get is that Nvidia is worth around $4.5T on $130B revenue. If they want to sell servers, why don't they just buy Dell or HP? If they want CPUs, why not buy AMD, Qualcomm, Broadcom, or TI? (I know they got blocked on their ARM attempt before the AI boom.) Their revenue is too low to support their valuation; shouldn't they use this massive value to buy up companies to grow their revenue?
And no sane regulator on the planet will allow them to take over AMD, Qualcomm, or Broadcom.
https://www.npr.org/2025/04/09/nx-s1-5356480/nvidia-china-ai...
Why buy a complex but relatively low margin business that comes with a lot of baggage they don't need, when they can focus on what they do best and let Dell and HP compete against each other for Nvidia's benefit?
Same reason why Apple doesn't buy Foxconn or TSMC.
They want to sell HPC servers, not general purpose servers.
Now? You would have to tell me Nvidia was also building multiple nuclear power plants for the scale to make sense.
It'll work until you can buy comparable expansion cards for open systems (if history is any guide).
And at least if anyone can buy the hardware you'll have your own or have multiple competing providers you can lease it from. If you can only lease it and only from one company, who would want to touch that? It's like purposely walking into a trap.
Meanwhile, at the hardware level, TPUs provide some competition for Nvidia.
Sounds like they are going the Apple way. How long until we have to pay 30% to get our apps in their AI-Store?
It's more efficient to have companies that specialize in making all kinds of boards than to make each of the companies making chips have to do that too. And it's a competitive market so the margins are low and the chip makers have little incentive to enter it when they can just have someone else do it for little money.
It’s pretty mind-blowing what this crisis shows, from the manipulation of atoms and electrons all the way up to these clusters. Particularly mind-blowing for me, someone who has cable management issues with a ten-port router.
I thought that with fiber we wouldn't need copper cables, except maybe for electricity distribution, but clearly I was wrong.
thanks for sharing
Seems like where things are heading?
> Starting with the VR200 platform, Nvidia is reportedly preparing to take over production of fully built L10 compute trays with a pre-installed Vera CPU, Rubin GPUs, and a cooling system instead of allowing hyperscalers and ODM partners to build their own motherboards and cooling solutions. This would not be the first time the company has supplied its partners with a partially integrated server sub-assembly: it did so with its GB200 platform when it supplied the whole Bianca board with key components pre-installed. However, at the time, this could be considered as L7 – L8 integration, whereas now the company is reportedly considering going all the way to L10, selling the whole tray assembly — including accelerators, CPU, memory, NICs, power-delivery hardware, midplane interfaces, and liquid-cooling cold plates — as a pre-built, tested module.
Makes sense longer term for Nvidia to build this, but it adds to the bear case for AWS et al. on AI infrastructure over the long term.
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.