Building A16z's Personal AI Workstation
Posted 5 months ago · Active 5 months ago
a16z.com · Tech · story
Sentiment: skeptical / negative · Debate · 80/100
Key topics: AI Hardware, VC Firms, High-End Computing
A16Z, a VC firm, has built a high-end AI workstation with four NVIDIA RTX 6000 GPUs, sparking skepticism and criticism from the HN community about its practicality and the firm's technical expertise.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: N/A
Peak period: 66 comments in 0-12h
Avg / period: 18.3
Comment distribution: 73 data points
Based on 73 loaded comments
Key moments
1. Story posted: Aug 23, 2025 at 12:03 PM EDT (5 months ago)
2. First comment: Aug 23, 2025 at 12:03 PM EDT (0s after posting)
3. Peak activity: 66 comments in 0-12h (the hottest window of the conversation)
4. Latest activity: Aug 28, 2025 at 1:57 PM EDT (5 months ago)
ID: 44996892 · Type: story · Last synced: 11/20/2025, 4:32:26 PM
The funny part is that they still make money. It seems like once you’ve got the connections, being a VC is a very easy job these days.
My weak, uncited understanding from then is that they're poorly positioned: in our set they're still the guys who write you a big check for software, but in the VC set they're a joke. They misunderstood carpet-bombing investment as something that scales and went all in on way too many crypto firms. Now they've embarrassed themselves with a ton of assets that need to get marked down; they're clearly behind the other bigs, but there's no forcing function to do markdowns.
So we get primal screams about politics and LLM-generated articles about how a $9K video card is the perfect blend between price and performance.
There are other comments effusively praising them for their unique technical expertise. I maintain a llama.cpp client on every platform you can think of, and nothing in this article makes any sense. If you're training, you wouldn't do it on only four $9K GPUs that you own. If you're inferencing, you're not getting much more out of this than you would out of a ~$2K Framework desktop.
I was with you up till here. Come on! CPU inferencing is not it, even macs struggle with bigger models, longer contexts (esp. visible when agentic stuff gets > 32k tokens).
The PRO 6000 is the first GPU from their "workstation" series that actually makes sense to own.
The Framework Desktop's thing is that it has unified memory with the GPU, so, much like an M-series Mac, you can inference disproportionately large models.
Well, you're getting the ability to maintain a context bigger than 8K or so, for one thing.
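The context-size point is worth quantifying: KV-cache memory grows linearly with context length, on top of the model weights. A rough sketch, using a hypothetical 70B-class grouped-query-attention config (80 layers, 8 KV heads, head_dim 128, fp16) that is illustrative rather than taken from the article:

```python
# Back-of-the-envelope KV-cache sizing: why long contexts eat VRAM.
# The model config below is a hypothetical 70B-class GQA setup;
# real models vary, so treat the numbers as ballpark figures.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx_len: int, bytes_per_elem: int = 2) -> int:
    """Total bytes for the K and V caches across all layers (2x for K and V)."""
    return 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem

for ctx in (8_192, 32_768, 131_072):
    gib = kv_cache_bytes(80, 8, 128, ctx) / 2**30
    print(f"{ctx:>7} tokens -> {gib:5.1f} GiB of KV cache")
```

Under these assumptions an 8K context needs about 2.5 GiB of cache while 128K needs about 40 GiB, which is why long agentic sessions are where big-VRAM cards pull away from consumer hardware.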
As Mr. Hildebrand used to say, when you assume, you make...
(also note the article specifically frames this speccing out as about training :) not just me suggesting it)
Based on what? Your feelings?
> being a VC is a very easy job these days.
There you go. Why hasn't everyone who has connections become a VC?
What? A 12-year-old with a titanic budget could put this PC together.
I was not impressed by any of the partners
(but hey they were better than the PE partners I worked for immediately after)
What did you say?? I can't hear you over these blowers!!
The workstation versions are fine if you're running one or maybe two cards with an airflow gap between them, but if you pack four of them right next to each other then you're going to have a bad time when the fans get going.
What's the recommended operating system with support for this hardware and local compute without cloud telemetry/identity?
> Surprising efficiency: Despite its scale, the workstation pulls 1650W at peak, low enough to run on a standard 15-amp / 120V household circuit.
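That claim is easy to sanity-check. A minimal sketch, assuming a standard US 15A/120V branch circuit and the common NEC 80% continuous-load guideline (only the 1650W figure comes from the article): the peak draw fits under the breaker limit, but a sustained 1650W load would exceed the continuous rating.

```python
# Sanity-check the 1650W peak-draw claim against a US 15A/120V circuit.
# Everything except the article's 1650W figure is a standard
# electrical-code assumption, not something from the article.

volts, breaker_amps = 120, 15
peak_watts = 1650

circuit_watts = volts * breaker_amps      # 1800W absolute breaker limit
continuous_watts = circuit_watts * 0.8    # 1440W NEC guideline for sustained loads

print(f"Amps at peak: {peak_watts / volts:.2f}A on a {breaker_amps}A breaker")
print(f"Fits breaker limit ({circuit_watts}W)? {peak_watts <= circuit_watts}")
print(f"Fits continuous limit ({continuous_watts:.0f}W)? {peak_watts <= continuous_watts}")
```

So "low enough to run on a household circuit" holds only for short peaks; training runs that hold 1650W for hours would be over the 80% guideline.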
> But making this article should be a humiliation.
Article? It's their f*king website. Did you say the same when someone posted their complaints about LLMs on their blog?
Grand total: ~$41,000
- Motherboard: https://www.newegg.com/gigabyte-mh53-g40-amd-ryzen-threadrip... ($895)
- CPU: https://www.microcenter.com/product/674313/amd-ryzen-threadr... ($3,500)
- Cooler: https://www.newegg.com/p/3C6-013W-002G6 ($585)
- RAM: https://www.newegg.com/a-tech-256gb/p/1X5-006W-00702 ($1,600)
- SSDs: https://www.newegg.com/crucial-2tb-t700-nvme/p/N82E168201563... ($223 x 4 = $892)
- GPUs: https://www.newegg.com/p/N82E16888892012 ($8,295 x 4 = $33,180)
- Case: https://www.newegg.com/fractal-design-atx-full-tower-north-s... ($195)
- Power supply: https://www.newegg.com/thermaltake-toughpower-gf3-series-ps-... ($314)
Of course I don't personally have any use for this but it's good to have an idea what it takes to run the best openweight models in a secure/controlled environment. To get started a single 96GB GPU system is only $16,115. For perspective I spent about $10k (today dollars) for a Toshiba Portege 320CT laptop with as much memory and accessories as I could get in 1998.
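The arithmetic in the parts list above checks out; a short script to verify it (prices copied from the comment, component labels mine):

```python
# Sum the parts list from the comment above; labels are descriptive only.
parts = {
    "Motherboard (Gigabyte MH53-G40)": 895,
    "CPU (Threadripper)": 3500,
    "Cooler": 585,
    "RAM (256GB)": 1600,
    "SSDs (4x Crucial T700 2TB)": 223 * 4,
    "GPUs (4x RTX 6000)": 8295 * 4,
    "Case (Fractal North)": 195,
    "PSU (Thermaltake GF3)": 314,
}
total = sum(parts.values())
print(f"Grand total: ${total:,}")  # -> $41,161, i.e. ~$41,000
```

The four GPUs alone are about 81% of the build cost, which is the commenters' point: this is essentially a GPU purchase with a PC attached.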
I’m glad they did. It’s weird and different.
> We are planning to test and make a limited number of these
So this does approximately nothing to solve the original problem of supply and cost. Even if you sold it at a loss, that GPU is still going to be expensive.
Just be honest and say you thought it would be cool and you're not Y Combinator so you gotta do whatever you can to make your firm seem like a special smart kids club.
My only question is: why not Zen 5? No suitable motherboards?
Who is buying hardware this expensive from a business that probably doesn’t really know how to do (or isn’t setup to do) proper manufacturing tests?