A Behind-the-Scenes Look at Broadcom's Design Labs
Posted about 2 months ago · Active about 2 months ago
techbrew.com · Tech · Story
Sentiment: skeptical/mixed · Debate · 60/100
Key topics
Networking
AI Data Centers
Broadcom
The article provides a behind-the-scenes look at Broadcom's design labs, highlighting their focus on AI data centers, while the discussion revolves around the implications of this focus and concerns about Broadcom's business practices.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagementFirst comment
8d
Peak period
7
Day 9
Avg / period
7
Key moments
- 01 Story posted: Nov 4, 2025 at 3:28 AM EST (about 2 months ago)
- 02 First comment: Nov 12, 2025 at 5:30 AM EST (8d after posting)
- 03 Peak activity: 7 comments in Day 9 (hottest window of the conversation)
- 04 Latest activity: Nov 12, 2025 at 5:20 PM EST (about 2 months ago)
ID: 45808693 · Type: story · Last synced: 11/20/2025, 2:27:16 PM
The minimum to get into this party is to already ship 400GbE (QSFP112) or 800GbE (QSFP-DD 112 or QSFP224), and to already be working on 800GbE (QSFP-DD 112) or 1600GbE (QSFP-DD 224).
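Those port speeds fall out of simple lane arithmetic. A minimal sketch, using nominal per-lane data rates (the "112"/"224" in the form-factor names are the raw PAM4 signaling rates, which net out to roughly 100/200 Gb/s of data per lane after encoding and FEC overhead):

```python
# Lane math behind the common form factors (nominal rates; overhead ignored).
SPEEDS = {
    "QSFP112":     (4, 100),  # 4 lanes x ~100 Gb/s -> 400GbE
    "QSFP-DD 112": (8, 100),  # 8 lanes            -> 800GbE
    "QSFP224":     (4, 200),  # 200G-per-lane gen  -> 800GbE
    "QSFP-DD 224": (8, 200),  # 8 lanes x ~200G    -> 1600GbE
}

for form_factor, (lanes, per_lane_gbps) in SPEEDS.items():
    total = lanes * per_lane_gbps
    print(f"{form_factor}: {lanes} x {per_lane_gbps}G = {total}GbE")
```

The doubling pattern is why the "already shipping" and "already working on" tiers are exactly one generation apart: each step either doubles the lane count or doubles the per-lane rate.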
Broadcom doesn't belong to any of the AI-era SIGs, so they're trying to drag their networking-fabric stack up to speed to match. They do belong to the Ethernet Alliance and, technically, the IBTA (though the IBTA hasn't been relevant since Nvidia bought Mellanox).
The SIG they need to belong to is the UALink Consortium, which is moving past simple RoCE/iWARP-style RDMA over Ethernet toward CPU-bus semantics over Ethernet. In other words, Ultra Ethernet is trying to do multi-vendor supercomputer interconnect the way AMD did HyperTransport over Mellanox circa 2001-2015. (This is why Nvidia bought Mellanox, by the way: they wanted to deprive AMD of an advantage AMD no longer needed; AMD had already moved to an external PCIe fabric to replace Mellanox, hence the brand switch to Infinity Fabric.)
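The distinction being drawn here, RDMA-style explicit transfers versus CPU-bus-style load/store, can be caricatured in code. Everything below is illustrative only: these classes are hypothetical and not a real RDMA verbs or UALink API.

```python
class RdmaStyle:
    """RoCE/iWARP model: memory is registered up front, and transfers
    are explicit bulk operations posted to the NIC."""
    def __init__(self):
        self.registered = {}            # rkey -> local buffer

    def register(self, rkey, buf):
        self.registered[rkey] = buf     # pin + advertise memory

    def rdma_read(self, rkey, offset, length):
        # Explicit one-sided transfer: the initiator names remote
        # memory by key; the remote CPU isn't involved per operation.
        return self.registered[rkey][offset:offset + length]


class LoadStoreStyle:
    """CPU-bus model (what UALink-style fabrics aim for): remote memory
    appears in the address space and is touched one word at a time."""
    def __init__(self, memory):
        self.memory = memory            # stand-in for mapped remote memory

    def load(self, addr):
        return self.memory[addr]        # looks like an ordinary read

    def store(self, addr, value):
        self.memory[addr] = value       # looks like an ordinary write
```

The practical difference: the RDMA model is great for bulk data movement but awkward for fine-grained sharing, while load/store semantics let accelerators treat each other's memory almost like local DRAM.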
UALink is a socket-to-socket protocol that is PHY-independent: it can run over CXL (common in Intel-focused supercomputing), PCIe (PCI over PCIe, i.e., ordinary non-ccNUMA hardware babysat by the local CPU), Infinity Fabric/xGMI (AMD CPUs and GPUs), and others, while natively supporting RDMA over Ultra Ethernet (200GbE and up) to glue clusters together across NUMA/UALink domains.
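PHY independence of that sort is plain layering: the socket-to-socket protocol talks to an abstract link, and CXL, PCIe, xGMI, or Ultra Ethernet each slot in underneath. A toy sketch of the idea, with made-up names that are not UALink's actual layer model:

```python
from abc import ABC, abstractmethod

class Phy(ABC):
    """Abstract physical/link layer the protocol rides on."""
    @abstractmethod
    def send(self, frame: bytes) -> None: ...
    @abstractmethod
    def recv(self) -> bytes: ...

class LoopbackPhy(Phy):
    """Stand-in for any concrete PHY (CXL, PCIe, xGMI, Ultra Ethernet);
    here just an in-process queue."""
    def __init__(self):
        self.queue = []
    def send(self, frame):
        self.queue.append(frame)
    def recv(self):
        return self.queue.pop(0)

class SocketToSocketLink:
    """Protocol layer: never touches the wire directly, so any Phy
    implementation can be swapped in underneath unchanged."""
    def __init__(self, phy: Phy):
        self.phy = phy
    def write(self, payload: bytes):
        self.phy.send(b"HDR" + payload)   # made-up framing
    def read(self) -> bytes:
        return self.phy.recv()[3:]        # strip made-up header
```

Swapping `LoopbackPhy` for a different `Phy` subclass changes nothing above the protocol layer, which is the whole point of defining UALink against an abstract link rather than one wire format.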
The UALink Consortium was founded by Alibaba, AMD, Apple, Astera Labs, AWS, Cisco, Google, Hewlett Packard Enterprise, Intel, Meta, Microsoft, and Synopsys... notice the absence of Broadcom from that list. Nvidia isn't a member either; they desperately want a moat to keep the rest of the industry out.