Free Interactive Tool That Shows You How PCIe Lanes Work on Motherboards
Key topics
Commenters are buzzing about MoboMaps, a free interactive tool that shows how PCIe lanes are allocated on motherboards, and the thread quickly broadens into a discussion of the intricacies of hardware configuration. Several note the tool's potential for optimizing builds and point out how it surfaces the often-overlooked trade-offs in motherboard design. As users map their own systems, they discover limitations in their existing setups, prompting a round of community troubleshooting and knowledge-sharing. The topic lands at a good time, with PC enthusiasts and builders trying to squeeze every last bit of performance, and every last lane, out of their systems.
Snapshot generated from the HN discussion
Discussion Activity
- First comment: 7h after posting
- Peak period: 13 comments in the 39-42h window
- Average per period: 4.1 comments
- Based on 33 loaded comments
Key moments
- Story posted: Nov 19, 2025 at 2:13 AM EST (about 2 months ago)
- First comment: Nov 19, 2025 at 8:47 AM EST (7h after posting)
- Peak activity: 13 comments in the 39-42h window, the hottest stretch of the conversation
- Latest activity: Nov 21, 2025 at 1:44 AM EST (about 2 months ago)
All the motherboards these days make me feel claustrophobic. My current workstation is pretty old, but feels like it had more expansion capability (relative to its time) than what's on the market today.
I really suggest not seeking a lot of PCIe lanes unless you really need them right now, though. The price premium for a platform with a lot of extra PCIe is very steep once you get past consumer boards. It would be a shame to spend a huge premium on a server board and settle for slower older tech CPUs only to have all of those slots sit empty.
It’s a good idea to add up the PCIe devices you will use and the actual bandwidth they need. You lose very little by running a GPU in a PCIe x8 slot instead of a full x16 slot, for example. A 10G Ethernet card only needs 1 lane of PCIe 4.0. Even fast SSDs can get away with half of their lanes and you’ll never notice except in rare cases of sustained large file transfers.
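As a rough illustration of that budgeting exercise, here is a minimal sketch. The per-lane figures are the usual approximate post-overhead numbers for each PCIe generation, and the device list and bandwidth needs are hypothetical examples, not measurements:

```python
# Rough PCIe lane budgeting: approximate usable GB/s per lane, per generation.
# Ballpark post-encoding figures, not exact spec values.
LANE_GBPS = {3: 0.985, 4: 1.969, 5: 3.938}

def lanes_needed(required_gbps: float, gen: int) -> int:
    """Smallest power-of-two lane width that covers the required bandwidth."""
    width = 1
    while width * LANE_GBPS[gen] < required_gbps and width < 16:
        width *= 2
    return width

# Hypothetical devices and the bandwidth they actually need (GB/s).
devices = [
    ("GPU (rarely needs more than ~12 GB/s sustained)", 12.0),
    ("10G Ethernet NIC (~1.25 GB/s line rate)", 1.25),
    ("NVMe SSD during sustained large transfers", 7.0),
]

for name, need in devices:
    widths = ", ".join(f"x{lanes_needed(need, g)} @ gen {g}" for g in (3, 4, 5))
    print(f"{name}: {widths}")
```

Running this reproduces the comment's point: the 10G NIC fits in a single PCIe 4.0 lane, and the GPU is comfortable at x8 on a gen 4 board.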
Sorta yes, but kinda the other way around: you'll mostly notice it in short, high bursts of I/O. This is mostly the case for people who use them to run remote-mounted VMs.
Nowadays nearly all NVMe drives have a cache on board (DDR3 memory is common), which is how they manage to keep up at high speeds. However, once you exhaust the cache, speeds drop dramatically.
But your point is valid that very few people actually notice a difference.
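A toy model of that cache-exhaustion behaviour, purely illustrative; the cache size and the fast/slow speeds below are made-up numbers, not measurements from any particular drive:

```python
# Toy model: effective sustained write speed once an SSD's write cache is exhausted.
# All numbers are illustrative, not taken from any specific drive.
def effective_write_gbps(transfer_gb: float,
                         cache_gb: float = 50.0,
                         cached_gbps: float = 7.0,
                         uncached_gbps: float = 1.5) -> float:
    """Average GB/s for a single large write of `transfer_gb` gigabytes."""
    fast_part = min(transfer_gb, cache_gb)
    slow_part = max(transfer_gb - cache_gb, 0.0)
    total_seconds = fast_part / cached_gbps + slow_part / uncached_gbps
    return transfer_gb / total_seconds

for size in (10, 50, 200, 1000):
    print(f"{size:5d} GB write -> {effective_write_gbps(size):.2f} GB/s average")
```

Short writes stay near the cached speed; only long sustained transfers average down toward the post-cache rate, which is part of why a halved link width rarely shows up in practice.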
https://pcpartpicker.com/forums/topic/423337-animated-graphs...
I appreciate your advice. I use the machine for a variety of different tasks, and am looking to accommodate at least two high-end GPUs (one for passthrough to virtual machines for running things like SolidWorks), a number of SSDs, and as many PCIe expansion cards as possible. Many of the cards are older-gen, so they could be consolidated onto just a few modern lanes if I could find an external expander with sufficiently generous capacity. Here's a quick inventory of what's in the existing box:
- Mellanox InfiniBand. For high-speed, low-latency networking... these days, probably replaceable with integrated NICs, particularly if they come with RDMA.
- High-performance RAID. I've found dedicated cards offer better features, performance, capacity, resilience and reliability than any of the mobo-integrated garbage I've tried over the years. Things like BBUs/SuperCaps, seamless migration and capacity upgrades, out-of-band monitoring, etc. For example, I've taken my existing mass storage array, created on a modest ARC-1231ML 15+ years ago, through several newer generations to an ARC-1883, with many disk and capacity upgrades along the way, but it's still the same array without ever having had to reformat and restore from scratch. Incidentally, I've been particularly happy with Areca's hardware, and they've even implemented some features I requested over the years (like the ability to hot-clone a replacement disk for one expected to fail soon and then swap in the new one, without having to degrade the array and wait for a lengthy rebuild process that reduces your fault tolerance while hammering all member disks; as well as some other tweaks for better compatibility with tools like Hard Disk Sentinel). I notice they're finally starting to come out with controllers oriented to SSDs, like a PCIe 5.0 product (https://www.areca.com.tw/products/nvme-1689-8N.html) for up to 8 x4 M.2 SSDs that boasts up to 60 GB/s, which is interesting (though the high-queue-depth random performance still doesn't match directly-plugged drives); a rough check of where that 60 GB/s ceiling comes from follows this list. I know software RAID for the solid-state stuff is also an option (as is just living without redundancy), but it's been convenient outsourcing the complexity.
- Slim, low-performance accessory GPU for more displays
- A few others this crowd would just laugh at me for (e.g. a PCI I/O card that includes a true parallel port, because nothing is more fun™ for hobbyist stuff and USB-based alternatives were found to have too much abstraction or latency; a SCSI adapter for an archaic piece of vintage hardware I'd love to keep installed permanently but there ain't space, and occasional one-off use stuff like a high-bandwidth digitizer).
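On the 60 GB/s figure mentioned in the RAID item above: a quick back-of-the-envelope check suggests it is the host link, not the drives, that sets the ceiling. The x16 PCIe 5.0 host connection assumed below is an assumption about the card, not a quoted spec:

```python
# Rough sanity check of the "up to 60 GB/s" claim for an 8-slot x4 M.2 RAID card.
# Assumption (not a quoted spec): the card talks to the host over a PCIe 5.0 x16 link.
LANE_GBPS = {4: 1.969, 5: 3.938}   # approx usable GB/s per lane per generation

host_link = 16 * LANE_GBPS[5]        # ~63 GB/s ceiling at the slot
drives_gen4 = 8 * 4 * LANE_GBPS[4]   # eight x4 gen-4 drives, aggregate
drives_gen5 = 8 * 4 * LANE_GBPS[5]   # eight x4 gen-5 drives, aggregate

print(f"Host x16 PCIe 5.0 link: ~{host_link:.0f} GB/s")
print(f"Drive-side aggregate:   ~{drives_gen4:.0f} GB/s (gen 4) / ~{drives_gen5:.0f} GB/s (gen 5)")
# The advertised ~60 GB/s sits right at the x16 host-link ceiling rather than
# at the drives' combined peak.
```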
The motherboard had 6 PCIe slots, and I've got two more provided by an external PCIe expander (after accounting for the one lost to its own connection). If I could find some kind of expander that took a single PCIe 5.0 slot and turned it into half a dozen PCIe 3.0 slots (some full-width), I'd be set; see the rough numbers after this comment.
I know I'm at the crazy end of how-much-crap-can-you-jam-in-one-PC, but it still seems bizarre to me that newer boards have so many fewer slots yet feel lane-constrained, when between leading-edge SSD's and high-bandwidth GPU's the demand for more lanes is skyrocketing. When I built the previous PC it felt tight but doable... these days it feels like I can barely accommodate the level of graphics and storage I'd like, and by the time I do, there's nothing left for anything else. Granted it's been a few years since I got my hands dirty with this stuff, so maybe I'm just doing it wrong?
And yes, I've heard of USB... and have a bazillion devices plugged in (including some exotic ones like an LCD display, a logic analyzer, and a legit floppy drive that does get used once in a blue moon, like when I need to make a memtest86 boot disk for a vintage PC). I've actually found some motherboards have issues where the USB stack gets flaky once you have too many devices connected (even when using powered hubs to mitigate power constraints).
Ok... go ahead and have at me; tell me I'm old and dusty and I should take my one GPU and one SSD and be happy with them ;-).
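On the single-slot-to-many-slots expander idea: PCIe switches exist that fan a fast uplink out into more, slower downstream links, and the arithmetic is less tight than it might look. A minimal sketch, assuming an x16 PCIe 5.0 uplink and six full-width PCIe 3.0 downstream slots (a hypothetical configuration, not a specific product):

```python
# Back-of-the-envelope for fanning one PCIe 5.0 x16 slot out to six PCIe 3.0 x16 slots
# through a PCIe switch. The only hard limit is the aggregate bandwidth of the uplink.
LANE_GBPS = {3: 0.985, 5: 3.938}   # approx usable GB/s per lane

uplink = 16 * LANE_GBPS[5]          # x16 PCIe 5.0 uplink (~63 GB/s)
downstream = 6 * 16 * LANE_GBPS[3]  # six x16 PCIe 3.0 slots (~95 GB/s)

print(f"Uplink capacity:     {uplink:5.1f} GB/s")
print(f"Downstream capacity: {downstream:5.1f} GB/s "
      f"({downstream / uplink:.1f}x oversubscribed)")
# ~1.5x oversubscription only matters if every card runs flat out at once;
# for a mix of mostly idle older-gen cards it is effectively free bandwidth.
```

Expansion chassis built around PCIe switch silicon work this way; the hard part, as the comment suggests, is finding one with several full-length slots at a sane price.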
This means you can get a motherboard like the "Asus Pro WS WRX90E-SAGE SE" which dedicates 104 lanes to seven PCIe slots and 16 lanes to four M.2 slots.
For more like $3000 you can get a non-Pro Threadripper; the "Asus Pro WS TRX50-SAGE" has a more restrained 48 PCIe 5.0 and 32 PCIe 4.0 lanes, meaning the board's five PCIe slots and three M.2 slots have a mixture of speeds and lanes.
The rest of the market seems to think you just want to plug in one huge four-slot GPU and perhaps one other card.
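Tallying those quoted lane budgets shows why the cheaper board ends up with mixed widths. The per-slot split below is just one arithmetic possibility consistent with the comment's totals, not the boards' actual layouts:

```python
# Lane budgets as quoted in the comment above (not verified against Asus spec sheets).
# WRX90E-SAGE SE: 104 lanes across seven slots, 16 lanes across four M.2 sockets.
wrx90_slots = [16, 16, 16, 16, 16, 16, 8]   # one split that sums to 104: six x16 + one x8
wrx90_m2 = [4, 4, 4, 4]                     # four M.2 sockets at x4 each

print("WRX90 slot lanes:", sum(wrx90_slots))   # 104
print("WRX90 M.2 lanes: ", sum(wrx90_m2))      # 16

# TRX50-SAGE: 48 PCIe 5.0 + 32 PCIe 4.0 lanes for five slots and three M.2 sockets.
trx50_budget = 48 + 32
trx50_full_width_demand = 5 * 16 + 3 * 4   # what five x16 slots plus three x4 M.2 would need
print("TRX50 budget:           ", trx50_budget)             # 80
print("TRX50 full-width demand:", trx50_full_width_demand)  # 92 -> hence the mixed widths
```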
(ps. I don't suppose they make a "supersized" version of that board with a gap beside the first one or two GPU slots? So you can install a couple double-width cards without losing the underlying slots? Or a good source for a single-width, high-end GPU like the Inno3D RTX 5090 iChill Frostbite Pro?)
Let's Encrypt documented their early 2021 whitebox that used 128 PCIe 4.0 lanes, mainly for storage: https://letsencrypt.org/2021/01/21/next-gen-database-servers...
Troy Hunt (HaveIBeenPwned) recently solicited upgrade advice from the internet and settled on an Asus Pro WS TRX50-SAGE WIFI (which doesn't appear to be in the MoboMaps database yet): https://gist.github.com/troyhunt/a6e565981e4769976e9cffb705f...
In previous decades, non-mainstream CPU sockets were also more accessible to consumer budgets; first-gen Threadripper started at only 8 cores, so it was possible to pay extra for more memory channels and IO lanes without also buying an excess of CPU cores. But that had little to do with the popularity or viability of multi-GPU consumer systems.
On the server side, seven x16 slot motherboards exist.
NVLink is another one you might have heard of, although it might also fall in the exotic category. I think some systems take AXI off-chip too. So there's various other weird and wonderful things. But none you're likely to have in your PC I think.
On-chip is another story, you can connect USB or NVMe or GPU "peripherals" using an on-chip interconnect type. But I guess you are asking about off-chip.
In a pedantic/technical sense, no. Practically speaking though, yes.
I found it useful and thought others might also like it.
Hmm, are you sure about some of the PCIe slots? I think some marked as x4 get downgraded to x1 on these boards…
Further edit: this may be accurate; how are you getting this / confirming it?
25 more comments available on Hacker News