Connecting M.2 Drives to Various Things (and Not Doing So)
Key topics
The quest to connect M.2 drives to various interfaces has sparked a lively debate about the feasibility and practicality of such adapters. While some commenters, like yummypaint, suggest that an FPGA dev board could be used to create a custom adapter, others, like IgnaciusMonk, argue that writing a SATA to NVMe adapter is nonsensical, drawing parallels to incompatible protocol conversions. However, userbinator counters that it makes as much sense as existing USB-NVMe adapters, highlighting the complexity of the issue. As the discussion unfolds, a consensus emerges that tri-mode HBAs, which can handle multiple protocols, are a more viable solution, with privatelypublic pointing out that they can be found on eBay for around $200.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1d after posting
Peak period: 22 comments in the 36-48h window
Avg / period: 8.5
Based on 51 loaded comments
Key moments
- Story posted: Aug 25, 2025 at 8:39 AM EDT (4 months ago)
- First comment: Aug 26, 2025 at 5:01 PM EDT (1d after posting)
- Peak activity: 22 comments in the 36-48h window (hottest stretch of the conversation)
- Latest activity: Aug 29, 2025 at 3:34 PM EDT (4 months ago)
Any NVMe disk can be connected even over PCIe 3.0 x1, so there is plenty of capability on the DESKTOP computers he is "managing".
And given what he is writing, and how, it is unbelievable that he does not seem to understand what a SAS expander is, etc.
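For reference, here is the rough per-lane arithmetic behind that claim, as a back-of-the-envelope sketch (approximate figures after link-encoding overhead, not a benchmark):

    # Approximate usable bandwidth per PCIe lane after encoding overhead,
    # compared against SATA III for context. Ballpark figures only.
    LANE_GB_PER_S = {
        "PCIe 2.0": 0.500,   # 5 GT/s, 8b/10b encoding
        "PCIe 3.0": 0.985,   # 8 GT/s, 128b/130b encoding
        "PCIe 4.0": 1.969,   # 16 GT/s, 128b/130b encoding
    }
    SATA_III_GB_PER_S = 0.6  # 6 Gb/s line rate, 8b/10b encoding

    for gen, per_lane in LANE_GB_PER_S.items():
        print(f"{gen} x1 = ~{per_lane:.2f} GB/s vs SATA III = ~{SATA_III_GB_PER_S:.2f} GB/s")

Even a single PCIe 3.0 lane is in the same ballpark as a full SATA III link, which is the point being made above.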
A bidirectional example is IDE/SATA, for which plentiful cheap adapters in both directions (one IC automatically detects its role) exist; IDE host to SATA device, or SATA host to IDE device.
For another "directional" example, it's worth noting that SATA to MMC/(micro/mini)SD(HC/XC)/TF adapters exist which let you use those cards (often multiple, even in RAID!) as a SATA drive, but the opposite direction, exposing a SATA drive as an SD card, does not exist (yet).
I don't think that's true. USB is accessible in lots of places where SATA and PCIe are not, e.g. as external connectors. Yes, eSATA is a thing, but eSATA without being able to use USB or PCIe?
Or in other words, SATA->NVMe would at best serve users unwilling to upgrade their legacy racks while USB->NVMe has plenty of non-legacy use cases.
"" rest of your comment is "ok" ""™ XD
PCIe SATA controllers in the M.2 SSD form factor are a thing, as are native M.2 SATA SSDs, as are SATA controllers on M.2 cards with SATA connectors capable of connecting 4-6 disks (ASM1166). So I do not see the point you want to make there.
SATA -> memory card is a solution for the embedded market of the 2000s, not today; it is for refurbishments or efforts to keep using old embedded stuff. And again it has nothing to do with the guy's point, it is an absolutely different use case. He is talking about servers (servers with a lot of drives have expanders)! An M.2 NVMe to CFexpress extender is something else entirely, so it depends highly on what EXACTLY we are talking about!
Simple reason why it is nonsense: how much does a 256 GB M.2 SSD cost? So just use that.
Or use an M.2 to PCI-E 4X 1X riser card (ADT-Link K42ST), connect a standard, ubiquitous SATA/SAS/NVMe HBA/RAID card into it, and use any freaking disks.
Or
M.2 Key M to SFF-8643, and use a cable to connect it to something like an H3platform Falcon 4118... which is "just" a PCIe switch + PSU + connectors.
Or
M.2 to a PCIe connector, and use an HBA with optics to connect to a remote ARRAY 5 miles away.
But if you really insisted: SATA and USB would occupy layers 1 and 2, while PCIe goes all the way to 3, as would ancient FireWire. PCIe (and FireWire) support bus mastering; USB does not. USB/SATA devices are purely host-polled and unable to initiate any transfers; a PCIe device can talk to any other PCIe device without host help.
If you want to get super hacky: if the SATA backplane is direct instead of multiplexed, passing a single lane of PCIe over SATA is possible. Probably limited to PCIe 2.0 though. And they're absolute abominations, real Frankenstein's monster stuff.
Edit: I mean the physical layer. Obviously you can't control a PCIe device with a SATA controller, but the cable will carry a single-lane (x1) PCIe link.
In an enterprise environment, nobody is really hooking up fast new storage to old slow storage controllers. They are either maintaining old systems, where they will use the legacy storage technologies, or they are deploying entirely new systems.
e.g. 3.5" floppies are 40 years old, were obsolete 25 years ago, went out of production 15 years ago, and are just about ready to deplete their stock today. Yes, there are flash-to-floppy adapters, and a similar thing may happen for SATA, but we may not see that as a necessity until 2050.
1. I already had a 2.5" hotswap setup
2. 2.5" 8TB SSDs are 4x as expensive as 8TB NVMEs.
A: https://utcc.utoronto.ca/~cks/space/blog/tech/NVMeOvertaking...
Huh?
870 QVO 8 TB SSD SATA III 2.5 inch: $629.99
Which is in the same range as M.2 ones.
Sure, you are getting gouged if you try to buy it on Amazon, but then... just don't buy it on Amazon?
https://www.samsung.com/us/computing/memory-storage/solid-st...
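For what it's worth, the per-terabyte arithmetic behind that comparison, as a quick sketch (the SATA price is the list price quoted above; the M.2 figure is an assumed ballpark for an 8 TB drive, not a quote):

    # Price-per-TB comparison. The SATA price is the list price quoted above;
    # the M.2 price is an assumed ballpark figure, for illustration only.
    drives = {
        "Samsung 870 QVO 8TB (SATA, 2.5in)": 629.99,
        "8TB M.2 NVMe (assumed ballpark)":   600.00,
    }
    for name, usd in drives.items():
        print(f"{name}: ${usd / 8:.2f}/TB")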
SATA SSDs need to sit more between HDDs and NVMe price-wise, but they are on the same level as NVMe.
Another issue is that, just like with 2.5" drives, you see manufacturers really only focus on specific drives. It's going to be 3.5" or U.2/U.3, and now NVMe NAS solutions. But do you see any 2.5" / SATA solutions?
I mean, the only thing I remember seeing is the Synology DiskStation DS620slim, which is now like a 5-year-old product. And still expensive as hell. Nobody makes any SATA products.
The market is now a ton of Chinese brands / mini-PC makers that offer 4-, 5-, 6-bay NVMe products. And even with only PCIe 3.0 x1 lane support, they are faster than SATA SSDs, and they benefit from the massively better random performance / lower latency.
I would love to shove a ton of SATA SSDs in a system instead of HDDs, but the prices need to be somewhere in the middle between HDDs and NVMe per TB, not the same as NVMe.
> Nobody makes any SATA products.
Because there is no demand for it. 2.5" HDDs stopped growing in size 10 years ago. If you need capacity you use 3.5" HDDs; if you need speed you use M.2/2.5" SATA SSDs. And 2.5" SATA HDDs... what could these be used for?
There is no capacity with 2.5" HDDs; it's only 1TB CMR HDDs x numdrives, i.e. 5TB RAW for the 620slim, 4TB RAW for the TS-410E.
Or you risk it all and use 4/5TB SMR HDDs for ~20-25TB of RAW capacity, and now you can't use RAID.
So what's the point?
> but the prices need to be somewhere in the middle of HDD/NVMEs per TB
... and why should SATA SSDs be cheaper than NVMe ones? Especially when they both use the same flash modules?
https://www.qnap.com/en/product/ts-410e/specs/hardware
Technically, SATA SSDs need to fill this spot as a cheaper alternative, but their prices are just as expensive as M.2 (often only 5% cheaper). If SATA had a price range around 0.30 Euro/TB, it would have been a great alternative to HDD-based storage.
And you can go really crazy with bifurcation to 4x/4x/4x/4x and then convert all those new M.2 slots to 6 SATA ports each (ASM controllers are cheap and work great). Plop, 24 SATA ports for 4W of power draw.
But nobody is going to pay for SATA SSDs when they can just buy M.2 for the same price. So the result is that the SATA SSD market is "kind of dying", and manufacturers look at it the way 2.5" drives got looked at by HDD manufacturers.
I have a high-capacity M.2 SATA SSD in my computer. It's 4TB, which I think qualifies as high. I bought it because I found out about that empty slot in my computer and wanted to fill it, not because of a particular need. Having a rare part in my computer gives me an indescribable sense of joy. And don't worry, it's entirely used for extra redundancy, so I won't lose data even if it dies.
From what I can tell, this was true when it was written, but has changed since, with the Samsung QVO 8TB SATA now at only a roughly 25% markup compared to an 8TB QLC M.2 2280 drive. When I searched last winter it was 2-4x as expensive as soon as you crossed the 4TB mark, with 4TB and under being similarly priced between SATA and NVMe drives.
To properly design a computing solution, first you define the requirements, and then you select components with specifications that will fulfill your requirements.
If you try to work backwards, you are destined for failure. Dropping big cash on 16 high-capacity SSDs just to ham-jam them in an old system is a really really dumb idea, especially if you're concerned about IOPS.
Just make sure to still back up the data!
Maybe I need to use a little sata SSD as /boot?
To explain what I learned on my own:
If you want to (for example) put 4 NVME drives (4 lanes each) in a 16x slot, then you need two things:
1. The 16x slot actually needs to have 16 lanes (on consumer motherboards there is only one slot like this, and on many the second 16x slot shares the lanes, so it will need to be empty)
2. You need to configure the PCIe controller to treat that slot as 4 separate slots (this is called PCIe bifurcation).
For recent Ryzen CPUs, the first 16x slot usually goes directly to the CPU, and the CPU supports bifurcation, so (assuming your BIOS allows enabling bifurcation; most recent ones do) all you need to do is figure out which PCIe lanes go to which slots (the motherboard manual will have this).
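As a sanity check that bifurcation actually took effect, here is a minimal sketch (assuming Linux with the standard sysfs layout; not specific to any board mentioned here) that prints each NVMe controller's negotiated link width, which should read x4 per drive:

    #!/usr/bin/env python3
    # Print the negotiated PCIe link width/speed of every NVMe controller via sysfs.
    # After 4x4x4x4 bifurcation, each drive behind the riser should report a x4 link.
    from pathlib import Path

    for ctrl in sorted(Path("/sys/class/nvme").glob("nvme[0-9]*")):
        pci_dev = (ctrl / "device").resolve()  # the underlying PCI device directory
        try:
            width = (pci_dev / "current_link_width").read_text().strip()
            speed = (pci_dev / "current_link_speed").read_text().strip()
            max_w = (pci_dev / "max_link_width").read_text().strip()
        except FileNotFoundError:
            continue  # not a PCIe-attached controller
        print(f"{ctrl.name}: x{width} (max x{max_w}) @ {speed}")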
If you aren't going to use the integrated graphics, you'll need a 16x slot for your GPU. This will at best have 4 lanes, and on most motherboards (all 800 series chipset motherboards from all major manufacturers) will be multiplexed over the chipset, so now your GPU is sharing bandwidth with e.g. USB and Ethernet, which seems less than ideal (I've not benchmarked, maybe someone else has?).
In the event that you want to do 4x NVMe in a 16x slot, I found that the "MSI X670E ACE" has a 4x slot that does not go through the chipset, and so does the "ASRock B650M Pro x3D RS WiFi"; either of those should work with a 9000-series Ryzen CPU.
ThreadRipper CPUs have like a gajillion PCIe lanes, so there shouldn't be any issues there.
I have also been told that there are some (expensive) PCIe cards that present as a single PCIe device to the host. I haven't tried them.
1: https://utcc.utoronto.ca/~cks/space/blog/tech/PCIeBifurcatio...
Depending on how old the motherboard, or UEFI, is - it may not be able to directly boot from NVMe. Years ago I modified the UEFI on my Haswell-era board to add a DXE to support NVMe boot. You shouldn’t need to do that to see the NVMe device within your OS though.
As the other reply mentioned, if you want to run multiple SSDs on a cheap adapter your platform needs to support bifurcation, but if it doesn’t support bifurcation hope is not all lost. PCIe switches have become somewhat cheaper - you can find cards based on the PEX8747 for relatively little under names like PE3162-4IL. The caveat here is that you’re limited to PCIe 3.0, switches for 4.0+ are still very expensive.
Anything since socket 1155 can work, and even Westmere/Nehalem should work too, except you don't really want a system that old.
I have a 4TB M.2 drive in an x1 slot in my GA-Z77X; it works fine.
It's not a boot drive, and I didn't bother to check if the BIOS supports booting from NVMe, though a quick search says support for that started with the Z97 chipset/socket 1150.
> use a little sata SSD as /boot
For Linux you can even use some USB thumbdrive, especially a small-profile one like the Kingston DataTraveler Micro G2.
https://www.aorus.com/motherboards/ga-z77x-ud3h-rev-10/Key-F...
https://www.ebay.com/sch/i.html?_nkw=Asus+Hyper+M.2
I still have to boot off a SATA SSD.
> (These chipsets are, for example, the Realtek RTL9210B-CG or the ASMedia ASM3242.)
The NVMe to USB adapters aren't converting the NVMe protocol to another disk access protocol. They are USB3-connected PCIe endpoints, which allow the PCIe NVMe drive to connect to the host as an NVMe device.
This isn't equivalent to the protocol conversion the author is seeking, which would accept SATA commands on one end and translate them to NVMe on the other end. I would actually call that SATA drive emulation, not protocol conversion, as SATA and NVMe aren't 1:1 such that you can convert SATA commands into NVMe commands and vice versa.
No?
https://us1.discourse-cdn.com/flex001/uploads/framework3/ori...
This is an RTL9210B NVMe enclosure. It's a UAS device (and I believe it supports BOT too.)
I have not examined this in much detail but I believe these converter ICs are actually rather powerful SoCs with PCIe host, SATA host, and USB device peripherals. The existence of firmware (several hundred KB!) for them is further evidence of this fact.
That chassis sports a proper 16-port SAS backplane so they can just use... SAS drives?
Sure, SAS 7.68TB drives cost a bit more than some shit like the 870 QVO 8TB SATA drive, but:
you will have at least 12Gbps instead of 6Gbps of bandwidth so your storage would be faster;
you will not have a shitshow of STP so your storage would be faster;
you will not have a USB thumbdrive speeds if you exhaust the SLC cache so your storage would be faster;
you will have a better DWPD (1 vs 0.3) so your storage would be faster for a longer time.
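To put that DWPD gap into rough numbers, a back-of-the-envelope sketch (the usual 5-year warranty window is assumed; the figures are illustrative, not vendor specifications):

    # Rough endurance implied by the DWPD figures above, over an assumed 5-year warranty.
    # Illustrative arithmetic only, not vendor specifications.
    def lifetime_writes_tb(capacity_tb: float, dwpd: float, years: float = 5.0) -> float:
        # Total terabytes writable over the warranty period (TBW).
        return capacity_tb * dwpd * 365 * years

    print(f"SAS  7.68 TB @ 1.0 DWPD: {lifetime_writes_tb(7.68, 1.0):,.0f} TBW")
    print(f"SATA 8.00 TB @ 0.3 DWPD: {lifetime_writes_tb(8.00, 0.3):,.0f} TBW")

Roughly 14,000 TBW versus 4,400 TBW, i.e. around 3x the rated write endurance under these assumptions.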
But okay, even if you don't go the SAS way... I'm again not sure what is going on here, but besides the 870 EVO (desktop SATA QLC shit) there are the Kingston DC600M, Solidigm D3-S4520, and Samsung PM893, which are enterprise SATA drives, and they cost only 10% more than the 870 EVO (and only 10% less than the Kioxia PM6-R SAS).
Oh, by the way: don't do U.2 in 2025 and later. It would bite you later.
His article does not make sense at all. I do not ... know why it is even here, and why some other commenters are inserting additional "points" into that article just to make it seem sane. :)
If I had transfinite funds, I would make a video about taking a dual-socket motherboard+CPU combination with the most PCIe lanes and connecting the maximum number of GPUs via Thunderbolt 4 hubs and enclosures, PCIe bifurcation cards, and M.2-to-PCIe adapters (whichever method maximizes GPU count), all powered by many PSUs.