Samsung 870 QVO 4TB SATA SSDs: How Are They Doing After 4 Years of Use?
Posted 4 months ago · Active 3 months ago
ounapuu.ee · Tech · story
Tone: calm, mixed
Debate: 60/100
Key topics
SSD Reliability
Storage Technology
Hardware Durability
The post discusses the author's experience with Samsung 870 QVO 4TB SATA SSDs after 4 years of use, sparking a discussion on SSD reliability, durability, and best practices for maintaining them.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 8h after posting
- Peak period: 31 comments in the 48-60h window
- Average per period: 6 comments
- Comment distribution: 42 data points (based on 42 loaded comments)
Key moments
- 01 Story posted: Sep 15, 2025 at 4:03 AM EDT (4 months ago)
- 02 First comment: Sep 15, 2025 at 11:57 AM EDT (8h after posting)
- 03 Peak activity: 31 comments in the 48-60h window (the hottest stretch of the conversation)
- 04 Latest activity: Sep 19, 2025 at 5:33 PM EDT (3 months ago)
ID: 45247259 · Type: story · Last synced: 11/20/2025, 6:12:35 PM
Want the full context? Read the primary article or dive into the live Hacker News thread when you're ready.
[1]: https://www.tomshardware.com/pc-components/storage/unpowered...
The only insight you can glean from that is that bad flash is bad, and worn bad flash is even worse, and even that's a stretch given the lack of sample size or a control group.
The reality is that it's non-trivial to determine data retention/resilience in a powered-off state, at least as it pertains to arriving at a useful and reasonably accurate generalization of "X characteristics/features result in poor data retention/endurance when powered off in Y types of devices," and being able to provide the receipts to back that up. There are far more variables than most people realize going on under the hood with flash and with how different controllers and drives are architected (hardware) and programmed (firmware). Thermal management is a huge factor that is often overlooked or misunderstood, and it has a substantial impact on flash endurance (and performance). I could go into more specifics if anyone is interested (storage at scale/speed is my bread and butter), but this post is long enough.
All that said, the general mantra remains true: more bits per cell generally means the data in each cell is more fragile/sensitive, but that's mostly in the context of write-cycle endurance.
Can you elaborate on the reason for your critique, considering they're pretty much just testing from the perspective of the consumer? I thought their explicit goal is not to provide highly technical analysis for niche preferences, but instead to look at it for the John Doe who's thinking about buying X and what it would mean for his use cases. From my mental model of that perspective, their reporting was pretty spot-on and not shoddy, but I'm not an expert on the topic.
The video isn't perfect, but I thought it had some interesting data points regardless.
Maybe the quality looks good to you, but maybe you don't know what it was like 25 years ago to compare against. Maybe it's a problem of the wrong baseline.
Yes, that's their schtick: do just enough that the average non-tech-literate user doesn't know any better. And if you're just a casual consumer/reader, it's fine. Not great, not even necessarily accurate, but most of their readership don't know enough to know any better (and that's on purpose). I don't believe they're intentionally misleading people. Rather, simply put, it's evident that the amount of fucks they give regarding accuracy, veracity, depth, and journalism in general is decidedly less than their competition.
If you're trying to gain actual technical insight with any real depth or merit, Tom's is absolutely not the place to go. Compare it to ServeTheHome (servers, networking, storage, and other homelab- and enterprise-space stuff), GN (gaming focused), or RTINGS.com (displays and peripherals), to name a few, and you'll see the night-and-day difference between people who know what they're talking about, strive to be accurate, and frame things in the right context, and what Tom's does.
Again, it depends on what the user is looking for, but Tom's is catering to the casual crowd, aka people who don't know any better and aren't going to look any deeper. Which is totally fine, but it's absolutely not a source for nuance, insight, depth, rigor, or anything like that.
The article in question [0] is actually a great example of this. They found a YouTube video of someone buying white-label drives, with no control to compare against and no further analysis to confirm that the 4 drives in question actually all had the same design, firmware, controller, and/or NAND flash underneath (absolutely not a given with bargain-bin white-label flash, which these were, and it can make a big difference). I'm not trying to hate on the YouTuber; there's nothing wrong with their content. My issue is with how Tom's presents it as an investigation into unpowered SSD endurance while admitting in the same article: "We also want to say this is a very small test sample, highlighted out of our interest in the topic rather than for its hard empirical data." This is also why I say I don't believe they're trying to be disingenuous; hell, I give them credit for admitting that. But it is not a quality or reliable source that tells us anything at all about the nature of flash at large, or even about the specific flash in question, because we don't know what the specific flash in question is. Again, just because drives are the same brand, model, and capacity does not mean they're all the same, even for name-brand devices. Crucial's MX500 SSDs, for example, have been around for nearly a decade now, and the drives you buy today are VERY MUCH different from the ones of the same capacities you could buy in 2017.
Don't even get me started on their comments/forums.
0: https://www.tomshardware.com/pc-components/storage/unpowered...
I would read an entire series of blog posts about this.
They primarily focus on storage (SSDs and HDDs) but also evaluate storage controllers, storage-focused servers/NAS/DAS/SAN/etc., and other storage-adjacent gear. For an example of the various factors that differentiate different kinds of SSDs, I'd recommend their review of Micron's 7500 line of SSDs [0]. It's from 2023 but still relevant, and you don't have to read the whole thing. Heck, just scroll through the graphs and it's easy to see this shit is far from simple, even when you're accounting for using the same storage controllers, systems, testing methodologies, and whatnot.
If you want to know about the NAND (or NOR) flash itself, and what the differences/use cases are at a very technical level, there's material like Micron's "NAND Flash 101: NAND vs. NOR Comparison" [1].
If that's too heavy (it is a whitepaper/technical paper, after all) and you want a lighter read on some of the major differences between enterprise and consumer flash, SuperSSD has a good article on that [2], as well as many other excellent articles.
Wanna see some cool use cases for SSDs that aren't so much about the specific low-level technicals of the storage device itself, but rather about how they can be assembled into arrays and networked storage fabrics in new and interesting ways? ServeTheHome has some interesting articles, such as "ZFS without a Server Using the NVIDIA BlueField-2 DPU" [3].
Apologies for responding 2 days late. I would be happy to answer any specific questions, or recommend other resources to learn more.
Personally, my biggest gripe is that I've not really seen anyone do a proper analysis of the thermal dynamics of storage devices and the impact that has (especially on lifespans). We know this absolutely has an effect just from deploying SSDs at scale and seeing in practice how otherwise identical drives within the same arrays and the same systems have differing lifespans, with the number one differentiating factor being peak temperatures and temperature deltas (a high delta-T can be just as bad as or worse than simply a high temperature, although that comes with a big "it depends"). I haven't seen a proper testing methodology really trying to take a crack at it, because that's a time-consuming, expensive, and very difficult task; it's far harder to control for the relevant variables than with GPUs, IMO, due in part to the many different kinds of SSDs built on different NAND flash chips, with different heatsinks/form factors, and with wide variety in where they're located within systems.

Take note that many SSDs, save for those explicitly built for "extreme/rugged environments," have thermal limits much lower than other components in a typical server. Often the operating-range spec is something like -10C to 50C for SSDs (give or take 10C on either end depending on the exact device), whereas GPUs and CPUs can operate at over 80C, which, while not a preferred temperature, isn't out of spec, especially under load. Then consider that the physical packaging of SSDs, as well as where they sit in a system, often means they don't get adequate cooling. M.2 form-factor SSDs are especially prone to issues in this regard, even in many enterprise servers, both because of where they're located in relation to airflow and because of nearby hot components (they often have some NIC/GPU/DPU/FPGA sitting right above them, or a nearby onboard chip(set) dumping heat into the board, which raises the thermal floor/ambient temps). There's a reason the new EDSFF form factor has so many different specs to account for larger heatsinks and cooling on SSDs [4][5][6].
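As a very rough illustration of the kind of data collection that would feed such an analysis, here is a minimal sketch that polls a drive's reported temperature through smartctl's JSON output and tracks the peak and the largest swing (delta-T) seen during the run. The device path, polling interval, and the presence of a "temperature" object in the JSON are assumptions; this is a monitoring toy, not a substitute for a controlled methodology.

```python
#!/usr/bin/env python3
"""Sketch: poll an SSD's reported temperature via smartctl and track the
peak temperature and the largest swing (delta-T) over the run. Assumes
smartmontools with --json support and that the drive exposes a
'temperature' object in the JSON; adjust the device path for your system."""

import json
import subprocess
import time

DEVICE = "/dev/sda"   # hypothetical device path
INTERVAL_S = 60       # seconds between samples
SAMPLES = 60          # number of samples to take

def read_temp_c(device: str) -> int | None:
    """Return the current temperature in Celsius, or None if unavailable."""
    out = subprocess.run(
        ["smartctl", "--json", "-A", device],
        capture_output=True, text=True, check=False,
    )
    try:
        data = json.loads(out.stdout)
        return data.get("temperature", {}).get("current")
    except (json.JSONDecodeError, AttributeError):
        return None

def main() -> None:
    temps = []
    for _ in range(SAMPLES):
        t = read_temp_c(DEVICE)
        if t is not None:
            temps.append(t)
            print(f"{time.strftime('%H:%M:%S')}  {DEVICE}: {t} C")
        time.sleep(INTERVAL_S)
    if temps:
        print(f"peak: {max(temps)} C, delta-T over run: {max(temps) - min(temps)} C")

if __name__ == "__main__":
    main()
```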
I've barely even touched on things like networked arrays, the explosion in various accelerators and controllers for storage, NVMe-oF/RoCE/storage fabrics, storage-class memory, HA storage, transition flash, DRAM and controllers within SSDs, wear leveling and error correction, PLP, ONFI, SLC/MLC/TLC/QLC, and the really fun stuff like PCIe root topologies, NVMe zoned namespaces, computational storage, CXL, GPUDirect Storage/BaM, cache coherency, etc.
0: https://www.storagereview.com/review/micron-7500-pro-7500-ma...
1: https://user.eng.umd.edu/~blj/CS-590.26/micron-tn2919.pdf (Direct PDF link)
2: https://www.superssd.com/kb/consumer-vs-enterprise-ssds/
3: https://www.servethehome.com/zfs-without-a-server-using-the-...
4: https://americas.kioxia.com/en-us/business/ssd/solution/edsf...
5: https://americas.kioxia.com/en-us/business/ssd/solution/edsf...
6: https://members.snia.org/document/dl/27231 (Direct PDF link, Technical Whitepaper on the Standard if you really want to dive deep into what EDSFF is)
I would love this.
Still to be seen how that works out in the long run, but so far so good.
That said, I only have a couple of TBs... a bit more and HDDs do become unavoidable.
Can't say I've heard of people worrying about this angle before tbh
Reading the linked post, it's not a Linux kernel issue. Rather, the Linux kernel was forced to disable queued TRIM and maybe even NCQ for these drives, due to issues in the drives.
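If you want to see what the kernel actually settled on for a particular drive, here is a small sketch (the device name and interpretation are assumptions) that reads the model string and the current NCQ queue depth from sysfs; a queue depth of 1 usually means NCQ is effectively off. Whether queued TRIM specifically was blacklisted isn't exposed as a simple sysfs flag, so dmesg or the libata blacklist in the kernel source is the more authoritative check.

```python
#!/usr/bin/env python3
"""Sketch: inspect a SATA drive's model string and current NCQ queue depth
via sysfs. Device name is an assumption; queued-TRIM blacklisting itself is
not visible here and must be checked in dmesg or the kernel's libata code."""

from pathlib import Path

DEV = "sda"  # hypothetical block device name

def read_sysfs(path: Path) -> str:
    try:
        return path.read_text().strip()
    except OSError:
        return "n/a"

base = Path(f"/sys/block/{DEV}/device")
model = read_sysfs(base / "model")
queue_depth = read_sysfs(base / "queue_depth")

print(f"{DEV}: model={model!r}, queue_depth={queue_depth}")
if queue_depth.isdigit() and int(queue_depth) <= 1:
    print("NCQ appears to be disabled (queue depth 1).")
```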
I have an old Asus with a M.2 2280 slot that only takes SATA III.
I recall an 840 EVO M.2 (if my memory serves me right) is the current drive, but looking for a replacement seems not to be straightforward, as most SATA drives are 2.5 in., or if it is the correct M.2 2280 form factor, it's for NVMe.
No issues were found on either of them.
Glad for the guy, but here's a bit of a different view on the same QVO series:
NB: you need to look at the first (normalized) value of attribute 177 Wear_Leveling_Count to get the "remaining endurance percent" figure, i.e. 59 and 60 here.

While overall that's not too bad, losing only 40% after 4.5 years, it means that in another 3-4 years it would be down to 20% if the usage pattern doesn't change and the system doesn't run into write amplification. Sure, someone had the "brilliant" idea ~5 years ago to use desktop-grade QLC flash as ZFS storage for PVE...
As I understand it, the values in the device statistics log have standardized meanings that apply to any drive model, whereas any details about SMART attributes (as in the meaning of a particular attribute or any interpretation of its value apart from comparing the current value with the threshold) are not. So absent a data sheet for this particular drive documenting how to interpret attribute 177, I would not feel confident interpreting the normalized value as a percentage; all you can say is that the current value is > the threshold so the drive is healthy.
I mentioned 177 because it's the same number that PVE shows in the web UI, and I didn't find the obvious "wearout/life left" attribute I'm accustomed to seeing in the SMART attributes.
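For anyone who wants to compare both views on their own drive, here is a rough sketch using smartctl's JSON output: the vendor-specific attribute 177 (whose normalized value Samsung drives are commonly read as percent-life-remaining, though that interpretation isn't standardized) next to the standardized "Percentage Used Endurance Indicator" from the ATA device statistics log. The device path and exact JSON field names are assumptions; verify them against `smartctl --json -x` output on your own system.

```python
#!/usr/bin/env python3
"""Sketch: print both wear indicators for a SATA SSD -- vendor SMART
attribute 177 and the standardized device statistics log entry. Assumes
smartmontools with --json support; field names may vary by version."""

import json
import subprocess

DEVICE = "/dev/sda"  # hypothetical device path

def smartctl_json(*args: str) -> dict:
    out = subprocess.run(
        ["smartctl", "--json", *args, DEVICE],
        capture_output=True, text=True, check=False,
    )
    return json.loads(out.stdout or "{}")

# Vendor-specific SMART attribute 177 (normalized and raw values)
attrs = smartctl_json("-A")
for row in attrs.get("ata_smart_attributes", {}).get("table", []):
    if row.get("id") == 177:
        print(f"177 {row.get('name')}: normalized={row.get('value')}, "
              f"raw={row.get('raw', {}).get('string')}")

# Standardized device statistics log, if the drive supports it
devstat = smartctl_json("-l", "devstat")
for page in devstat.get("ata_device_statistics", {}).get("pages", []):
    for entry in page.get("table", []):
        if "Percentage Used" in entry.get("name", ""):
            print(f"{entry['name']}: {entry['value']}%")
```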
The 4TB models obviously will hold up better under 170+ TB of writes than the 1TB drives will, and it wouldn't be surprising to see less write amplification on the larger drives.
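To make that concrete, here is a back-of-the-envelope sketch comparing the same volume of host writes against each capacity's rated endurance. The TBW figures are quoted from memory of Samsung's published 870 QVO ratings and should be checked against the datasheet; internal write amplification isn't modeled at all.

```python
#!/usr/bin/env python3
"""Back-of-the-envelope sketch: how much of each 870 QVO capacity's rated
endurance the same ~170 TB of host writes would consume. TBW figures are
assumed (roughly 360 TBW per TB of capacity); write amplification ignored."""

RATED_TBW = {1: 360, 2: 720, 4: 1440, 8: 2880}  # assumed TBW ratings per capacity (TB)
HOST_WRITES_TB = 170  # example figure from the discussion

for capacity_tb, tbw in RATED_TBW.items():
    used_pct = 100 * HOST_WRITES_TB / tbw
    print(f"{capacity_tb} TB model: {HOST_WRITES_TB} TB written "
          f"= {used_pct:.0f}% of the {tbw} TBW rating")
```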
My apartment is super quiet; you hardly hear anything from the outside, so I can hear an HDD in the living room during the silent parts of movies. At some relatives' house, however, you only notice how loud the background noise is when the power goes off. No wonder I always have a headache when I go there.
I've never had any spinning drives come close to that level of reliability. I've only actually had one SSD or NVMe drive fail, and that was the first-gen Intel drive in my desktop that had a firmware bug and one day showed up as an 8MB empty drive. It was a 64GB unit and I was so impressed by the speed, but I was tired of symlinking directories to the HDD for storage needs, so I just bumped to 240GB+ models and never looked back.
Currently using a Corsair MP700 Pro drive (gen 5 nvme) in my desktop. Couldn't be happier... Rust and JS projects build crazy fast.
The solution is to remove some files... and pray it lasts half as long as a 64GB Intel X25-E!
Should it last 1/30th as long just because it is 30x larger?
Or is this game only about saturation rate?