Demystifying DVDs
Key topics
The discussion centers on an effort to dump a damaged DVD by combining several recovery methods. Commenters shared expertise on DVD error correction, noting that DVDs apply ECC across larger 16-sector blocks, so reading aligned 32KB blocks at a time can improve recovery chances. The conversation also covered modern DVD ripping tools, with MakeMKV emerging as a popular choice, especially when paired with a LibreDrive-compatible drive, and overall gave a closer look at DVD internals and recovery techniques.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
First comment: N/A
Peak period: 8 comments in 60-72h
Avg / period: 3.7
Based on 22 loaded comments
Key moments
- Story posted: Dec 28, 2025 at 10:25 PM EST (11 days ago)
- First comment: Dec 28, 2025 at 10:25 PM EST (0s after posting)
- Peak activity: 8 comments in 60-72h (hottest window of the conversation)
- Latest activity: Jan 3, 2026 at 7:41 AM EST (5d ago)
So when I used ddrescue, I would read in that block size (instead of just 2048), in the hope that I would get lucky and get a good read (or enough signal that ECC could repair the large block).
This was very effective at recovering DVDs with repeated reads. When I had previously done it with 2048-byte reads only, I would end up with good 2048-byte reads scattered all over, which, if ECC is done on a 16x2KB (32KB) block size, means there was a lot of data I was leaving on the floor that should have been recovered on those reads.
ddrescue was also good for this in the sense that if I was trying to recover a DVD (video) from multiple damaged copies, I was able to fill in the blanks as long as they were not damaged in the same locations.
Perhaps you can correct me about the 16-sector block mechanism; perhaps it was just random that it worked and my understanding at the time was wrong.
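To make the aligned-read idea concrete, here is a minimal sketch of the approach described above (my own illustration, not the commenter's actual setup; the device path, output name, and retry count are placeholders). It reads the disc in 32KB chunks aligned to 16-sector boundaries and zero-fills chunks that never come back clean:

```python
import os

SECTOR = 2048                  # DVD user-data sector size
ECC_SECTORS = 16               # sectors covered by one Parity Outer (PO) block
BLOCK = SECTOR * ECC_SECTORS   # 32KB aligned read size

def dump_aligned(device="/dev/sr0", out="dump.iso", retries=4):
    """Read the disc in 32KB chunks aligned to 16-sector boundaries,
    retrying each chunk a few times and zero-filling what never reads clean."""
    fd = os.open(device, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        with open(out, "wb") as dst:
            for offset in range(0, size, BLOCK):
                want = min(BLOCK, size - offset)
                data = b""
                for _ in range(retries):
                    try:
                        os.lseek(fd, offset, os.SEEK_SET)
                        data = os.read(fd, want)
                        if len(data) == want:
                            break
                    except OSError:
                        data = b""
                dst.write(data.ljust(want, b"\x00"))  # keep output offsets aligned
    finally:
        os.close(fd)
```

ddrescue itself can be steered toward the same behaviour by setting its sector size to 2048 bytes and its cluster size to 16 sectors, if I recall its options correctly; the sketch above just spells the alignment out.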
> Then you have 2048 bytes of user data, scrambled for the reasons mentioned before. The best way to look at the sector as a whole is to think of each sector as 12 “rows” consisting of 172 bytes each. After each 172-byte row is 10 bytes of ECC data called Parity Inner (PI), which is based on Reed-Solomon and applied to both the header and scrambled user data per row within the sector itself. Then, after the user data and parity inner data, is the 4-byte EDC, which is calculated over the unscrambled user data only. Then, finally, Parity Outer (PO) is another form of ECC that is applied by “column” that spans over an entire block of multiple sectors stacked horizontally, or in other words, a group of 16 sectors. Altogether, this adds up to 2366 bytes of recorded sector data.
PO works on the 32KB block (after PI fixes what it can of the 2KB blocks).
So if PO succeeds, it was able to correct whatever errors remained in any sector of the 32KB block, though it won't necessarily succeed every time. My assumption was that if I read 32KB-aligned, the hardware operates on the 32KB block once.
But if the hardware only operates on 2KB blocks, a 32KB read would be treated internally as 16 separate 2KB reads; only when a 2KB read fails even with PI would it read the full 32KB and correct with PO, and then forget everything it just did once it succeeded. In that case my assumption about how to do it better fails, as each 2KB block (even within a 32KB-aligned read) would still need to get lucky, versus just needing to get lucky once per 32KB-aligned block.
The reason I'm wondering is that the "raw bytes" cache the author demonstrates the drive as having is only 2+KB in size (based on what they are reading), and that makes me question my assumptions.
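As a sanity check on the sector layout quoted above, the byte counts work out if each recorded sector carries its 12 data rows plus one interleaved 182-byte Parity Outer row from the 16-sector ECC block; a small sketch of that arithmetic (my own naming and interpretation, consistent with the quoted figures):

```python
# Per-sector layout from the quoted description
ROWS_PER_SECTOR = 12          # rows of header + scrambled user data (+ EDC)
ROW_DATA = 172                # data bytes per row
PI_PER_ROW = 10               # Parity Inner appended to each row
ROW = ROW_DATA + PI_PER_ROW   # 182 recorded bytes per row

# 12 data rows plus one interleaved PO row per recorded sector
assert ROWS_PER_SECTOR * ROW_DATA == 2064   # 2048 user bytes + header and EDC
assert (ROWS_PER_SECTOR + 1) * ROW == 2366  # matches the quoted per-sector total

# PO spans a 16-sector (32KB of user data) ECC block, so an aligned read
# covering the whole block starts at the nearest lower multiple of 16 LBAs.
SECTORS_PER_ECC_BLOCK = 16

def ecc_block_start(lba: int) -> int:
    """First LBA of the ECC block containing `lba` (hypothetical helper)."""
    return lba - (lba % SECTORS_PER_ECC_BLOCK)

print(ecc_block_start(12345))  # -> 12336: read 16 sectors (32KB) from here
```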
https://en.wikipedia.org/wiki/DVD_Decrypter
It seems like this was a follow-up to previous brute-force efforts, which include a spreadsheet with various results, but it would help to have some conclusions on which approaches were best: http://forum.redump.org/topic/51851/dumping-dvds-raw-an-ongo...
Also, couldn't find any source/download for DiscImageMender.
A key theme in a piece of future fiction I am (slowly) writing is that all digital data has been lost, and the time we are in now is known as a digital dark age, about which little is known of our society and culture. Resurrecting an archeologically discovered DVD is a key plot point I am working through: it will be the first insight into our time in over a millennium. A conflicting interest is finally succeeding at re-introducing corn at commercial scale after all hope had been lost and past attempts at re-germinating from the frozen seed bank had failed for hundreds of years. It's a work in progress.
It's strange to see no mention of cleaning the drives themselves, although maybe it was implicit --- if you have a pile of old drives sitting around, chances are they're not going to be perfectly clean. A tiny bit of dirt on the lens can have a huge effect on the read signal, especially on a marginal disc.
Related article from 18 years ago: https://news.ycombinator.com/item?id=21242273