Disk Utility Still Can't Check and Repair APFS Volumes and Containers (2021)
Key topics
The article discusses the ongoing issues with Disk Utility's inability to check and repair APFS volumes and containers, with comments expressing frustration and disappointment with Apple's handling of the issue and the overall quality of macOS.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion. First comment: 3m after posting. Peak period: 19 comments in 2-4h. Avg per period: 5.6.
Based on 62 loaded comments
Key moments
- Story posted: Sep 21, 2025 at 9:37 AM EDT (4 months ago)
- First comment: Sep 21, 2025 at 9:40 AM EDT (3m after posting)
- Peak activity: 19 comments in 2-4h (hottest window of the conversation)
- Latest activity: Sep 22, 2025 at 4:35 PM EDT (4 months ago)
The Asahi installer couldn't resize the partition due to some orphan inodes or something.
Rebooting into Recovery mode and using Disk Utility (GUI) and diskutil (CLI) didn't fix the issues.
But `fsck_apfs -y` did the trick. I had to first run `diskutil apfs unlockVolume <disk> -nomount`, as it was an encrypted volume.
https://news.ycombinator.com/item?id=45322650
(EDIT: corrected link to comment)
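For reference, a sketch of the recovery path described above, run from the Terminal in Recovery mode; `disk3s1` is a placeholder identifier (find yours with `diskutil list`):

```sh
# Unlock the encrypted APFS volume without mounting it,
# so fsck_apfs can operate on the volume directly.
diskutil apfs unlockVolume disk3s1 -nomount

# Check and repair, answering "yes" to all repair prompts.
fsck_apfs -y /dev/disk3s1
```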
There was no good reason to add the year.
No, that's not the reason for the HN convention. Why would someone even submit an article with out of date information?
The reason for the convention is that "news" is generally expected to be new, so when it's not new, HN readers want to be informed of that fact, and they can react to the submission accordingly. It's a simple courtesy to readers.
Anything that good hackers would find interesting is on topic.[1] This includes some history.
[1] https://news.ycombinator.com/newsguidelines.html
The submitted article is not an historical review. If there was an article written explaining how Disk Utility had a bug, but the bug is now fixed, that might be interesting. On the other hand, to submit an article about a bug that no longer exists, with no explanation, would simply be misleading, out-of-date information. In this case, however, the bug still exists presently, so it's not history either.
I did not say that you did say that.
> I answered why someone would submit an article with out of date information.
And I explained why history is not the same as out of date information. Thus, you have not explained why someone would submit an article with out of date information.
Submitting history is fine. Submitting out of date information is not fine, and it wasn't done in this case, because the information continues to be accurate.
I did not say you said I said that. My point was it was irrelevant.
> And I explained why history is not the same as out of date information. Thus, you have not explained why someone would submit an article with out of date information.
There is nothing to explain. Some history is on topic. History includes articles with out of date information. Consider the 1st Linux announcement.[1]
[1] https://news.ycombinator.com/item?id=6276961
Obviously because people don't check, or don't know, or don't understand, and assume it's still valid. It happens all the time.
The actual question is why would anyone even think that people would not submit articles with out of date information?
> The year is useful in the title when the article provides out of date information.
In other words, rahimnathwani was suggesting, incorrectly, that appending the year is necessary only when the article submitter knows that the article provides out of date information.
In that context, when I asked rhetorically, "Why would someone even submit an article with out of date information?" I meant why would someone knowingly submit an article with out of date information? Thus, your examples are not applicable to the debate about when an article submitter should append the year to the article title, because in your examples, the out of date information is submitted unknowingly:
> Obviously because people don't check or don't know or don't understand assume it's still valid.
Of course people unknowingly submit out of date information. If you thought I was suggesting otherwise, that's a misunderstanding of the argument.
So that's the case my comment was about (which was precisely the case of TFA, since people were discussing adding the date after it was posted).
rahimnathwani submitted the article. I personally replied to rahimnathwani, the submitter, suggesting that the year (2021) be appended after it was posted. So, your so-called "fun fact" describes the very situation that I already participated in. You're explaining something that is not only totally obvious to me already but in fact should be totally obvious to everyone, including yourself, since it occurred just a few comments earlier in this discussion thread!
Again, though, the publication date has absolutely nothing to do with whether or not the information is out of date.
What I was getting at, in any case, is that the need to add the date after the post is up can sometimes arise. Against that, the objection "but who would knowingly post outdated information?" is moot.
In the general case, I stand by what rahimnathwani said, that "the year is useful in the title when the article provides out of date information".
You might think "that's not the reason for the HN convention". It might not even be the canonical reason PG or whoever established the convention.
Nonetheless, it's a solid idea, and it probably should have been the reason for the HN convention to begin with.
I don't care about, nor have a use for, a date stamp on evergreen or still-relevant content, even if it's an older article or post someone shares here. I do have a use for a date stamp on a post with information that's potentially out of date, whether the OP includes one or someone here adds it.
The incredulous tone of this hypothetical worries me, because I think this actually happens with troubling regularity.
I also reported a bug in Safari HTTP proxy handling that prevents encryption. No reply.
I provided source code and reproduction steps for both.
Fuck Apple
So I emailed AppleInsider who did a short article about it and within two weeks another .x.x release came out and the bug was fixed.
Sadly I think this is one of the only ways to get big tech companies to take action these days. Can't tell you how many times I have read about Comcast, Verizon, etc. screwing someone over and being unreasonable about it until there's an article on Ars Technica or some similar site about it.
Their software-hardware design philosophy is unmatched in the consumer space, I would say. The fact that the transition from Intel chips to ARM chips went as smoothly as it did is a testament to that point.
Of course, they are not perfect.
Apple is the only company that makes such terrible file systems. I have resized partitions on NTFS and EXT3 and never lost any data. Apple is uniquely terrible in terms of file systems and data integrity in general.
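For comparison, the standard offline sequence for growing an ext3/ext4 filesystem after enlarging its partition; `/dev/sdb1` is a placeholder device:

```sh
# Force a consistency check before resizing; required by resize2fs
# for offline (unmounted) filesystems.
e2fsck -f /dev/sdb1

# Grow the filesystem to fill the enlarged partition.
resize2fs /dev/sdb1
```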
It just never works. And just when you think it's finally reliable and has worked for a while, it breaks in new, unexpected ways, sometimes hanging the whole machine. This was with both macOS as a server and a Linux server (fewer issues with Linux, but still broken).
Samba isn't great on other OSs either, but not as broken as on macOS. At this point I've given up on Samba completely, and consider it something I won't use again.
Carbon Copy Cloner was excellent at creating a bootable backup, and SuperDuper! seemed very serviceable too.
For most of iOS 18 there was a bug where iOS and iPadOS simply couldn’t connect to Samba shares on Linux but that has since been resolved.
Apple does implement some custom extensions that make CIFS (Samba- or Windows-based) shares less performant than shares served from Apple platforms in certain situations, especially for server-side copy. TrueNAS has recently patched this so that it works.
Adopting/inheriting a CIFS-backed Time Machine share is needlessly precarious.
Yes, exactly. That bug also affected macOS Sequoia but IIRC could be worked around (not on iOS though). And that was just the latest series of bugs, the pattern repeats itself every once in a while and it got worse after they discontinued Mac OS X Server and their own Time Capsule. Every few months something breaks.
E.g. just in March 2025, the 13.7.5 update to Ventura (last OS supporting a 2017 Mac) broke SMB filesharing for many users. There was a workaround, but it was only fully fixed in 13.7.7 four months later.
The fruit extensions are useful for performance, but don't really help with connection issues / hangs. Aside from that, the main use case they enabled in the past was working Time Machine backups, but my long-term experience with Time Machine over the network (with Mac OS X Server, fully supported by Apple at the time) was less than stellar, so I'm not doing that ever again either.
Overall, it's just not a level of reliability I'm comfortable with for a network filesystem implementation.
I don't compile off of a Samba share for example, or do operations involving tons of small files frequently.
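For anyone who still wants the fruit extensions on a self-hosted server, a minimal Samba sketch, assuming a Linux server with a reasonably recent Samba (4.8+); the share name and path are placeholders:

```ini
# smb.conf fragment: enable Apple's SMB extensions via vfs_fruit.
[timemachine]
    path = /srv/timemachine
    vfs objects = catia fruit streams_xattr
    fruit:metadata = stream
    fruit:time machine = yes   ; advertise the share to Time Machine
    read only = no
```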
Where hotfixes or patches are needed, scripts that run on wake seem to be the only way to ensure any connectivity remains in place when opening one's laptop.
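One sketch of that approach, assuming the third-party `sleepwatcher` tool (installable via Homebrew), which runs `~/.wakeup` when the Mac wakes; the server, share, and mount point are placeholders:

```sh
#!/bin/sh
# ~/.wakeup -- executed by sleepwatcher after the Mac wakes.

MOUNTPOINT="$HOME/mnt/share"

# Remount the SMB share only if it isn't already mounted.
if ! mount | grep -q "on $MOUNTPOINT "; then
    mkdir -p "$MOUNTPOINT"
    mount_smbfs "//user@server.local/share" "$MOUNTPOINT"
fi
```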
Maybe your distro’s samba is out of date?
I'd use SMB only if I had to connect to some corporate server on Windows, or if whetever system I connected to didn't support anything else.
But a few weeks before release, Sun was acquired by Oracle.
It was going to take months of further negotiations to nail it down. Apple-sourced ZFS on macOS was canceled.
ZFS had been released by Sun before the Oracle situation under their MPL-derived CDDL.
I suppose when Big Tech is involved, they rattle patents at one another until the dust settles with handshakes and payouts all around. I'm speculating here. But I was told that the CDDL was not considered sufficient for Apple to support its own development efforts.
ZFS is relatively complicated, but it generally works. At the time, Apple was shipping servers with iSCSI SAN and a GUI comparable to Disk Utility.
Really a shame. I was running native ZFS on my Mac Pro that summer. Eventually I migrated those pools to OpenSolaris and later to Linux.
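For anyone curious, that kind of pool migration is just an export/import pair; `tank` is a placeholder pool name:

```sh
# On the old machine: cleanly detach the pool from the system.
zpool export tank

# On the new machine, after moving the disks over:
zpool import        # scan attached disks for importable pools
zpool import tank   # import the pool by name
```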
It can feel like, until there's a bit more clarity or certainty publicly, personally running multiple backups on different file systems is the default starting point, which isn't always ideal.
I like storage to become, and remain, an appliance.
No widely-deployed filesystem before or since ZFS is in the same league.
For workstations I just use the distribution default. APFS on Mac, NTFS windows, and my Linux distro happens to use btrfs by default.
It doesn’t really have a stable and battle-tested competitor in the FOSS arena, considering its feature set.
(Of course there are things like Lustre and Gluster, but those are orthogonal solutions for different use cases.)
So is this the actual bug then? Because I just used Disk Utility (in Tahoe) to check and repair an APFS volume and it appeared to do the right thing, with the caveat that I had to "eject" it manually since Disk Utility complained that stuff was using it. Presumably, booting into macOS Recovery would've worked, too.
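For anyone wanting to reproduce from the command line, a sketch of the equivalent steps; `disk3s5` is a placeholder identifier (find yours with `diskutil list`):

```sh
diskutil unmount disk3s5        # Disk Utility's manual "eject" step
diskutil verifyVolume disk3s5   # read-only check of the filesystem
diskutil repairVolume disk3s5   # check and attempt repairs
```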
If the author's reading, is there a way we can help amplify any existing bug report(s)?
tl;dr:
- Missing disk space can be related to a broken filesystem
- APFS tooling is currently really bad; you probably need to erase the volume and reinstall to fix any filesystem problem (see the sketch below for what to check first)
... and this was in 2018, and I fear not much has changed.
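Before erasing anything, a few read-only checks are worth a look for the missing-space case; the snapshot name below is illustrative:

```sh
# Show APFS containers, their volumes, and space usage per volume.
diskutil apfs list

# Local Time Machine snapshots often hold the "missing" space.
tmutil listlocalsnapshots /

# Delete a specific snapshot by its date name (example name shown).
tmutil deletelocalsnapshots 2018-06-01-120000
```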
2 more comments available on Hacker News