Bcachefs Removed From the Mainline Kernel
Posted 3 months ago · Active 3 months ago
Source: lwn.net · Tech story · High profile
Tone: heated, mixed · Debate · 80/100
Key topics: Bcachefs, Linux Kernel, File Systems
Bcachefs was removed from the mainline Linux kernel due to disagreements between its developer and Linus Torvalds, sparking a discussion about the project's future and the kernel development process.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 3h after posting · Peak period: 110 comments (6-12h window) · Avg per period: 22.9
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- 01Story posted
Sep 30, 2025 at 3:52 AM EDT
3 months ago
Step 01 - 02First comment
Sep 30, 2025 at 6:25 AM EDT
3h after posting
Step 02 - 03Peak activity
110 comments in 6-12h
Hottest window of the conversation
Step 03 - 04Latest activity
Oct 4, 2025 at 5:33 AM EDT
3 months ago
Step 04
ID: 45423004 · Type: story · Last synced: 11/20/2025, 7:50:26 PM
AFAIK that change didn't add functionality or fix any existing issues, other than breaking ZFS - which GKH was absolutely fine with, dismissing several requests for it to be reverted, stating the "policy": [1]
> Sorry, no, we do not keep symbols exported for no in-kernel users.
[0] https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...
[1] https://lore.kernel.org/lkml/20190111054058.GA27966@kroah.co...
> Sun explicitly did not want their code to work on Linux, so why would we do extra work to get their code to work properly?
Why would you accommodate someone who explicitly went out of their way to not accommodate you?
It took many conflicts with the bcachefs developer to reach this state. The olive branch has been extended again and again...
I am not a kernel developer, but fewer exposed APIs/functions is nearly always better.
The removed function's comment even starts with: Careful: __kernel_fpu_begin/end() must be called with
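For reference, the non-underscore wrappers kernel_fpu_begin()/kernel_fpu_end() remain available to in-kernel code and handle that precondition themselves. A rough sketch of the usage pattern on x86 (the surrounding helper function is invented for illustration, not real kernel code):

```c
#include <linux/types.h>
#include <asm/fpu/api.h>   /* kernel_fpu_begin(), kernel_fpu_end() */

/* Hypothetical helper: any in-kernel code that uses SIMD/FPU
 * instructions must bracket them like this. */
static void simd_checksum(const u8 *buf, size_t len)
{
	/* Saves FPU state and disables preemption -- the condition the
	 * removed comment on __kernel_fpu_begin/end() was warning about. */
	kernel_fpu_begin();

	/* ... SIMD/FPU-using code operating on buf[0..len) ... */

	kernel_fpu_end();   /* restores state, re-enables preemption */
}
```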
(Hard and tedious work, but not impossible).
So of course they won't, but it isn't impossible.
The modern OpenZFS project is not part of Oracle, it's a community fork from the last open source version. OpenZFS is what people think of when they say ZFS, it's the version with support for Linux (contributed in large part by work done at Lawrence Livermore).
The OpenZFS project still has to continue using the CDDL license that Sun originally used. The opinion of the Linux team is that the CDDL is not GPL-compatible, which is what prevents it from being mainlined in Linux (it should be noted that not everyone shares this view, but obviously nobody wants to test it in court).
It's very frustrating when people ascribe malice to the OpenZFS team for having an incompatible license. I am sure they would happily change it to something GPL compatible if they could, but their hands are tied: since it's a derivative work of Sun's ZFS, the only one with the power to do that is Oracle, and good luck getting them to agree to that when they're still selling closed source ZFS for enterprise.
Making /home into a btrfs filesystem would be an opening salvo.
IBM now controls Oracle's premier OS. That is leverage.
I'm just sorry for the guy and perhaps a little bit sorry for myself that I might have to reformat my primary box at some point…
Also unrelated, but Sun was a very open source friendly company with a wide portfolio of programs licensed under GNU licenses, without some of which Linux would still be useless to the general public.
Overall, designing a good filesystem is very hard, so perhaps don't bite the hand that feeds you…?
The maintainer kept pushing new features at a time when only bugfixes are allowed. He also acted like a child when asked to follow procedures. I feel sorry about his poor listening and communication skills.
The "new features" were recovery features for people hit by bugs. I can see where the ambiguity came from.
CDDL was a compromise choice that was seen as workable for inclusion, based especially on certain older views about which code would or would not be compatible, and it was unclear and possibly expected that the Linux kernel would move to GPLv3 (when it finally released), which the CDDL's drafters saw as compatible with the CDDL.
Alas, the Solaris source release could not wait an unclear amount of time for GPLv3 to be finalized.
> it was unclear and possibly expected that the Linux kernel would move to GPLv3
In what world? The kernel was always GPLv2 without the "or later" clause. The kernel had tens of thousands of contributors. Linus had made it quite obvious by that time that the kernel would not move to GPLv3 (even in 2006).
Even if I gave them the benefit of the doubt, GPLv3 was released in 2007. They had years to make a license change and didn't. They were sold to Oracle in 2010.
The CDDL is actually very permissive. You can combine it with anything, including proprietary licences.
[0] https://github.com/openzfs/zfs/issues/8259
[1] https://github.com/openzfs/zfs/pull/8965
Is moving a symbol from EXPORT_SYMBOL(some_func) to EXPORT_SYMBOL_GPL(some_func) actually changing the API? Nope, the API is exactly the same as it was before; what's changed is who is allowed to use it.
From the perspective of an out of tree module that isn't GPL you have removed stuff.
I'm honestly not sure how one outside the kernel community could construe that as not removing something.
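To make the exchange above concrete, here is a hypothetical kernel-side sketch (some_func is an invented symbol, not real kernel code): the function body and signature stay identical, and only the export macro changes.

```c
#include <linux/export.h>

/* Hypothetical in-kernel helper; nothing about its implementation changes. */
int some_func(int arg)
{
	return arg * 2;
}

/* Before: any loadable module could link against it, whatever its license. */
/* EXPORT_SYMBOL(some_func); */

/* After: only modules whose MODULE_LICENSE() declares a GPL-compatible
 * license can resolve this symbol at module load time. */
EXPORT_SYMBOL_GPL(some_func);
```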
No, it was always designed to be hostile to Linux from the outset. It's a project that doesn't want interoperability with Linux, so I'm not entirely sure why you think the Linux folks should maintain an API for them.
Since the pre-fork code is from Sun, Oracle owns the copyright, and they won't re-license it.
The idea that the OpenZFS team wants CDDL out of spite for Linux is an absurd conspiracy theory. Their hands are tied - I'm sure they'd move to a compatible license if they could, but they can't.
So the OpenZFS team is not exactly interested in moving to GPLv2, because it would break multiple platforms.
But it's an academic exercise anyway, since it seems Oracle has no intention of allowing them to relicense.
Linux and OpenZFS are pretty much locked into their licenses, regardless of what people might want today. There are too many contributors to Linux to relicense, and while OpenZFS has fewer, I don't think there's any reason to think Oracle would relicense, given they went back to closed source with Solaris and ZFS on Solaris.
> It's a project that doesn't want interoperability with Linux.
Regardless of the original intent of Sun in picking the license, it's hard to imagine a project called ZFS on Linux (which was merged into OpenZFS) doesn't want to interoperate with Linux.
> They could always move to a compatible license?
> No, it was always designed to be hostile to Linux from the outset. It's a project that doesn't want interoperability with Linux
I'm not sure why you've jumped here. I didn't mention a specific project or licence.
But, nonetheless I'm going to assume you mean OpenZFS.
1. No, they can't change the license. Much like Linux contributors retain their own copyright, OpenZFS can't just change the license. The only party that could hypothetically change it is Oracle, given the clause that the steward of the license can release a new version, but that's unlikely, and Oracle has absolutely nothing to do with the existing project.
2. Staying on the license and compatibility. It's really quite confusing what counts as compatible in the eyes of Linux. The very fact that there are separate EXPORT_SYMBOL and EXPORT_SYMBOL_GPL exports suggests Linux as a project sanctions non-GPL modules, and considers them compatible if they only use the non-GPL exports, perhaps in the same vein as the syscall boundary is considered compatible with non-GPL code. If someone actually in the know about why there are two sets of exports is reading, I'd love to know (see the module-side sketch after this comment).
3. Always designed to be hostile to Linux. Whether that's true or not is debatable; there are conflicting opinions from those who worked at Sun at the time. Also, the comment criticises a community that had no hand in whether or not it was intended to be hostile to Linux. In the end it is copyleft software, very similar in spirit to the Mozilla Public License. And by definition, copyleft licenses are inherently incompatible with each other without specific get-out-of-jail clauses to combine them (see MPLv2 for example).
4. Re interoperability. Strongly disagree. OpenZFS takes great strides to be compatible with Linux. Each release, developers spend hours poring over Linux changes and updating a compat layer, and the module remains compilable against multiple Linux versions at any one time; there are even compat patches to detect distro-specific backports, where the kernel version hasn't changed but the distro has backported things that change behaviour. That's a serious commitment to interoperability. And a large number of OpenZFS devs do their work against Linux as their primary platform, hence why FreeBSD rebased its ZFS upstream on ZFS on Linux, which went on to become the official upstream OpenZFS. I can't see how anyone could say in good faith that they don't care about Linux compatibility, unless they haven't looked at the OpenZFS project in over a decade.
5. Re why I think Linux folks should maintain APIs for them.
The way you worded this strongly implies I was saying Linux should maintain an API for them. In no way did I say that. I was replying to a post that was adamant that Linux doesn't remove things, and I provided a perspective that Linux does in fact remove things. I wasn't arguing for maintaining any API; Linux doesn't even guarantee internal APIs for itself. I was pointing out that changing a symbol export from export-to-everyone to export-GPL-only isn't changing the API: it's the exact same API, they've just removed it for some groups.
Nonetheless, I think it'd be great if Linux could maintain some APIs for out-of-tree modules. But they don't, and that's fine. I just find changing exports from open-for-everyone to GPL-only to be rather hostile.
Really, no one in either of these communities had any say in their license (save Torvalds). Both create great stuff for us as users to run. And it'd be great if people working on free software could get along, and if those in the peanut gallery didn't ascribe ill will between them because of a difference in license they didn't pick.
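As a rough illustration of the two sets of exports mentioned in point 2 above, here is a minimal, hypothetical out-of-tree module skeleton (not OpenZFS code); the license string declared at the bottom is what the kernel checks when a module tries to use EXPORT_SYMBOL_GPL symbols.

```c
#include <linux/module.h>
#include <linux/init.h>

static int __init demo_init(void)
{
	pr_info("demo: loaded\n");
	return 0;
}

static void __exit demo_exit(void)
{
	pr_info("demo: unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);

MODULE_DESCRIPTION("Hypothetical out-of-tree module skeleton");
/* With "GPL" here, the module may link against EXPORT_SYMBOL_GPL symbols.
 * A string the kernel does not recognise as GPL-compatible (e.g. "CDDL")
 * limits the module to plain EXPORT_SYMBOL exports and taints the kernel
 * when loaded. */
MODULE_LICENSE("GPL");
```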
The choice of creating a new license was because of two reasons:
- Internally people wanted for the code to be usable by not just Linux and Solaris (lots of BSD fans, for example)
- Sun was insisting on mutual patent protection clauses because GPLv2 didn't support them, and GPLv3 was not yet available to discuss viability at all.
For now
Changes would therefore need to be an improvement for in-tree drivers, and not merely something for an out-of-tree driver.
Bugs are a fact of life. Bug fixes are a fact of life. Sometimes those bugs will cause data loss. Adding code in an -rc to support data recovery when a bug has caused data loss is a good thing for users. Portraying it as a bad thing is the worst kind of bikeshedding.
Would be great to have an in kernel alternative to ZFS for parity RAID.
This is not the first project for which this was an issue, and said maintainer has shown no will to alter their behaviour before or since.
The underlying problem might have been importing Bcachefs into the mainline kernel too early in its life cycle.
A lot of people aren't going to keep up with Linus's personal travel plans just so they don't send a late patch.
He refused to acknowledge his place on the totem pole and thought he knew better than everyone else, and that they should change their ways to suit his whims.
I can understand the motivation. It's a PITA to support an older version of code. But that's not how Linux gets its stability.
Since Linux has merge windows that close and long-term kernels, the fix for the same bug could need to be done in multiple ways.
Multiple changes per PR is bad, but I assume it's still one change per commit.
IMHO, it may be more natural, but only during development. Trying to do a git bisect on git histories like the above is a huge pain. Trying to split things up when A is ready but B/C are not is a huge pain.
Over the long term the number of cases where such a response is needed will decrease as expected.
Do you really want to live in a world where data losses in stable releases is considered Okay?
Why do they need to be in the kernel anyways? Presumably they are running on an unmounted device?
Maintaining a piece of code that needs to run in both user space and the kernel is messy and time-consuming. You end up running into issues where dependencies require porting gobs of infrastructure from the kernel into userspace. That's easy for some things, very hard for others. There's a better place to spend those resources: stabilizing bcachefs in the kernel, where it belongs.
Other people have tried and failed at this before, and I'm sure that someone will try the same thing again in the future and relearn the same lesson. I know as business requirements for a former employer resulted in such a beast. Other people thought they could just run their userspace code in the kernel, but they didn't know about limits on kernel stack size, they didn't know about contexts where blocking vs non-blocking behaviour is required or how that interacted with softirqs. Please, just don't do this or advocate for it.
It's really not, the proper way to recover your important data is to restore from backups, not to force other people to bend longstanding rules for you.
>Do you really want to live in a world where data losses in stable releases is considered Okay?
Bcachefs is an experimental filesystem.
There is no reason to break kernel guidelines to deliver a fix.
If I'm not mistaken Kent pushed recovery routines in the RC to handle some catastrophic bug some user caused by loading the current metadata format into an old 6.12 kernel.
It isn't some sinister "sneaking features". This fact seems to be omitted by clickbaity coverage over the situation.
Rule 1: don't assume malice.
That claim was to add new logging functionality to allow better troubleshooting to eventually address critical issues.
This should have stayed out of trunk for someone to test, rather than being presented as something it wasn't, strictly speaking. Especially when it's the kernel.
I never understand why some people are unwilling to make any attempt at getting along. Some people seem to feel any level of compromise is too much.
I have a multi-device filesystem composed of old HDDs and one sketchy PCI SATA expansion card. This FS was assembled in 2019 and, though it went through periods of being non-writable, is still working, and I haven't lost any[1] data. That's more than 5 years, a multitude of FS version upgrades, and multiple device replacements with the corresponding data evacuation and re-replication.
[1] Technically, I did lose some, when a dying device started misbehaving and writing garbage, and I was impatient and ran a destructive fsck (with fix_errors) before waiting for a bug patch.
Don't want to compare it to other solutions but this is impressive even on its own merits.
IIRC the whole drama began because Kent was constantly pushing new features along with critical bug fixes after the proper merge window.
I meant stable in the sense where most changes are bug fixes, reducing the friction of working within the kernel schedules.
Yes, me too.
> Would be great to have an in kernel alternative to ZFS
Yes it would.
> for parity RAID.
No.
Think of the Pareto Principle here. 80% of the people only use 20% of the functionality. BUT they don't all use the same 20% so overall you need 80% of the functionality... or more.
ZFS is one of the rivals here.
But Btrfs is another. Stratis is another. HAMMER2 is another. MDRAID is another. LVM is another.
All provide some or all of that 20%, and all have pros and cons.
The point is that, yes, ZFS is good at RAID and it's much much easier than ext4 on MDRAID or something.
Btrfs can do that too.
But ZFS and Btrfs do COW snapshots. Those are important too. OpenSUSE, Garuda Linux, siduction and others depend on Btrfs COW.
OK, fine, no problem, your use case is RAID. I use that too. Good.
But COW is just as important.
Integrity is just as important and Btrfs fails at that. That is why the Bcachefs slogan is "the COW filesystem that won't eat your data."
Btrfs ate my data 2-3 times a year for 4 years.
It doesn't matter how many people praise it; what matters are the victims who have been burned when it fails. They prove that it does fail.
The point is not "I can do that with ext4 on mdraid" or "I can do that with LVM2" or "Btrfs is fine for me".
The point is something that can do _all of these_ and do it _better_ -- and here, "better" includes "in a simpler way".
Simpler here meaning "simpler to set up" and also "simpler in implementation" (compared to, say, Btrfs on LVM2, or Btrfs on mdraid, or LVM on mdraid, or ext4 on LVM on RAID).
Something that can remove entire layers of the stack and leave the same functionality is valuable.
Something that can remove 90% of the setup steps and leave identical functionality matters... Because different people do those steps in different order, or skip some, and you need to document that, and none of us document stuff enough.
The recovery steps for LVM on RAID are totally different from RAID on LVM. The recovery for Btrfs on mdraid is totally different from just Btrfs RAID.
This is why tools that eliminate this matter. Because when it matters whether you have
1 - 2 - 3 - 4 - 5
or
1 - 2 - 4 - 3 - 5
Then the sword that chops the Gordian knot here is one tool that does 1-5 in a single step.
This remains true even if you only use 1 and 5, or 2 and 3, and it still matters if you only do 4.
> ext4 on MDRAID or something
Those are trivially easy to set up, expand, or replace drives in; require no upkeep; and need no setup when placed into entirely different systems. Anybody using ZFS or ZFS-like to do some trivial standard RAID setup (unless they are used to and comfortable with ZFS, which is an entirely different story) is just begging to lose data. MDADM is fine.
Or people who want data checksums.
> Anybody using ZFS or ZFS-like to do some trivial standard RAID setup (unless they are used to and comfortable with ZFS, which is an entirely different story) is just begging to lose data.
How? You just... hand it some devices, and it makes a pool. Drive replacement is a single command.
> Are trivially easy to set up
Done it. Been doing it for 25+ years.
ZFS is easier. MUCH easier, and much quicker too.
> expand
As easy with ZFS.
> or replace drives;
Easier with ZFS.
> require no upkeep;
False. Ext4 requires the occasional check. This must be done offline. ZFS doesn't and can be scrubbed while online and actively in use.
> and no setup when placed into entirely different systems.
Same as ZFS.
> Anybody using ZFS or ZFS-like to do some trivial standard RAID setup (unless they are used to and comfortable with ZFS, which is an entirely different story) is just begging to lose data.
False.
> MDADM is fine.
I am not saying it isn't. I am saying ZFS is better.
I think you haven't tried it, because your claims betray serious ignorance of what it can do.
I built my main NAS box's RAIDZ with the drives in USB 3 caddies on a Raspberry Pi 4. I moved it to the built-in SATA controllers of an HP Microserver running TrueNAS Core.
Imported and just worked. No reconfig, no rebuild, nothing.
It moves seamlessly between Arm and x86, Linux and FreeBSD, no problem at all. Round trip if you want.
He is the BDFL. "No, these changes do not belong in this part of our release window. No pull. End of discussion." Instead he always talked and caved and pulled. And of course the situation repeated, as they do...
Perhaps as BDFL he let it slip a few too many times, but that's generally the way you want to go - as a leader, you want to trust your subordinates are doing the right thing; which means that you'll get burned a few times until you have to take action (like this).
The only other option makes you into a micromanager, which doesn't scale.
> He is BDFL.
As far as I remember, the "B" in "BDFL" stands for "benevolent". This usually might mean giving a couple of warnings, giving the benefit of the doubt, extending some credit, and, if that doesn't help, invoking the "D".
[0] https://www.phoronix.com/review/linux-617-filesystems
https://www.phoronix.com/forums/forum/software/general-linux...
I know more about ZFS than the others. It wasn't specified here whether ZFS had ashift=9 or 12; it tries to auto-detect, but that can go wrong. ashift=9 means ZFS is doing physical I/O in 512-byte blocks, which will be an emulation mode for the NVMe drive. Maybe it was ashift=12. But you can't tell.
Secondly, ZFS defaults to a record size of 128k. Write a big file and it's written in "chunks" of 128k size. If you then run a random read/write I/O benchmark on it with a 4k block size, ZFS is going to be reading and writing 128k for every 4k of I/O. That's a huge amplification factor (up to 128k / 4k = 32x in the worst case). If you're using ZFS for a load which resembles random block I/O, you'll want to tune the recordsize to the app's I/O. And ZFS makes this easy, since child filesystem creation is trivially cheap and the recordsize can be tuned per filesystem.
And then there's the stuff ZFS does that XFS / EXT4 doesn't. For example, taking snapshots every 5 minutes (they're basically free), doing streaming incremental snapshot backups, snapshot cloning and so on - without getting into RAID flexibility.
On the configuration stuff, these benchmarks intentionally only ever use the default configuration – they're not interested in the limits of what's possible with the filesystems, just what they do "out of the box", since that's what the overwhelming majority of users will experience.
Substitutable how? Like, I'm typing this on a laptop with a single disk with a single zpool, because I want 1. compression, 2. data checksums, 3. to not break (previous experiments with btrfs ended poorly). Obviously I could run xfs, but then I'd miss important features.
You probably don't want to do that because that'll result in massive metadata overhead, and nothing tells you that the app's I/O operations will be nicely aligned, so this cannot be given as general advice.
has the benchmarks of the dkms module
[0]: https://www.phoronix.com/forums/forum/software/general-linux...
Doesn't that mean I now have to enroll the MOK key on all my work workstations that use secure boot? If so, that's a huge PITA on over 200 machines. As with the NVIDIA driver, you can't automate the facility.
Is this filesystem stable enough for deploying on 200 production machines?
From a cursory look I get things like this:
https://hackaday.com/2025/06/10/the-ongoing-bcachefs-filesys...
Anyway, fair question IMO. Another point I'd like to make... migrating away from this filesystem, disabling secure boot, or leaning into key enrollment would be fine. Dealer's choice.
The 'forced interaction' for enrollment absolutely presents a hurdle. That said: this wouldn't be the first time I've used 'expect' to use the management interface at scale. 200 is a good warm up.
The easy way is to... opt out of secure boot. Get an exception if your compliance program demands it [and tell them about this module, too]. Don't forget your 'Business Continuity/Disaster Recovery' of... everything. Documents, scheduled procedures, tooling, whatever.
Again, though, stability is a fair question/point. Filesystems and storage are cursed. That would be my concern before 'how do I scale', which comparatively, is a dream.
Not going to happen. Secure Boot is a mandatory requirement in this scenario.
I can't talk further because NDA, but sure am confused by the downvotes for asking a question.
I'll hit this post positively in an attempt to counter the down-trend. edit: well, that was for squat.
However, I would like to push back on that article.
It says that bcachefs is "unstable" but provides no evidence to support that.
It says that Linus pushed back on it. Yes, but not for technical reasons but rather process ones. Think about that for a second though. Linus is brutal on technology. And I have never heard him criticize bcachefs technically except to say that case insensitivity is bad. Kind of an endorsement.
Yes, there have been a lot of patches. It is certainly under heavy development. But people are not losing their data. Kent submitted a giant list of changes for the kernel 6.17 merge window (ironically totally on time). Linus never took them. We are all using the 6.16 version of bcachefs without those patches. I imagine stories of bcachefs data loss would get lots of press right now. Have you heard any?
There are very few stories of bcachefs data loss. When I have heard of them, they seem to result in recovery. A couple I have seen were mount failures (not data loss) and were resolved. It has been rock-solid for me.
Meanwhile just scan the thread for btrfs reports...
Where did Linus call bcachefs "experimental garbage"? I've tried finding those comments before, but all I've been able to find are your comments stating that Linus said that
For sure it's a headache when you install some module on a whole bunch of headless boxes at once and then discover you need to roll a crash cart over to each and every one to get them booting again, but the secure boot guys would have it no other way.
I'm not even a bcachefs user, but I use ZFS extensively and I _really_ wanted Linux to get a native, modern COW filesystem that was unencumbered by the crappy corporate baggage that ZFS has.
In the comments on HN around any bcachefs news (including this one) there are always a couple throwaway accounts bleating the same arguments - sounding like the victim - that Kent frequently uses.
To Kent, if you're reading this:
From a long time (and now former) sponsor: if these posts are actually from you, please stop.
Also, it's time for introspection and to think how you could have handled this situation better, to avoid having disappointed those who have sponsored you financially for years. Yes, there are some difficult and flawed people maintaining the kernel, not least of which Linus himself, but you knew that when you started.
I hope bcachefs will have a bright future, but the ball is very clearly in your court. This is your problem to fix.
(I'm Daniel Wilson, subscription started 9th August 2018, last payment 1st Feb 2025)
Seems to tick all of the boxes in regard to what you're looking for, and it's mature enough that major Linux distros are shipping it as the default filesystem.
Your statement is misleading. No one is using btrfs on servers. Debian and Ubuntu use ext4 by default. RHEL removed support for btrfs long ago, and it's not coming back:
> Red Hat will not be moving Btrfs to a fully supported feature. It was fully removed in Red Hat Enterprise Linux 8.
https://philip.greenspun.com/blog/2024/02/29/why-is-the-btrf...
> We had a few seconds of power loss the other day. Everything in the house, including a Windows machine using NTFS, came back to life without any issues. A Synology DS720+, however, became a useless brick, claiming to have suffered unrecoverable file system damage while the underlying two hard drives and two SSDs are in perfect condition. It’s two mirrored drives using the Btrfs file system
I am hoping we will get ZFS from Ubnt NAS via update.
The first one is that they don't use btrfs's own RAID (aka btrfs-raid/volume management). They actually use hardware RAID, so they don't experience any of the stability/data integrity issues people experience with btrfs-raid. On top of this, Facebook's servers run in data centers that have 100% electricity uptime (these places have diesel generators for backup power).
Synology likewise offers btrfs on their NAS units, but it sits on top of mdadm (software RAID).
The main benefit that Facebook gets from btrfs is transparent compression and snapshots, and that's about it.
So yes, if you are Facebook, and put it on a rock-solid block layer, then it will probably work fine.
But outside of the world of hyperscalers, we don't have rock solid block layers. [1] Consumer drives occasionally do weird things and silently corrupt data. And on top of drives, nobody uses ECC memory and occasionally weird bit flips will corrupt data/metadata before it's even written to the disk.
At this point, I don't even trust btrfs on a single device. But the more disks you add to a btrfs array, the more likely you are to encounter a drive that's a little flaky.
And Btrfs's "best feature" really doesn't help it here, because it encourages users to throw a large number of smaller cheap/old spinning drives at it. Which is just going to increase the chance of btrfs encountering a flaky drive. The people who are willing to spend more money on a matched set of big drives are more likely to choose zfs.
The other paradox is that btrfs ends up in a weird spot where it's good enough to actually detect silent data corruption errors (unlike ext4/xfs and friends, where you never find out your data was corrupted), but then its metadata is complex and large enough that it seems to be extra vulnerable to those issues.
---------------
[1] No, mdadm doesn't count as a rock-solid block layer; it still depends on the drives to report a data error. If there is silent corruption, mdadm just forwards it. I did look into using a Synology-style btrfs-on-mdadm setup, but I searched and found more than a few stories from people whose Synology filesystem borked itself.
In fact, you might actually be worse off with btrfs+mdadm, because now data integrity is done at a completely different layer to data redundancy, and they don't talk to each other.
Plus I needed zvols for various applications. I've used ZFS on BSD for even longer so when OpenZFS reached a decent level of maturity the choice between that and btrfs was obvious for me.
It's really difficult to get a real feel for BTRFS when people deliberately omit critical information about their experiences. Certainly I haven't had any problems (unless you count the time it detected some bitrot on a hard drive and I had to restore some files from a backup - obviously this was in "single" mode).
Some of the most catastrophic ones were 3 years ago or earlier, but the latest kernel bug (point 5) was with 6.16.3, ~1 month ago. It did recover, but I had already mentally prepared for a night of restores from backups...
Keeping it healthy means paying close attention to "btrfs fi df" and/or "fi usage" for best results.
ZFS also does not react well to running out of space.
I don't understand how btrfs is considered by some people to be stable enough for production use.
[0] I'm currently evaluating openSUSE as a possible W11 replacement, but not using it for anything serious atm.
I am also frustrated by this whole debacle, but I'm not going to stop funding him; Bcachefs is a solid alternative to btrfs. It's not at all clear to me what really happened to cause all the drama. A PR was made that contained something more feature-like than bugfix-like, and that resulted in a whole module being ejected from the kernel?
I really wish, though, that DKMS was not such a terrible solution. It _will_ break my boot, because it always breaks my boot. The Linux kernel really needs a stable module API so that out-of-tree modules like bcachefs are not impossible to reliably boot with.
This isn't just a one time thing, speaking as someone who follows the kernel, apparently this has been going on pretty much since bcachefs first tried to get into Linus's tree. Kent even once told another kernel maintainer to "get your head examined" and was rewarded with a temporary ban.
Edit: To be fair, the kernel is infamous for being guarded by stubborn maintainers, but I guess the lesson to be learned here is that if you want your pet project to stick around in the kernel, you really can't afford to be stubborn yourself.
Amen.
And to your point about it being a "pet project" - I'm sure I could go look at the commit history, but is anyone other than Kent actually contributing meaningfully to bcachefs? If not, this project sorely needs more than one person involved.
The dev acted out of line for kernel development, even if _kind_ of understandable (like with the recovery tool), but still in a way that would set a bad precedent for the kernel, so this appears to be good judgement from Linus.
Hope the best for Bcachefs's future
Bcachefs is exciting on paper, but even just playing around there are some things that are just untenable imho. Time has proven that the stability of a project stems from the stability of the teams and culture behind it. As such the numbers don’t lie and unless it can be at parity with existing filesystems I can’t be bothered to forgive the misgivings. I’m looking forward to the day when bcachefs matures… if ever, as it is exciting.
Also if something has changed in the last year I’d love to hear about it! I just haven’t found anything compelling enough yet to risk my time bsing around with it atm.
[1] https://youtube.com/watch?v=_RKSaY4glSc&pp=ygUZTGludXMgZmlsZ...
51 more comments available on Hacker News