Many Hells of WebDAV
Key topics
The frustrations of WebDAV are still echoing through the tech community, with many sharing their own tales of woe and relief that this "nightmare" is slowly fading away. As one commenter quipped, "WebDav, life before S3," highlighting the protocol's outdated status. Despite the collective groaning, some have found success with alternative implementations like Radicale and Nextcloud, which offer more streamlined experiences for calendar and contact syncing. A surprising consensus emerged, with some users reporting that their WebDAV experiences have improved significantly with newer versions and simpler deployment methods, such as using Docker images.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 29m after posting
- Peak period: 55 comments in 0-6h
- Avg / period: 10.7
- Based on 96 loaded comments
Key moments
- Story posted: Jan 7, 2026 at 10:50 AM EST (2d ago)
- First comment: Jan 7, 2026 at 11:19 AM EST (29m after posting)
- Peak activity: 55 comments in 0-6h (the hottest window of the conversation)
- Latest activity: Jan 9, 2026 at 2:40 PM EST (8h ago)
It used to be a constant headache to keep running, but ever since I switched to the TrueNAS/Docker plugin it has worked smoothly. I know a lot of other people have also had good luck with the much lighter Radicale if CalDAV is your primary concern.
It’s been very easy to run for me since version 15 or so. Basically I just use the stock Docker image and mount a few files into the container. The data folders are bind-mounted directories.
As usual with anything PHP, it’s only a mess if you start managing PHP files and folders yourself. PHP has a special capability for making these kinds of things messy, I don’t know why.
I picked it because it's in a language I know (Python) and free and copyleft. These days I don't contribute to anything unless it's copyleft.
No idea if it supports family calendar, I need to look into that as well at some point.
Yes! This is my #1 issue with the library as well.
Love the libraries BTW. Thank you for all of your hard work.
If there's actual employer IP in there then just leaving said employer wouldn't magically clear it.
If there isn't and you're just trying to avoid red tape, then publishing it anonymously would work around the issue.
If there is an actual IP issue, then even waiting until you’re out of the company will not resolve it. If you’re using your employer’s IP, waiting is unlikely (both legally and especially morally) to magically fix that - it’s still your employer’s IP.
If it’s just to avoid red tape but the IP is otherwise yours and has nothing to do with your employer (aka you could’ve done it just as well even if you weren’t at your current employer), then it should be fine and you’re just taking a shortcut that saves time on both sides.
Basically, if your employer is a vendor of WebDAV libraries, yeah, of course there’s a (legal, or at least moral) issue. If not, then it's all fine.
(Obviously this is just opinion and not legal advice - but legality only matters if they can figure out who did it ;)
Some employers have an unbelievably unreasonable interpretation of non-competes and IP. They think they own everything their employees do, and even though they're wrong, that doesn't stop them from ruining you and whatever unfortunate open source project they set their sights on with vexatious litigation.
Thus the suggestion to publish anonymously.
On a different point, I don't think the author's point about having to "also" inspect the headers is a fair critique of DAV - HTTP headers are meant to describe one portion of the request/response, and the body a different one. I wish it were simpler, but I think it's an acceptable round-peg-in-a-round-hole use of the tools.
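To make that split concrete, here is a minimal sketch in Go (standard library only; the URL and the requested properties are placeholder assumptions, not anything from the thread) of a PROPFIND where the Depth header scopes the request and the XML body names the properties:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// The body names *which* properties we want back...
	body := `<?xml version="1.0" encoding="utf-8"?>
<d:propfind xmlns:d="DAV:">
  <d:prop><d:displayname/><d:getetag/></d:prop>
</d:propfind>`

	// ...while the headers scope *how* the request applies
	// (Depth: 1 = the collection plus its immediate children).
	req, err := http.NewRequest("PROPFIND", "https://dav.example.com/files/", strings.NewReader(body))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Depth", "1")
	req.Header.Set("Content-Type", `application/xml; charset="utf-8"`)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// A compliant server answers 207 Multi-Status with an XML body to parse.
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}
```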
What a strange process... why not read the source code of a working open source library (easy to test: run a client made by someone else against its server, and vice versa) in a language close to the target?
Why not then use those tests to verify your own work afterwards?
FWIW I'm using WebDAV, both with clients and with my own self hosted servers, on a daily basis and... it works.
One niceish thing about WebDAV/CalDAV is it's pretty set in stone for now.
I don't know if you've ever heard "Latin is a dead language"; many people think that statement is a somewhat negative-sentiment one, amounting to something along the lines of "there's no good reason to learn Latin, it's dead", but I've heard that it's actually supposed to be a positive-sentiment statement, something like "we can have confidence that contemporary interpretations of this text haven't changed in the last ~1800 years because the language itself stopped changing around then".
https://github.com/lookfirst/sardine
[0] https://jmap.io/spec.html
I heard DeltaV is very advanced, and Subversion supported it. I'm afraid to ask.
Mounting a directory through NFS, SMB, or SSH and files are downloaded in full before programs access them? What do you mean? Listing a directory or accessing file properties, like size for example, does not need a full download.
I might be wrong, but when I last mounted webdav from windows, it did the same dumb thing too.
On second thought, I think you are looking at WebDAV as sysadmins, not as developers. WebDAV was designed for document authoring, and you cannot author a document, version it, merge other authors' changes, or track changes without fully controlling the resource. Conceptually it's much like how git needs a local copy.
I can't imagine an editor editing a file while the file is being changed at any offset, at any time, by some unknown agent, without any kind of orchestration.
The parent comment was stating that if you use the open(2) system call on a WebDAV mounted filesystem, which doesn't perform any read operation, the entire file will be downloaded locally before that system call completes. This is not true for NFS which has more granular access patterns using the READ operation (e.g., READ3) and file locking operations.
It may be the case that you're using an application that isn't LibreOffice on files that aren't as small as documents -- for example if you wanted to watch a video via a remote filesystem. If that filesystem is WebDAV (davfs2) then before the first piece of metadata can be displayed the entire file would be downloaded locally, versus if it was NFS each 4KiB (or whatever your block size is) chunk would be fetched independently.
But many other clients won't. In particular, any video player will _not_ download the entire file before accessing it. And for images, many viewers start showing the image before the whole thing is downloaded. And to look at zip files, you don't need the whole thing - just the index at the end. And for music, you stream the data...
Requiring that a file is "downloaded in full before programs access them" is a pretty bad degradation in a lot of cases. I've used smb and nfs and sshfs and they all let you read any range of a file and start giving you the data immediately, even before the full download.
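To underline that the whole-file download is a client choice rather than a protocol limitation, here is a rough sketch (Go, standard library; the URL and range size are arbitrary assumptions) of reading just the first 4 KiB of a resource, since a WebDAV GET is a plain HTTP GET:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "https://dav.example.com/files/video.mkv", nil)
	if err != nil {
		panic(err)
	}
	// Ask for the first 4 KiB only. A server that honors ranges answers
	// 206 Partial Content instead of streaming the whole file.
	req.Header.Set("Range", "bytes=0-4095")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	chunk, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s, got %d bytes\n", resp.Status, len(chunk))
}
```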
Thank you!!!!
Overall, this has worked great for me, but it did take me a while before I set it up correctly. Now I have a cache of files I use, and the rest of the stuff that I just keep there for backup or hogging purposes doesn't take disk space and stays in the cloud until I sync it.
Realistically speaking, most files I have in my cloud are read-only. The most common file that I read-write on multiple devices is my keepass file, which supports conflict resolution (by merging changes) in clients.
This also used to happen when I tried editing some markdown notes using Obsidian on PC and then a text editor (or maybe Obsidian again?) on Android, but I eventually sort of gave up on that use case. Editing my notes from my phone is sort of inconvenient anyway, so I mostly just create new short notes that I can later edit into some larger note, but honestly I can't remember the last time this happened.
But yes, if not careful, you could run into your laptop overwriting the file when it comes online. In my case, it doesn't really happen, and when it does, Nextcloud will have the "overwritten version" saved, so I can always check what was overwritten and manually merge the changes.
own^H^H^Hnextcloud
or
own^Wnextcloud
You might wanna look into OpenCloud (a Go-based fork of ownCloud Infinite Scale) [1]. I still use Nextcloud for uploading files and the calendar (though I may switch the latter), but I now sync the dir with Immich. Performance-wise a relief. I also swapped Airsonic Advanced (Java) for Navidrome (Go). Same story.
[1] https://github.com/opencloud-eu/opencloud
Do you use this for anything other than photos and videos?
https://www.thehacker.recipes/ad/movement/mitm-and-coerced-a...
It's certainly not the optimal design, but it exists in pretty much all cars, so we use it because it's there, and because of its universal presence, it's also hard to replace.
The sad part is, in a world that is increasingly mobile first, and computing devices move in and out of network coverage, HTTP based protocols actually handle frequent disconnects/reconnects much better than something like SMB.
For my personal backup needs, running from my phone, WebDAV is king. S3 would probably be better, protocol-wise, but I can't have that in a simple "wrapper" that simply exposes existing files, and WebDAV works perfectly fine for LAN anyway.
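Part of the appeal is that a WebDAV upload is just an authenticated HTTP PUT, with none of S3's request signing. A hedged sketch (Go, standard library; the host, path, and credentials are made up for illustration):

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	f, err := os.Open("backup-2026-01-07.tar.gz")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// An upload is a single PUT to the target path.
	req, err := http.NewRequest("PUT", "https://nas.local/dav/backups/backup-2026-01-07.tar.gz", f)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("user", "password")

	// Setting the length avoids a chunked upload, which some older WebDAV
	// servers handle poorly.
	if fi, err := f.Stat(); err == nil {
		req.ContentLength = fi.Size()
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status) // expect 201 Created or 204 No Content
}
```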
Played around with WebDAV a lot... a long time ago... (Exchange Webstore/Webstorage System, STS/SharePoint early editions)...
Why not download the most popular DAV libraries in various languages (Java, C++, PHP, etc.), regardless of how ancient they are,
and then have an AI like Claude analyze them and bring the improvements into your own Go library?
I was doing something like that for Kerberos and Iceberg Rest Catalog API, until I got distracted and moved on to other things.
You don't have to use it to directly write code. You can use it just for the analysis phase, not making any changes.
I think the issue is mostly that it desperately tries to avoid filling its context window, and Anthropic writes system prompts that are so long it's practically already full from the start.
A good harness to read code for you and write a report on it would certainly be interesting.
How close to retirement are you?
There were also a bunch of fun things with quirks around unicode filename handling which made me sad (that was just a matter of testing against a ton of clients).
As for CalDAV and CardDAV - as others have said, JMAP Calendars/Contacts will make building clients a lot easier eventually... but yeah. My implementation of syncing as a client now is to look for sync-collection and fall back to collecting etags to know which URLs to fetch. Either way, sync-collection ALSO gives a set of URLs, and then I multi-get those in batches; meaning both the primary and fallback codepaths converge on the multi-get (or even individual GETs).
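To make that strategy concrete, here is a rough sketch of the two request shapes involved (Go, standard library only; the calendar URL is a placeholder and response parsing is omitted). It illustrates the approach described above, not the commenter's actual code:

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

const calendarURL = "https://dav.example.com/calendars/user/default/"

// trySyncCollection issues an RFC 6578 sync-collection REPORT. An empty
// sync-token asks for the full state; later calls pass the token from the
// previous 207 response and get back only the changed member URLs.
func trySyncCollection(syncToken string) (*http.Response, error) {
	body := fmt.Sprintf(`<?xml version="1.0" encoding="utf-8"?>
<d:sync-collection xmlns:d="DAV:">
  <d:sync-token>%s</d:sync-token>
  <d:sync-level>1</d:sync-level>
  <d:prop><d:getetag/></d:prop>
</d:sync-collection>`, syncToken)

	req, err := http.NewRequest("REPORT", calendarURL, strings.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", `application/xml; charset="utf-8"`)
	return http.DefaultClient.Do(req)
}

// etagFallback lists every member's etag with a Depth: 1 PROPFIND, so the
// client can diff against its local cache to work out which URLs changed.
func etagFallback() (*http.Response, error) {
	body := `<?xml version="1.0" encoding="utf-8"?>
<d:propfind xmlns:d="DAV:">
  <d:prop><d:getetag/></d:prop>
</d:propfind>`

	req, err := http.NewRequest("PROPFIND", calendarURL, strings.NewReader(body))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Depth", "1")
	req.Header.Set("Content-Type", `application/xml; charset="utf-8"`)
	return http.DefaultClient.Do(req)
}

func main() {
	resp, err := trySyncCollection("")
	if err == nil {
		defer resp.Body.Close()
		if resp.StatusCode == 207 {
			fmt.Println("sync-collection supported: parse the changed URLs, then multi-get them in batches")
			return
		}
	}

	// Server doesn't support sync-collection: fall back to comparing etags.
	fallback, err := etagFallback()
	if err != nil {
		panic(err)
	}
	defer fallback.Body.Close()
	fmt.Println("fell back to etag comparison:", fallback.Status)
}
```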
> Now before you mention NIH syndrome, yes, we looked at the existing Go implementation, go-webdav. This library was lacking some key features we needed, like server-side collection synchronization, and the interfaces didn’t really align with our data model. This is also going to be a key feature of our product, so we should have some level of ownership for what gets implemented.
This is a different, non x/net library.
> You just need to wrap it in a main.go and boom, webdav server.
Lol
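For context, the "wrap it in a main.go" route with golang.org/x/net/webdav really is about this small; a sketch, with the served directory and port as arbitrary choices:

```go
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/webdav"
)

func main() {
	h := &webdav.Handler{
		FileSystem: webdav.Dir("./data"), // directory to expose
		LockSystem: webdav.NewMemLS(),    // in-memory locks (DAV class 2)
		Logger: func(r *http.Request, err error) {
			if err != nil {
				log.Printf("%s %s: %v", r.Method, r.URL.Path, err)
			}
		},
	}
	log.Fatal(http.ListenAndServe(":8080", h))
}
```

That package covers class 1 and 2 WebDAV (including locking), but not CalDAV/CardDAV.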
This is a major complaint I have with RFCs.
If you want to know the current standard for a protocol or format you often have to look at multiple RFCs. Some of them partially replace parts of a previous RFC, but it isn't entirely clear which parts. And the old RFCs don't link to the new ones.
There are no fewer than 11 RFCs for HTTP (including versions 2 and 3).
I really wish IETF published living standards that combined all relevant RFCs together in a single source of truth.
Apple Calendar supports CalDAV, but in a way not specified in the spec. I basically had to send requests and inspect the responses to figure out how it works. I would be willing to open source my server and client (a lot of which was built using/on top of existing libraries) if there is interest.
Also, would be nice to add some screenshots of the web UI.
Looks like a nice little app!
All servers have quirks, so each test is marked as "fails on xandikos" or "fails on nextcloud". There's a single test which fails on all the test servers (related to encoding). Trying to figure out why this test failed drove me absolutely crazy, until I finally understood that all implementations were broken in the same subtle way. Even excluding that particular test, all servers fail at least one other test. So each server is broken in some subtle way. Typically edge cases, of course.
By far, however, the worst offender is Apple's implementation. It seems that their CalDAV server has a sort of "eventual consistency" model: you can create a calendar, and then query the list of calendars… and the response indicates that the calendar doesn't exist! It usually takes a few seconds for calendars to show up, but this makes automated testing an absolute nightmare.
[1]: https://pimsync.whynothugo.nl/
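One way to cope with that eventual-consistency behaviour in automated tests is to poll with a deadline instead of asserting immediately after creating the calendar. A rough sketch (Go; listCalendars is a hypothetical stand-in for whatever client call the test suite actually uses):

```go
package main

import (
	"fmt"
	"time"
)

// listCalendars is a hypothetical stand-in for the real client call that
// asks the server for the user's calendars.
func listCalendars() ([]string, error) {
	return nil, nil // placeholder
}

// waitForCalendar polls until the newly created calendar shows up or the
// deadline passes, papering over the server's eventual consistency.
func waitForCalendar(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		calendars, err := listCalendars()
		if err != nil {
			return err
		}
		for _, c := range calendars {
			if c == name {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("calendar %q never appeared within %s", name, timeout)
}

func main() {
	if err := waitForCalendar("test-calendar", 10*time.Second); err != nil {
		fmt.Println(err)
	}
}
```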
[0] https://www.samba.org/ftp/tridge/misc/french_cafe.txt
The author's mention of a lawsuit for not following an RFC is insane.
The nasty surprise was doing the server side (for a hobby project): many layers. Luckily I found out that something called DavTest exists (it's included with Debian), so testing the most basic things wasn't too bad.
Then I tried mounting from Windows and ran into a bunch of small issues (IIRC you need to support locking; see the OPTIONS sketch below), and got it to mount before noticing notes about a 50 MB file-size limit by default (raisable... but yeah).
It's a shame it's all such a fragmented hodge-podge because adding SMB (the only other "universal" protocol) to an application server is just way too much complexity.
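On the Windows point: the built-in WebClient issues an OPTIONS probe and generally expects the server to advertise class 2 (locking) before it will mount a share. A hedged sketch of the kind of OPTIONS response that tends to satisfy it (the exact header set is an assumption, not something from the thread):

```go
package main

import "net/http"

// optionsHandler answers the OPTIONS probe Windows' WebClient sends before
// mounting. "DAV: 1, 2" advertises class 2, i.e. LOCK/UNLOCK support.
func optionsHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("DAV", "1, 2")
	w.Header().Set("MS-Author-Via", "DAV") // nudges Microsoft clients to use DAV
	w.Header().Set("Allow",
		"OPTIONS, GET, HEAD, PUT, DELETE, PROPFIND, PROPPATCH, MKCOL, COPY, MOVE, LOCK, UNLOCK")
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if r.Method == "OPTIONS" {
			optionsHandler(w, r)
			return
		}
		// ...the real PROPFIND/GET/PUT/LOCK handling would live here...
		http.Error(w, "not implemented in this sketch", http.StatusNotImplemented)
	})
	http.ListenAndServe(":8080", nil)
}
```

The default ~50 MB limit, by contrast, is enforced client-side by the WebClient service (the FileSizeLimitInBytes registry setting), so there is nothing a server can do about it beyond documenting the workaround.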
I created a test LMS in 2003 based on SCORM; at the time there really wasn't a good server for the standard... The main point was to be able to test the content which the company was hired to generate. I didn't implement several points of functionality that I just didn't need, didn't care about, and that would have been difficult to implement.
That testing LMS turned into an actual product used by several aerospace companies (a few F100s, etc.) and was in production for over 15 years. I remain friends with the company owner... It was about 12 years later that someone had an actual course that used one of the features that wasn't implemented... and even then, they only implemented it halfway, just as much as was needed, because it would have been difficult to do the rest.
Also, the overwrite option was never used. You'd expect a client to copy a file, get an error if the target exists, ask the user if it's OK, then send the same copy with the overwrite flag set to true (the flow sketched below). In reality, clients do all the steps manually and delete the target before copying.
It was satisfying seeing it work at the end, but you really need to test all the clients in addition to just implementing the standard.
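For reference, the spec'd flow with the Overwrite header looks like this from the client side (a sketch in Go, standard library; the URLs are placeholders):

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	src := "https://dav.example.com/files/report.odt"
	dst := "https://dav.example.com/files/archive/report.odt"

	req, err := http.NewRequest("COPY", src, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Destination", dst)
	// "F" means: fail with 412 Precondition Failed if the destination already
	// exists, instead of silently replacing it. Retrying with "T" overwrites.
	req.Header.Set("Overwrite", "F")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	switch resp.StatusCode {
	case http.StatusPreconditionFailed: // ask the user, then retry with Overwrite: T
		fmt.Println("destination already exists")
	case http.StatusCreated, http.StatusNoContent:
		fmt.Println("copied")
	default:
		fmt.Println("unexpected status:", resp.Status)
	}
}
```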
I wish WebDAV had a better reputation; it's already better than S3 for most use cases, but it feels like S3 won that space. I would much have preferred new versions of the protocol being made to address its quirks, like what happened with other successful protocols like HTTP, OAuth, ...