Static Sites with Python, Uv, Caddy, and Docker
Posted 5 months ago · Active 4 months ago · Source: nkantar.com
Key topics: Static Site Generation, Docker, Web Development
The article discusses using Python, uv, Caddy, and Docker to create a static site, sparking debate among HN commenters about the complexity and over-engineering of the approach.
Snapshot generated from the HN discussion
Discussion activity: very active. First comment 3h after posting; peak of 78 comments in Day 2; average of 22 comments per period. Comment distribution based on 88 loaded comments.
Key moments
- Story posted: Aug 22, 2025 at 11:15 AM EDT (5 months ago)
- First comment: Aug 22, 2025 at 2:04 PM EDT (3h after posting)
- Peak activity: 78 comments in Day 2, the hottest window of the conversation
- Latest activity: Sep 2, 2025 at 3:36 AM EDT (4 months ago)
ID: 44985653 · Type: story · Last synced: 11/20/2025, 3:16:55 PM
For making a static site that you're personally deploying, exactly why is Docker required? And if the Docker process will have to bring in an entire Linux image anyway, why is obtaining Python separately better than using a Python provided by the image? And given that we've created an entire isolated container with an explicitly installed Python, why is running a Python script via `uv` better than running it via `python`? Why are we also setting up a virtual environment if we have this container already?
Since we're already making a `pyproject.toml` so that uv knows what the dependencies are, we could just as well make a wheel, use e.g. pipx to install it locally (no container required) and run the program's own entry point. (Or use someone else's SSG, permanently installed the same way. Which is what I'm doing.)
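For the curious, that alternative is only a few commands. This is a sketch; the project name and entry point are hypothetical, assuming the `pyproject.toml` defines a console script:

```
# Build a wheel (requires the `build` package; `uv build` also works)
python -m build
# Install it into an isolated environment -- no container required
pipx install dist/mysite-0.1.0-py3-none-any.whl
# Run the program's own entry point to regenerate the site
mysite-build --output ./public
```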
Broadly speaking, I explicitly wanted to stay in the Coolify world. Coolify is a self-hostable PaaS platform—though I use the Cloud service, as I mentioned—and I really like the abstraction it provides. I haven’t had to SSH into my server for anything since I set it up—I just add repos through the web UI and things deploy and show up in my browser.
Yes, static sites certainly could—and arguably even should—be done way simpler than this. But I have other things I want to deploy on the same infrastructure, things that aren’t static sites, and for which containers make a whole lot more sense. Simplicity can be “each thing is simple in isolation”, but it can also be “all things are consistent with each other”, and in this case I chose the latter.
If this standardization on this kind of abstraction weren’t a priority, this would indeed be a pretty inefficient way of doing this. In fact, I arrived at my current setup by doing what you suggested—setting up a server without containers, building sites directly on it, and serving them from a single reverse proxy instance—and the amount of automation I found myself writing was a bit tedious. The final nail in the coffin for that approach was realizing I’d have to solve web apps with multiple processes in some other way regardless.
So what you're saying is that "Static sites with Python, uv, Caddy, and Docker" wasn't the overall goal. You want to stay in Coolify world, where most things are a container image.
It just so happens that a container can be just a statically-served site, and this is a pattern to do it.
By treating everything as a container, you get a lot of simplicity and flexibility.
Docker etc is overkill for the static case, but useful for the general case.
I too was skeptical of the motivation until reading this. Given that Coolify requirement, your solution (build static files in one container, deploy with Caddy in another) seems quite sensible.
Static sites with HTML, CSS, Apache and Linux.
So you just solve all problems with advanced tools, no matter how simple the problem. You get into tech by learning how to use a chainsaw because it's so powerful and you wanted to cut down a tree; now you need to cut some butter for toast? Chainsaw!
Using a Ferrari to deliver the milk is how I've heard it said.
I mostly work in a different domain than webdev, but I feel strongly about decoupling the base technologies of your OS and your application as much as possible.
It's one thing if you are using a Linux image and choose to grab its Python package, and another if its boot system is built around the specific version of Python that ships with the OS. The goal is that if you later need to update Python or the OS, they're not tethered together.
With Docker, you usually try to pick the base image that is closest to your use case.
For instance, if you intend to run a Python 3.10 script in your final image, you'll start from a python:3.10 base image, since it contains your most important requirement.
But in the article, the author starts from a "uv" image, and then in the build process (described in their Dockerfile) installs the required version of Python.
Which, for many people writing Dockerfiles daily (me included), feels weird: if you intend to run Python, why not start from an image that includes your Python version?
Then, if we look at what uv actually is, it kind of makes sense: it's an app written in Rust (so not a Python app) that can be seen as a _docker for Python apps_.
And since the author's focus seemed to be on uv, we can see why they started from a uv image.
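To make the contrast concrete, here is a rough sketch of the two starting points; the image tags are illustrative, not necessarily the article's exact ones:

```
# Conventional: start from an image that already contains your Python,
# and drop uv in as a static binary if you want it.
FROM python:3.10-slim
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/

# The article's approach (as I read it): start from a uv image and let
# uv install the interpreter during the build.
# FROM ghcr.io/astral-sh/uv:debian
# RUN uv python install 3.10
```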
One advantage is being able to run the Docker image anywhere with the same build. Two runs of the same build are not always identical, but the Docker image will be.
Once the basic functionality was worked out in JS (with a simple JSON schema and Lorem Ipsum text), I created a Samba share on the network and mapped it to her computer.
Her first post was an edit of the Lorem Ipsum markdown and some new images, and from then on she had a pattern to follow: add a directory with a markdown file and assets.
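Presumably the convention looks something like this (a made-up example):

```
posts/
  2025-08-garden-update/
    index.md
    photo-01.jpg
    photo-02.jpg
```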
The NAS runs a Go program that generates a JSON file for the static site's JS and starts an HTTP server for testing. She can access the HTTP server over the network to preview the static site. If it looks OK, she can trigger a git commit and push (with a click). She didn't have to learn anything (like Hugo or Jinja), because markdown is fairly obvious. She has exactly the website she wants.
Of course, I had more work to do initially, coming up with the static site, the JSON schema, and the Go generator, but that was done over a week of evenings. She's been happily adding to it regularly without my involvement.
And this, my friends, is why I (a mere DevOps / Cloud guy) "vibe code" with Claude.
At the very least, assuming you want to keep it somewhat techy, I would set up a repository with a Hugo config, the theme pulled in as a module, and CI to auto-deploy it. Make it so the repository is just her markdown content, some CSS/JS overrides if needed, and a hugo.toml, along the lines of the sketch below.
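A minimal sketch of such a `hugo.toml`; the theme path is a placeholder:

```
baseURL = "https://example.com/"
title = "Her Site"

[module]
  [[module.imports]]
    # The theme is pulled in as a Hugo module instead of a git submodule
    path = "github.com/someone/some-hugo-theme"
```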
Or just set up a WordPress or similar site for her.
This project seems more like something you'd do to demonstrate your skills with all these tools. They do have uses in a business/for-profit context, working with groups, but they have absolutely no use or place hosting a personal static website. Unless you're doing it for kicks and enjoy useless complexity; that's fair. No accounting for taste in recreation.
Also, starting any comment with an unqualified "The best way..." is probably not the best way to engage in meaningful dialog.
1. https://news.ycombinator.com/item?id=44993875
This level of complexity would have been acceptable if this were about deploying one's own Netlify-type service for personal use. Otherwise, it's just way too complicated.
I'm currently working on a Django app, complete with a database, a caching layer, a reverse proxy, a separate API service, etc., and it's still much simpler to deploy than this.
It might have gotten better since, but back when I was running a Wordpress install it was a constant battle to keep bots out.
The tools selected are faster than their more mainstream counterparts — but since it's a static site anyway, the pre-build side of the toolchain is more about "nice dev ux" and the post-build is more about "really fast to load and read".
So I can't agree.
"helps contextual the blog" mean?
It appears, for the verb, you meant: "frobnicate".
Taken from the Coolify website (which OP uses for hosting):
> Brag About It. You can impress anyone by saying that you self-host in the Cloud. They will definitely be amazed.
This is the result of a hyper-consumerist, post-Protestant culture in America and the rest of the English-speaking countries.
My Ops brain says "Taken in a vacuum, yes." However, if you make other things that are not static, put them into containers, and run those containers on a server, keeping the CI/CD process consistent makes absolute sense.
We run static sites in containers at my company for the same reason. We have a Kubernetes cluster with DNS updating, cert grabbing, and Prometheus monitoring all in place, so we serve static sites from an nginx container.
Same as how it's good to be able to easily run the exact same test suite in both dev and CI.
Even if you aren't an expert, it is trivial these days to copy/paste the Dockerfile into ChatGPT and ask it to optimize it or suggest improvements; it will then explain them to you.
uv's easy dependency definition made these much easier to manage. My previous site was org exported to HTML and took much more effort.
(With the conceit that the website is a "notebook", I call this file "bind".)
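For readers unfamiliar with uv, the easy dependency definition being praised here is presumably PEP 723 inline script metadata; a hypothetical header for such a file (the dependencies are invented):

```
# /// script
# requires-python = ">=3.12"
# dependencies = ["markdown", "jinja2"]
# ///
# `uv run` reads the header above and installs the listed dependencies
# into an ephemeral environment before executing the script.
import jinja2
import markdown
```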
The org mode one is at https://explog.in/config.html.
I had the opposite reaction when I read this post: I thought it was a very neat, clean and effective way to solve this particular problem - one that took advantage of an excellent stack of software - Caddy, Docker, uv, Plausible, Coolify - and used them all to their advantage.
Ignoring caching (which it sounds like the author is going to fix anyway, see their other comments) this is an excellent Dockerfile!
8 lines is all it takes. Nice. And the author then did us the favor of writing up a detailed explanation of every one of them. I learned a few useful new tricks from this, particularly around using Caddy with Plausible.
This one didn't strike me as over-engineering: I saw it as someone who has thought extremely carefully about their stack, figured out a lightweight pattern that uses each of the tools in that stack as effectively as possible, and then documented their setup in the perfect amount of detail.
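For readers who haven't clicked through: I won't reproduce the author's exact file, but the general shape of such a two-stage Dockerfile is roughly this (the build script and output path here are invented):

```
# Stage 1: build the static files with uv
FROM ghcr.io/astral-sh/uv:debian AS build
WORKDIR /app
COPY . .
RUN uv run build.py   # hypothetical SSG script writing to ./output

# Stage 2: serve them with Caddy
FROM caddy:2
COPY Caddyfile /etc/caddy/Caddyfile
COPY --from=build /app/output /srv
```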
I have ~60 static websites deployed on a single small machine at zero marginal cost. I use nginx, but I could use Caddy just the same. With this "lightweight pattern" I'd be running 60-and-counting Docker containers for no reason.
Also their site isn't entirely static: they're using Caddy to proxy specific paths to plausible.io for analytics.
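That proxying is only a few lines of Caddyfile. A sketch following Plausible's generic proxying docs; the site address is a placeholder and the author's exact paths may differ:

```
example.com {
	root * /srv
	file_server

	# Forward the analytics script and the event endpoint to plausible.io
	handle /js/script.js {
		reverse_proxy https://plausible.io {
			header_up Host plausible.io
		}
	}
	handle /api/event {
		reverse_proxy https://plausible.io {
			header_up Host plausible.io
		}
	}
}
```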
That doesn't mean that a container-based Caddy solution in an existing container-based deployment environment built around Coolify isn't a reasonable way to solve this.
https://raw.githubusercontent.com/jgbrwn/my-upc/refs/heads/m...
[0] https://github.com/lipanski/docker-static-website
https://github.com/kissgyorgy/redbean-docker
People are mocking you without even trying to understand what you did and why, or the actual work that went into writing the article.
It's not every day that someone takes the time to explain every layer of their Dockerfile.
Even if I would have gone a different way, I found it interesting, and it also forced me to dig deeper into uv, which I had wrongly assumed I understood.
Thank you for writing it, and please don't let the bad comments have any significant impact on you (maybe just put a big disclaimer in your intro next time so they'll find someone else to pick on).
I’m glad this led to some learning for you! Some of the comments did that for me, which is great.
https://news.ycombinator.com/item?id=43555996
- Set up a Docker container
- Install Python 3.x (specified by you)
- Install Poetry x.y (specified by you)
- Set up a README
- Set up a gitignore
- Install black, isort, flake8, tox, pre-commit, mypy, commitizen, darglint, xdoctest, and pytest, with src and tests directories, with a lot of input from you, with pinned dependencies
- Add nginx / caddy / traefik with GitHub Actions CI/CD and templates for pull requests, issues, feature requests, questions, etc.
- Verify everything works by running all commands in Docker
- Give you a downloadable version of this production-grade Python application
- As dependencies update, this whole pipeline updates immediately
If you don't want to have multiple `COPY`s, you can add a `.dockerignore` file (https://docs.docker.com/build/concepts/context/#dockerignore...) with the `COPY . .` directive and effectively configure an allowlist of paths, e.g.,
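```
# .dockerignore -- exclude everything by default, then re-include only
# what the image build needs (paths here are illustrative)
*
!pyproject.toml
!uv.lock
!content/
!templates/
```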
Oh wait …
Also while using Kubernetes, please use event-driven PubSub mechanisms to serve files for added state-of-the-art points.
/pun
html file -> ftp -> WWW
html file -> mv /var/www/public -> WWW
Possibly SSG -> html -> etc.