Jeffgeerling.com Has Been Migrated to Hugo
Key topics
The blogging world is buzzing as Jeff Geerling reveals his site's migration to Hugo, sparking a lively discussion among fellow bloggers who've made the same switch. While some, like unsungNovelty, are thriving with their custom Hugo themes, others, like dijit, are grappling with issues stemming from off-the-shelf themes and versioning woes. Commenters chimed in with debugging suggestions, including leveraging AI coding tools like Claude Code and pinning versions in CI configs to avoid compatibility headaches. As the conversation unfolded, a consensus emerged: with the right approach, Hugo can be a powerful blogging platform.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 43m after posting
- Peak period: 103 comments in 0-6h
- Avg per period: 20 comments
Based on 160 loaded comments
Key moments
1. Story posted: Jan 4, 2026 at 7:57 AM EST (4d ago)
2. First comment: Jan 4, 2026 at 8:40 AM EST (43m after posting)
3. Peak activity: 103 comments in the 0-6h window, the hottest period of the conversation
4. Latest activity: Jan 8, 2026 at 6:00 AM EST (10h ago)
I regret it.
I decided to use an off-the-shelf theme, but it didn't quite meet my needs, so I forked it; as it happens, Hugo breaks userland relatively often, and a complex theme like the one I have requires a lot of maintenance. Like... a lot.
Now I can't really justify the time investment of fixing it so I just don't post anymore, the site won't even compile. In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
So, advice: submit the binary you used to generate the site to source control. I know git isn't the best at binary files, but I promise you'll thank me at some point.
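A minimal sketch of that advice, assuming a Unix shell and an illustrative `tools/` directory inside the site repo:

```sh
# Vendor the exact generator binary next to the site source.
# (As noted above, git won't delta-compress it well, but it will be there.)
mkdir -p tools
cp "$(command -v hugo)" tools/hugo
git add tools/hugo
git commit -m "Pin the exact Hugo binary used to build the site"
```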
I’ve had amazing success debugging compile errors with Claude Code.
Perhaps a coding agent could help you get it going again?
* I have forked a public repository that has kept up with upstream (i.e., lots of example code to draw from)
* Upstream is publishing documentation on what's changing
* The errors are somewhat google-able
* Can be done in a VM and thrown away
* Limited attack surface anyway.
I think you're downvoted because the comment comes across as glib and handwavy (or not moving the discussion forward... maybe?), and if this were a year ago I would probably have argued against it... but I think Claude Code can definitely help with this.
It just didn't exist in its current form back in ~2023, or whenever it was that I originally started having issues.
---
That said: it shouldn't be necessary. As others in this thread have articulated (well, imo), sometimes software is "done", and Hugo could be "done" software, except it's not; so the onus is on the operator to pin whichever version counts as "done" for them... which is not what you'd expect.
Yep. I missed the mark.
OP seemed down and out about their blog being broken. So I was trying to put the idea across as not something to be afraid of.
I should’ve just said it - LLMs are perfect for this use case.
I am the parent, and I am indeed down about it. :P
It's a fair fix today, but back when it happened it wasn't available; and anyway, as I mentioned, it shouldn't have been necessary.
A) Low-stakes application with
B) nearly no attack surface that
C) you don’t use consistently enough to keep in your head, but
D) is simple enough for an experienced software developer to do a quick sanity check on and run it to see if it works.
Hell, do it in a sandbox if you feel better about it.
If it was a Django/Node/rails/Laravel/…Phoenix… (sorry, I’ve been out of my 12+ years web dev career a short 4 years and suddenly realized I can only remember like 4 server-side frameworks/environments now) application, something that would run on other people’s devices, or really anything else that produces an executable output, then yeah fuck that vibe coding bullshit. But unless you’ve got that thing spitting out an SPA for you, then I say go for it.
Also, you know that you can do a binary search for the version that works for you? 0.154.0, 0.77.0, 0.115.0 ... (had to do it once myself)
[0]: https://github.com/oslc-op/website/blob/9b63c72dbb28c2d3733c...
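For anyone wanting to script that bisect, here is a hedged sketch, assuming Go is installed and that `go run github.com/gohugoio/hugo@<tag>` can build the site (non-extended builds only, and not every `v0.X.0` tag exists, so treat the bounds and tags as examples):

```sh
#!/bin/sh
# Binary-search Hugo minor versions for the last one that still builds
# the site in the current directory.
try() { go run "github.com/gohugoio/hugo@v0.$1.0" >/dev/null 2>&1; }
lo=77    # a known-good minor version (example)
hi=154   # a known-bad minor version (example)
while [ $((hi - lo)) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  if try "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "last working: v0.$lo.0, first broken: v0.$hi.0"
```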
Alternatively there's apparently some nix flakes that have been developed.
So, there's options.
I just recommend pinning your version and being intentional about upgrades.
Oh definitely. How can you suggest adding a binary to a git repository? It's a bad idea on many levels: it bloats the repository by several orders of magnitude, and it locks you to the chosen architecture and OS. Nope, nope, nope.
Unless the new version of the software includes some feature I need, I can be totally fine just running an old version forever. I could just write down the version of the SSG my site builds with (or commit it to source control) and move on with my life. It’ll work as long as operating systems and CPU architectures/whatever don’t change too much (and in the worst case scenario, I’m sure the tech exists to emulate whatever conditions it needs to run). Some software is already ‘finished’ and there’s no need to update it, ever.
This is how most build systems work: for example, you set a "rust-version" in Cargo.toml and only bump it when you explicitly want to. That way a fresh checkout will still use the older version.
Once set up, all you have to do is:
My devshell can be found here and is dead simple: https://github.com/stusmall/stuartsmall.com/blob/main/defaul...
I used Zola for my SSG and can't think of the last breaking change I've hit. I just use the pattern of locked nix devshells for everything by default. The extra tools are used for processing images or cooklang files.
For Hugo, there is Hugo Version Manager (hvm)[0], a project maintained by Hugo contributor Joe Mooring. While the way it works isn't precisely what you described, it may come close enough.
[0]: https://github.com/jmooring/hvm
I say this as someone who uses Hugo and is regularly burned (singed) by breaking changes.
Pinning your version is great until you trip across a bug (usually rendering, in my case) and need to upgrade to get rid of it. There goes a few hours. I won’t even mention the horror of needing a test suite to make sure the rendering of your old pages hasn’t changed significantly. (I ended up with large portions of text in a code block, never tracked the root cause down… probably something to do with too much indentation inside a bulleted list. It didn’t render that way several years before, though.)
0: not all, I use cargo to manage the rust toolchain
Had the same problem. Binary search is the latest trick people use.
For SSG there's not much point in upgrading if everything works, and planned migration beats the churn in this case.
You can just print the Hugo version in an HTML comment to track it in git.
Right now, you are pretty much locked into the theme (and its version) when you set up your website for the first time.
No need for docker.
If I used macOS, then Hugo was probably very old, since I often forget to update brew packages and end up running very old software.
But, that's what I thought to do first also.
In the end, it becomes not worth the hassle, and spending time fixing it means that whatever I was going to write gets pushed out of my head, and it's very difficult to even bother.
I'll probably go back to Svbtle.
Hugo-papermod, the most famous Hugo theme, doesn't support the latest 10 releases of Hugo.
So, everyone using it is locked into using an old version (e.g. via Docker).
Nobody can point to a reason why it's a good idea for a site with any interactivity now.
All the supporters here are all the same: "I had to do a whole bunch of mental gymnastics and compromises to get <basic server side site feature> but it's worth it!" But they don't say why it was worth it, beyond "it's easy now <after lots of work costs sunk>".
When you try to get at why they did it in the first place, it's universally some variation on "I got fed up with <some large server side package> so took the nuclear SSG route <and then had to eventually rewrite or get someone else's servers involved again>"
Part of this is a me problem: a personal website should be owned by the person, IMO. A lot of people are fine to let other people own parts of their personal websites, and SSGs encourage that. What even is a personal website if it's a theme that looks like someone else's, hosted and owned on someone else's server - why not just use Facebook at that point?!
1: https://www.vice.com/en/article/this-solar-powered-low-tech-...
This is the part I'm struggling with. That's the view I held from 2016 - 2024. Practically though, it's only true if you want a leaflet website with 0 interactivity.
If you want _any_ interactivity at all (like, _any_ written data of any kind, even server or visitor logs) then you need a server or a 3rd party.
This means for 99% of personal websites with an SSG, you need a real server or a 3rd party service.
When SSGs first came around (2010 - 2015) compute was getting expensive, server sides were getting big and complex, bot traffic solutions were lame, and all the big tech companies started offering free static hosting because it was an easy free thing to offer.
Compare this to now, 2026: it's apparently nothing special to handle the Hacker News front page on free or cheap compute. Things like Deno, Bun, even Go and Python make writing small, modern server sides so much quicker, easier, and safer. Cloudflare and/or crowdsec can cover 99% of bot and traffic issues. It's possible to get multiple free compute instances with gigabytes of RAM now.
I didn't mean to imply there's some sinister plot of people maliciously encouraging people to use SSGs to steal their stuff, but that's the reality that modern personal webdev has sleepwalked into. SSGs were first sold to make things better performing and easier than things were at the time. Pretty much any "server anywhere" you own now will be able to run a handwritten server doing SSR markdown -> HTML now.
So why force yourself to start entertaining ideas like making your visitors download multi-megabyte client-side index files to implement search, or embedded iframes and massive external JS libraries for things like comment sections? Easy-looking SSG patterns like that typically break all the stuff required to keep the web open and equal, like screen readers, low bandwidth connections and privacy. (Obviously SSR doesn't implicitly solve these, but many of these things were originally conceived with SSR in mind and so are naturally more compatible.)
Ask anyone who's been in and out of web dev for more than 15 years to really critically think about SSGs in depth, and I think they'll conclude they offer a complete solution for maybe 1% of websites, yet they're recommended in 99% of places as the only worthy way to do websites now. But when you pick it apart and try it, you end up in Jeff's position: statically rendered pages (the easy bit) and a TODO list of compromising options for basic interactivity. In 5 years' time, he'll have complex SSG pipelines running almost 24/7, or a complex mesh of dependencies on external services that are constantly changing or trying to charge him more to deal with his own creations.
I really hope I'm wrong.
My needs for a site are pretty simple, so I might just go with the custom-built one to be honest.
If it breaks, I can just go look in the mirror for the culprit =)
Looking at the comments here a common pain (that I share) is config and code drift, or just losing your config file and being unable to publish a new version without re-doing everything.
I made a version where everything, including the HTML templates and CSS, is built in to a single static Go executable, no configuration files, everything is hard-coded.
This way as long as I have the specific executable version and the source markdown files, I can deterministically replicate my blog output structure.
The source is a directory in my Obsidian vault, and the setup supports Obsidian-style front-matter.
I just assumed static website generators would be stable, but well, there's always something that breaks. It's a terrible experience for someone who just wants to use the generator to, well, generate a website, rather than tinker with it as a hobby.
I'm in the process of porting my website to PHP ... but that project hasn't gone anywhere because currently Jekyll works for me ;)
>So, advice: submit the binary you used to generate the site to source control.
If only it were so easy. Jekyll is written in Ruby and comes with a load of dependencies. As far as I understand Hugo is the same - just Go instead. I'm neither a Go nor a Ruby dev so I have _ZERO_ idea how to actually install those properly. I just do a `brew install go/ruby` and pray it works.
I've been using 4.3 to 4.4 without much issue; granted, the sites I generate are simple.
[1]: https://jekyllrb.com/news/
I maintained a personal fork of Zola for my site (and a couple of others), and am content to just identify the Git repository and revision that’s used.
Zola updates broke my site a few times, quite apart from my patches not cleanly rebasing. I kept on the treadmill for a while, initially because of a couple of new features I did want, but then decided it wasn’t necessary. You don’t need to run the latest version; old is fine.
—⁂—
One piece of advice I would give for people updating their SSG: build your site with the old and new versions of the SSG, and diff the directories, to avoid regressions.
If there are dynamic values, normalise both builds before diffing: for example, if you have timestamp-based cachebusting, zero all such timestamps with something like `sed -i 's/?t=[0-9]*/?t=0/g' **/*.html`. Otherwise regressions may be masked.
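Put together, the check might look like this sketch (assuming an SSG with an output-directory flag, e.g. Hugo's `--destination` or Zola's `--output-dir`; `ssg-old` and `ssg-new` stand in for the two binaries):

```sh
# Build the site with the old and new SSG versions into separate trees.
ssg-old build --output-dir public-old
ssg-new build --output-dir public-new

# Normalise dynamic values (here: timestamp cachebusters) in both trees.
find public-old public-new -type f -name '*.html' \
  -exec sed -i 's/?t=[0-9]*/?t=0/g' {} +

# Any remaining difference is a real change worth inspecting.
diff -r public-old public-new
```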
I caught breakages a couple of times this way. Once was due to Zola changing how shortcodes or Markdown worked, which I otherwise might not have noticed. (Frankly, Markdown is horrible for things like this, and Zola’s shortcodes badly-designed; but really it’s mostly Markdown’s fault.)
Better than committing the binary to source control is to:
1. encode the transformations you need an SSG to perform as plain text source code that a web browser is capable of executing (i.e., an "HTML tool"[1])
2. publish that description as another piece of content on your website
Running the static site generator to create a new post[2], then, doesn't need to involve anything more than using your browser to hit /new.html (or whatever) on your current site, clicking the button for the type=file input on that page, using the browser file picker to open the directory where the source to your static site lives, and then saving the resulting ZIP somewhere so the contents can be copied to whatever host you're using.
Previously <https://crussell.ichi.city/pager.app.htm>
1. <https://simonwillison.net/2025/Dec/10/html-tools/>
2. in fact, there's nothing stopping you from, say, putting a textarea on that page and typing out your post right there, before running the SSG
Pretty sure the version of Hugo used to generate a site is included in metadata in the generated output.
If you have a copy of the site from when it last worked, then assuming my above memory is correct you should be able to get the exact version number from that. :)
No need for the entire binary.
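If an old copy of the generated site is still around, something like this can fish the version back out (a sketch assuming the theme kept Hugo's default generator meta tag, and that the old output lives in an illustrative `old-site/` directory):

```sh
# The default generator tag looks like:
#   <meta name="generator" content="Hugo 0.xxx.x">
grep -rho 'Hugo [0-9][0-9.]*' old-site/ | sort -u
```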
Just put `go run github.com/gohugoio/hugo@vX.Y.Z "$@"` into a `hugo.sh` script or similar that's in source control, and then run that script instead of the Hugo binary.
You'll need Go installed, but it's incredibly backwards compatible, so updating to newer Go versions is very unlikely to break running the old Hugo version.
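The complete script is tiny; a sketch with an example version tag:

```sh
#!/bin/sh
# hugo.sh -- run a pinned Hugo via the Go toolchain.
exec go run github.com/gohugoio/hugo@v0.115.0 "$@"
```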
So I have a fixed Hugo version that I know works.
And when I say "solved": I actually never had the issue, because, since I have no reason to upgrade Hugo, I never had to change my Docker image and never had the opportunity to risk breaking my theme.
Granted, mine is not sophisticated at all, and simple by design. But I'm curious what kind of issues pop up.
> Now I can't really justify the time investment of fixing it so I just don't post anymore, the site won't even compile. In theory I could use an old version of Hugo, but I have no idea when it broke, so how far do I go back?
I've had the same issues as you, and yes, I agree that pinning a version is very important for Hugo.
It's more useful for once-and-done throwaway sites that need some form of structure that a static site generator can provide.
Or use HVM and submit the .hvm file (which is just a text file with the Hugo version that you use)
[0]: https://reederapp.com
[1]: https://1password.com
[2]: https://github.com/quoid/userscripts
Don't know what it is either, but I'd like to go off-topic and remember with fondness the time when you could subscribe to RSS feeds directly in Safari. Google Reader was replaceable; a direct integration into the browser was not.
And for a short time, RSS was the bee's knees across the entire Internet. Apple had the best support for it, and almost put NetNewsWire out to pasture, until they just removed all baked in RSS functionality, entirely :(
But I use Reeder across Mac, iPad, and iPhone to keep up with feeds.
Using Zola's GitHub actions to test/build and deploy to GitHub pages too.
I was thinking of making a GitHub action that uploaded the image from a given branch, deleted it, set the URL, and finally merged only the md files to main.
Or do you just check in images to GitHub and call it a day?
Usually you just store the images in the same git repo as the markdown. How you initially host the static site once generated is up to you: it could be GitHub Pages, it could be a small VM somewhere on the internet, whatever. Once it is running somewhere, a CDN can cache things automatically. You probably don't need a CDN at all, but if you do, that's one easy way to do it.
I guess you can also just publish the generated site directly to a CDN – images, html, css, everything – with no backing server of your own, but I can't say I've ever tried that. I would still keep the entire site source (including images) in git.
The problem with storing binaries in Git is when they change frequently, since that will quickly bloat the repo. But, images that are part of the website will ~never change over time, so they don't really cause problems.
Even unchanging images can become a problem, bloating the repo with binary data. Yes, there are git tools for working around that.
All I meant was a way to avoid storing images in git, the rest is quite simple.
Sorry I tried to help? If that's the response I get for helping, good luck...
> All I meant was a way to avoid storing images in git, the rest is quite simple.
There is no good way to do that, and no way that I would recommend. Git is the correct solution, if that is where you are storing the markdown. No fancy git tools are required.
> Even unchanging images can soon become a problem causing the repo to bloat with binary data.
Data that is essential to the repo is not bloat.
Working Copy (git for iPad) handles submodules reasonably well; I have a few repos I'm working on cloned there and leave the others out, so I don't use much space.
I was frustrated that (because my posts are less frequent) changes in Hugo and my local machine could lead to changes in what is generated.
So I attached a web hook from my website's GitHub repo to trigger an AWS Lambda which, on merge to main, automatically pulled in the repo + version-locked Hugo + themes. It then did the static site build in-Lambda and uploaded the result to the S3 bucket that backs my website.
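The core of such a build step might look like this sketch (the repo URL, version tag, and bucket name are placeholders; `aws s3 sync` is the standard AWS CLI command):

```sh
#!/bin/sh
# On merge to main: fetch the site source, build it with a
# version-locked Hugo, and publish the output to the backing S3 bucket.
git clone --depth 1 https://example.com/me/my-site.git site
cd site || exit 1
go run github.com/gohugoio/hugo@v0.115.0   # version-locked Hugo build
aws s3 sync public/ s3://my-site-bucket/ --delete
```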
This created a setup that now I can publish to my website from any machine with the ability to edit my git repo. I found it a wonderful mix of WordPress-like ability to edit my site anywhere along with assurance that there's nothing that can technically fail* (well, the failure would likely, ultimately block the deploy, but I made copies of my dependencies where I could, so very unlikely).
But really, the main thing I love is not maintaining anything here... I go months without any concern about whether the website functions... unlike every WordPress or similar site I help my friends run.
I have found it an absolute joy to maintain this little piece of 'machinery' for my website. The best part is that I understand every line of code in it. Every line of code, including all the HTML and CSS, is handcrafted. This gives me two benefits. It helps me maintain a sense of aesthetics in every byte that makes up the website. Further, adding a new feature or section to the site is usually quite quick.
I built the generator as a set of layered, reusable functions, so most new features amount to writing a tiny higher level function that calls the existing ones. For example, last month I wanted to add a 'backlinks' page listing other pages on the web that link to my posts and it took me only about 40 lines of new Lisp code. It took less than 15 minutes from wishing for it to publishing the new section.
Over the years this little hobby project has become quite stable and no longer needs much tinkering. It mostly stays out of the way and lets me focus on writing, which I think is what really matters.
A neat middle ground between "string replacements" and "full-blown templating" is doing something like what hiccup introduced: using built-in data structures as the template. Hiccup looks something like this:
And you get both the power of templates, something easier than a "templating engine", and the extra benefit of being able to use your normal programming language functions to build the "templates".
I also implemented something similar myself (called niccup) that also does the whole "data to HTML" shebang, but with Nix and only built-in Nix types. So for my own website/blog, I basically do things like this:
And it's all "native" Nix, but "compiles" to HTML at build time, great for static websites :)
Thank you. That was, in fact, the inspiration behind writing my own in CL.
You list all the links to posts on the landing page: what if you have 1000 or 2000 posts? Have you thought of paginating them?
I created a test page with 2000 randomly generated entries here: <https://susam.net/code/test/2k.html>. Its actual size is about 240 kB and the compressed transfer size is about 140 kB.
It doesn't seem too bad, so I'll likely not introduce pagination, even in the unlikely event that I manage to write over a thousand posts. One benefit of having everything listed on the same page is that I can easily do a string search to find my old posts and visit them.
How large does the canvas need to get before pagination makes sense?
Modern websites are enormous in terms of how much needs to be loaded into memory- sure, not all of it is part of the rendered document, but is there a limit to the canvas size?
I'm thinking you could probably get to 100,000+ entries and still be able to use Ctrl+F on the site in a responsive way, since even at 100,000+ entries you're still only at about 10% of Facebook's "wall" application page. (Without additional "infinite scroll" entries.)
The comment form is implemented as a server-side program using Common Lisp and Hunchentoot. So this is the only part of the website that is not static. The server-side program accepts each comment and writes to a text file for manual review, so I don't have to worry about spam, cross-site scripting, or irrelevant comments. I usually review the comments on weekends and add them to my blog.
In the end, the comments live like normal content files in my source code directory just like the other blog posts and HTML pages do. My static site generator renders the comment pages along with the rest of the website. So in a way, my static site generator also has a static comment pages generator within it.
The next time I rebuilt the blog, the page "XXX" would render a loop of all the comments, ordered by timestamp, if any were present.
The CGI would send a "thanks for your comment" reply to the submitter and an email to myself. If the comment were spam I'd just delete the static file.
[0] https://indieweb.org/Webmention
[1] https://www.mollywhite.net/micro/entry/202511101848
I am now slowly rebuilding it in TypeScript/Bun and still finding a lot of LISP-isms, so it’s been a fun exercise and a reminder that we still don’t have a nice, fast, batteries-included LISP able to do HTML/XML transforms neatly (I tried Fennel, Julia, etc., and even added Markdown support to Joker over the years, but none of them felt quite right, and Babashka carries too much baggage).
If anyone knows about a good lightweight LISP/Scheme dialect that has baked in SQLite and HTML parsing support, can compile to native code and isn’t on https://taoofmac.com/space/dev/lisp, I’d love to know.
Why does bb carry too much baggage? Because it has useful libraries like the above?
If I were maintaining multiple large sites or working with many collaborators, I'd rely on something standard or extract and publish my SSG. For a personal site, I believe custom is often better.
The current generator is around 900 SLOC of Python and 700 of Pandoc Lua. The biggest threats to stability have been my own rewrites and experimentation, like porting from Clojure to Python. I have documented its history on my site: https://dbohdan.com/about#technical-history.
I had a vision of what I wanted the site to look like, but the org exporter had a default style it wanted. I spent more time ripping out all the cruft that the default org-html exporter insisted on adding than it would have taken to just write a new blog engine from scratch and I wish I had.
There's a way to set a custom export template, but I couldn't figure it out from the docs. I found and still do find the emacs/org docs to be poorly written for someone who doesn't already understand the emacs internals, and I wasn't willing to spend the time to become an emacs internals expert just to write a blog.
So I lived with a half-baked org->pandoc->html solution for a while but now I'm on Jekyll and much happier with my blogging experience.
Also have an RSS feed generator, and it can highlight code in most programming languages, which is important to me as I write posts about many languages.
I did try Hugo before I went on to implement my own, and I got a few things from Hugo into mine, but Hugo just looked far too overengineered for what I wanted (essentially, easy templating with markdown as the main language but able to include content from other files, either in raw HTML or also markdown, with each file being able to define variables that can be used in the templating language, which has support for the usual "expression language" constructs). I used the Go built-in parser for the expression language, so it was super easy to implement!
Used this for code syntax highlighting: https://github.com/alecthomas/chroma and this for markdown: https://github.com/russross/blackfriday
The rest I implemented myself in simple to read Go code.
I'm amazed there still isn't a decent, free, simple-to-host CMS solution with live page previews, a basic page builder, and simple hosting, though. Is there one?
There's https://demo.decapcms.org/ (previously Netlify CMS), which you install and run by adding a short JavaScript snippet to a page. It connects to GitHub directly to edit content. You can run it locally or online, but you need some hosting glue to connect to GitHub. Netlify provides this, but more options would be nice, and I think they limit how many total users can connect on free plans.
@geerlingguy Not a huge deal, but I noticed (scanning with https://www.checkbot.io/) that if you click a tag in a post, there's an unnecessary redirect causing a speed bump that's easy to fix, e.g. the post has a link to https://www.jeffgeerling.com/tags/drupal which then redirects to https://www.jeffgeerling.com/tags/drupal/.
Internal redirects are really easy to miss without checking with a tool because browsers aren't noisy about it. Lots of sites have unnecessary redirects from URLs that use http:// instead of https://, www vs no-www, and missing/extra trailing slashes, where with some configs you can get 2 or 3 redirects before you get to the destination page.
This Christmas, I redesigned my website [1] into modular "middlewares" with the idea that each middleware has its own assets and embed.FS included, so that I can e.g. use the editor to write markdown files with a dynamic backend for publishing and rendering, and then I just generate a static version of my website for the CDN of choice. All parts of my website (website, weblog, wiki, editor, etc) are modular this way and just dispatch routes on a shared servemux.
The markdown editor turned out to be a nice standalone project [2] and I customized the commonmark format a bit with a header for meta data like title, description, tags and a teaser image that is integrated with the HTML templates.
Considering that most of my content was just markdown files already, the migration was pretty quick, and it's database free so I can just copy the files somewhere to have a backup of everything, which is also very nice.
[1] https://cookie.engineer
[2] https://github.com/cookiengineer/golocron
Previously: <https://news.ycombinator.com/item?id=29384788>
SSGs are good for static sites with no interactivity or feedback. If you want interactivity or feedback, someone (you or a 3rd party service provider) is going to have to run a server.
If you're running a server anyway, it seems trivial to serve content dynamically generated from markdown - all an SSG pipeline adds is more dependencies and stuff to break.
I know there's a fair few big nerd blogs powered by static sites, but when you really consider the full stack and frequency of work that's being done or the number of 3rd party external services they're having to depend on, they'd have been better by many metrics if the nerds had just written themselves a custom backend from the start.
Jeff: I think you'll regret this. I think you'll waste 5 - 10 years trying to shoehorn in basic interactivity like comments, and land on a compromised solution.
I also used and managed Drupal and Joomla before I went to SSGs, and then finally realised there's a sensible midpoint for the pain you're feeling: you write/run a simple server that dynamically compiles your markdown - good ol' SSR. It's significantly lighter, cheaper and easier than Drupal, and lets you keep all the flexibility and features you need a server for. Don't cave to the "self-hosted tech was too hard so I took the easy route that forces me to also use 3rd party services instead" option.
SSGing your personal site is the first step to handing it over to 3rd party services entirely IMO.
Until you have enough visitors or evil AI bots scraping your site that it crashes, or, if you're using an auto-scaling provider, costs you real money.
The problem isn't in markdown→HTML conversion (which is pretty fast), it's that it's a first step in adding more bells and whistles, and before you know it, you're running a nextjs blog which requires server-side nodejs daemon so that your light/dark theme switch works as copy-pasted from stackoverflow.
For blogs, number of reads vs number of comments or other actions that require a server is probably on the order of 100:1 or 1000:1, even more if many of the page loads are bots/scrapers.
> SSGing your personal site is the first step to handing it over to 3rd party services entirely IMO.
Why? Your interactive/feedback parts can be a 10-line script as well, running on the same site where you'd run Drupal, Joomla, Wordpress, Django, or whatever.
Looks like Jeff plans to do exactly that: https://github.com/geerlingguy/jeffgeerling-com/issues/167
There have been multiple blog posts on HN from people who've received a hug of death and handled it fine with basically free or <$10/month VMs.
A couple of gigs of RAM and 2 cores can take viral posts and the associated bots. 99% of personal websites never go viral either.
> The problem isn't in markdown→HTML conversion (which is pretty fast), it's that it's a first step in adding more bells and whistles, and before you know it, you're running a nextjs blog which requires server-side nodejs daemon so that your light/dark theme switch works as copy-pasted from stackoverflow.
This is my exact argument against SSGs, and Jeff's post proves it: it's easy to use an SSG to generate web pages, but the moment you want comments, or any other bells and whistles, you do what Jeff's going to have to do and say you'll do it later, because there's no obvious easy solution that doesn't work against an SSG.
> Why? Your interactive/feedback parts can be a 10-line script as well, running on the same site where you'd run Drupal, Joomla, Wordpress, Django, or whatever.
EXACTLY! This is my point! Why not just SSR the markdown on the server you're already running?!
This is the opposite of what Jeff and 99% of other SSG users do, they switch to SSGs to get rid of dealing with servers, only to realise they need servers or third parties, but then they're sunk-cost-fallacied into their SSG by the time they realise.
The Markdown-to-templated-HTML pipeline code is the same whether it runs on each request or on content changes, so why not choose the one that's more efficient? Serving static HTML also means that the actually important part of my personal webpage (almost) never breaks when I'm not looking.
SSGs force people into particular ways of doing all the other parts of a website by depending on external stuff. This is often contrary to long term reliability, but nobody associates those challenges with the SSG that forced the external dependencies.
It becomes a sunk cost fallacy because people do what Jeff has done, they switch to an SSG in the promise of an easier website and proudly proclaim they're doing things the new best way. But they do the easy SSG bit (the content rendering) and then they create a TODO with all the compromised options for interactivity.
When they've got to a feature complete comparison, they've got a lot more dependencies and a lot less control/ownership, which inevitably leads to future frustrations.
The end destination for most nerdy personal website is a hand crafted minimal server with minimal to no dependencies.
The code surface with SSG + 1 or 2 small self-hosted OSS tools is much, much smaller than it ever was running Drupal or another CMS.
I really want to know because there is a Drupal 7 site that I need to migrate to something but I need good search on it (I’m using solr now).
When does this become 1 step forward with the SSG and 2 steps back with search solutions like this?
Nothing is perfect, but the above is really simple to host, is low maintenance, and easy to secure.
Sorry, I don't mean to come across as disagreeable. You're right, nothing is perfect, and this is obviously a workable and usable solution. My issue is if we analyse it beyond "it looks like it works", it starts to look like a slightly worse solution than what we already had.
Nothing wrong with moving backwards in some direction, as long as we can clearly point to some benefit we're getting elsewhere. My issues with SSGs is most of the benefits they offer end up undermined if you want any interactivity. This is a good example of that, as you end up compromising on build time and page load time compared to an SSR search solution.
You don't see how the server based solution is an order of magnitude more effort to maintain, monitor, optimize, and secure compared to hosting static files? Especially if search is a minor feature of your site, keeping the hosting simple can be a very reasonable tradeoff.
Lots of blogs/sites do fine with only category and tag filtering (most SSGs have this built in) without text search, and users can use a search engine if they need more.
Assuming 500 bytes of metadata + URL per blog post, a one megabyte index is enough for 2000 blog posts.
As already mentioned, you don't generate search result pages, because client side Javascript has been a thing for several decades already.
Your suggestion of converting markdown on every request also provides near zero value.
Writing a minimal server backend is also way easier if you separate it from the presentation part of the stack.
Based on https://news.ycombinator.com/item?id=46489563, it also seems like you fundamentally misunderstand the point. Interactivity is not the point. SSGs are used for publishing writing the same way PDF is used. Nobody sane thinks that they need a comment section in their PDFs.
Your 1 megabyte index file has just added over 2 seconds to your page load time in 30 different countries based on average internet speeds in 2024. Chuck in some pictures, an external comment library and your other SSG hacks, and you've just made your website practically unresponsive to a quarter of the planet and a bunch of other low powered devices.
Value is relative. The benefit of rendering markdown on every request is it makes it easier to make it dynamic, so you don't need to do SSG compromises like rebuild and reupload multiple pages when a single link changes.
You're replying in my thread here, to my original points. My original points were that SSGs don't make sense for sites with interaction, which is why we were discussing the limitations of SSG search approaches.
> SSGs are used for publishing writing the same way PDF is used. Nobody sane thinks that they need a comment section in their PDFs.
Thank you! We're in agreement, it doesn't make sense to use SSGs for sites that require interaction. When you do, it forces the rest of your site to do the compromising search stuff like we're discussing here.
It might not be intentional (I doubt it), but your replies really read like bad-faith discourse.
You may be interested in Backdrop, which is a maintained fork of Drupal 7.
https://backdropcms.org/
(No experience with it personally. Only know about it from a friend who uses it.)
Of course, for my site I just redirect the user to a search engine plus `site:stavros.io`.
But all you've done is buy into all the pain and compromise of having to think from an SSG perspective, and that created problems which you've already identified you'll figure out in the future.
I'm suggesting 2 or 3 small self-hosted OSS tools, where one is a small hand crafted server that basically takes a markdown file, renders it, and serves it as plain HTML with a header/footer.
This is more homogenous, fewer unique parts/processes, and doesn't have the constraint of dealing with an SSG.
I remember my own personal pain from 2010 - 2016ish of managing Drupal and Joomla. I did exactly the same as you in 2016 and went all-in on SSGs, and in 2024 I realised all of the above. I feel like I wasted years of potential development time reinventing basic personal website features to try to work with an SSG, and you literally have a ticket to do just that: https://github.com/geerlingguy/jeffgeerling-com/issues/167. One of your 3 solutions involves letting someone else host your comments :(
A custom framework/server is the end destination for all nerdy personal websites - I can't wait to see what you make when you realise this :)
For me, an unstated reason for SSG is being able to scale to millions of requests per hour without scaling up my server to match.
Serving static HTML is insanely easy, even on a cheap $5-10/month VPS. Serving anything dynamic at all is an order of magnitude harder.
Though... I've been using Cloudflare since 2022, after I started getting regular targeted DDoSes (was fun initially, seeing someone target different parts of Drupal, until I just locked down everything except for the final comment pages). The site will randomly get 1-2 million requests in a few minutes, and now Cloudflare eats those quickly, instead of my VPS getting locked up.
Ideally, I'll be able to host without Cloudflare in front at some point, but every month, because of one or two attacks, the site's using 25-35 TB of bandwidth (at least according to CF).
I totally see where you're coming from, but you just said it yourself: SSGs don't actually solve any problems for you right now that Cloudflare doesn't. A site of jeffgeerling.com's scale is the archetypal site that _should_ benefit from SSGs, but Cloudflare is the easier, and arguably better, solution to the traffic/bot/scale problem.
If the problem you hit with Drupal is that it was more and less than you needed and became a headache to maintain, you will hit the same problem with Hugo eventually.
The solution to that problem is to just write your own server side that does what you need. It's so much more fun and rewarding, and I'm confident if you did it, the output would be better. With modern servers and server side technologies, you would most likely not have a problem running your minimal MD->html server on your current VPS behind Cloudflare.
Worst case scenario is you spend your time dealing with problems or misunderstandings with your own code, at least that'll be a refreshing change to dealing with problem or misunderstandings in Drupal's or Hugo's code or decisions.
There's a time and a place for SSGs, and geerlingengineering is the perfect use case, because it has no real interactivity. But - again, please take this as candour rather than intended offence - from a user perspective, in the process of migrating jeffgeerling.com to Hugo, comments and search have been broken. Your migration to Hugo has just begun; you did the easy Hugo part and created a post suggesting it was done. But the extra phases and tickets for comments and search suggest there's no obvious and easy answer on how to finish migrating the interactive bits to an SSG.
Custom server side software is a complete solution, SSGs restrict what your complete solution can be without being one themselves. Nobody really seems to mention this until they move away from SSGs.
(Sorry for the bluntness again! Thanks again for your content, I stumble across your stuff all the time. I migrated my dad from a Windows XP machine to a Pi, and your resources are particularly useful and accessible for both of us!)
"Why go to all the burden of serving a few k of static html directly when you could just require a globe-spanning mega cdn?"
I feel you're missing my point which was "SSGs aren't good for sites which require interactivity because they force compromises elsewhere", a corollary to that is that for any problem an SSG promises to solve, if you have interactivity on your site, you probably already have a better solution available. E.g Jeff/bots/traffic/cloudflare.
For my website, I do both. Static HTML pages are generated with a static site generator. Comments are accepted using a server-side program I have written using Common Lisp and Hunchentoot.
It's a single, self-contained server-side program that fits in a single file [1]. It runs as a service [2] behind the web server [3], serves the comment and subscriber forms, accepts the form submissions, and writes them to text files on the web server.
[1] https://github.com/susam/susam.net/blob/0.4.0/form.lisp
[2] https://github.com/susam/susam.net/blob/0.4.0/etc/form.servi...
[3] https://github.com/susam/susam.net/blob/0.4.0/etc/nginx/http...
It looks like you did exactly what Jeff did: got fed up with big excessive server sides and went the opposite way and deployed and wrote your own minimal server side solutions instead.
There's nothing wrong with that, but what problem were you solving with the SSG part of that solution? Why would you choose to pregenerate a bunch of stuff which might never get used any time anyone comments or updates your website, when you have the compute and processes to generate HTML from markdown and comments on demand?
The common sales points for SSGs are often:
- SSGs are easier (doesn't apply to you because you had to rewrite all your comment stuff anyway)
- cheaper (doesn't apply to you since you're already running a server for comments, and markdown SSR on top would be minimal)
- fewer dependencies (doesn't apply to you, the SSG you use is an added dependency to your existing server)
This largely applies to Jeff's site too.
Don't get me wrong, from a curious nerd perspective, SSGs presented the fun challenge of trying to make them interactive. But now, in 2026, they seem architecturally inappropriate for all but the most static of leaflet sites.
I was not trying to solve a specific problem. This is a hobby project and my choices were driven mostly by personal preference and my sense of aesthetics.
Moving to a fully static website made the stack simpler and more enjoyable to work with. I did not like having to run a local web server just to preview posts. Recomputing identical HTML on every request also felt wasteful (no matter how trivially) when the output never changes between requests. Some people solve this with caching but I prefer fewer moving parts, not more. This is a hobby project, after all.
There were some practical benefits too. In some tests I ran on a cheap Linode VM back in 2010, a dynamic PHP website could serve about 4000 requests per second before clients began to experience delays, while Nginx serving static files handled roughly 12000 requests per second. That difference is irrelevant day to day, but it matters during DDoS attacks, which I have experienced a few times. Static files let me set higher rate limits than I could if HTML were computed on demand. Caching could mitigate this too, but again, that adds more moving parts. Since Nginx performs extremely well with static files, I have been able to avoid caching altogether.
An added bonus is portability. The entire site can be browsed locally without a server. In fact, I use relative internal links in all of my HTML (e.g., '../../foo/bar.html' instead of '/foo/bar.html') so I can browse the whole site directly from the local filesystem, from any directory, without spinning up a web server. Because everything is static, the site can also be mirrored trivially to hosts that do not support server-side programming, such as https://susam.github.io/ and https://susam.codeberg.page/, in addition to https://susam.net/. I could have achieved this by crawling a dynamic site and snapshotting it, which would also be a perfectly acceptable solution. Static site generation is simply another acceptable solution; one that I enjoy working with.
This, definitely.
I think until you experience your first few DDoSes, you don't think about the kind of gains you get from going completely static (or heavily caching, sometimes at the expense of site functionality).
I feel like I'm going crazy here: you're both advocating for SSGs, but when pressed, it sounds like the only benefits you ever saw were answers to problems from many years ago, or to problems you already have alternate and more flexible solutions for.
Regardless, I'm going to hunt you down and badger you both with this thread in a few years to see where we all stand on this! Thanks again :)
Your post boils down to "I evolved into this from problems I had in the 2010 - 2020 period".
> There were some practical benefits too. In some tests I ran on a cheap Linode VM back in 2010, a dynamic PHP website could serve about 4000 requests per second before clients began to experience delays, while Nginx serving static files handled roughly 12000 requests per second. That difference is irrelevant day to day, but it matters during DDoS attacks, which I have experienced a few times. Static files let me set higher rate limits than I could if HTML were computed on demand. Caching could mitigate this too, but again, that adds more moving parts. Since Nginx performs extremely well with static files, I have been able to avoid caching altogether.
I really appreciate this explanation. It mirrors my experiences. But it's literally saying you did it for performance reasons at the time, and that doesn't matter now. You then say it allowed you to avoid caching, and that's a success because caching is extra moving parts which you want to avoid.
The SSG is an extra moving part, and it basically is a cache, just at the origin rather than the boundary.
Portability is a good point. My preferred/suggested alternative to SSG is dynamic rendering and serving of markdown content, and that gives me the same portability. Most markdown editors now respect certain formats of relative wiki links.
> Static site generation is simply another acceptable solution; one that I enjoy working with.
You are right. Fun is the primary reason why I'm being so vocal about this, because I spent 5 - 10 years saying and thinking and feeling all the things SSG advocates are saying and thinking and feeling about SSGs. I spent a few years with Jekyll, then Hugo, a brief stint with 11ty, and also Quartz. But when I wanted to start from scratch and did a modern, frank, practical analysis for greenfielding from a bunch of markdown files last year, I realised SSGs don't make sense for 99% of sites, but are recommended to 99% of people. If you already build, run and maintain a server side, SSG is just an extra step which complicates interactivity.
Having said all that, I don't really share my stuff or get any traffic though, so whilst I might be having fun, you and Jeff both have the benefit of modern battle testing of your solutions! My staging subdomain is currently running a handcrafted SSR markdown renderer. I've been having fun combining it with fedify to make my stuff accessible over ActivityPub using the same markdown files as the source of truth. It might not work well or at all (I don't even use Mastodon or similar) but it's so, so much fun to mess around with compared to SSG stuff. If fun coding is your motivator, you should definitely at least entertain the throw-out-the-SSG way of thinking, y'know, for fun:)
Thank you for the kind words. I don't think your position is controversial.
> But it's literally saying you did it for performance reasons at the time, and that doesn't matter now.
Actually, it still matters today because it's hard to know when the next DDoS attack might come.
> The SSG is an extra moving part, and it basically is a cache, just at the origin rather than the boundary.
I agree and I think that is a very nice way to put it. I realise that the 'fewer moving parts, not more' argument I made earlier does not quite hold. You are right that I am willing to incur the cost of an SSG as an additional moving part while being less willing to add a caching layer as an additional moving part. So in the end it really does come down to personal preferences. The SSG happens to be a moving part I enjoy and like using for some of the benefits (serverless local browsing, easy mirroring, etc.) I mentioned in my earlier comment. While mentioning those benefits, I also acknowledged that there are perfectly good non-SSG ways of attaining the same benefits too.
> My preferred/suggested alternative to SSG is dynamic rendering and serving of markdown content, and that gives me the same portability. Most markdown editors now respect certain formats of relative wiki links.
Yes, sounds like a good solution to me.
> If you already build, run and maintain a server side, SSG is just an extra step which complicates interactivity.
Yes, I agree with this as well. For me personally, the server-side program that runs the comment form feels like a burden. But I keep it because I do find value in the exchanges that happen in the comments. I have occasionally received good feedback and corrections there. Sometimes commenters share their own insights and knowledge which has helped me learn new things. So I keep the comments around. While some people might prefer a consolidated moving part, such as a server-side program that both generates the website and handles interactivity, I lean the other way. I prefer an SSG and then reluctantly incur an additional moving part in the form of a server-side program to handle comment forms. Since I lean towards the SSG approach, I have restricted the scope of server-side programming to comment forms only.
> If fun coding is your motivator, you should definitely at least entertain the throw-out-the-SSG way of thinking, y'know, for fun :)
I certainly do entertain it. I hope I have not given the impression that I am recommending SSGs to others. In threads like this, I am simply sharing my experience of how I approach these problems, not suggesting that my solution is better than anyone else's. Running a personal website is a labour of love and passion, and my intention here is to share that love of the craft. The solution I have chosen is just one solution. It works for me and suits my preferences but I do not mean to imply that it is superior to other approaches. There are certainly other equally good, and in many cases better, solutions.
https://gohugo.io/content-management/comments/
This includes a giant list of open source commenting systems.
I really don’t understand why people commonly say static site generators are a good candidate for building your own when there is a good selection of popular, stable options.
The only thing I don’t like about Hugo is the experience of using other people’s themes.
Getting someone else's SSG to do exactly what you want (and nothing more) takes longer than just building it yourself. Juice isn't worth the squeeze.
This resonates with me! Both in terms of things I use and things I make - I want them to "just work"