Pricing Changes for GitHub Actions
Key topics
GitHub's introduction of a $0.002 per-minute charge for GitHub Actions workflows has sparked a lively debate, with commenters weighing in on the justification for the fee, particularly for self-hosted runners. Some users felt blindsided by the change, while others acknowledged the infrastructure costs behind displaying build information and hosting JSON APIs. The discussion also touched on the era of VC-subsidized developer infrastructure and the perceived value of GitHub's services, with some users questioning whether the existing monthly per-seat license fee should cover these costs. As one commenter wryly noted, it seems counterintuitive to charge per minute for self-hosted runners, especially if users are already unhappy with GitHub Actions.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 16m after posting
- Peak period: 109 comments in 0-3h
- Average per period: 14.5
Based on 160 loaded comments
Key moments
- Story posted: Dec 16, 2025 at 12:12 PM EST (21 days ago)
- First comment: Dec 16, 2025 at 12:27 PM EST (16m after posting)
- Peak activity: 109 comments in 0-3h (hottest window of the conversation)
- Latest activity: Dec 18, 2025 at 4:59 AM EST (20 days ago)
Charging for self-hosted runners is an interesting choice. That's the same cost as their smallest hosted runners [1]
[1] - https://docs.github.com/en/billing/reference/actions-runner-...
I don't know if it's worth the amount they are targeting, but it's definitely not zero either.
Like I said, there should be some non-zero amount paid for the control plane. But after doing my research since yesterday, I would definitely say that charging around the same as an actual runner makes absolutely no sense. There is 100% some form of runner on the control plane side that is constantly waiting for input from the actual task runner, but it can be a single scalable software element, versus something that spins up its own environment and whatnot each time it works.
I work in marketing and probably have much less of an issue with advertising than most people on this site (sorry friends!) - but holy Barilla Macaroni, I could accept that if something was free I'd pay via ads, wtf is all this paid stuff doing also serving me ads?
I thought that "Bitbucket" was in your original post, and that your edit only added the note saying it was, in fact, GitLab and not Bitbucket that added a cost for self-hosted runners.
(ofc, that'd only mean they stop updating the status page, so eh)
https://downdetector.com/status/github/
After like day 2 my workflows would take 10-15 minutes past their trigger time to show up and be queued. And switching to the self hosted runners didn't change that. Happens every time with every workflow, whether the workflow takes 10 seconds or 10 minutes.
https://github.com/neysofu/awesome-github-actions-runners
Also quite expensive!
It does have big 'it shouldn't be this expensive' energy, but the market has shown it needs to be, unfortunately. Nobody really survives in the CI world without either sliding into complete neglect mode or going expensive like Buildkite, I've found. It reminds me a lot of home automation / IoT. Lutron costs almost $100 a light switch for really silly economic reasons, even though the tech is basically unchanged since the 80s.
The interface is also geeky because the only people who are going to even realize you need to spend money on this are other software professionals.
I'm happy for competition, but this seems a bit foul since we users aren't getting anything tangible beyond the promise of improvements and investments that I don't need.
Given that GitHub runners are still as slow as ever, it actually is a point in our favor even compared to self-hosting on AWS etc. However, it makes the value harder to communicate <shrug>.
Of course, if you can just fence in your competition and charge admission, it'd be silly to invest time in building a superior product.
> Actions is down again, call Brent so he can fix it again...
Not sure if a Phoenix Project reference, but if it is, it's certainly in keeping with GitHub being as fragile as the company in the book!
The only self-hosted runners I've used have been for internalized deployments separate from the build or (pre)test processes.
Aside: I've come to rely on Deno heavily for a lot of my scripting needs since it lets me reference repository modules directly and doesn't require a build/install step ahead of time... just write TypeScript and run.
When you've got many hundreds of services, managing these in Actions YAML itself is no bueno. As you mentioned, having the option to actually run the CI/CD yourself is a must. Having to wait 5 minutes plus many commits just to test an action drains you very fast.
Granted we did end up making the CI so fast (~ 1 minute with dependency cache, ~4 minutes without), that we saw devs running their setup less and less on their personal workstations for development. Except when github actions went down... ;) We used Jenkins self-hosted before and it was far more stable, but a pain to maintain and understand.
After 10-ish hours the cluster was operational. The remaining 18 (plus 30-something unbillable to satisfy my conscience) were spent trying and failing to diagnose an issue which is still unsolved to this day[1].
[1]: https://github.com/actions/runner-container-hooks/issues/113
Using GitHub actions to coordinate the Valhalla builds was a nice-to-have, but this is a deal-breaker for my pull request workflows.
A lot of it is wasted in build time though, due to a lack of appropriate caching facilities in GitHub Actions.
[0] https://github.com/Barre/ZeroFS/tree/main/.github/workflows
tl;dr uses a local slot-based cache that is pre-warmed after every merge to main, taking Sidecar builds from ~10-15 minutes to <60 seconds.
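For comparison, a minimal sketch of what stock dependency caching looks like with the actions/cache action (the slot-based, pre-warmed cache linked above goes further than this); the Rust/cargo paths and job layout are illustrative assumptions, not taken from the linked workflows:

```yaml
name: ci
on: pull_request
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Restore the dependency cache, keyed on the lockfile; fall back to
      # the most recent cache for this OS when the exact key misses.
      - uses: actions/cache@v4
        with:
          path: |
            ~/.cargo/registry
            target
          key: cargo-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
          restore-keys: cargo-${{ runner.os }}-
      - run: cargo build --all-targets
```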
SlateDB, the lower layer, already does DST as well as fault injection though.
```yaml
on: push
```
is the event trigger to act on every push.
You'll waste a lot of CI on building everything in my opinion, I only really care about the pull request.
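A minimal sketch of narrowing the trigger to pull requests only; the main branch name is an assumption:

```yaml
# Run only for pull requests targeting main, instead of on every push.
on:
  pull_request:
    branches: [main]
```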
I’m definitely sure it’s saving me more than $140 a month to have CI/CD running and I’m also sure I’d never break even on the opportunity cost of having someone write or set one up internally if someone else’s works - and this is the key - just as well.
But investment in CI/CD is investing in future velocity. The hours invested are paid for by hours saved. So if the outcome is brittle and requires oversight that savings drops or disappears.
I’m also someone who paid for JetBrains when everyone still thought it wasn’t worth money to pay for a code editor. Though I guess that’s the prevailing view again now. And everyone is using an MS product instead.
This is like if Dropbox started charging you money for the files you have stored on your backup hard drives.
When CI and CD stop being flat and straightforward, they lose their power to make devs clean up their own messes. And that's one of the most important qualities of CI.
Most of your build should be under version control, and I don't mean checked-in YAML files to drive a CI tool.
Perhaps that isn't most use of it; the big projects are really big.
Fundamentally, yes, what you run in a CI pipeline can run locally.
That doesn't mean it should.
Because if we follow this line of thought, then datacenters are useless. Most people could perfectly well host their services locally.
There are rather a lot of people who do argue that? Like, I actually agree that non-local CI is useful, but this is a poor argument for it.
I'm not aware of people arguing for self-hosting team or enterprise services.
Actions lets you test things in multiple environments, test them with credentials against resources devs don't have access to, and do additional things like deploys and managing version numbers, on and on.
With CI, especially pull requests, you can leave longer running tests for github to take care of verifying. You can run periodic tests once a day like an hour long smoke test.
CI is guard rails against common failure modes; it turns requiring everyone to follow an evolving script into something automatic that nobody needs to think about much.
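A minimal sketch of the kind of periodic, longer-running check described above; the cron time, OS matrix, and script path are assumptions:

```yaml
name: nightly-smoke
on:
  schedule:
    - cron: "0 6 * * *"   # once a day at 06:00 UTC
jobs:
  smoke:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    timeout-minutes: 90    # keep an hour-long test from hanging forever
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/smoke-test.sh   # hypothetical long-running smoke test
```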
... Is nobody in charge on the team?
Or is it not enough that devs adhere to a coding standard, work to APIs etc. but you expect them to follow a common process to get there (as opposed to what makes them individually most productive)?
> You can run periodic tests once a day like an hour long smoke test.
Which is great if you have multiple people expected to contribute on any given day. Quite a bit of development on GitHub, and in general, is not so... corporate.
I don't deny there are use cases for this sort of thing. But people on HN talking about "hosting things locally" seem to describe a culture utterly foreign to me. I don't, for example, use multiple computers throughout the house that I want to "sync". (I don't even use a smartphone.) I really feel strongly that most people in tech would be better served by questioning the existing complexity of their lives (and setups), than by questioning what they're missing out on.
>... Is nobody in charge on the team?
This isn't how things work. You save your "you MUST do these things" for special rare instructions. A complex series of checks for code format/lint/various tests... well that can all be automated away.
And you just don't get large groups of people all following the same series of steps several times a day, particularly when the steps change over time. It doesn't matter how "in charge" anybody is, neither the workplace nor an open source project are army boot camp. You won't get compliance and trying to enforce it will make everybody hate you and turn you into an asshole.
Automation makes our lives simpler and higher quality, particularly CI checks. They're such an easy win.
- github copilot PR reviews are subpar compared to what I've seen from other services: at least for our PRs they tend to be mostly an (expensive) grammar/spell-check
- given that it's github native you'd wish for a good integration with the platform but then when your org is behind a (github) IP whitelist things seem to break often
- network firewall for the agent doesn't seem to work properly
raised tickets for all these but given how well it works when it does, I might as well just migrate to another service
I assume that you do not work for free, either.
The runner software they provide is solid and I’ve never had an issue with it after administering self-hosted GitHub Actions runners for 4 years. Hundreds of thousands of runners have taken jobs, done the work, destroyed themselves, and been replaced with clean runners, all without a single issue with the runners themselves.
Workflows, on the other hand, have problems. The whole design is a bit silly.
Been working to move all our workflows to self-hosted, on-demand ephemeral runners. Was severely delayed after finding out how slipshod the Actions Runner Service was, and had to redesign to handle out-of-order or plain missing webhook events. Jobs would start running before a workflow_job event was delivered.
we've got it now that we can detect a GitHub Actions outage and let them know by opening a support ticket, before the status page updates
That’s not hard, the status page is updated manually, and they wait for support tickets to confirm an issue before they update the status page. (Users are a far better monitoring service than any automated product.)
Webhook deliveries do suffer sometimes, which sucks, but that’s not the fault of the Actions orchestration.
The one for Azure DevOps is even worse though, pathetic.
In my experience gitlab always felt clunky and overly complicated on the back end, but for my needs local forgejo is better than the cloud options.
Part of this is fair since there is a cost to operating the control plane.
One way around this is to go back to using check runs. I imagine a third party could handle webhooks, parse the .github/workflows/example.yml, then execute the action via https://github.com/nektos/act (or similar), then post the result.
It’s been a while since I looked. What’s a good alternative?
Jenkins has been rock solid. We are trying to migrate to Argo Workflows/Events, but there are complaints (like deploying Argo Workflows with Helm, such fun!)
- runs locally
- has a language server: Python, TypeScript, Go, Java, OR Elixir
- has static typing
- the new caching mechanisms introduced in 0.19.4 are chef's kiss
I do not work for dagger and pay for it using the company credit card. A breath of fresh air after the unceasing misery and pain that is Gitlab and GHA.
I wouldn't call it a CI system though, but certainly the philosophy that local and CI should be running the same thing saves many hours of frustration.
I'm currently using Dagger to create forkable/rewindable agent sessions and environments (not with their agent nonsense). Dagger is a pretty sweet piece of tech, so many uses for programmatic container layers
-- Winston Churchill (probably)
https://tangled.org/tangled.org/core/blob/master/docs/spindl...
https://bsky.app/profile/tangled.org
There looks to be a blog post here: https://blog.tangled.org/ci
I'm not a fan of nix and would have picked containers regardless for a git forge CI offering
Like Argo or Jenkins. Pushing nix as the DX for GHA equivalent was a poor choice by Tangled IMO. It's too unusual for your average dev, I'm not interested in learning nix so I can use CI.
> Also Nix supports MacOS.
Doesn't matter. Tangled CI requires Docker.
I get that self-hosted runners generate huge egress traffic, but this is still wild. Hope it pushes more companies to look into self-hosted Gitea / Forgejo / etc.
Holy s***
That's more expensive than the on-demand list price of an m8i.large ($0.002 per minute works out to $0.12 per hour).
I realise 100% utilisation isn't realistic, but that still sounds very expensive when you're already BYOB.
It's worse than unrealistic. It's ludicrous. Any company running more than 10 hours of actions workflows per week on GitHub can afford a few dollars a month for infrastructure.
GitHub still supports e.g. PR checks that originate from other systems. We had PR checks before GHA and it's easy enough to go back to that. Jenkins has some stuff built in or you can make some simple API calls.
It's not as convenient, but it works just fine.
Now the only alternative is to move builds, CI, etc. off of GitHub's platform entirely, and maybe your source control as well. In other words, a big pain. GitHub seems to have entered peak encrapification: the point where they openly acknowledge rent-seeking as their product approach, fully deprecating "building the best, most reliable, trustworthy product." Now it's just "Pay us high margins because the effort to migrate off is big and will take too long to break even."
Basically the modern day Heroku business model.
I'm sure we'll feel it too at https://sprinters.sh, but probably a bit less than others as our flat $0.01 per job fee for runners on your own AWS account will still work out to about 80% average savings in practice, compared to ~90% now when using spot instances.
654 more comments available on Hacker News