A Faster Heart for F-Droid
Key topics
The F-Droid community is buzzing about their recent hardware upgrade, sparking a lively debate about the specifics of their new setup. Commenters are scratching their heads over the lack of details on the new hardware, with some speculating that a budget AM4 system could have been a cost-effective solution. While some users are touting the potential of second-hand systems with 16-core Ryzen processors, others are pointing out that RAM prices have skyrocketed, complicating such plans. The discussion also takes a turn towards the hosting arrangement, with some commenters raising eyebrows over the unconventional setup, suggesting it may not be the most secure or reliable configuration.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: N/A
Peak period: 81 comments in 0-6h
Avg / period: 20 comments
Based on 160 loaded comments
Key moments
- Story posted: Dec 30, 2025 at 1:36 PM EST (10 days ago)
- First comment: Dec 30, 2025 at 1:36 PM EST (0s after posting)
- Peak activity: 81 comments in 0-6h (the hottest window of the conversation)
- Latest activity: Jan 1, 2026 at 9:07 PM EST (8 days ago)
> The previous server was 12 year old hardware
which is pretty mad. You can buy a second-hand system with tons of RAM and a 16-core Ryzen for like $400. 12-year-old hardware is only marginally faster than a RPi 5.
Building a budget AM4 system for roughly $500 would be within the realm of reason. ($150 mobo, $100 CPU, $150 RAM, leaving $100 for storage; you'd still need a PSU and case.)
https://www.amazon.com/Timetec-Premium-PC4-19200-Unbuffered-...
https://www.amazon.com/MSI-MAG-B550-TOMAHAWK-Motherboard/dp/...
For a server that's replacing a 12 year old system, you don't need DDR5 and other bleeding edge hardware.
(I might be spoiled by sane reproducible build systems. Maybe F-droid isn't.)
My 14yo disagrees
I'm also factoring in the ability to use battery bypass in the phones I buy now; they are so powerful that I might want to use them as servers in the future. You can do a heck of a lot on phone hardware nowadays.
A Dell R620 is over 12 years old and WAY faster than a RPi 5 though...
Sure, it'll be way less power efficient, but I'd definitely trust it to serve more concurrent users than a RPi.
“F-Droid is not hosted in just any data center where commodity hardware is managed by some unknown staff. We worked out a special arrangement so that this server is physically held by a long time contributor with a proven track record of securely hosting services. We can control it remotely, we know exactly where it is, and we know who has access.”
Not reassuring.
The set of people who can maliciously modify it is now just the people who run F-Droid, instead of the cloud provider plus the people who run F-Droid.
It'd be nice if we didn't have to trust the people who run f-droid, but given we do I see an argument that it's better for them to run the hardware so we only have to trust them and not someone else as well.
You still have to trust the app store to some extent. On first use, you're trusting F-Droid to give you a copy of the app with the appropriate signatures. Running in someone else's data center still means you need to trust that data center plus the people setting up the app store, instead of just the app store. It's just that a breach of trust is less consequential, since the attacker needs to catch the first install (of apps that even use that technology).
The F-droid app itself can then verify signatures from both third party developers and first party builds on an f-droid machine.
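A minimal sketch of the trust-on-first-use idea described above: pin the signer's certificate fingerprint the first time a package is installed, and reject later updates signed by a different certificate. The pin-store file and function names here are hypothetical illustrations, not F-Droid's actual client code.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical pin store; the real F-Droid client logic differs in detail.
PIN_FILE = Path("signer_pins.json")

def load_pins() -> dict:
    return json.loads(PIN_FILE.read_text()) if PIN_FILE.exists() else {}

def check_update(package: str, signer_cert: bytes) -> bool:
    """Trust on first use: pin the SHA-256 of the signing certificate on
    first install, then require every later update to match the pin."""
    pins = load_pins()
    fingerprint = hashlib.sha256(signer_cert).hexdigest()
    if package not in pins:
        pins[package] = fingerprint              # first install: record the pin
        PIN_FILE.write_text(json.dumps(pins))
        return True
    return pins[package] == fingerprint          # update: signer must match
```

This is why the comment above says an attacker "needs to catch the first install": once the pin exists, a swapped signer is detectable.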
For all its faults (of which there are many), it is still a trust story leaps and bounds better than, say, Google Play's. Developers can publish only code, and optionally signatures, but not binaries.
I've been using Obtainium more recently, and the idea is simple: a friendly UI that pulls packages directly from the original source. If I already trust the authors with the source code, then I'm inclined to trust them to provide safe binaries for me to use. Involving a middleman is just asking for trouble.
App stores should only be distributors of binaries uploaded and signed by the original authors. When they're also maintainers, it not only significantly increases their operational burden, but requires an additional layer of trust from users.
State actor? Gets into data centre, or has to break into a privately owned apartment.
Criminal/3rd party state intelligence service? Could get into both, at a risk or with blackmail, threats, or violence.
Dumb accidents? Well, all buildings can burn or have a power outage.
I don’t think a state actor would actually break in to either in this case, but if they did then breaking into the private apartment would be a dream come true. Breaking into a data center requires coordination and ensuring a lot of people with access and visibility stay quiet. Breaking into someone’s apartment means waiting until they’re away from the premises for a while and then going in.
Getting a warrant for a private residence also would likely give them access to all electronic devices there as no 3rd party is keeping billing records of which hardware is used for the service.
> Dumb accidents? Well, all buildings can burn or have an power outage.
Data centers are built with redundant network connectivity, backup power, and fire suppression. Accidents can happen at both, but that’s not the question. The question is their relative frequency, which is where the data center is far superior.
Well, I remember one incident where a 'professional' data center burned down, including the backups.
https://en.wikipedia.org/wiki/OVHcloud#Incidents
I know of no such incident for basement hosting.
Doesn't mean much. I'm just a bit surprised that so many people are worried about the server location and no one has yet mentioned the quite outstanding OVH incident.
https://www.reddit.com/r/homelab/comments/wvqxs7/my_homelab_...
I don't have a bone to pick here. If F-Droid wants to free-ball it I think that's fine. You can usually run things for max cheap by just sticking them on a residential Google Fiber line in one of the cheap power states and then just making sure your software can quickly be deployed elsewhere in times of outage. It's not a huge deal unless you need always-on.
But the arguments being made here are not correct.
Or just a warrant and a phone call to set up remote access? In the UK, under RIPA, you might not even need a warrant. In the USA, you can probably bribe someone to get a National Security Letter issued.
Depending on the sympathies of the hosting company's management you might be able to get access with promises.
I dare say F-Droid trust their friends/colleagues more than they trust randos at a hosting company.
As an F-Droid user, I think I might too? It's a tough call.
As a year-long F-Droid user, I can't complain.
Read Jabber.ru Hetzner accident: https://notes.valdikss.org.ru/jabber.ru-mitm/
A picture of the "living conditions" for the server would go a long way.
I had passively assumed something like this would be a Cloud VM + DB + buckets. The "hardware upgrade" they are talking about would have been a couple clicks to change the VM type, a total nothingburger. Now I can only imagine a janky setup in some random (to me) guy's closet.
In any case, I'm more curious to know exactly what kind of hardware is required for F-Droid; they didn't mention any specifics about CPU, memory, storage, etc.
> not hosted in just any data center where commodity hardware is managed by some unknown staff
I took this to mean it's not in a colo facility either; I assumed it meant someone's home, AKA residential power and internet.
If this is the hidden master server that only the mirrors talk to, then its redundancy is largely irrelevant. Yes, if it's down, new packages can't be uploaded, but that doesn't affect downloads at all. We also know nothing about their backup setup.
A lot depends on the threat model they're operating under. If state-level actors and supply chain attacks are the primary threats, they may be better off having their system under the control of a few trusted contributors versus a large corporation that they have little to no influence over.
The build server going down means that no one's app can be updated, even for critical security updates.
For something that important, they should aspire to 99.999% ("five nines of") reliability. With a single physical server, achieving five nines over a long period of time usually means that you were both lucky (no hardware failures other than redundant storage) and probably irresponsible (applied kernel updates infrequently - even if only on the hypervisor level).
Now... 2 servers in 2 different basements? That could achieve five nines ;)
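The arithmetic behind these claims is quick to check. A sketch, where the 99.9% per-basement availability is an illustrative assumption, not a measurement of anyone's setup:

```python
# Back-of-the-envelope availability math for the "five nines" discussion.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_budget(availability: float) -> float:
    """Allowed downtime per year, in minutes, at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

def combined_availability(per_server: float, n: int) -> float:
    """Availability of n independent servers where any one of them suffices."""
    return 1 - (1 - per_server) ** n

five_nines_budget = downtime_budget(0.99999)      # about 5.3 minutes/year
two_basements = combined_availability(0.999, 2)   # two assumed-99.9% servers
```

Two independent 99.9% servers already reach six nines on paper, which is the joke's point; in practice, correlated failures (same software bug, same admin) dominate the math.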
I agree that "behind someone's TV" would be a terrible idea.
Wait until you find out how every major Linux distribution and the software that powers the internet is maintained. It is all a wildly under-funded shit show, and yet we do it anyway because letting the corpos run it all is even worse.
e.g. AS41231 has upstreams with Cogent, HE, Lumen, etc... they're definitely not running a shoestring operation in a basement. https://bgp.tools/as/41231
Some do even worse, and build on a centralized machine many maintainers have access to from their workstations, so once again any single compromised home computer backdoors the whole distro.
The trust model of most linux distros that power the internet is totally yolo and one bad maintainer workstation burns it all to the ground.
Sorry if this ruins anyones rosy worldview.
Fixing this requires universal reproducible builds, and once you have that then you no longer have single points of failure so centralized high security colo cost becomes a moot issue.
You want to host servers on your own hardware? Uh, yikes. Let's unpack this. As a certified AWS Kubernetes professional time & money waster, I can say with authority that this goes against professional standards (?) and is therefore not a good look. Furthermore, I can confirm that this isn't it, chief.
And you're not going to even get close to the cabinet in a data center with a set of bolt cutters.
Otoh, maybe you've got a cabinet in a DC with very secure locks from europe.... But all are keyed alike. Whoops.
A drill would be easier to bring in (especially if it just looks like a power screwdriver) and probably get in faster though. Drill around the locks/hinges until the door wiggles off.
And at most facilities you'd have a hard time even getting to the cabinet unless you were supposed to be there. To get to the one I use at work, you have to go through several biometric scans, a security guard, and about a dozen locked doors to even get to the cabinet. Then, you have an actively monitored camera pointing at you the whole time.
A contributor's home is definitely less secure than even a low-tier data center. They have even bigger security holes, like glass windows.
All the hand-waving and excuses around global supply chains, quotes, etc. It took pretty long for them to acquire commodity hardware and shove it in a special someone's basement, and they're trying to make it seem like a good thing?
F-Droid is often discussed in the GrapheneOS community, the concerns around centralization and signing are valid.
I understand this is a volunteer effort, but it's not a good look.
The internet is run on binaries compiled in servers in random basements and you should be thankful for those basements because the corpos are never going to actually help fund any of it.
Do you have any examples?
> and you should be thankful for those basements because the corpos are never going to actually help fund any of it.
Almost all open-source internet infrastructure I can think of is predominantly funded by corporations -- either directly, or by paying employees to contribute to it.
I would agree that it is not a good look for this society to lament so much about the big evil corporations and invest so little in the free alternatives.
Having two servers in two basements not near each other would be good, having five would be better, and honestly paying money to put them in colo facilities to have more reliable power, cooling, etc. would be better still. Computer hardware is very cheap today and it doesn't cost that much money to get a substantial amount of redundancy, without being dependent on any single big company.
They can then probably whip up a new hosted server to take over within a few days, at most. Big deal.
They are not hosting a critical service, and running on donations. They are doing everything right.
As long as you don't need RAM or hard drives. It's getting more expensive all the time too. This isn't the ideal moment to replace a laptop let alone a server.
I bet the server should be quite powerful, with tons of CPU, RAM and SSD/NVMe to allow for fast builds. Memory of all kinds was getting more and more expensive this year, so the prolonged sourcing is understandable.
The trusted contributor, as the text says, is considered more trustworthy than an average colocation company. Maybe they have an adequate "basement", e.g. run their own colo company, or something.
It would be great to have a spare server, but likely it's not that simple, including the organization and the trust. A build server would be a very juicy attack target to clandestinely implant spyware.
Forget the cloud and even ignore things like redundant power and internet: colos are cheap, and every colo facility I've ever worked in had bollards around the door, a human guard, mantrap at the front door, and at least one additional badge and key required to gain access to locked cabinets on the server floor after that. You don't need bolt cutters, you're not getting close to the bolt to cut without the cops getting called at even the most basic facility if you aren't authorized to be there.
If it's important enough to have your own app store instead of using BigCo's, it's important enough to secure the infrastructure similarly to how a BigCo might protect their servers. The same threats exist for either.
They didn't say what conditions it's held in. You're just adding FUD; please stop. It could be under a bed, or it could be in a professional server room of a company run by the mentioned contributor.
It’s not going to even remotely rival a tier 3/4 data center in any way.
The physical security, infrastructure, and connectivity will never come close.
Why would you need all of that if what they have works? Nobody is going to raid a repo of open source software, you can just download everything for free.
But the assertion by commenters above that home-hosting is a viable or even better option for a project like this is silly. Colocating a single server is cheaper than a single Comcast Business internet connection.
If there's ever a need for a warrant for any of the projects, the warrant would likely involve seizure of every computer and data storage device in the home. Without a 3rd party handling billing and resource allocation they can't tell which specific device contains the relevant data, so everything goes.
So having something hosted at home comes with downsides, too. Especially if you don't control all of the data that goes into the servers on your property.
> Also 4th amendment protections so no one gets access without me knowing about it
laughs in FISA
Hahaha
at best you're getting a warrant. Slightly better you're getting a warrant _and_ a gag order. Then it escalates, and having your door kicked in at 6AM is about the best you can hope for.
But sure, you'll know about it. Most likely. Maybe.
Just don't keep anything important in there eh ?
(Note, this definitely applies to colocations too. It's just maybe a tiny bit harder to find which rack is yours, and companies of that size generally have lawyers to prevent that from happening. I'll take my chance with the hosting company.)
https://wiki.debian.org/InstructionSelection
Also, even 12-year-old hardware is wicked fast.
F-Droid's main strength has always been reproducible builds. We ideally just need to start hosting a second F-Droid build server somewhere else and then compare the results.
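That cross-check boils down to a hash comparison, since a reproducible build should be byte-identical across builders. The file paths below are hypothetical; a real check would first download the same APK from two independently operated build servers:

```python
import hashlib
from pathlib import Path

def sha256_file(path: Path) -> str:
    """Hash a file incrementally so large APKs don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def builds_reproduce(apk_a: Path, apk_b: Path) -> bool:
    """Reproducible builds should be byte-identical, so comparing hashes
    is enough to detect a compromised or nondeterministic builder."""
    return sha256_file(apk_a) == sha256_file(apk_b)
```

With universal reproducibility, any mirror or user can run this check, so no single build machine has to be trusted.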
Some of their points are valid but way too often they're unable to accept that different services aren't always trying to solve the same problem.
100%. But you know, sadly I've noticed that non-experts are impressed by elitism. So you don't have to be good, you just have to shit on others, and passersby will interpret that as being very competent.
Which is super ironic, coming from a project that is about privacy but only supports hardware built by the biggest surveillance company.
Apart from the "someone's basement", as objected to in this thread, it also doesn't say they acquired "commodity hardware"; I took it to suggest the opposite, presumably for good reason.
This seems entirely like wishful thinking. They were using a 12 year old server that was increasingly unfit for the day to day task of building Android applications. It doesn't seem like they were in a position to acquire and deploy any exotic hardware (except to the extent that really old hardware can be considered exotic and no longer a commodity). I'd be surprised if the new server is anything other than off the shelf x86 hardware, and if we're lucky then maybe they know how to do something useful with a TPM or other hardware root of trust to secure the OS they're running on this server and protect the keys they're signing builds with.
> this server is physically held by a long time contributor with a proven track record of securely hosting services.
So you are assuming it's a rando's basement when they never said anything like that.
If their way of doing business is so offensive either don't use them, disrupt them or pitch in and help.
> I understand this is a volunteer effort, but it's not a good look.
What does make a "good look" for a volunteer project?
It's an open-source project. It should be... open. Not mysterious or secretive about overdue replacements of critical infrastructure.
This is effectively a rando's basement. It doesn't matter that they've been a contributor or whatever. Individuals change, relationships sour. Securely hosting how? By locking the front door? By being a random tech company in the Midwest? Or by having proper access control?
As a little reminder, F-Droid has _all_ the signing keys on its build server. Compromising that is somewhere between "oh that's awful" and "stop the world". These builds go out as automatic updates too. So uh, yeah, I'd like it if it was hosted by someone serious and not my buddy joe who's a sysadmin don't worry
And maybe that is F-Droid's point: Security through obscurity. If the build infrastructure with the signing keys is unknown, then it's that much harder for Bad Actor to do things like backdoor E2E encrypted communication apps. This is, of course, the weakness in E2E encryption in apps obtained from mainstream/commercial app stores. For all we know, these may already be backdoored depending on where it came from.
However, the obscurity makes F-Droid hard to trust as an outsider to the project.
Clearly the GrapheneOS community is clueless then.
You can host F-Droid yourself, which is the opposite of centralized. If the GrapheneOS community actually is concerned about centralization they can fork it themselves as well. Then we'll have two public repositories.
Furthermore, each author signs their own software, which again is the opposite of centralized. One authority signing everything would be centralized.
This just reads to me like they have racked a box in a colo with a known person running the shared rack, rather than someone's basement, but who really knows; they aren't exactly handing out details.
I can’t be the only one who read this and had flashbacks to projects that fell apart because one person had the physical server in their basement or a rack at their workplace and it became a sticking point when an argument arose.
I know self-hosting is held as a point of pride by many, but in my experience you’re still better off putting lower cost hardware in a cheap colo with the contract going to the business entity which has defined ownership and procedures. Sending it over to a single member to put somewhere puts a lot of control into that one person’s domain.
I hope for the best for this team and I’m leaning toward believing that this person really is trusted and capable, but I would strongly recommend against these arrangements in any form in general.
That said, I still think that hosting a server in a member's house is a terrible decision for a project.
True, which is why I said the important parts need to be held by the legal entity representing the organization. If one person tries to hold it hostage, it becomes a matter of demonstrating that person doesn’t legally have access any more.
I’ve also seen projects fall apart because they forgot to transfer some key element into the legal entity. A common one is the domain name, which might have been registered by one person and then just never transferred over. Nobody notices until that person has a falling out and starts holding the domain name hostage.
Personally I would feel better about round robin across multiple maintainer-home-hosted machines.
Regardless, the ongoing interest on $400K alone would be enough to pay colo fees.
I don't know what kind of rates are available to non-profits, but with $400k in hand you can find nicer rates than 3.3% (as of today, at least).
That covers quite a few colo possibilities.
Another thing overlooked in this debate: Data center costs normally increase at the rate of inflation. This is not included in most estimates. That said, I still agree with the broad sentiment here: 400K USD is plenty of money to run a colo server for 10+ years from the risk-free interest rate.
At that rate, that would buy you nearly 1000 years of hosting.
I really don’t know where the commenter above was getting the idea that $400K wouldn’t last very long
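For concreteness, a back-of-the-envelope sketch: the 3.3% rate comes from upthread, while the colo price is an assumption, not a quoted figure.

```python
# Illustrative figures: the rate is from the thread, the colo price is a guess.
GRANT = 400_000          # USD
RATE = 0.033             # risk-free rate mentioned upthread
COLO_PER_MONTH = 150     # assumed 1U/2U slot with power and bandwidth

annual_interest = GRANT * RATE      # 13,200 USD/year
annual_colo = COLO_PER_MONTH * 12   # 1,800 USD/year

coverage_ratio = annual_interest / annual_colo   # interest covers colo ~7x over
years_from_principal = GRANT / annual_colo       # ~222 years ignoring interest
```

Even if the assumed colo price is off by several times, the interest alone still clears it, which is presumably the point being made above.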
The jury's still out on whether or not this is a good thing.
I Googled for that brand and got a few hits:
The homepage now redirects here: https://patmos.tech/
Another under-appreciated point about that data center: it has an excellent geographic location to cover North America.
Modern computers are super efficient. A 9755 has 128 cores and you can get it for cheap. If you've been doing this for a while you'd have gotten the RAM for cheap too.
If I, a normie, can have terabytes of RAM and hundreds of cores in a colo, I'm pretty sure they can unless they have some specific requests.
In any event if I was the volunteer sysadmin that had to babysit the box, I would rather have it at my home with business fiber where I am on premises most of the time because getting in and out of a colo is always a whole thing if their security is worth a damn.
Given a setup like that, I can imagine $400k lasting a decade even if they are paying for the volunteer's business fiber, especially since I expect some of it is meant to provide sustainable compensation to key team members as well. Every cent will count.
I don't know where you're pricing colocation, but I could host a single server indefinitely from the interest alone on $400K at the (very nice) data centers I've used.
Colocation is not that expensive.
Of course you have to buy the switches and servers…
IDK if they could bag this kind of grant every year, but isn't this the scale where cloud hosting starts to make sense?
A lot of these places are like fortresses
It could depend on your local market I presume.
Cloud hosting only makes sense at a very, very small scale, or absurdly large ones.
It has hosted quite a few famous services.
> The previous server was 12 year old hardware and had been running for about five years. In infrastructure terms, that is a lifetime. It served F-Droid well, but it was reaching the point where speed and maintenance overhead were becoming a daily burden.
lol
Saying this on HN, of course.