You Already Have a Git Server
Source: maurycyz.com
Key topics
- Git
- Version Control
- Self-Hosting
The article argues that a dedicated Git server is not necessary to host Git repositories, since SSH access is sufficient, sparking a discussion about the simplicity and flexibility of Git.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 1h after posting
- Peak period: 88 comments in 0-6h
- Average per period: 16
- Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- Story posted: Oct 26, 2025 at 6:53 AM EDT (3 months ago)
- First comment: Oct 26, 2025 at 8:17 AM EDT (1h after posting)
- Peak activity: 88 comments in 0-6h, the hottest window of the conversation
- Latest activity: Oct 30, 2025 at 10:12 AM EDT (2 months ago)
ID: 45710721 · Type: story · Last synced: 11/22/2025, 11:47:55 PM
Imagine what the equivalent argumentation for a lawyer or nurse would be. Those rules ought to apply for engineers, too.
These guys won.
Even someone who knows that git isn't GitHub might not be aware that ssh is enough to use git remotely. That's actually the case for me! I'm a HUGE fan of git, I mildly dislike GitHub, and I never knew that ssh was enough to push to a remote repo. Like, how does it even work, I don't need a server? I suspect this is due to my poor understanding of ssh, not my poor understanding of git.
You do: an SSH server needs to be running on the remote if you want to ssh into it using your ssh client (the `ssh` command on your laptop). It's just not an HTTP server, is all.
You start that server using the `sshd` [systemd] service. On VPSs it's enabled by default.
Git supports both http and ssh as the "transport method". So, you can use either. Browsers OTOH only support http.
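A quick way to see that no special "git server" is involved: the remote end of a push is just another repository reachable over some transport. The sketch below uses a plain filesystem path as the transport; swapping the path for an ssh URL like `user@host:path/to/repo.git` gives the same behavior, assuming `git` and `sshd` are installed on the host (all paths here are made up for the demo).

```shell
# Create a bare repository to act as the "server" side.
git init --bare /tmp/demo-remote.git

# Create a working repository and make one commit.
git init /tmp/demo-work
cd /tmp/demo-work
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hello" > readme.txt
git add readme.txt
git commit -m "first commit"

# Push over the plain-path transport -- no daemon involved.
# Over ssh, the URL would be user@host:/path/to/demo-remote.git
git remote add origin /tmp/demo-remote.git
git push origin HEAD

# The bare repo now holds the branch.
git ls-remote /tmp/demo-remote.git
```

The only difference between the path form and the ssh form is which transport carries the pack data; the repository format on both ends is identical.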
Edit: hey, this is really exciting. For a long time, one of the reasons I've loved git (not GitHub) is the elegance of a piece of software that is decentralized and actually works well. But I'd never actually used the decentralized aspect of it: I've always had a local repo and then defaulted to using GitHub, Bitbucket, or whatever, because I always thought I'd need to install some "git daemon" to achieve this and I couldn't be bothered. But now, this is so much more powerful. Linus Torvalds best programmer alive, change my mind.
And most things are files.
Read https://git-scm.com/docs/git-init#Documentation/git-init.txt...
Anyway it sounds like you have a lot of holes in your git-knowledge and should read some man pages
https://git-scm.com/book/ms/v2/Git-on-the-Server-The-Protoco...
From the git-fetch(1) manual page:
> Git supports ssh, git, http, and https protocols (in addition, ftp and ftps can be used for fetching, but this is inefficient and deprecated; do not use them).
You only need access to the other repo's information. There's no server. You can also use a simple path and store the other repo on a local drive.
There IS a server, it's the ssh daemon. That's the bit I had never thought about until now.
At that point, may as well support raw serial too.
Supporting rlogin on the other hand is probably as simple as GIT_SSH=rlogin
Clip from the interview: https://www.youtube.com/shorts/0wLidyXzFk8
Now, if it is a growing misconception among cs students or anyone doing software development or operations, that's a cause for concern.
https://www.theverge.com/22684730/students-file-folder-direc...
https://www.youtube.com/shorts/D1dv39-ekBM
https://youtube.com/watch/D1dv39-ekBM
Granted I've never tried it so take it with a grain of salt.
But more importantly, I’m not sure why I would want to deploy something by pushing changes to the server. In my mental model the repo contains the SOT, and whatever’s running on the server is ephemeral, so I don’t want to mix those two things.
I guess it’s more comfortable than scp-ing individual files for a hotfix, but how does this beat pushing to the SOT, sshing into the server and pulling changes from there?
I’ve never worked with decentralized repos, patches and the like. I think it’s a good moment to grab a book and relearn git beyond shallow usage - and I suspect its interface is a bit too leaky to grok it without understanding the way it works under the hood.
That way, I:
1. didn't have to worry about sync conflicts; once complete, just push to origin
2. had my code backed up outside my computer
I can't exactly remember whether it saves space. I assumed it does, but I'm not sure anymore. But I feel it was quite reliable.
I gave that way up with GitHub. But thinking of migrating to `Codeberg`
With `tailscale`, I feel we have so many options now, instead of putting our personal computer out on the Internet.
I mean, it works fine for a few days or weeks, but then it gets corrupted. Doesn't matter if you use Dropbox, Google Drive, OneDrive, whatever.
It's apparently something to do with the many hundreds of file operations git does in a basic operation, and somehow none of the sync implementations can quite handle it all 100.0000% correctly. I'm personally mystified as to why not, but can attest from personal experience (as many people can) that it will get corrupted. I've heard theories that somehow the file operations get applied out of order somewhere in the pipeline.
But have you ever found a cloud sync tool that doesn't eventually corrupt with git? I'm not aware of one existing, and I've looked.
Again, to be clear, I'm not talking about the occasional rsync, but rather an always-on tool that tries to sync changes as they happen.
Good luck with iCloud!
When it is a file conflict the sync engines will often drop multiple copies with names like "file (1)" and "file (2)" and so forth. It's sometimes possible to surgically fix a git repo in that state by figuring out which files need to be "file" or "file (1)" or "file (2)" or whatever, but it is not fun.
In theory, a loose objects-only bare repo with `git gc` disabled is more append-only and might be useful in file sync engines like that, but in practice a loose-objects-only bare repo with no `git gc` is not a great experience and certainly not recommended. It's probably better to use something like `git bundle` files in a sync engine context to avoid conflicts. I wonder if anyone has built a useful automation for that.
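As a sketch of that `git bundle` idea: a bundle packs refs and objects into a single file, so a sync engine only ever sees one atomic file instead of hundreds of loose objects (filenames below are made up).

```shell
# In the repository you want to sync, pack HEAD plus all refs into one file.
git bundle create /tmp/myrepo.bundle HEAD --all

# The bundle behaves like a read-only remote: you can clone from it...
git clone /tmp/myrepo.bundle /tmp/myrepo-restored

# ...or fetch from it into an existing repository.
git fetch /tmp/myrepo.bundle
```

Re-creating the bundle after each work session and letting the sync tool ship that single file sidesteps the out-of-order file-operation problem entirely, at the cost of re-uploading the whole history each time.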
What about SyncThing?
I did a quick search for "post-receive hook ci" and found this one: https://gist.github.com/nonbeing/f3441c96d8577a734fa240039b7...
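The pattern behind those push-to-deploy setups is a short `post-receive` hook inside the bare repository on the server; a minimal sketch (the deploy path and service name are hypothetical):

```shell
#!/bin/sh
# hooks/post-receive in the bare repo, marked executable (chmod +x).
# On every push, force-check-out the received branch into the
# deploy directory. The branch name and paths are assumptions.
DEPLOY_DIR=/srv/www/myapp
mkdir -p "$DEPLOY_DIR"
GIT_WORK_TREE="$DEPLOY_DIR" git checkout -f main
# systemctl restart myapp   # uncomment if the app runs under systemd
```

Because the hook runs inside the bare repo, `GIT_WORK_TREE` redirects the checkout to the deploy directory without needing a second clone on the server.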
Decades from now, git will be looked back at in a similar but worse version of the way SQL often is -- a terrible interface over a wonderful idea.
I don't think git would end up this popular if it didn't allow to be used in a basic way by just memorizing a few commands without having to understand its repository model (however simple) well.
I think your age isn't the issue, but I suspect you're in a bubble.
Git is especially prone to the sort of confusion where all the experts you know use it in slightly different ways so the culture is to just wing it until you're your own unique kind of wizard who can't tie his shoes because he favors sandals anyhow.
https://git-scm.com/book/en/v2
Unlike calculus, though, you can learn enough about Git to use it usefully in ten minutes. Maybe this sets people up for disappointment when they find out that afterwards their progress isn't that fast.
An example of that is the suckless philosophy, where extra features come as patches and diffs.
The solution is to set team/department standards inside companies or use whatever you need as a single contributor. I saw attempts to standardize across a company that is quite decentralized and it failed every time.
This is ultimately where, and why, github succeeded. It's not that it was free for open source. It's that it ironed out lots of kinks in a common group flow.
Git is a cultural miracle, and maybe it wouldn't have got its early traction if it had been overly prescriptive or proscriptive, but more focus on those workflows earlier on would have changed history.
What? Knowing that a git repo is just a folder is nowhere near "expert" level. That's basic knowledge, just like knowing that the commits are nodes of a DAG. Sadly, most git users have no idea how the tool works. It's a strange situation, it'd be like if a majority of drivers didn't know how to change gears.
You are simultaneously saying that something is not expert level knowledge while acknowledging that most people don’t know it. Strange.
I'm not sure that's true, unless you only take certain parts of the world into consideration.
If you literally can't change gears then your choices are a) go nowhere (neutral), b) burn out your clutch (higher gears), or c) burn out your engine (1st gear). All are bad things. Even having an expert come along to put you in the correct gear once, twice, or even ten times won't improve things.
If a programmer doesn't know that git is a folder or that the commits are nodes of a DAG, nothing bad will happen in the short term. And if they have a git expert who can get them unstuck say, five times total, they can probably make it to the end of their career without having to learn those two details of git.
In short-- bad analogy.
(I started using Git in 02009, with networking strictly over ssh and, for pulls, HTTP.)
In large corps you usually have policies to not leave your laptop unattended logged in, in the office, that would be potentially even worse than that.
It's a misguided policy that hurts morale and leaves a tremendous amount of productivity and value on the floor. And I suspect that many of the policies are in place simply because a number of the rule makers aren't aware of how easy it is to share the code. Look how many in this thread alone weren't aware of the inherent distributability of git repositories, and presumably they're developers. You really think some aging career dev ops who worked at Microsoft for 30 years is going to make sensible policies about some software that was shunned and forbidden only a decade ago?
Depends on what you mean by "a project". If it's policy related, maybe it's company's policy that all code that is written must be stored in a certain way for multitude of reasons.
Tons of people who DO use the git CLI don't know git init. Their whole life was: create a project on GitHub and clone it. Anyway, initting a new project isn't the most "basic" thing with git; it's used in less than 0.01% of total git commands.
If you combine the above, easily MOST people have no idea about git init.
It's not about how long the action takes, it's about how much the team responsible for that is loaded and can prioritize things. Every team needs more round tuits. Anyone who works in an IT support role knows this. The point is that they can self-service immediately and there is no actual dependency to start writing code and using revision control, but people will trot out any excuse.
If somebody asked me if it's possible to scp my git repo over to another box and use it there or vice versa, I would have said, yes, that is possible. Although I would've felt uneasy doing that.
If somebody asked me if git clone ssh:// ... would definitely work, I wouldn't have known out of the gate, although I would have thought it would be neat if it did and maybe it does. I may have thought that maybe there must be some sort of git server process running that would handle it, although it's plausible that it would be possible to just do a script that would handle it from the client side.
And finally, I would've never thought really to necessarily try it out like that, since I've always been using Github, Bitbucket, etc. I have thought of those as permanent, while any boxes I have could be temporary, so not a place where I'd want to store something as important to be under source control.
You’ve always used GitHub but never known it could work over ssh? Isn’t it the default method of cloning when you’re signed in and working on your own repository…?
E.g. maybe the host would have to have something like apt install git-server installed there for it to work. Maybe it wouldn't be available by default.
I do know however that all info required for git in general is available in the directory itself.
We’ve gone so far with elaborate environments and sets to make it easy to learn more advanced things, that many people never learn the very basics. I see this as a real problem.
With remote work, if your company stubbornly refuses to use a modern vpn like tailscale and you can't really network between two computers easily, git format-patch and git am, coupled with something like slack messages, work well enough, albeit moderately cumbersome.
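That patch-based flow is essentially two commands; a sketch, assuming the other machine already has the commit your patches are based on (paths are made up):

```shell
# On your machine: export the last two commits as mail-formatted patches.
git format-patch -2 --output-directory /tmp/patches

# Send the .patch files over slack/email/scp, then on the other machine:
git am /tmp/patches/*.patch
```

`git am` preserves the original author, date, and commit message, so the history on both machines ends up identical, which is what makes this workable as a poor man's push.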
https://public-inbox.org/README.html
> public-inbox implements the sharing of an email inbox via git to complement or replace traditional mailing lists. Readers may read via NNTP, IMAP, POP3, Atom feeds or HTML archives.
> public-inbox stores mail in git repositories as documented in https://public-inbox.org/public-inbox-v2-format.txt and https://public-inbox.org/public-inbox-v1-format.txt
> By storing (and optionally) exposing an inbox via git, it is fast and efficient to host and mirror public-inboxes.
Hard to justify using SSH for Git. Principle of least power and all that
https://gitolite.com/gitolite/
If you think the bare bones example is interesting and want something simple just for you or a small group of people, this is one step up. There's no web interface. The admin is a git repository that stores ssh public keys and a config file that defines repo names with an ACL. When you push, it updates the authorization and inits new repositories that you name.
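For reference, that config file in the gitolite-admin repository is a plain-text ACL; a minimal sketch with hypothetical users and repo names:

```
# conf/gitolite.conf in the gitolite-admin repository
repo myproject
    RW+     =   alice       # alice may push, rewind, and create branches
    R       =   bob         # bob may only clone and fetch
```

Pushing this file (plus `keydir/alice.pub` and `keydir/bob.pub`) is what triggers gitolite to create the repository and apply the ACL.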
I put everything in repos at home and a have multiple systems (because of VMs) so this cleaned things up for me considerably.
If I was just using it for git hosting I'd probably go for something more light weight to be honest.
TIL about the update options for a checked-out branch. In practice, though, you usually want just the bare ".git" folder on the server.
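For reference, the non-bare variant alluded to above is a single config setting on the remote repository (available since git 2.3):

```shell
# Run inside the non-bare repository on the server. Pushes to the
# currently checked-out branch will then update the working tree too,
# as long as it has no uncommitted changes.
git config receive.denyCurrentBranch updateInstead
```

With the default setting (`refuse`), git rejects such pushes outright, which is why the bare-repo layout is the usual recommendation.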
Tip: create a `git` user on the server and set its login shell to `git-shell`. You might also want to restrict its directory and command access in the sshd config for extra security. Then, when you need a new repository, initialize a bare one under that user and clone it over ssh like any other remote. This has the exact same UX as any code forge. I think that initializing a bare repository also avoids the workarounds for pushing to a currently checked-out branch.
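A sketch of that setup, with a hypothetical hostname and repo name; exact paths depend on your distro:

```shell
# On the server: create a 'git' user whose login shell is git-shell,
# so it can serve git over ssh but not run arbitrary commands.
sudo useradd --create-home --shell "$(command -v git-shell)" git

# Create a new bare repository owned by that user.
sudo -u git git init --bare /home/git/myrepo.git

# On a client: clone and push like any forge remote.
git clone git@yourserver.example:myrepo.git
```

Adding each person's public key to `/home/git/.ssh/authorized_keys` is all the account management there is; anything fancier is where tools like gitolite come in.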
However, this setup doesn't work with git-lfs (large file support). Or, at least I haven't been able to get it working.
PS: Even though git-shell is very restricted you can still put shell commands in ~/git-shell-commands
For an actually distributed large file tracking system on top of git you could take a look at git-annex. It works with standard ssh remotes as long as git-annex is installed on the remote too (it provides its own git-annex-shell instead of git-shell), and has a bunch of additional awesome features.
267 more comments available on Hacker News