SkiftOS: a hobby OS built from scratch using C/C++ for ARM, x86, and RISC-V
Posted 4 months ago · Active 4 months ago
skiftos.org · Tech story · High profile
Key topics
Operating Systems
C/C++
Microkernel
SkiftOS, a hobby OS built from scratch using C/C++ for multiple architectures, has been shared on HN, sparking admiration and discussion about its features, design choices, and potential applications.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h after posting
Peak period: 63 comments in 0-12h
Avg / period: 13.1 comments
Comment distribution: 92 data points
Based on 92 loaded comments
Key moments
1. Story posted: Sep 13, 2025 at 12:55 AM EDT (4 months ago)
2. First comment: Sep 13, 2025 at 2:02 AM EDT, 1h after posting
3. Peak activity: 63 comments in the 0-12h window, the hottest stretch of the conversation
4. Latest activity: Sep 18, 2025 at 5:29 AM EDT (4 months ago)
ID: 45229414 · Type: story · Last synced: 11/20/2025, 6:36:47 PM
I am amazed that you also managed to write a browser engine!
```bash
./skift.sh run --release <app-name>
```
on Linux or macOS.
To see all available apps:
```bash
ls ./src/apps
```
Overall it looks interesting, all the best.
As a Norwegian, the name of this system and those components sound Danish (Skift, Karm, Opstart) and Danish-inspired (Hjert). Am I right? :)
What you did here is really cool and inspiring :).
https://us.amazon.com/Developing-32-Bit-Operating-System-Cd-...
```bash
./skift.sh run --release vaev-browser -- <url-or-file>
```
The HTTP stack is super barebones, so it only supports `http://` (no HTTPS). It works with my site, but results may vary elsewhere.
Most of my time so far has gone into the styling and layout engine rather than networking.
It would be nice to have such information displayed somewhere on the site.
I'm curious, how come the app I just compiled works on macOS?
The story of 10x developers among us is not a myth... if anything, it's understated.
Very impressive!
https://serenityos.org/
There's not much to say about it because there's never been an actual disagreement in philosophy. Every OS designer knows it's better for stability and development velocity to have code run in userspace and they always did. The word microkernel came from academia, a place where you can get papers published by finding an idea, giving it a name and then taking it to an extreme. So most microkernels trace their lineage back to Mach or similar, but the core ideas of using "servers" linked by some decent RPC system can be found in most every OS. It's only a question of how far you push the concept.
As hardware got faster, one of the ways OS designers used it was to move code out of the kernel. In the 90s Microsoft obtained competitive advantage by having the GUI system run in the kernel, eventually they moved it out into a userland server. Apple nowadays has a lot of filing systems run in userspace but not the core APFS that's used for most stuff, which is still in-kernel. Android moved a lot of stuff out of the kernel with time too. It has to be taken on a case by case basis.
I don't understand that, and I also don't understand why users who enjoy text-only interaction with computers are still relying on very old designs incorporating things like "line discipline", ANSI control sequences and TERMINFO databases. A large chunk of cruft was introduced for performance reasons in the 1970s and even the 1960s, but the performance demands of writing a grid of text to a screen are very easily handled by modern hardware, and I don't understand why the cruft hasn't been replaced with something simpler.
In other words, why do users who enjoy text-only interaction with computers still emulate hardware (namely, dedicated terminals) designed in the 1960s and 1970s that mostly just displays a rectangular grid of monospaced text and consequently would be easy to implement afresh using modern techniques?
There's a bunch of complexity in every terminal emulator, for example for doing cursor addressing. Network speeds are fast enough these days (and RAM is cheap enough) that cursor addressing is unnecessary: every update can just re-send the entire grid of text to be shown to the user.
Also, I think the protocol used in communication between the terminal and the computer is stateful for no reason that remains valid nowadays.
Bear in mind, moving stuff out of the kernel is only really worth it if you can come up with a reasonable specification for how to solve a bunch of new problems. If you don't solve them it's easy to screw up and end up with a slower system yet no benefit.
Consider what happens if you are overenthusiastic and try to move your core filesystem into userspace. What does the OS do if your filesystem process segfaults? Probably it can't do anything at that point beyond block everything and try to restart it? But every process then lost its connection to the FS server and so all the file handles are suddenly invalidated, meaning every process crashes. You might as well just panic and reboot, so, it might as well stay in the kernel. And what about security? GNU Hurd jumped on the microkernel bandwagon but ended up opening up security vulnerabilities "by design" because they didn't think it through deeply enough (in fairness, these issues are subtle). Having stuff be in the kernel simplifies your architecture tremendously and can avoid bugs as well as create them. People like to claim microkernels are inherently more secure but it's not the case unless you are very careful. So it's good to start monolithic and spin stuff out only when you're ready for the complexity that comes with that.
Linux also has the unusual issue that the kernel and userspace are developed independently, which is an obvious problem if you want to move functionality between the two. Windows and macOS can make assumptions about userspace that Linux doesn't.
If you want to improve terminals then the wrong place to start is fiddling with moving code between kernel and user space. The right place to start is with a brand new protocol that encodes what you like about text-only interaction and then try to get apps to adopt it or bridge old apps with libc shims etc.
I mean, it's not necessarily true that if a filesystem process crashes, every other process crashes. Depending on the design, each FS process may serve requests for each mountpoint, or for each FS type. That already is a huge boon to stability, especially if you're using experimental FSs. On top of that, I think the broken connection could be salvageable by the server storing handle metadata in the kernel and retrieving it when the kernel revives the process. It's hardly an insurmountable problem.
Consider: crash bugs are finite. Do you spend your time on complex rearchitecting of your OS to try and fail slightly less hard when some critical code crashes, or do you spend that time fixing the bugs? If the code is big, fast changing and third party then it might make sense to put in the effort, hence FUSE and why graphics drivers often run a big chunk of code out of kernel. If the code is small, stable and performance sensitive, like a core filesystem where all your executables reside, then it doesn't make sense and stays in.
Browsers also use a micro-kernelish concept these days. But they're very deliberate and measured about what gets split out into extra processes and what doesn't.
The microkernel concept advocates for ignoring engineering tradeoffs in order to put everything into userspace all the time, and says precious little about how to ensure that translates into actual rewards. That's why it's an academic concept that's hardly used today.
Finite can still be a very large number. Clearly the former is preferable, otherwise your argument applies just as well to usermode code. Why bother having memory protection when the code should be correct anyway?
Remember the CrowdStrike bug? That wouldn't have happened had the developer been able to put the driver in user mode. The module was not critical, so the system could have kept on running and a normal service could have reported that the driver had failed to start due to an error. That's much, much preferable to a boot loop.
But by and large, kernel code is much more tightly scoped and stable than userspace apps. The requirements for a core filesystem change very slowly and a migration from one version to another can take years. Userspace apps might update every week and still be too slow. We tolerate much more instability in the latter than the former.
The engineering costs of moving things out of the kernel can be significant. If your OS isn't totally hosed then - third party drivers excepted - there's probably a finite number of bugs you have to solve to get reliability up above your target level. It can often make sense to just sit down and fix the bugs instead of moving code out of kernel space, which will take a long time and at the end the bugs will still be there and still need to be fixed.
This argument gets a lot weaker when you can't fix the bugs, or when code changes so frequently new bugs get added at the same rate they get fixed. AV scanners and GPU drivers are good examples of that. And they do tend to get moved out of kernel space. Most of CrowdStrike doesn't run in kernel mode, and arguably Microsoft should have kicked the remaining parts out of the kernel a long time ago. A big chunk of the GPU driver was already moved.
Unfortunately by the nature of what AV scanners are trying to do they try to get everywhere. I'm sure MS would love nothing more than to boot them out of Windows but that's an antitrust issue not a technical issue.
See https://en.m.wikipedia.org/wiki/ANSI_escape_code
Every general-purpose OS.
Nintendo's 3DS OS and Switch 1+2 OS are bespoke and strictly microkernel-based (with the exception of DMA-330 CoreLink DMA handling on 3DS, if you want to count it as such), and these have been deployed on hundreds of millions of commercially sold devices.
Just inspiration.
Also why do OS devs seem to have a thing for making browsers? Shouldn't browsers be mostly agnostic to the OS?
The UI looks nice :)
rant over!
This is how many times they moved it around or sold it. Each of those times they had to do due diligence beforehand, same as with any IP acquisition. Okay, if you don't want to do full open source, how about taking a few modules and just BSD-3- or MIT-licensing them? It shows intent. How many times has any VC done this in the last 20 years? None that I recall. This is the problem: big do-gooder talk about changing the world, but at the end of the day they're out to JUST make money.
I dove deep into the code base. Found lib-sdl. Found impl-efi. Found co_return and co_await's. Found try's. Found composable classes. Found my codebase to be a mess compared to the elegance that is this. We are not worthy...
The modules... :chefs-kiss:
Nope. Unless your hobby OS also has a browser with a JS interpreter... which would be even more impressive.
Looking forward to seeing it included in the next CCC CTF, like SerenityOS [0].
[0] https://2019.ctf.link/internal/challenge/1fef0346-a1de-4aa4-...
I'm on macOS, and still no luck building the code. But anything which doesn't involve building a custom GCC easily gets my vote :)
contact: your e-mail
skills: project website
and you'd get hired in a ton of places.
5 more comments available on Hacker News