Using Claude Code to Modernize a 25-Year-Old Kernel Driver
Posted 4 months ago · Active 4 months ago
dmitrybrant.com · Tech · Story · High profile
Tone: supportive, positive · Debate: 40/100
Key topics
- LLMs
- Kernel Development
- Legacy Code Modernization
The article discusses using Claude Code to modernize a 25-year-old kernel driver, with the HN discussion highlighting the potential of LLMs to boost productivity and enable developers to tackle complex tasks.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 20m after posting
- Peak period: 130 comments (Day 1)
- Avg / period: 26.7
- Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Sep 7, 2025 at 7:53 PM EDT (4 months ago)
2. First comment: Sep 7, 2025 at 8:14 PM EDT (20m after posting)
3. Peak activity: 130 comments in Day 1, the hottest window of the conversation
4. Latest activity: Sep 18, 2025 at 4:20 AM EDT (4 months ago)
ID: 45163362 · Type: story · Last synced: 11/23/2025, 1:00:33 AM
One note: I think the author could have modified the sudoers file to allow loading and unloading the module without a password prompt.
Another thought: IIRC, the Claude Code plugins in my IDE let you "authorize" actions and intervene manually without having to leave the tool.
My point is that I think there were ways they could have avoided the copy/paste.
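For the curious, the sudoers change might look something like this. This is only a sketch: the username and command paths are assumptions, and the file should be edited with visudo:

```
# /etc/sudoers.d/kmod (hypothetical; edit with "visudo -f /etc/sudoers.d/kmod")
# Let user "dev" load/unload kernel modules without a password prompt.
dev ALL=(root) NOPASSWD: /usr/sbin/insmod, /usr/sbin/rmmod, /usr/sbin/modprobe
```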
That is a bit different than allowing unconfirmed loading of arbitrary kernel code without proper authentication.
Even a minor typo in kernel code can cause a panic; that’s not a reasonable level of power to hand directly to Claude Code unless you’re targeting a separate development system where you can afford repeated crashes.
> Use these tools as a massive force multiplier of your own skills.
Claude definitely makes me more productive in frameworks I know well, where I can scan and pattern-match quickly on the boilerplate parts.
> Use these tools for rapid onboarding onto new frameworks.
I’m also more productive here; this is an enabler for exploring new areas, and it's also a boon at big tech companies where there are just lots of tech stacks and frameworks in use.
I feel there is an interesting split forming in the ability to gauge AI capabilities: it kinda requires you to be on top of a rapidly changing firehose of techniques and frameworks. If you haven’t spent 100 hours with Claude Code / Claude 4.0, you likely don’t have an accurate picture of its capabilities.
“Enables non-coders to vibe code their way into trouble” might be the median scenario on X, but it’s not so relevant to what expert coders will experience if they put the time in.
One thing I love doing is developing a strong underlying data structure, schema, and internal API, then having CC essentially one-shot a great UI for internal tools.
Being able to think at a higher level beyond grunt work and framework nuances is a game-changer for my career of 16 years.
Something so complex that we cannot model it as deterministic is hence stochastic. We can just as easily model a stochastic thing as deterministic by ignoring the stochastic parts.
Separating the subjective appearance of things from how we can conceptualise them as models raises a deeper philosophical question: how can you talk about the nature of things you cannot perceive?
Actually no wait let’s expand it. Why not go say this to Ronnie O’Sullivan too!
The way you’re describing it implies there is no determinism behind what is being done. That's simply not true.
Throwing a dart could not be further from programming a computer. Programming is one of the most deterministic things we can do: if I write if(n>0), the computer will execute my intent with 100% accuracy. It won't compare n to 0.005.
You see arguments like yours a lot. It seems to be a way of saying "let's lower the bar for AI". But suppose I have a laser-guided rifle that I rely on for my food, and someone comes along with a bow and arrow and says "give it a chance, after all lots of things we do are inaccurate, like throwing darts for example". What would you answer?
It’s undeniable that humans exhibit stochastic traits, but we’re obviously not stochastic processes in the same sense as LLMs and the like. We have agency, error-correction, and learning mechanisms that make us far more reliable.
In practice, humans (especially experts) have an apparent determinism despite all of the randomness involved (both internally and externally) in many of our actions.
A few days ago I lost some data including recent code changes. Today I'm trying to recreate the same code changes - i.e. work I've just recently worked through - and for the life of me I can't get it to work the same way again. Even though "just" that is what I set out to do in the first place - no improvements, just to do the same thing over again.
It feels like toil because it's not the interesting or engaging part of the work.
If you're going to build a piece of furniture, the cutting, nailing, and gluing are the "boilerplate" you have to do around the act of creation.
LLMs are just nail guns.
Sand away! Enjoy copying and pasting your nails, or having LLMs apply your varnish or whatever. I hope it brings happiness.
Some amount of boilerplate probably needs to exist, but in general it would be better off minimized. For a decade or so there's sadly been a trend of deliberately increasing it.
It's rather saying that we should have parts that join without nailing by now, especially for things we do again and again and again and again.
The reason Japanese carpenters do (or did) that is that sea air and high humidity would absolutely rot anything with a nail or screw in it.
No furniture is really made from a single tree, though; trees aren't massive enough.
I agree with the overall sentiment, but the analogy is highly flawed. You can't compare physical things with software; physical things are way more constrained, while software is super abstract.
I very much enjoy the Japanese carpentry styles that exist though, off topic but very cool.
The other reason was that iron was very expensive in Japan as they had only low quality iron ore.
LLMs allow us to do large but cheap experiments that we would never attempt otherwise. That includes new architectures. Automation in the traditional sense is the opposite of plasticity (because it's optimizing and crystallizing around a very specific process), but what we're doing with LLMs isn't that. Every new request can be different. Experiments are more possible, not less. We don't have to tear down years of scaffolding like old automated systems. We just nudge it in a new direction.
Eventually, prog-lang designers will figure out how to get LLMs to create new prog-langs.
I actually think I like the idea that, by handing my boilerplate over to AI, maybe we can be more comfortable with having boilerplate to begin with.
No. That is a result of bad software engineering practices and stacks, not a symptom of proper abstraction.
It is possible to get much higher quality not just with oversight, but by creating alignment so that the stochastic agents have no choice but to converge reliably toward the desired work.
Human-in-the-loop AI is fine. I'm not sure that everything needs to be automated; it's entirely possible to get further and more reps in on a problem with the tool, as long as the human is the driver, using the stochastic agent as a thinking partner and not the other way around.
These days, people mostly use things like GHC.Generics (generic programming for stuff like serialization that typically ends up being free performance-wise), newtypes and DerivingVia, the powerful and very generalized type system, and so on.
If you've ever run into a problem and thought "this seems tedious and repetitive", the probability that you could straightforwardly fix that is probably higher in Haskell than in any other language except maybe a Lisp.
There are? For example, Rails has had boilerplate generation commands for a couple of decades.
And then… that just kind of dropped out of the discussion. Throw things at the wall as fast as possible, see what stuck, and deal with the consequences later. And to be fair, there were studies showing that choice of language didn't actually make as big a difference as the emotions behind the debates suggested. And then the web… committee-designed over years and years, with never the ability to start over. And lots of money meant that we needed lots of manager roles too. And managers elevate their status by having more people. And more people means more opportunity for specializations. It all becomes an unabated positive feedback loop.
I love that it’s meant my salary has steadily climbed over the years, but I’ve actually secretly thought it would be nice if there was a bit of a collapse in the field, just so we could get back to solid basics again. But… not if I have to take a big pay cut. :)
Here’s an incomplete list for those traits. For unusual, there are many of the FP languages, Ada, APL, Delphi/Object Pascal, JS, and Perl. For duck typing, there are Ruby, Python, PHP, JS, and Perl. For interpreted-only, there are Ruby, PHP, and Perl (and formerly, for some time, Python and JS). For syntax that’s not necessarily odd (but may be) but that lots of people find distasteful, there’s Perl, any form of Lisp, APL, Haskell, the ML family, Fortran, JS, and in some camps Python, PHP, Ruby, Go, or anything from the Pascal family. For big languages with lots of interacting parts, there’s Perl, Ada, PHP, Lisp with CLOS, and Julia. For slowdowns, there’s Julia, Python, PHP, and Ruby. The runtime for Perl is actually pretty fast once it’s up and running, but having to build the app before running it on every invocation makes for a slow start time.
All that said, certain orgs do impressive projects pretty quickly with some of these languages. Some do impressively quick work with even less popular languages like Pike, Ponie, Elixir, Vala, AppScript, Forth, IPL, Factor, Raku, or Haxe. Notice some of those are very targeted, which is another reason boilerplate is minimal. It’s built into the language or environment. That makes development fast, but general reuse of the code pretty low.
Haskell mostly solves boilerplate in a typed way and Lisp mostly solves it in an untyped way (I know, I know, roughly speaking).
To put it bluntly, there's an intellectual difficulty barrier associated with understanding problems well enough to systematize away boilerplate and use these languages effectively.
The difficulty gap between writing a ton of boilerplate in Java and completely eliminating that boilerplate in Haskell is roughly analogous to the difficulty gap between bolting on the wheels at a car factory and programming a robot to bolt on the wheels for you. (The GHC compiler devs might be the robot manufacturers in this analogy.) The latter is obviously harder, and despite the labor savings, sometimes the economics of hiring a guy to sit there bolting on wheels still works out.
You’re asking to shift this job from the editor (you) to the viewer (the browser).
A document/template inclusion model should be OK now, in the modern era, thanks to HTTP/3. Not really sure how that should ideally look, though.
You don't understand how things evolve.
There have been plenty of platforms that got rid of boilerplate, e.g. Ruby on Rails about 20 years ago.
But once they become the mainstream, people can get a competitive edge by re-adding loads of complexity and boilerplate, e.g. complex front-end frameworks like React.
If you want your startup to look good you've got to use the latest trendy front end thingummy
Also, to be fair, it's not just fashion. Features that would have been advanced 20 years ago become taken for granted as time goes on; hence we are always working at the current limit of complexity (and that's why we're always overrun with bugs and always coming up with new platforms to solve all the problems and get rid of all the boilerplate, so that we can invent new boilerplate).
I think it has. How much easier is it today than yester-decade to write and deploy an application to multiple platforms (and have it look/run similarly)?
How much less knowledge does it require now than before?
Lisp completely eliminates boilerplate and has been around for decades, but hardly anyone uses it because programs that use macros to eliminate boilerplate aren't easy to read.
In fact, we've collectively created thousands of them, and all of them are various flavors of mid.
Now we have a way we can get computers to do it!
Because everyone needs boilerplate, but it's different boilerplate for everyone unless you're doing the most basic toy apps.
Python’s subprocess, for example, has a lot of args, and that reflects the reality that creating processes is finicky and there are a lot of subtly different ways to do it. Getting an LLM to understand your use case and create a subprocess call for you is much more realistic than imagining some future version of subprocess where the options are just magically gone and it knows what to do, or where we've standardized on only one way to do it, one thing that happens with the pipes, one thing for the return code, and all the rest of it.
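As a rough illustration of how many decisions even one process launch involves, here is a minimal sketch (my example, not from the thread):

```python
import subprocess

# Even a "simple" launch forces choices about pipes, encoding, timeouts,
# error handling, and the shell -- there is no one-size-fits-all default.
result = subprocess.run(
    ["ls", "-l", "/tmp"],  # argv as a list; avoids shell injection
    capture_output=True,   # wire up stdout/stderr pipes
    text=True,             # decode output to str instead of raw bytes
    timeout=5,             # don't hang forever on a stuck child
    check=True,            # raise CalledProcessError on nonzero exit
)
print(result.stdout)
```

Each of those keyword arguments encodes a decision that a magical no-options API would have to guess.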
There is no software you could possibly write that works for everything that'd be as good as "Give me an internal dashboard with these features".
They weren’t just saying ‘AI writes the boilerplate for me.’ They were saying: once you’ve written the same glue the 3rd, 4th, 5th time, you can start folding that pattern into your own custom dev tooling.
AI not as a boilerplate writer, but as an assistant for building out a personal scaffolding toolset quickly and organically. Or maybe you think that should be more systematized and less personal?
I've internalized this lesson just this week: it took creating a small project with 10 clear repetitions, messily made from AI input. But then the magic is running 'consolidation' tasks, where you can just guide it into unifying markup, styles/JS, whatever you may have on your hands.
I think it was less obvious to me in my day job because in a startup with a lack of strong coding conventions, it's harder to apply these pattern-matching requests since there are fewer patterns. I can imagine in a strict, mature codebase this would be way more effective.
Piling shit on top of shit only pays off on very short time scales - like a month or two. Because once you revisit that shit code all your time savings are out the window. If you have to revisit it more than once you probably slowed yourself down already.
"Use these tools as a massive force multiplier of your own skills" is a great way to formulate it. If your own skills in the area are near-zero, multiplying them by a large factor may still yield a near-zero result. (And negative productivity.)
It seems to me that LLMs help the most at the initial step of getting into some rabbit hole - when you're getting familiar with the jargon, so you can start reading some proper resources without being confused too much. The sooner you manage to move there, the better.
I was already knowledgeable enough in these topics to catch these, but some were dangerously subtle. Really, the only way to use LLMs to actually learn anything beyond trivial is to actively question everything it prints out and never move forward until you actually grasp the thing and can verify it. It still feels helpful to me to use it this way, but it's hard to tell how it compares to learning from a good and trustworthy resource in terms of efficiency. It's hard to unlearn something and try to learn it again another way to compare ;P
It seems to me that if you have been pattern matching for the majority of your coding career, and then you have an LLM agent pattern match on top of that, it results in a lot of headaches for the people on the team who haven't been doing that.
I think LLM agents are supremely faster at pattern matching than humans, but are not as good at it in general.
That just points to the fact that they've no idea what they're doing and would produce different, pointless code by hand, though much more slowly. This is the paradigm shift: you need a much bigger sieve to filter out the many more orders of magnitude of crap that inexperienced operators of LLMs create.
You cannot outsource thinking to LLMs, at least not yet, if ever. You have to be part of the whole process. You need to have knowledge. If you have no idea what it is doing or what you want it to do, you are going to have a difficult time.
The programming language eliminates some errors (incorrect syntax) while the type system gets rid of others (contract errors). We also have linters that help us with harmful patterns. But the range of errors is still enormous. So what’s the probability of the LLM's output being error-free, or as close as possible to the intended result?
We as humans have reduced the probability of error by having libraries of correct code (or outsourcing the correction of code), thus having a firmer and cognitively manageable foundation on which to create new code, as well as not having to rely on language to solve problems.
I just don’t see it like this; code is craft, and there are ten ways to solve any given problem. Reasonable people can select different tradeoffs, and taste is also a big factor.
Maybe if you are working in very low-level algorithmic, compiler, or protocol development it’s less ambiguous. But almost all software is written many layers above that.
I’m sure if you already sat down and thought through every detail, you might find LLMs slow you down vs typing. Many people use the process of writing, or the process of iterating with customers, to flesh out the ambiguous detail; in which case improving cycle time can improve your time to PMF.
In my case it is not slower, so it works for me. I cannot speak for others.
Maybe if all you do is code, but that’s not how most people work. Being able to write "I need these things done in this way" and then attend a meeting or start researching the next thing is valuable. And because of my other obligations, there’s no way I could do more without Claude.
Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code. I have to make all the decisions, and guide it, but I don't need to learn Ruby to write acceptable-level code [0]. I get to be immediately productive in an unfamiliar environment, which is great.
[0] acceptable-level as defined by the rest of the team - they're checking my PRs.
> Also new languages - our team uses Ruby, and Ruby is easy to read, so I can skip learning the syntax and get the LLM to write the code.
If Ruby is "easy to read" and assuming you know a similar programming language (such as Perl or Python), how difficult is it to learn Ruby and be able to write the code yourself?
> ... but I don't need to learn Ruby to write acceptable-level code [0].
Since the team you work with uses Ruby, why do you not need to learn it?
> [0] acceptable-level as defined by the rest of the team - they're checking my PRs.
Ah. Now I get it.
Instead of learning the lingua franca and being able to verify your own work, "the rest of the team" has to make sure your PRs will not obviously fail.
Here's a thought: has it crossed your mind that team members needing to determine if your PRs are acceptable is "a bad thing", in that it may indicate a lack of trust in the changes you have been introducing?
Furthermore, does this situation qualify as "immediately productive" for the team or only yourself?
EDIT:
If you are not a software engineer by trade and are instead a stakeholder wanting to formally specify desired system changes to the engineering team, an approach to consider is authoring RSpec[0] specs to define feature/integration specifications instead of PRs.
This would enable you to codify functional requirements such that their satisfaction is provable, assist the engineering team's understanding of what must be done in the context of existing behavior, identify conflicting system requirements (if any) before engineering effort is expended, provide a suite of functional regression tests, and serve as executable documentation for team members.
0 - https://rspec.info/features/6-1/rspec-rails/feature-specs/fe...
No, not at all.
What I was speaking about was if the person to whom I replied is not a s/w engineer, then perhaps a better contribution to their project would be to define requirements in the form of RSpec specifications (since Ruby is in use) and allow the engineering team to satisfy them as they determine appropriate.
I have seen product/project managers attempt to "contribute" to a development effort much like what was described. Usually there is a power dynamic such that engineers cannot overtly tell the manager(s), "you define the 'what' and we will define the 'how'." Instead, something like the PR flow described is grudgingly accepted and then worked around.
To address your comments about PRs: without the LLM I would be submitting shitty PRs with lots of basic Ruby mistakes. With the LLM I am submitting PRs that are on a par with everyone else's PRs (Ruby has many ways of doing the same thing, so most suggested changes to my PRs are the usual "or you could do it this way and that might be more elegant" discussions). It's not that the rest of the team are picking up my slack, it's actually better this way.
I was a bit sceptical when I started, and like you I assumed that I would end up having to learn Ruby, but in fact it's working well.
As a s/w engineer with 30+ years of experience, I assume you agree that in order to become proficient in a programming language one must go through the process of learning its syntax and idioms. Yet when you say:
This contradicts my understanding of what you originally stated. IMHO, this is how s/w engineers learn quickest, assuming an environment which supports an open learning process. There are no shortcuts to achieving understanding. Maybe we just have very different opinions on the learning process, and/or maybe I lack the context required to understand your situation. In any event, best of luck in your endeavours.
EDIT:
For some reason I cannot reply to your reply to this message in order to share this resource:
I found it a very entertaining read and one of the best language tutorials I have ever found. Hopefully you find it as useful as well.
0 - https://poignant.guide/book/chapter-1.html
I think the key point here is that I'm not trying to learn Ruby. We're trying to get a single project done in Ruby. I'm the best person to do the project, Ruby is the best language to do it in, but I don't know Ruby.
If I was trying to learn Ruby, this is not the way I'd do it, and I'd go up the learning curve as normal, writing all those shitty PRs and making all the mistakes as normal. As you say, there are no shortcuts to achieving understanding.
Now I can reply to your message (can't say why I couldn't before, so moving on).
Below is the added content in the event you were unaware of the previous message edit. In addition are three other resource links which may be beneficial to your project. The last one, nokogiri[3], is least likely to be applicable in general but is simply too cool to omit.
I found it a very entertaining read and one of the best language tutorials I have ever found. Hopefully you find it as useful as well.
0 - https://poignant.guide/book/chapter-1.html
1 - https://www.rubyguides.com/2018/07/rspec-tutorial/
2 - https://github.com/cucumber/cucumber-ruby
3 - https://nokogiri.org/
This reminds me of some of the comments made by reviewers during the infamous Schön scientific fraud case. The scientific review process is designed to catch mistakes and honest flaws in research. It is not designed to catch fraud, and the evidence shows that it is bad at catching it.
Another applicable example would be the bad patches fiasco with the Linux kernel. (And there is going to be a session at the upcoming maintainers' summit about LLM-generated kernel patches.)
I lead the engineering team at my org, and we hire almost exclusively C++ engineers (we make games). Our build system, by happenstance, is written in C#, as are all the automation scripts. That's out of our control to change. Should we require every engineer to be competent and able to write fluent C#, or should we let them just get on with their value adds?
I would expect every engineer to be able to read C#. It’s not that hard.
Reading code doesn't mean you can write it, as any programmer will tell you.
If I want to know whether a string in Ruby begins with another string, is the method starts_with or start_with or startswith like Python, or is it like Perl, where I have to use some completely different method? I don't know; better google it.
But if I'm reading and see `str.start_with?("https://")` I know instantly what it's doing.
Great skill multiplier, right?
Then I need to expend extra time following everything it did so I can "fix" the problem.
A lot of people have the "try it and see if it works" approach. That can be insanely wasteful in any moderately complex system. The scientist's way is to have a model that reduces the system to a few parameters. Then you’ll see that a lot of libraries are mostly surface work and slightly modified versions of the same thing.
One thing where it hasn't shone is configuring my production deployment. I had set this project up with a docker-compose, but my selected CI/CD (Gitlab) and my selected hosting provider (DigitalOcean) seemed to steer me more towards Kubernetes, which I don't know anything about. Gitlab's documentation wanted me to setup Flux (?) and at some point referred to a Helm chart (?)... All words I've heard but their documentation is useless to newcomers ("manage containers in production!": yes, that's obviously what I'm trying to do... "Getting started: run this obscure command with 5 arguments": wth is this path I need to provide? what's this parameter? etc.) I honestly can't believe how complex the recommended setup is, to ultimately run 2 containers that I already have defined in ~20 lines of docker-compose...
Claude got me through it. Took it about 5-6 hours of trying stuff, build failing, trying again. And even then, it still doesn't deploy when I push. It builds, pushes the new container images, and spins up a new pod... which it then immediately kills because my older one is still running and I only want one pod running... Oh well, I'll just keep killing the old pod until I have some more energy to throw at it to try and fix it.
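If the cause is what it sounds like, a single-replica Deployment using the default RollingUpdate strategy would behave exactly this way: the new pod can't become ready to replace the old one. A minimal sketch of the usual fix, the Recreate strategy, where the names and image are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical name
spec:
  replicas: 1
  strategy:
    type: Recreate             # stop the old pod before starting the new one
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: registry.example.com/my-app:latest   # hypothetical image
```

The tradeoff is a brief window of downtime on each deploy, which is usually fine for a small personal project.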
TL;DR: it's much better at some things than others.
Some folks seem to like Docker Swarm before reaching for Kubernetes as well, and I've found it's not bad for personal projects, for sure.
AI will always return the average of its corpus given the chance (or given no clear direction in the prompt). I usually let my opinions rip and tell it to avoid building me a stack temple to my greatness. It often comes back with a nice lean stack.
I usually avoid or minimize JavaScript libraries because of their brittleness, and because their complexity can eat up more of the AI's context and awareness in mapping the abstractions, versus something it knows incredibly well.
Python is great, though its web stuff is still emerging. FastAPI is handy, and putting something like Pico/HTMX/Alpine.js on the front end seems reasonable.
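For what it's worth, a minimal sketch of that kind of stack, with FastAPI serving an HTMX-driven page (the routes and markup here are made up for illustration):

```python
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()

@app.get("/", response_class=HTMLResponse)
def index() -> str:
    # The page pulls in HTMX; the button fetches and swaps in a fragment.
    return (
        '<script src="https://unpkg.com/htmx.org@1.9.12"></script>'
        '<button hx-get="/greeting" hx-swap="outerHTML">Greet</button>'
    )

@app.get("/greeting", response_class=HTMLResponse)
def greeting() -> str:
    # HTMX replaces the button with this server-rendered fragment,
    # so no client-side framework is needed.
    return "<p>Hello from the server</p>"
```

Run it with e.g. `uvicorn main:app --reload` (assuming the file is main.py).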
Laravel is also really hard to overlook sometimes when working with LLMs on quick things; there's so much working code out there that it can really get a ton done for an entire production environment with all of the built-in tools.
Happy to learn about what other folks are using and liking.
It took a few prompts but I know enough about FFS (the Amiga filesystem) to guide it, and it created exactly the tool I wanted.
"force multiplier of your own skills" is a great description.
158 more comments available on Hacker News