Ubuntu LTS Releases to 15 Years with Legacy Add-on
Posted about 2 months ago · Active about 1 month ago
canonical.com · Tech · story
Tone: calm, positive · Debate: 0/100
Key topics: Ubuntu, LTS releases, Linux distributions, Canonical
Canonical expands Ubuntu LTS support to 15 years with a Legacy add-on.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 3d after posting
- Peak period: 87 comments (72-84h)
- Avg / period: 26.7
Key moments
- 01 Story posted: Nov 20, 2025 at 7:47 AM EST (about 2 months ago)
- 02 First comment: Nov 23, 2025 at 12:06 AM EST (3d after posting)
- 03 Peak activity: 87 comments in 72-84h (the hottest window of the conversation)
- 04 Latest activity: Nov 27, 2025 at 10:57 AM EST (about 1 month ago)
ID: 45992035 · Type: story · Last synced: 11/22/2025, 6:20:59 AM
Should be mandatory for home automation systems. Support must outlive the home warranty.
> 30-day trial for enterprises. Always free for personal use.
> Free, personal subscription for 5 machines for you or any business you own
This "Pro" program also being free is a suprise to be sure, but a welcome one.
And besides the GUI, all the Unixes were way more cutting edge than anything Windows except NT. Only when that went mainstream with XP did it become serious.
I know your 20-year timeframe is after XP's release, but I just wanted to point out there was a time when the Unixes were way ahead. You could even get common software like WP, Lotus 1-2-3 and even Internet Explorer and the consumer Outlook (I forget the name) for them in the late 90s.
Could you please elaborate?
IBM made it super suit and tie. Geriatric colour schemes with dark colours, formal serif fonts and anything cool removed.
Functionally it was the same (even two or three features were added) but it went from "designed for people" to "designed for business". Like everything that IBM got their hands on in those days (these days they make nothing of consequence anymore anyway, they're just a consulting firm).
It was really disappointing to me when we got the "upgrade". And HP was really dismissive of VUE because they wanted to protect their collaboration deal.
I think 10.30 was peak HP-UX. 11 and 11i were the decline.
Being stuck on Ubuntu 14.04, you can actually look out the window and see what you are missing by staying in the past. It hurts.
There are more people like that than one might think.
There's a sizable community of people who still play old video games. There are people who meticulously maintain 100 year old cars, restore 500 year old works of art, and find their passion in exploring 1000 year old buildings.
The HN front page still gets regular posts lamenting loss of the internet culture of the 80s and 90s, trying to bring back what they perceive as lost. I'm sure there are a number of bearded dudes who would commit themselves to keeping an old distro alive, just for the sake of not having to deal with systemd for example.
I went to the effort of reverse engineering part of Rollercoaster Tycoon 3 to add a resizable windowed mode and fix its behaviour with high-poll-rate mice... It can definitely be interesting to make old games behave on newer platforms.
I don't think so: there are Debian forks that aspire to fight against the horrors of GNOME, systemd, Wayland and Rust, but they don't attract people to work on them.
The forks are all volunteer projects (except Ubuntu), so having slightly different opinions isn't the same as capitalism being the driving force.
Sure, you won't get the niceties of modern developments, but at least you have access to all of the source code and a working development environment.
The biggest problem is fixing security flaws that don't have 'simple' patches. I imagine that they are going to have problems accurately determining vulnerability in older code bases where the code is similar, but not the same.
That sounds like a fun job actually.
However, things get annoying once something ends up on some priority list (like the Known Exploited Vulnerabilities list from CISA), you ship the software in a much older version, and there is no reproducer and no isolated patch. What do you do then? Rebase to get the alleged fix? You can't even tell if the vulnerability was present in the previous version.
There are known exploited vulnerabilities without PoC? TIL and that doesn't sound fun at all indeed.
At last count I'd written close to 2000 reproducers and approx 400 of those were local privesc for product security.
Security teams are usually highly discouraged from sharing exploits/reproducers as they have leaked in the past. My Spectre/Meltdown reproducer ended up on the web and someone else took credit; sad.
Imagine a piece of software that is on some LTS, but it's not that popular. Bash is going to be used extensively, but what about a library used by one package? And the package is used by 10k people worldwide?
Well, many of those people have moved on to a newer version of a distro. So now you're left with 18 people in the world, using 10 year old LTS, so who finds the security vulnerabilities? The distro sure doesn't, distros typically just wait for CVEs.
And after a decade, the codebase is often diverged enough, that vulnerability researchers, looking at newer code, won't be helpful for older code. They're basically unique codebases at that point. Who's going through that unique codebase?
I'd say that a forked, LTS apache2 (just an example) on a 15 year old LTS is likely used by 17 people and someone's dog. So one might ask, would you use software which is a security concern, let's say a http server or what not, if only 18 people in the world looked at the codebase? Used it?
And are around to find CVEs?
This is a problem with any rarely used software. Fewer hands on, means less chance of finding vulnerabilities. 15 year old LTS means all software is rare.
And even though software is rare, if an adversary finds out it is so, they can then play to their heart's content, looking for a vulnerability.
The Pro subscription isn’t free and clearly Canonical think they will have enough uptake on old versions to justify the engineering spend. The market will tell them if they’re right soon. It will be interesting to watch. So far it seems clear they have enough Pro customers to think expanding it is profitable.
Likewise, the number of black hats searching for vulnerabilities in these versions is probably zero, since there isn't a deployment base worth farming.
Unless you're facing something targeted at you that an adversary is going to go to huge expense to try to find fresh vulnerabilities specifically in the stack you're using, you're probably fine.
I agree with your sentiment that no known vulnerabilities doesn't mean no vulnerabilities, but my point is that the risk scales down with the deployment numbers as well.
And always keeping up with the newest thing can be more dangerous in this regard: new vulnerabilities are being introduced all the time, so your total exposure window could well be larger.
(Lucky for you if you excluded anything close to browsers and GUIs from your LTS offering.)
I remember a long time ago one of our clients was a bank; they had 2 datacenters with a LACP router, SPARC machines, Solaris, VxFS, Sybase and a Java app. They survived 20 years of app, OS and hardware upgrades with 0 seconds of downtime. And I get lectured by a developer with 3 years of experience that I should know better.
If it's that easy, then why aren't they doing it instead of you? Yeah, I thought so.
This is where devops came from. Developers saw admins and said, "I can do that in code!" Every time egotistical, eager-to-please developers say something is easy, the business says: OK, do it.
This is also where agile (developers doing project management) comes from.
I also love doing stuff that has long term stability written all over it. In my 20 year career of trying to do that through various roles, I've learnt that it comes with a number of prerequisites:
1. Minimising & controlling your dependencies. Ensuring code you own is stable long term is an entirely different task to ensuring upstream code continues to be available & functional. Pinning only goes so far when it comes to CVEs.
2. Start from scratch. The effort to bring an inherited codebase that was explicitly not written with longevity in mind into line with your own standards may seem like a fun challenge, but it becomes less fun at a certain scale.
3. Scale. If you're doing anything in (1) & (2) to any extent, keep it small.
Absolutely none of the above is remotely applicable to a project like Ubuntu.
Would it be existing teams in the main functional areas (networking, file systems, user space tools, kernel, systemd &c) keeping the packages earmarked as 'legacy add-on' as they age out of the usual LTS, old LTS, oldold LTS and so on?
Or would it in fact be a special team, with people spending most of their working week on the legacy add-on?
Does Canonical have teams that map to each release, tracking it down through the stages or do they have functional teams that work on streams of packages that age through?
But that's not what happens here; this is probably mostly backporting security fixes to older versions. I haven't done that in any meaningful amount, but why wouldn't you find a sense of purpose in it? And if you do, why wouldn't it be fun?
If you venture even five feet into the world of enterprise software (particularly at non-tech companies) you will discover that fifteen years isn't a very long time. When you spend many millions on a core system that is critical to your business, you want it to continue working for many, many years.
https://documentation.ubuntu.com/ubuntu-for-developers/refer...
14.04 LTS has Python 3.4 as well as Python 2.7.
E.g. https://github.com/ActiveState/cpython
There are others.
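To make that constraint concrete: anything running on the stock 14.04 interpreter is limited to Python 3.4 features, so f-strings (3.6) and subprocess.run (3.5) are unavailable. A small illustrative sketch of the kind of version guard such code ends up carrying; the helper is hypothetical, but the version cut-offs are standard library facts:

```python
# Illustration only: running a command in a way that still works on the
# Python 3.4 shipped with Ubuntu 14.04, where subprocess.run() (3.5+) and
# f-strings (3.6+) do not exist yet.
import subprocess
import sys

def run_cmd(args):
    if sys.version_info >= (3, 5):
        # Modern interpreters: subprocess.run returns a CompletedProcess.
        return subprocess.run(args, stdout=subprocess.PIPE).stdout.decode()
    # Python 3.4 fallback: check_output has been around since 2.7/3.1.
    return subprocess.check_output(args).decode()

# str.format instead of an f-string, for the same reason.
print("uname says: {}".format(run_cmd(["uname", "-r"]).strip()))
```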
[0] https://access.redhat.com/support/policy/updates/errata
So now, what do they do? Spend thousands of hours upgrading the soon-to-be-replaced fleet anyway, or ask their vendor if they could, pretty please, extend LTS for another two years?
If Ubuntu can spread the cost between enough (or large enough) customers, why not?
Containers reuse the host system's kernel, while inside I get Ubuntu 22.04. I don't see a good reason, if 22.04 will get 15 years of support, to upgrade it much. It's a perfect combination for me, keeping the project on 22.04 essentially forever, as long as my 22.04 build container can still build the new version.
In cluster mode, you can move a container to another machine without downtime, back it up in full, etc., all with a single command.
In theory, when using ZFS or btrfs, you can do incremental backups of the snapshots (send only the diff), but I never tried it.
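For the incremental idea mentioned above, the usual ZFS pattern is to take a new snapshot and then send only the delta between it and the previous one (`btrfs send -p` is the analogous operation). A minimal sketch shelling out to the standard ZFS commands; the dataset and snapshot names are made-up placeholders:

```python
# Sketch of an incremental snapshot backup with ZFS: send only the blocks
# that changed between the previous snapshot and a new one to a backup pool.
# Dataset and snapshot names below are placeholders for illustration.
import subprocess

DATASET = "tank/containers"    # hypothetical source dataset
BACKUP = "backup/containers"   # hypothetical target dataset

def snapshot(name):
    subprocess.run(["zfs", "snapshot", "{}@{}".format(DATASET, name)], check=True)

def incremental_send(prev, new):
    # `zfs send -i prev new` emits only the delta between the two snapshots;
    # pipe that stream into `zfs receive` on the backup dataset.
    send = subprocess.Popen(
        ["zfs", "send", "-i",
         "{}@{}".format(DATASET, prev),
         "{}@{}".format(DATASET, new)],
        stdout=subprocess.PIPE,
    )
    subprocess.run(["zfs", "receive", "-F", BACKUP], stdin=send.stdout, check=True)
    send.stdout.close()
    if send.wait() != 0:
        raise RuntimeError("zfs send failed")

if __name__ == "__main__":
    snapshot("daily-2")
    incremental_send("daily-1", "daily-2")
```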
Imagine the world of pain when the time comes to upgrade the software to Ubuntu 37.04.
OTOH, there is a desire from a group of kernel developers to implement the code they contribute to the project in Rust. They want new shit to be working, and they write support for it in a language they consider suitable for implementing that support faster, more maintainably and more safely than in C. Should those people be held back by support for architectures that haven't seen new hardware in decades? Would that imply that the kernel developers cannot decide to drop support for old architectures? What would any such requirement mean for the long-term future of the Linux kernel?
The major argument you get from "why are you using Windows 7" is exactly this: companies in infrastructure argue that they still get a supported operating system in return (despite the facts, despite EOL, despite the reality that MS isn't actually patching, just disclosing new vulnerabilities).
And currently there's a huge migration problem because Microsoft Windows 11 is a non-deterministic operating system, and you can't risk a core meltdown because of a popup ad in explorer.exe.
I have no idea why Microsoft is sleeping at the wheel so much, literally every big industry customer I've been at in Europe tells me the exact same thing, and almost all of them were Windows customers, and are now migrating to Debian because of those reasons.
(I'm a proponent of Linux, but if I were a proponent of Windows I'd ask myself wtf Microsoft has been doing for the last 10 years since Windows 7.)
Its compatibility is one of the best.
After all no game from >10 years ago runs any longer.
Stop lying, I can run the original Doom, SimCity and Red Alert on my Windows 10.
You just need to click "run compatibility as Windows xxx" and then you can run it.
On the client side where this “non-deterministic” OS issue is far more advanced, moving away is so rare it’s news when it happens. On the data center side I’ve seen it more as consolidation of the tech stack around a single offering (getting rid of the few Windows holdouts) and not substantially Windows based companies moving to Linux.
Even Azure, the new major revenue stream of Microsoft is built on Linux!
Exactly, and has been for some time now. MS wasn’t asleep at the wheel, they just stopped caring about your infra. The money’s in the cloud now, especially the SaaS you neither own nor control.
My question was whether these large companies moving away from Windows are just clearing out the last remnants of the OS, rather than only now shifting a sizable Windows footprint to Linux.
I’m trying to understand what was OP reporting. On the user side almost nobody is moving their endpoints to Linux, on the DC side almost nobody has too many Windows machines left to move to Linux after years of already doing this. The trend was apparent for years and years.
On the plus side, businesses and administrations work with dates in the future a lot (think contract lifetimes, leases, maintenance schedules etc.), so hopefully that flushes out many of the bugs ahead of time.
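One concrete class of bug that future-dated business data can flush out early is the 32-bit time_t rollover in January 2038, which a 15-year support window now squarely overlaps. A tiny illustration of how a post-2038 contract end date breaks anything that still squeezes timestamps into a signed 32-bit field:

```python
# Illustration: a contract end date beyond January 2038 no longer fits into a
# signed 32-bit Unix timestamp, the classic "year 2038" failure mode.
import struct
from datetime import datetime, timezone

contract_end = datetime(2040, 1, 1, tzinfo=timezone.utc)  # example future date
ts = int(contract_end.timestamp())
print("seconds since epoch:", ts)

try:
    struct.pack("<i", ts)  # a signed 32-bit field, as in many old formats/DBs
except struct.error as exc:
    print("does not fit in 32 bits:", exc)
```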
SecureMetrics will scan your system, find an "old" ssh version and flag you for non-compliance, even though your ssh was actually patched through LTS maintenance. You will then need to address all the vulnerabilities they think you have and provide "proof" that you are running a patched version (I've been asked for screenshots…).
Took us a while to find the right ones.
Even worse, someone is overzealous, because you will get SecureMetrics on your back even if you are below the PCI thresholds.
There's the CVE tracker you can use to ~argue~ establish that the versions you're using either aren't affected or have been patched.
https://ubuntu.com/security/cves
https://ubuntu.com/security/CVE-2023-28531
so ymmv
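Beyond the web tracker, the evidence that usually settles these disputes is the package changelog: Ubuntu security updates list the CVE IDs they fix even though the upstream version string stays old. A minimal sketch of checking that locally; the package and CVE are just the example linked above, and the changelog path is the standard Debian location (it can be trimmed on minimal installs):

```python
# Sketch: show that an "old-looking" package version actually contains the
# backported fix for a given CVE by checking its Debian changelog, which
# Ubuntu security updates annotate with the CVE IDs they address.
import gzip
import subprocess

def installed_version(package):
    # Ask dpkg for the exact installed version string.
    return subprocess.check_output(
        ["dpkg-query", "-W", "--showformat=${Version}", package]
    ).decode()

def changelog_mentions(package, cve_id):
    # Standard location of the packaged changelog on Debian/Ubuntu systems.
    path = "/usr/share/doc/{}/changelog.Debian.gz".format(package)
    with gzip.open(path, "rt", errors="replace") as fh:
        return any(cve_id in line for line in fh)

if __name__ == "__main__":
    pkg, cve = "openssh-server", "CVE-2023-28531"  # example from the thread
    print(pkg, installed_version(pkg))
    print("changelog mentions", cve, ":", changelog_mentions(pkg, cve))
```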
(To be fair to Canonical, the upgrade from 20.04 to 24.04 via 22.04 went decently well. Despite some UEFI register running out of memory and the installation being interrupted, it resumed every time to complete the upgrade. Three servers and a laptop came back up with full functionality. Even Unity seems to work.)
But not for 14.04. 14.04 was released before all this container nonsense, and it is a coherent userspace of Canonical packages. I can tell you from personal experience over the last decade (using the free version) that it's worked flawlessly.
But the availability of a 15-year LTS is also a good argument for Linux in some corporate decision-making.
If you are not able to upgrade your stuff every 2 to 3 years, then you will not be able to upgrade it after 5, 10 or 15 years. After such a long time, that untouched pile of cruft will be considered legacy, built by people long gone. It will be a massive project, an entire rebuild/refactor/migration of whatever you have.
"If you do not know how to do planned maintenance, then you will learn with incidents"
Keeping your infrastructure/code somewhat up to date ensures:
- each time you have to upgrade, it is not a big deal
- you have fewer breaking changes at each iteration, thus less work to do
- when you must upgrade for some reason, the step is, again, not so big
- you are sure you own the infrastructure: the current people own it (versus people who left the company 8 years ago)
- you benefit from innovation (yes, there is some) and/or performance improvements (yes, there are)
Keeping your stuff rotting in a dark room brings nothing good
I'd much rather stand up a replacement system adjacent to the current one, and then switch over, than run the headache of debugging breaking changes every single release.
To me, this is the difference between an update and an upgrade. An update just fixes things that are broken. An upgrade adds/removes/changes features from how they were before.
I'm all for keeping things up to date. And software vendors should support that as much as possible. But forcing me to deal with a new set of challenges every few weeks is ridiculous.
This idea of rapid releases with continuous development is great when that's the fundamental point of the product. But stability is a feature too, and a far more important one in my opinion. I'd much rather a stable platform to build upon, than a rickety one that keeps changing shape every other week that I need to figure out what changed and how that impacts my system, because it means I can spend all of my time _using_ the platform rather than fixing it.
This is why bleeding edge releases exist. For people who want the latest and greatest, and are willing to deal with the instability issues and want to help find and squash bugs. For the rest of us, we just want to use the system, not help develop it. Give me a stable system, ship me bug fixes that don't fundamentally break how anything works, and let me focus on my specific task. If that costs money, so be it, but I don't want to have to take one day per week running updates to find something else is broken and have to debug and fix it. That's not what I'm here to do.
And as for cleaning the house - we always have the option of hiring a cleaner. That costs us money, but they keep the house cleanliness stable whilst we focus on something else to make enough money to cover the cleaner's cost plus some profit.
And also because, for the others, you have to migrate everybody from the "old" to the "new": large project, low value, nobody cares, "just do your job and don't bother us with your shit".
There is an argument for staying on the latest stable version.
To be specific, network interface names changed at least 2 times in the last 4 releases.
What you would do for anything important is build a new separate system and then migrate to that once it is working. You can then migrate back if you discover issues too.
Perhaps this is a side effect of dealing with software development ecosystems with huge dependency trees?
There's a lot of software not like that at all. No dependencies. No internet connection. No multi kilobyte lock files detailing long lists of continual software churn and bug fixes.
An OS is not a physical house filling up with the waste of daily life.
The rest of your message doesn't make any sense for the majority of the industry. For anything dealing with manufacturing, stability is much more important than marginal performance gains. Any downtime is losing money.
What part of that process needs to change every 2-3 years? Because some 'angel investor' says we need growth which means pushing updates to make it appear like you're doing something?
old.reddit has worked the same for the last 10 years now, new.reddit is absolutely awful. That's what 2-3 years of 'change' gets you.
In fact, this website itself remains largely the same. Why change for the sake of it?
Why not clean the room only once every 2-3 years?
"because that's what you do" is not a valid justification.
Then one day people's health or finances dwindle, they need to move to a place without stairs or to a city centre closer to amenities such as groceries, a pharmacy and healthcare, without relying on a car they cannot safely drive anymore, and moving becomes a huge task. Or they die and their survivors have to take on the burden of emptying/donating/selling all that shit accumulated over the years.
Every move I assessed what I really needed and what I didn't and I think my life is better thanks to that.
I understand this is a YMMV thing. I am not saying everyone should move every couple of years. But to many people it isn't that big of a deal, and it can also be seen in a very positive light.
Or they could spend a weekend and get rid of that stuff for 10% of the stress of moving.
Remove that, and tell everybody: "Hey, for 30 minutes of your time, you can get a new car every 6 months."
See how everybody will get new cars :)
It's the kind of rhetoric that enables shoving user-hostile features into a simple update. And breaking many use cases. Quite common in the FOSS/Linux mentality, not so much in the rest of the world.
Major operating system version upgrades can be more akin to upgrading all the furniture and electronics in my house at the same time.
Operating systems in particular need to manage the hardware, manage memory, manage security, and otherwise absolutely need to shut up and stay out of the fucking way. Established software changes SLOWLY. It doesn't need to reinvent itself with a brand new dichotomy every 3 years.
Nobody builds a server because they want to run the latest version of Python. They built it to run the software they bought 10 years ago for $5m and for which they're paying annual support contracts of $50k. They run what the support contracts require them to run, and they don't want to waste time with an OS upgrade because the cost of the downtime is too high and none of the software they use is going to utilize any of the newly available features. All it does is introduce a new way for the system to fail in ways you're not yet familiar with. It adds ZERO value because all we actually want and need is the same shit but with security patches.
Genuinely I want HN to understand that not everyone is running a 25 person startup running a microservice they hope to scale to Twitter proportions. Very few people in IT are working in the tech industry. Most IT departments are understaffed and underfunded. If we can save three weeks of time over 10 years by not having to rebuild an entire system every 3 years, it's very much worth it.
Here, I'm in charge of some low-level infrastructure components (the kind on which absolutely everything relies; 5 seconds of downtime = 5 seconds of everything being down).
On one part of my scope, I've inherited a 15-year-old junkyard.
The kind with yearly support.
The kind that costs millions.
The kind that is so complex, and has seen so little evolution over the years, that nobody knows it anymore (even the people who were there 15 years ago).
The kind that slows everybody else down because it cannot meet other teams' needs.
Long story short, I've got a flamethrower and we are purging everything
Management is happy, customers are happy too, my mates also enjoy working with sane tech (and not braindamaged shit)
The one that sucks was a so-so compromise back in the day, and became a worse and worse compromise as better solutions became possible. It's holding the users back, and is a source of regular headaches. Users are happy to replace it, even at the cost of a disruption. Replacing it costs you but not replacing it also costs you.
The one that works just works now, but it used to, too. Its users are fine with it, feel no headache, and loathe the idea of replacing it. Replacing it is usually a costly mistake.
It slowly rots, like everything else.
Of course, you can have stuff running in a constrained environment.
In my personal experience, this could mean that you're really good or that you're completely incompetent and unaware that computers need to be plugged into a power outlet to function.
Oopsie you got pwned and now your database or factory floor is down for weeks. Recovery is going to require specialists and costs will be 10 times what an upgrade would have cost with controlled downtime.
In a factory, access is the primary barrier.
It's like an onion, outer surface has to be protected very well, but as you get deeper in the zone where less and less services have access then the risk / urgency is usually lowered.
Many large companies are consciously running with security issues (even Cloudflare, Meta, etc).
Yes, on paper it's better to upgrade; in the real world, it's always about assessing the risk/benefit balance.
Sometimes updates can bring new vulnerabilities (e.g. if you upgrade from Windows 2000 to the "better and safer" Windows 11).
In your example, you are guaranteed to take the factory floor down (for an unknown amount of time; what if PostgreSQL does not come back up as expected, or crashes at runtime in the updated version?).
This is essentially a (hopefully temporary) self-inflicted DoS.
Versus an almost non-existent risk if the machine is well isolated, or even better, air-gapped.
Anyone else remember stuxnet?
There's a difference between old software and old OS. Unless you've got new hardware, chances are you never really need a new OS.
Also, this is the most compelling reason I've seen so far to pay a subscription. For any business that merely relies upon software as an operations tool, it's far more valuable business-wise to have stuff that works adequately and is secure, than stuff that is new and fancy.
Getting security patches without having feature creep trojan-horsed into releases is exactly what I need!
This happens so often it's basically a failure of capitalism.
If you can get away with one or zero overhauls of your infra during your tenure then that's probably a hell of a lot easier than every two to three years.
† https://www.zippia.com/chief-technology-officer-jobs/demogra...