Deprecate Like You Mean It
Key topics
The art of deprecation is being rethought, with some commenters jokingly suggesting that deprecation warnings should be as alarmist as possible, with examples ranging from "I will literally kill you" to "your parents should be ashamed." Others counter that the article's more drastic measure, making deprecated code misbehave intermittently, would produce failures that are difficult to track down, eroding trust in CI systems. A more straightforward approach is also proposed: maintainers should just break the API outright if they're going to deprecate it, rather than playing games with subtle, intermittent failures. The discussion highlights the tension between driving system change and avoiding unnecessary disruption.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion.
- First comment: 29m after posting
- Peak period: 143 comments (Day 1)
- Average per period: 22.9 comments
- Based on 160 loaded comments
Key moments
- Story posted: Dec 11, 2025 at 10:52 AM EST (22 days ago)
- First comment: Dec 11, 2025 at 11:21 AM EST (29m after posting)
- Peak activity: 143 comments in Day 1 (the hottest window of the conversation)
- Latest activity: Dec 24, 2025 at 12:55 AM EST (10 days ago)
How do you know? This is a wild assertion. This idea is terrible. I thought it was common knowledge that difficult-to-reproduce, seemingly random bugs are much harder to find and fix than compiler errors.
If you're ready to break your API, break your API. Don't play games with me. If more people actually removed deprecated APIs in a timely manner, people would start taking deprecation more seriously.
> In case the sarcasm isn’t clear, it’s better to leave the warts. But it is also worthwhile to recognise that in terms of effectiveness for driving system change, signage and warnings are on the bottom of the tier list. We should not be surprised when they don’t work.
At the same time, it's crazy that urllib3 (the library mentioned in the article) broke its API on a minor version. Python packaging [documentation](https://packaging.python.org/en/latest/discussions/versionin...) provides the sensible guideline that API breaks should happen on major versions.
But more to the point, go out of your way to avoid breaking backwards compatibility. If it's possible to achieve the same functionality a different way, just have the deprecated function silently use the new technique.
My biggest problem with the whole static typing trend is that it makes developers feel safe to break backwards compatibility when it would be trivial to keep things working.
I'm convinced this isn't possible in practice. It doesn't matter how often you declare that something isn't maintained; the second it causes an issue for a [bigger|more important|business-critical] team, it suddenly needs to become maintained again.
If it's important, they'll pay. Often you find out it wasn't that important, and they're happy to figure it out.
In many ways, the decision is easier because it should be based on a business use case or budget reason.
I don't agree. Some programming languages have started supporting a deprecated/obsolete tagging mechanism that is designed to trigger warnings, with a custom message, in downstream dependencies. These are one-liners that change nothing in the code. Anyone who cares about deprecating something has the low-level mechanisms to do so.
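In Python terms (the ecosystem the linked article concerns), the one-liner amounts to something like the sketch below; `old_function`, `new_function`, and the version in the message are hypothetical:

```python
import warnings

def new_function(path):
    """The replacement API (hypothetical stand-in)."""
    return path.upper()

def old_function(path):
    """Deprecated entry point, kept behaviorally identical."""
    warnings.warn(
        "old_function() is deprecated and will be removed in v3.0; "
        "use new_function() instead",
        DeprecationWarning,
        stacklevel=2,  # attribute the warning to the caller's line, not this wrapper
    )
    return new_function(path)
```

Worth noting, and relevant to the thread: CPython hides DeprecationWarning by default except for code triggered directly in `__main__` (test runners like pytest do surface it), which is part of why such warnings go unseen.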
It's far better to plan the removal of the code (and the inevitable breaking of downstream users' systems) on your own schedule than to let entropy surprise you at some random point in the future.
Deprecation messages show up as compiler warnings. As a package maintainer, your job does not include taking over project management work in projects that depend on your package.
Moreover, in many areas we have actively decided not to do something anymore, while also strongly advising people not to mess with older things that did use it. See asbestos: removing it from a building is not cheap and can be very dangerous.
It's not a force of nature. Bitrot is many software developers deliberately choosing to break backward compatibility in very small ways, over and over. Software written in 1995 should still work today. It's digital. It doesn't rot or fall apart. It doesn't degrade. The reason it doesn't work today is decisions that platform and library maintainers deliberately made. Like OP: "deprecate like you mean it." That's a choice!
If we want to solve bitrot, we need to stop making that choice.
https://blog.hiler.eu/win32-the-only-stable-abi/
That's an incredibly ignorant claim. Just run "git log" in glibc, it won't take you very long to prove yourself wrong.
For example, Ruby deprecated `File.exists?` and changed it to `File.exist?`, because enough people felt that the version with the `s` didn't make sense grammatically (which I disagree with, but that is not germane to my point).
For a long time, you would get a warning that `exists?` was deprecated and should be replaced by `exist?`... but why? Why couldn't they just leave `exists?` as an alias for `exist?`? There was no cost; the functions are literally identical except for one letter. While the change was trivial to fix, it added annoyance for no reason.
Although, luckily for me, with Ruby I can just make `exists?` an alias myself, but why make me do that?!? What is the point of removing a method that has been there forever just because you think the s is wrong?
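For what it's worth, the cost of keeping such an alias is the same in Python terms; a sketch with arbitrary names:

```python
import os.path

def exist(path):
    """The new, preferred spelling."""
    return os.path.exists(path)

# Keeping the old spelling is literally one line, and nobody's code breaks:
exists = exist
```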
You get to use open source projects for free, and a lot of people do ongoing maintenance on them which you benefit from for free. In return, sometimes you are expected to modify your code which depends on all that functionality because it makes the maintainer's life easier.
Personally, I see that as a very reasonable trade-off.
Of course it's relevant. It's a laughably trivial example compared to the other one in this thread.
I also have to hope all the dependencies I use did that, too.
But my real question is why? Why make me do it at all?
You were free to show up and argue against it, as was anyone else. Did you?
I am not arguing that they don’t have the RIGHT to make the change, or that they owe me personally anything. I am not even THAT mad. I still love Ruby the most of any language, and generally think they make the right decisions.
I am simply annoyed by this decision.
And yes, I argued against this change when it was first proposed (as did many others). They obviously were not convinced.
Again, I am not arguing that they should be FORCED to do what I want, or that they did something shady or nefarious by making this change. I am not asking for any remedy.
I am simply saying I disagree with this type of change (changing a method name because some people feel the new name makes more grammatical sense, but not changing the method itself at all). The reason I commented was because this is not a “we have to deprecate things for progress” situation. They aren’t limited in any way by the current method, the syntax isn’t cumbersome (and isn’t changing), there is no other capability that is held back by having to maintain this method. It is literally just “I read it as asking ‘Does this file exist?’ rather than asking ‘This file exists?’”
Again, they are obviously free to disagree with me, which they do. I am simply arguing that we shouldn’t break syntax just because you like the way it reads better without an s.
Are there changes I disagree with? Of course. But I'd rather live in a world that moves forward and occasionally breaks me, than one where I have perfect compatibility but am stuck on code lacking the new innovations my competitors benefit from.
The whole idea behind deprecating things is to give people time to make the changes before they become breaking.
I went and looked: `exists?` was marked as deprecated in 2013 and removed in 2022. That's enormously generous; my previous comparison with the distutils debacle in Python was inaccurate. You had a decade!
10 years is not an especially long time period for a software project to be maintained. There's a reason the Linux project is so emphatic that it never breaks user space.
Now multiply that by the entire past history of every API, and it makes adopting something really difficult as a newcomer.
aka Common Lisp.
But it wasn't up to me. It's not my project. I'm using somebody else's project, and at the end of the day it's their decision, because they own it. Unless it's impossible to work around, I feel like I have to either respect that, or switch to an alternative.
You're free to maintain a patch on top of Ruby to add the alias and run that on your machines, btw. It would probably be very simple, although certainly not as simple as aforementioned sed command...
Comments like this are honestly just asshole-ish.
It’s wrong to shut down discussion like this with comments like “it’s their code”, “make your own fork”, etc. because Ruby is supposed to be part of the open source community, which implies collaboration, give and take, discussion of pros and cons, etc.
What you are doing is ignoring this major aspect of a programming language and taking a weird antisocial stance on it.
I didn't say to fork it. Do you really not appreciate the difference between rebasing a trivial patch forever, and maintaining a wholesale fork forever?
But this is a discussion forum, and I am asking for people who agree with the decision to explain why they agree. Again, they don’t have to answer me if they don’t want to. I am just saying, “if anyone knows an argument for this type of change, I would love to hear it”
Saying they don’t have to explain their reasoning is true but not really relevant to our conversation. I am not asking THEM, I am asking HN readers.
This sort of choice is very common in Ruby, you can have different style choices for the same functionality. You can have blocks made with {} or do and end. You can do conditionals in the form of “a if b” or “if b then a”. You can call methods with or without parentheses.
These are all Ruby style choices, and we already have a way to enforce a particular style for a particular project with formatters like rubocop.
See you think it's Progress, but it's actually Regress. It's not a moving forward, but backward.
Do not break contract. Do not break API. Do not break muscle memory. Every time you do, kittens die horribly. Just Say No!
In this case it was 2 functions with 1 line of code each. https://github.com/urllib3/urllib3/pull/3732/files
I don't see the connection you're drawing here.
Because it lays out the contract you have to meet on the interface. No contract? No enforced compatibility.
But it seems to make library developers more comfortable with making breaking changes. It's like they're thinking 'well it's documented, they can just change their code when they update and get errors in their type checker/linter.' When I think they should be thinking, 'I wonder what I could do to make this update as silent and easy as possible.'
Of course, we all have different goals, and I'm grateful to have access to so many quality libraries for free. It's just annoying to have to spend time making changes to accommodate the aesthetic value of someone else's code.
Not even JS alone. I blame the enforcement of semantic versioning, as if a version of code simply had to be a sequence of meaningful numbers.
If you're using a language with a complete type system, sure. But who uses those?
If you are using the languages people actually use in the real world, not really. Consider a simple example where the contract is that you can only return an integer value from 1 to 10. In most languages people actually use, you're limited to an integer type constrained only by how many bits it holds, which can later be exploited to return 11, violating the caller's expectations. A small number of actually-used languages do support constraining numeric types to a set of values like that, but even they fall apart as soon as you try to do something slightly more complex.
This is what tests are for. They lay out the contract with validation, while also handily providing examples for the user of your API to best understand how it is intended to be used.
TypeScript, for example, is one of the most widely used languages in the world. It has an incredibly powerful type system which you can use to model a lot of your invariants. By leaning on patterns such as correct-by-construction and branding, you can carry around type-level evidence that e.g. a number is within a certain range, or that the string you are carrying around is in fact a `UserId` and not just any other random string.
Can you intentionally break these guarantees if you go out of your way? Of course. But that's irrelevant, in the same way it is irrelevant that `any` can be used to break the type system guarantees. In practice, types are validated at the boundaries and everything inside can lean on those guarantees. The fact that someone can reach in and destroy those guarantees intentionally doesn't matter in practice.
There are languages with proper type systems that can actually define full contracts, but nobody uses them in the real world. Without that, you haven't really defined a usable contract as it pertains to the discussion here; you have to rely on testing to define the contract.
And TypeScript most definitely does. Testing is central to all TypeScript applications.
You can define it as a plain alias for `string`. Or, if you are feeling saucy, you can define it as a branded type. But neither of these proves to me, the user of your API, that what is contained in the EmailAddress type is actually an RFC-compliant email address. The only way for me to have confidence in your promise that EmailAddress is RFC-compliant is to read tests at the points where I consume EmailAddress.

That isn't true for languages with better type systems. In those, you can define EmailAddress such that it is impossible to produce anything that isn't RFC-compliant. But TypeScript does not fit into that category of languages. It has to rely on testing.
At some point, you will have to write a function where you validate/parse some arbitrary string, and it then returns some sort of `Email` type as a result. That function will probably return something like `Option<Email>` because you could feed it an invalid email.
The implementation of that function can also be wrong, in exactly the same way the implementation of the TypeScript equivalent could be wrong. You would have to test it just the same. The guarantees provided by the TypeScript function are exactly equivalent, except that you do technically have an escape hatch where you can "force" the creation of a branded `Email` without using the provided safe constructor, where the other language might completely prevent this - but I've already addressed this. In practice, it doesn't matter. You only make the safe constructor available to the user, so they would have to explicitly go out of their way to construct an invalid branded `Email`, and if they do, well, that's not really your problem.
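Sketched in Python rather than TypeScript, the same branding-plus-safe-constructor pattern looks roughly like this; the regex is a deliberately crude stand-in, not actual RFC compliance:

```python
import re
from typing import NewType, Optional

# The "brand": to a type checker, Email is distinct from a plain str,
# even though it is just a str at runtime.
Email = NewType("Email", str)

_EMAIL_RE = re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+")  # crude placeholder, not RFC 5322

def parse_email(raw: str) -> Optional[Email]:
    """Safe constructor: the only intended way to obtain an Email."""
    if _EMAIL_RE.fullmatch(raw):
        return Email(raw)
    return None
```

Downstream code that accepts `Email` can lean on the value having passed through `parse_email`; and, exactly as the comment says, nothing stops a determined caller from writing `Email("nonsense")` directly, just as nothing stops a TypeScript caller from force-casting a brand.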
The compiler would give you an error if you got the syntax wrong, and in isolation it's fair that you could, say, get the domain name wrong as long as it is syntactically valid. I suppose what I failed to convey, making some assumptions about your experience with type systems, is that the types would not just specify RFC-compliance. You would also spec out other dependencies such that you also couldn't define the wrong domain name without a compiler error.
You could get the contract wrong, of course. Maybe this is your intent. But the idea has always been here that I would also read the contract. If we don't both read and agree to the terms of the contract, there is no contract.
.NET Framework -> Core was more persuasive, but I stand by the overall point that compatibility is more about project philosophy than "static vs dynamic typing" and indeed I think Framework/Core illustrates just that: Framework favored preserving compatibility, Core does not
I absolutely see the connection. One of the advantages of static typing is that it makes a lot of refactoring trivial (or much more than it would be otherwise). One of the side effects of making anything more trivial is that people will be more inclined to do it, without thinking as much about the consequences. It shouldn’t be a surprise that, absent other safeguards to discourage it, people will translate trivial refactoring into unexpected breaking changes.
Moreover, they may do this consciously, on the basis that “it was trivial for me to refactor, it should be trivial to adapt downstream.” I’ll even admit to making exactly that judgment call, in exactly those terms. Granted I’m much less cavalier about it when the breaking changes affect people I don’t interface with on a regular basis. But I’m much less cavalier about that sort of impact across the board than I’ve observed in many of my peers.
While I am happy to see types in Python and JavaScript (as in TypeScript), I see far too many issues with how people use them.
99% of the time, people just define things as `string` or, if that doesn't fit, `any` (or Map/Object, etc.).
Meanwhile, most of these are enum keys/values or constants from dependencies. Whenever I see a `region: string` or a `stage: string`, a part of me dies, because these should be declared as `region: Region` or `stage: Stage`, where `Region` and `Stage` are proper enums or interfaces with clear values/variables/options. This helps with compile-time (or build-time) validation and checking, preventing issues from propagating to production (or to the runtime at all)...
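A sketch of that distinction in Python terms (the `Region` and `Stage` values are made up):

```python
from enum import Enum

class Region(Enum):
    US_EAST_1 = "us-east-1"
    EU_WEST_1 = "eu-west-1"

class Stage(Enum):
    DEV = "dev"
    PROD = "prod"

def deploy(region: Region, stage: Stage) -> None:
    # A type checker rejects deploy("us-esat-1", "prod") at build time --
    # the typo that `region: str` would happily let through to production.
    print(f"deploying to {region.value} ({stage.value})")

deploy(Region.US_EAST_1, Stage.PROD)
```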
No matter what others say, the pipelines are long.
The delays for release, getting into a distribution, then living out its lifetime... they are significant.
If it's no longer being maintained, then add a deprecation warning and let it break on its own. Changing a deprecated feature just means you could maintain it but don't want to.
Alternatively, if you want to aggressively push people to migrate to the new version, have a clear development roadmap and force a hard error at the end of the deprecation window, so that users know in advance how long they can expect it to work and can document their code accordingly.
This wishy-washy half-broken behaviour doesn't help anyone.
Better to give an actual timeline (future version & date) for when deprecated functionality / functions will be removed, and in the meantime, if the language supports it, mark those functions as deprecated (e.g. C++ [[deprecated]] attribute) so that developers see compilation warnings if they failed to read the release notes.
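Python has since grown a close analogue of C++'s [[deprecated]]: the `@deprecated` decorator from PEP 702 (`warnings.deprecated` on 3.13+, `typing_extensions.deprecated` before that), which makes type checkers flag call sites in addition to emitting the runtime warning. A sketch with hypothetical names:

```python
from typing_extensions import deprecated  # warnings.deprecated on Python 3.13+

def connect(host: str) -> None:
    """The replacement API (hypothetical)."""
    print(f"connecting to {host}")

@deprecated("Deprecated since v2.1; scheduled for removal in v3.0. Use connect().")
def open_connection(host: str) -> None:
    """Old entry point, kept working until the announced removal."""
    connect(host)
```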
But yes, that would be the worst idea ever.
Instead, if you must, add a sleep within the function for 1 ms in the first release, 2 ms in the second release, and so on. But maybe just fix the tooling instead to make deprecations actually visible.
Degrading performance exponentially (1ms, 2ms, 4ms, 8ms...) WILL create a 'business need', without directly breaking critical functions. Without this degradation, there is no reason to remove the deprecated code, from a business perspective.
if people are meant to depend on your endpoints, they need to be able to depend on all of them
you will always have people who don't respond to deprecation notices; the best you can do is give them reliable information on what to expect -- if they hide the warnings and forget, that's their business
but intentionally creating problems without any indication that it's intentional results in everyone (including your own team) being frustrated and doing more work
you cannot force people to update their code, and trying to agitate them into doing it only serves to erode confidence in the product; it doesn't make the point people think it makes, even if the court of public opinion sides with you
cover your bases and make a good-faith effort to notify, then deal with the inevitable commentary; there will always be some who miss the call to update
One day I came back from holidays. I had just broken a big go-live where the release number passed x. Date missed, next possibility in a few weeks. The team was pissed.
Yes, they COULD have fixed the warnings. But breaking the go-live was quite out of proportion for not doing so.
Could they not have rolled back?
I have not had any real problems yet myself, but it's worrying.
It does use major.minor.bugfix versioning, but without clarity about when to expect breaking changes.
With the pace of 3.x releases it has become more of a problem.
I agree, but I think there's a bigger, cultural root cause here. This is the result of toxicity in the community.
The Python 2 to 3 transition was done properly, with real SemVer, and real tools to aid the transition. For a few years about 25% of my work as a Python dev was transitioning projects from 2 to 3. No project took more than 2 weeks (less than 40 hours of actual work), and most took a day.
And unfortunately, the Python team received a ton of hate (including threats) for it. As a natural reaction, it seems that they have a bit of PTSD, and since 3.0 they've been trying to trickle in the breaking changes instead of holding them for a 4.0 release.
I don't blame them--it's definitely a worse experience for Python users, but it's probably a better experience for the people working on Python to have the hate and threats trickle in at a manageable rate. I think the solution is for people like us, who understand that breaking changes are necessary, to pile love on projects that do it with real SemVer, and try to balance out the hate with support.
I had a client who was still on 2.7.x in 2023. When I found a few huge security holes in their code and told them I couldn't ethically continue to work on their product if they wouldn't upgrade Python, Django, and a few other packages, they declined to renew my contract. As far as I know, they're still on 2.7.x. :shrug:
At least for me, the real blocker was broad package support.
Maintainers should think carefully about whether their change induces lots of downstream work for users. Users will be mad if they perceive that maintainers didn’t take that into account.
To be clear: I literally do not remember a single example of this breaking anything after running 2to3. There was some practical benefit (such as being able to use print in callbacks) and I don't think it breaking existing code is meaningful given how thoroughly automated the fix was.
I do get the impression that a lot of the complaints are from people who did not do any upgrades themselves, or if they did, didn't use the automated tools. This is just such an irrelevant critique. This is a quintessential example of bikeshedding: the only reason you're bringing up `print` is because you understand the change, not because it's actually important in any way.
> Maintainers should think carefully about whether their change induces lots of downstream work for users. Users will be mad if they perceive that maintainers didn’t take that into account.
Sure, but users in this case are blatantly wrong. You can read the discussions on each of the breaking changes, they're public in the PEPs. The dev team is obviously very concerned with causing downstream work for users, and made every effort, very successfully, to avoid such work.
If your impression is that maintainers didn't take into account downstream work for users, and your example is print, which frankly did not induce downstream work for users, you're the problem. You're being pretty disrespectful to people who put a lot of work into providing you a free interpreter.
More interesting is how long it took core libraries to transition. That was my primary blocker. My guess is that there were fairly substantial changes to the CPython API that slowed that transition.
Other changes to strings could be actually dangerous if you were doing byte-level manipulations. Maybe tools could help catch those situations. Even if they did, it took some thought and not just find/replace to fix. The change was a net benefit, but it’s easy to see why people might be frustrated or delay transition.
Your definition of "core libraries" is likely a lot broader than mine. I'm old, and I remember back in the day when Perl developers started learning the hard way that CPAN isn't the Perl standard library.
JavaScript's culture has embraced pulling in libraries for every single little thing, which has resulted in stuff like the left pad debacle, but that very public failing is just the tip of the iceberg for what problems occur when you pull in a lot of bleeding edge libraries. The biggest problems, IMO, are with security.
I've come onto a number of projects to help them clean up codebases where development had become slow due to poor code quality, and the #1 problem I see is too many libraries. Libraries don't reduce complexity, they offload it onto the library maintainers, and if those library maintainers don't do a good job, it's worse than writing the code yourself. And it's not necessarily library maintainers' fault they don't do a good job: if they stop getting paid to maintain the library, or never were paid to maintain it in the first place, why should they do a good job of maintaining it?
The Python 2 to 3 transition wasn't harder for most core libraries than it was for any of the rest of us: if anything, it was easier for them because if they're a core library they don't have as many dependencies to wait on.
There are exceptions, I'm sure, but I'll tell you that Django, Pillow, Requests, BeautifulSoup, and pretty much every other library I use regularly supported both Python 2 AND 3 before I even found out that Python 3 was going to have significant breaking changes. On the flip side, many libraries I had to upgrade had been straight up abandoned and never transitioned from 2 to 3 (a disproportionate number of these were OAuth libraries, for some reason). I take some pride in the fact that most of the libraries that had problems with the upgrade were ones that had been imported when I wasn't at the company, or ones that I had fought against importing because I was worried about whether they would be maintained. It's shocking how many of these libraries were fixable not with an upgrade, but by removing the dependency and writing <100 lines of my own code, including tests.
I'd hope the lesson we can take away from this isn't, "don't let Python make any breaking changes", but instead, "don't import libraries off Pypi just to avoid writing 25 lines of your own code".
Did you ever look into why the transition took so long for OAuth libraries? Did you consider just rewriting one yourself?
I did take the approach of writing my own OAuth using `requests`, which worked well, but I don't think I ever wrote in such a general way to make it a library.
Part of the problem is that OAuth isn't really a standard[1]. There are well-maintained libraries for Facebook and Google OAuth, but that's basically it--everyone else's OAuth is following the standard, but the standard is too vague so they're not actually compatible with each other. You end up hacking enough stuff around the library that it's easier to just write the thing yourself.
The problem with the Google and Facebook OAuth libraries is that there were a bunch of them--I don't think any one of them really became popular enough to become "the standard". When Python 3 came out, a bunch of new Google and Facebook OAuth libraries popped up. I did actually port one Facebook OAuth library to Python 3 and maintain it briefly, but the client dropped support for Facebook logins because too few users were using it, and Facebook kept changing data usage requirements. When the client stopped needing the library, I stopped maintaining it. It was public on GitHub, but as far as I know I was the only user, and when I eventually deleted the repo nobody complained.
I don't say anything unless asked, but if asked I always recommend against OAuth unless you're using it internally: why give your sign up data to Google or Facebook? That's some of your most valuable data.
[1] https://thenewstack.io/oauth-2-0-a-standard-in-name-only/
Code that is not being maintained is not usually suitable for use, period.
Notably, even this policing doesn’t fix the whining. The whining will just be about what TFA is whining about. You’re just moving the whining around.
It also does nothing to actually force people to upgrade. Instead, people can just cap their dependency below the version you broke your package on. Instead of being user-hostile, why not make the user's job easier?
Correctly following SemVer disincentivizes unnecessary breaking changes. That’s a very good thing for users and ultimately the health of the package. If you don’t want to backport security fixes, users are free to pay, do it themselves, or stop using the library.
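Concretely, a cap is one line on the user's side; the package name and versions here are illustrative:

```
# requirements.txt -- stay below the breaking major until ready to migrate
somelib>=1.26,<2.0
```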
Lots of people still complained about 2.0.
> It is important to know that NumPy, like Python itself and most other well known scientific Python projects, does not use semantic versioning. Instead, backwards incompatible API changes require deprecation warnings for at least two releases.
We’ve sent out industry alerts, updated documentation, and emailed all users. The problem is the contact information goes stale. The developer who initially registered and set up the keys has moved on. The service has been running in production for years without problems, and we’ve maintained backwards compatibility.
So do we just turn it off? We’ve put messages in the responses, but if it’s still returning 200 OK, we know no one is looking at those. We’ve discussed doing brownouts, where we fail everything for an hour with clear error messages explaining what is happening.
Is there a better approach? I can’t imagine returning wrong data on purpose randomly. That seems insane.
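One way to structure the brownout idea, sketched framework-free in Python; the window, dates, and URL are hypothetical:

```python
from datetime import datetime, timezone
from typing import Optional, Tuple

def in_brownout_window(now: datetime) -> bool:
    """Pre-announced failure window: Tuesdays, 14:00-15:00 UTC."""
    return now.weekday() == 1 and now.hour == 14

def handle_legacy_request(now: Optional[datetime] = None) -> Tuple[int, str]:
    """Return (status_code, body) for the deprecated endpoint."""
    now = now or datetime.now(timezone.utc)
    if in_brownout_window(now):
        # Fail loudly and explain -- a 200 with a note in the body goes unread.
        return (410, "This API is deprecated and shuts off permanently on "
                     "2026-06-01. Migration guide: https://example.com/migrate")
    return (200, "normal legacy response")
```

The failure is predictable, announced, and self-describing, which is the opposite of randomly returning wrong data.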
Instead of "deprecate like you mean it" the article should be: "Release software like you mean it" and by that, I mean: Be serious. Be really, really sure that you are good with your API because users are going to want to use it for a lot longer than you might think.
But, perfection isn't realistic. If you don't have a plan for when you get things wrong, you're failing to plan for the inevitable.
Clients weren't happy, but ultimately they did all upgrade. Our last-to-upgrade client even paid us to keep the API open for them past the date we set--they upgraded 9 months behind schedule, but paid us $270k, so not much to complain about there.
We did roll this out in our test environment a month in advance, so that users using our test environment saw the break before it went to prod, but predictably, none of the users who were ignoring the warnings for the year before were using our test environment (or if they were, they didn't email us about it until our breaking change went to prod).
Keep the servers running, but make the recalcitrant users pay the costs and then some. It is actually a common strategy: big, slow companies often have trouble with deprecation, but they also have deep pockets, and they will gladly pay a premium to keep the API stable, at least for some time.
If you ask for money, you will probably get more reactions too.
That sounds like the best option. People are used to the idea that a service might be down, so if that happens, they’ll look at what the error is.
> In case the sarcasm isn’t clear, it’s better to leave the warts. But it is also worthwhile to recognise that in terms of effectiveness for driving system change, signage and warnings are on the bottom of the tier list. We should not be surprised when they don’t work.
What if we found that a highway overpass construction material was suboptimal, and we want people to use superior materials, so, every now and then, we send a chunk of concrete plummeting down to the ground, to kill a motorist?
Thanks to deprecating like we mean it, they're going to replace that overpass sooner than they would otherwise. You'll thank me later.
From the https://sethmlarson.dev/deprecations-via-warnings-dont-work-... that the post opens with:
> This API was emitting warnings for over 3 years in a top-3 Python package by downloads urging libraries and users to stop using the API and that was not enough. We still received feedback from users that this removal was unexpected and was breaking dependent libraries.
Entirely predictable.
Even many of those who saw the deprecation logging, and bothered to make a conscious decision, didn't think you'd actually break the API.
> We ended up adding the APIs back and creating a hurried release to fix the issue.
Entirely predictable.
Save yourself some anguish, and don't break API unnecessarily. Treat it like a guarantee, as much as possible.
If it's a real problem for ongoing development, consider using SemVer and multiple versions, like the linked article suggests. (With the deprecated branch getting minimal maintenance: maybe only select bug fixes, or only critical security fixes, maybe with a sunset on even those, and a deprecation warning for the entire library when it's no longer supported.)
And it should be explicitly mentioned in the deprecation warnings.
(You don't want to break systems, but you want something people who care about the system will investigate, and will quickly find and understand the source of and understand what to do.)
Software developers have enough treadmills they need to stay on. Deliberately breaking backward compatibility in the name of "deprecating something old" doesn't have to be one of them. Please don't be that platform or library that deprecates and removes things and makes me have to dust off that old software I wrote in 2005 to move over to a different set of APIs just to keep it working.
But intentionally breaking my users' runtime in a way that's really hard and annoying to track down? Is the author OK? This reads like a madman to me.
What I want from code is for it to a) work, and b) if that's not possible, to fail predictably and loudly.
Returning the wrong result is neither of the above. It doesn't draw attention to the deprecation warnings as OP intended--instead, it causes a mysterious and non-deterministic error, literally the worst kind of thing to debug. The idea that this is going to work out in any way calls into question the writer's judgment in general. Why on earth would you intentionally introduce the hardest kind of bug to debug into your codebase?
I expected this to suggest a tick-tock deprecation cycle, where one version deprecates and the next removes, but this is definitely an idea that belongs on the domain "entropicthoughts.com".
Does this mean that people and places shouldn't migrate out of older practices? No. But people have different priorities. And sure, we may treat "squeaky wheel policies" as a bad idea, but quite frankly that is far and away the most common policy out there.
To that end, please don't go out of your way to insist that your priority is everyone else's priority.