Stepping Down as Mockito Maintainer After Ten Years
Key topics
The Mockito maintainer has stepped down after a decade, sparking a lively discussion about the library's impact on testing practices. Many commenters shared horror stories about overly complex test setups and brittle mocks that hinder refactoring, with some attributing these issues to poor usage rather than the library itself. The conversation also took a lighthearted turn, with commenters poking fun at the library's name, which some found amusingly awkward in certain languages. As the debate rages on, it becomes clear that the Mockito maintainer's departure has tapped into a deeper conversation about testing best practices.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 53m after posting
- Peak period: 84 comments in 0-6h
- Avg / period: 16 comments
- Based on 160 loaded comments
Key moments
- Story posted: Dec 28, 2025 at 3:14 PM EST (4d ago)
- First comment: Dec 28, 2025 at 4:08 PM EST (53m after posting)
- Peak activity: 84 comments in the 0-6h window, the hottest period of the conversation
- Latest activity: Dec 31, 2025 at 8:05 AM EST (2d ago)
Testing is hard. I tried with AI today (using Claude): no, it is still not capable of handling that kind of (straightforward) task.
So Mockito and friends are a nice alternative to that.
That is just my experience and opinion though, and there are definitely more valid or equally valid alternatives.
I’ve been on projects where mocking _literally made the project less reliable_ because people ended up “testing” against mocks that didn’t accurately reflect the behavior of the real APIs.
It left us with functionality that wasn’t actually tested and resulted in real bugs and regressions that shipped.
Mocking is one of these weird programmer pop-culture memetic viruses that spread in the early 2000s and achieved complete victory in the 2010s, like Agile and OOP. Now there are entire generations of devs who aren't so much making a bad or poorly argued choice as literally not knowing there are other ways of thinking about these problems, because these ideas have sucked all the oxygen out of the room.
Ha.
I think there's room to argue "Agile" is a popular bastardisation of what's meant by "agile software development", and with "OOP" we got the lame Java interpretation rather than the sophisticated Smalltalk interpretation. -- Or I might think that these ideas aren't that good if their poor imitations win out over the "proper" ideas.
With mocking... I'm willing to believe there's some good/effective way of doing it. But the idea of "you're just testing that the compiler works" comes to mind.
Sometimes the right tool for the job is objects, sometimes it's functional, sometimes you do want encapsulation but it's better as structs and using composition over inheritance. When everything looks like a `class Hammer extends Tool`…
You should be using it in rare cases when you want to verify very complex code that needs to be working with strict requirements (like calling order is specified, or some calls cannot be made during execution of the method).
Usually it is used for pointless unit tests of simple intermediary layers, to test whether a call is delegated correctly to a deeper layer. Those tests usually have negative value (they test very little but make any modification much harder).
This is kinda how we build software, right? A little bit of "our logic" (calculation), represented as objects/actors/modules which "do things", but intermingled with million-LoC dependencies like databases and web servers.
After a while it gets frustrating setting up the robot hand and OCR equipment for each test case. Maybe it's just easier to test manually, or skip testing entirely.
At this point you can have an epiphany, and realise you only care about the numbers going in and out of the calculator, not the button pushes and pixels.
Mockito swoops in and prevents you from having that epiphany, by making it easier to keep doing things the stupid way.
Instead of isolating the calculation from any IO, you can now write things like:
when(finger.pushbutton(1)).then(calculator.setState(1))
when(calculator.setAnswer(3)).then(camera.setOcr(3))
(I've mostly worked in Java, but it seems like other languages typically don't let you intercept calls and responses this way)
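A minimal sketch of the "epiphany" version in Java (the Calculator class and test are hypothetical): once the calculation is isolated from IO, the test feeds numbers in and checks numbers out, with nothing to intercept.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Pure logic: no fingers, no buttons, no OCR.
class Calculator {
    int add(int a, int b) {
        return a + b;
    }
}

class CalculatorTest {
    @Test
    void addsOneAndTwo() {
        assertEquals(3, new Calculator().add(1, 2));
    }
}
```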
But this name is weird in the specific language it’s imitating (both the -ito termination for diminutives and the drink on which I assumed the name is based are Spanish).
On the one hand, you should just design things to be testable from the start. On the other... I'm already working in this codebase with 20 years of legacy untestable design...
Probably because they zealously followed "Effective Java" book.
You write an adapter.
Not to mention, of course, the needless copy-pasting of dozens of members in the adapter. And it must live in prod code, not tests, even though its documentation would say "Adapter for X, exists only for tests, to be able to mock X".
Moreover, that wrapper library is now a pretty large piece of code, and we'd want to maintain and test it as well. But we cannot without hacks.
Mockito shouldn't change whether or not this is possible; the code shouldn't have the prod creds (or any external resource references) hard coded in the compiled bytecode.
The cost is the pain - sometimes nightmarish - for other contributors to the code base since tests depending on mocking are far more brittle.
Someone changes code to check if the ResultSet is empty before further processing and a large number of your mock based tests break as the original test author will only have mocked enough of the class to support the current implementation.
Working on a 10+ year old code base, making a small simple safe change and then seeing a bunch of unit tests fail, my reaction is always “please let the failing tests not rely on mocks”.
So this change disallows an empty result set, something that was allowed previously. Isn't that the sort of breaking change you want your regression tests to catch?
(I'm not being snarky, I don't understand your point and I want to.)
1. Initially codeUnderTest() calls a dependency's dep.getFoos() method, which returns a list of Foos. This method is expensive, even if there are no Foos to return.
2. Calling the real dep.getFoos() is awkward, so we mock it for tests.
3. Someone changes codeUnderTest() to first call dep.getNumberOfFoos(), which is always quick, and subsequently call dep.getFoos() only if the first method's return value is nonzero. This speeds up the common case in which there are no Foos to process.
4. The test breaks because dep.getNumberOfFoos() has not been mocked.
You could argue that the original test creator should have defensively also mocked dep.getNumberOfFoos() -- but this quickly becomes an argument that the complete functionality of dep should be mocked.
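A hedged sketch of that failure mode (Dep, countFoos, and the Foo strings are hypothetical stand-ins for the names above):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

interface Dep {
    int getNumberOfFoos();  // cheap
    List<String> getFoos(); // expensive
}

class FooTest {
    // The code under test, after the step-3 optimisation:
    static int countFoos(Dep dep) {
        if (dep.getNumberOfFoos() == 0) return 0; // new fast path
        return dep.getFoos().size();
    }

    @Test
    void countsFoos() {
        Dep dep = mock(Dep.class);
        // Only getFoos() was stubbed, as in the original test.
        when(dep.getFoos()).thenReturn(List.of("foo"));
        // getNumberOfFoos() is unstubbed, so the mock returns Mockito's
        // default for int (0); the fast path triggers and the assertion
        // fails, even though production behaviour is unchanged.
        assertEquals(1, countFoos(dep));
    }
}
```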
If tests (authored by someone else) break, I now have to figure out whether the breakage is due to the fact that not enough behavior was mocked or whether I have inadvertently broken something. Maybe it’s actually important that code avoid using “isEmpty”? Or do I just mock the isEmpty call and hope for the best? What if the existing mocked behavior for size() is non-trivial?
Typically you’re not dealing with something as obvious.
For example, one alternative is to let my IDE implement the interface (I don’t have to “write” a complete implementation), where the default implementations throw “not yet implemented” type exceptions - which clearly indicate that the omitted behavior is not a deliberate part of the test.
Any “mocked” behavior involves writing normal debuggable idiomatic Java code - no need to learn or use a weird DSL to express the behavior of a method body. And it’s far easier to diagnose what’s going on or expected while running the test - instead of the backwards mock approach where failures are typically reported in a non-local manner (test completes and you get unexpected invocation or missing invocation error - where or what should have made the invocation?).
My test implementation can evolve naturally - it’s all normal debuggable idiomatic Java.
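A rough sketch of that style, reusing the hypothetical Dep interface from the example above: every method fails loudly until a test deliberately overrides it, and all of it is plain Java.

```java
import java.util.List;

interface Dep {
    int getNumberOfFoos();
    List<String> getFoos();
}

// IDE-generated skeleton: omitted behaviour is clearly not part of the test.
class FakeDep implements Dep {
    @Override public int getNumberOfFoos() {
        throw new UnsupportedOperationException("not yet implemented");
    }
    @Override public List<String> getFoos() {
        throw new UnsupportedOperationException("not yet implemented");
    }
}

// A test overrides only the behaviour it actually depends on:
class OneFooDep extends FakeDep {
    @Override public int getNumberOfFoos() { return 1; }
    @Override public List<String> getFoos() { return List.of("foo"); }
}
```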
In my view, one of the biggest mistakes when working with Mockito is relying on answers that return default values even when a method call has not been explicitly described, treating this as some kind of "default implementation". Instead, I prefer to explicitly forbid such behavior by throwing an `AssertionError` from the default answer. Then, if we really take "one method" literally, I explicitly state that `next()` must return `false`, clearly declaring my intent that I have implemented tests based on exactly this described behavior, which in practice most often boils down to a fluent-style list of explicitly expected interactions. Recording interactions is also critically important.
How many methods does `ResultSet` have today? 150? 200? As a Mockito user, I don't care.
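One way to express that strict default in Mockito (a sketch, not the only idiom; the ResultSet usage is illustrative):

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;

import java.sql.ResultSet;
import java.sql.SQLException;
import org.junit.jupiter.api.Test;

class StrictResultSetTest {
    @Test
    void onlyDescribedInteractionsAreAllowed() throws SQLException {
        // Default answer: any interaction not explicitly described fails.
        ResultSet rs = mock(ResultSet.class, invocation -> {
            throw new AssertionError("unexpected call: " + invocation.getMethod());
        });
        // doReturn registers the stub without invoking the default answer.
        doReturn(false).when(rs).next();

        assertFalse(rs.next()); // any of the other ~190 methods would throw
    }
}
```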
Instead your mocks are all just inline in the test code: ephemeral, basically declarative therefore readily readable & grokable without too much diversion, and easily changed.
A really good usecase for Java's 'Reflection' feature.
Mocking's killer feature is the ability to partially implement/extend by having some default that makes some sense in a testing situation and is easily instantiable without calling a super constructor.
MagicMock in Python is the single best mocking library though; too many times have I really wanted Mockito to also default to returning a mock instead of null.
Yeah, it's funny, I'm often arguing in the corner of being verbose in the name of plain-ness and greater simplicity.
I realise it's subjective, but this is one of the rare cases where I think the opposite is true, and using the 'magic' thing that shortcuts language primitives in a sort-of DSL is actually the better choice.
It's dumb, it's one or two lines, it says what it does, there's almost zero diversion. Sure you can do it by other means but I think the (what I will claim is) 'truly' inline style code of Mockito is actually a material value add in readability & grokability if you're just trying to debug a failing test you haven't seen in ages, which is basically the usecase I have in mind whenever writing test code.
But when there are many tests where I instantiate a test fixture and return it from a mock when the method is called, I start to think that an in memory stub would have been less code duplication and boilerplate... When some code is refactored to use findByName instead of findById and a ton of tests fail because the mock knows too much implementation detail then I know it should have been an in memory stub implementation all along.
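A minimal sketch of the in-memory stub alternative, assuming a hypothetical UserRepository: because it honestly implements the whole interface, a refactor from findById to findByName breaks nothing.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

record User(String id, String name) {}

interface UserRepository {
    Optional<User> findById(String id);
    Optional<User> findByName(String name);
}

class InMemoryUserRepository implements UserRepository {
    private final Map<String, User> users = new HashMap<>();

    // Test helper for seeding fixtures:
    void add(User user) { users.put(user.id(), user); }

    @Override public Optional<User> findById(String id) {
        return Optional.ofNullable(users.get(id));
    }
    @Override public Optional<User> findByName(String name) {
        return users.values().stream()
                .filter(u -> u.name().equals(name))
                .findFirst();
    }
}
```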
I prefer Mockito's approach.
I think all the dependencies of a class should define behaviour not implementation so it’s not tightly coupled and can be modified in the future. If you have a class that injects LookUpService, why not put an interface LookUpper in front of it? It’s a layer of indirection but we have IDEs now and reading the interface should be easier or at least provide context.
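As a sketch (LookUpper and LookUpService come from the comment above; the lookUp signature is invented):

```java
// The behaviour callers depend on:
interface LookUpper {
    String lookUp(String key);
}

// The implementation detail behind it:
class LookUpService implements LookUpper {
    @Override public String lookUp(String key) {
        return "real result for " + key; // stand-in for the real work
    }
}

// Callers see only the interface, so tests can substitute a stub
// (or a lambda) without any mocking framework:
class Client {
    private final LookUpper lookUpper;
    Client(LookUpper lookUpper) { this.lookUpper = lookUpper; }
    String describe(String key) { return "got: " + lookUpper.lookUp(key); }
}
```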
Wouldn't this hold back enterprise adoption, the same way breaking changes meant that Java 8 was widely used for a long time?
Most places are just about getting rid of 8 for 17.
> but when it was communicated with Mockito I perceived it as "Mockito is holding the JVM ecosystem back by using dynamic attachment, please switch immediately and figure it out on your own".
Who did the communication? Why is dynamic attachment through a flag a problem, and what was the solution? Why is "enable a flag when running tests" not a satisfactory solution?
> While I fully understand the reasons that developers enjoy the feature richness of Kotlin as a programming language, its underlying implementation has significant downsides for projects like Mockito. Quite frankly, it's not fun to deal with.
Why support Kotlin in the first place? If it's a pain to deal with, perhaps the Kotlin user base is better served by a Kotlin-specific mocking framework, maintained by people who enjoy working on those Kotlin-specific code paths?
Some complexities are discovered along the way, people don't know everything when they start.
They could also drop the support after some time, but then that would have created another set of problems for the adoption and trustworthiness of the project.
If you've got a running java process on your local machine right now, you can use 'jconsole' to see the stack traces of all threads, inspect various memory statistics, trigger an immediate garbage collection or heap dump, and so on. And of course, if the tool is an instrumenting profiler - it needs the power to modify the running code, to insert its instrumentation. Obviously you need certain permissions on the host to do this - just like attaching gdb to a running process.
This capability is used not just for profiling, debugging, and instrumentation but also by Mockito to do its thing.
Java 21 introduced a warning [1] saying this will be disabled in a forthcoming version, unless the process is started with '-XX:+EnableDynamicAgentLoading' - whereas previously it was enabled by default and '-XX:+DisableAttachMechanism' was used to disable it.
The goal of doing this is "platform integrity" - preventing the attachment of debugging tools is useful in applications like DRM.
[1] https://openjdk.org/jeps/451
Not security, but integrity, although security (which is the #1 concern of companies relying on a platform responsible for trillions of dollars) is certainly one of the primary motivations for integrity, others being performance and correctness. Integrity is the ability of code to locally declare its reliance on some invariant - e.g. that a certain class must not be extended, that a method can only be called by other methods in the same class, or that a field cannot be reassigned after being assigned in the constructor - and have the platform guarantee that the invariant is preserved globally throughout the lifetime of the program, no matter what other code does.
This is obviously important for security as it significantly reduces the blast radius of a vulnerability (some attacks that can be done in JS or Python cannot be done in Java), but it's also important for performance, as the compiler needs to know that certain optimisations preserve meaning. E.g. strings cannot be constant-folded if they can't be relied upon to be truly immutable.
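A tiny illustration of what the default prevents (the class is made up; the JDK behaviour is real on recent releases):

```java
import java.lang.reflect.Field;

public class IntegrityDemo {
    public static void main(String[] args) throws Exception {
        // Deep reflection into a JDK internal. On JDK 16+ this throws
        // InaccessibleObjectException unless the application itself opts in,
        // e.g. with --add-opens java.base/java.lang=ALL-UNNAMED.
        Field value = String.class.getDeclaredField("value");
        value.setAccessible(true); // denied: strings stay truly immutable
    }
}
```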
In Java, we've adopted a policy we call Integrity by Default (https://openjdk.org/jeps/8305968), which means that code in one component can violate invariants established by code in another component, but only if the application is made aware of it and allows it. What isn't allowed is for a library - which could be some fourth-level dependency - to decide for itself, without the application's knowledge, that actually strings should be mutable. We were, and are, open to any ideas as long as this principle is preserved.
Authors of components that do want to do such things find the policy inconvenient because their consumers need to do something extra that isn't required when using normal libraries. But this is a classic case of different users having conflicting requirements. No matter what you do, someone will be inconvenienced. We, the maintainers of the JDK, have opted for a solution that we believe minimises the pain and risk overall, when integrated over all users: integrity is on by default, and components that wish to break it need an explicit configuration option to allow that.
> built on a solid foundation with ByteBuddy
ByteBuddy's author acknowledges that at least some aspects of ByteBuddy - and in particular the self-loading agent that Mockito used - wasn't really a solid foundation, but it is now: https://youtu.be/AzfhxgkBL9s?t=1843.
Examples?
I keep a large production Java codebase and its deployments up-to-date. Short of upstreaming fixes to every major dependency, the only feasible way to continue upgrading JDK versions has often been to carry explicit exceptions to new defaults.
JPMS is a good example: --add-opens was required for many applications for a long time, and while there has been real progress (e.g. Arrow), it remains valuable today for important infra like Hadoop, Spark, and Netty.
If widely used testing and instrumentation libraries like Mockito are unable to offer a viable alternative in response to JEP 451, my response would be to re-enable dynamic agent attachment rather than re-architect large, long-lived test suites. I can't speak for others, but if this reaction holds broadly it would seem to defeat the point of by-default changes.
Of course, just keep in mind that all these changes were and are being done in response to feedback from others. When you have such a large ecosystem, users can have contradictory demands and sometimes it's impossible to satisfy everyone simultaneously. In those cases, we try to choose whatever we think will do the most good and the least harm over the entire ecosystem.
> JPMS is a good example: --add-opens still remains valuable today for important infra like Hadoop, Spark, and Netty. If other, even more core projects (e.g. Arrow) hadn't modernized, the exceptions would be even more prolific.
I think you have answered your own question. Make sure the libraries you rely on are well maintained, and if not - support them financially. BTW, I think that Netty is also abandoning its hacking of internals.
> If libraries so heavily depended upon like Mockito are unable to offer a viable alternative in response to JEP 451
But they have, and we advised them on how: https://github.com/mockito/mockito/issues/3037
The main "ergonomic" issue was lack of help from build tools like Gradle/Maven.
Presumably this might miss some edge case (where something else also needs the flag?), though an explicit allow of the Mockito agent in the JVM args would have solved that.
You can and should explicitly specify Mockito as an agent in the JVM configuration, as it is one.
Also, F Kotlin and their approach of "we'll reinvent the wheel with slightly different syntax and call it a new thing". Good riddance I say; let them implement their mockk, or whatever it is called, with its ridiculous "fluent" syntax.
90% of the time, needing to use a mock is one of the clearest warning smells that there is an issue in the design of your code.
It took a while, but the industry seems to be finally (although slowly) coming to this realization. And hopefully with it almost all of this can go away.
Even funnier, this was all hypothetical and yet taken as gospel. We hadn't even written the tests yet, so it was impossible to say whether they were slow or not. Nothing had been measured, no performance budget had been defined, no prototype of the supposedly slow tests had been written to demonstrate the point.
We ended up writing - no joke - less than 100 tests total, almost all of which hit the database, including some full integration tests, and the entire test suite finished in a few seconds.
I'm all for building in a way that respects performance as an engineering value, but we got lost somewhere along the way.
Hopefully one day you'll look back at that, and realise what an immature attitude that was.
Unless that whole paragraph was a figure of speech, in which case what I said doesn't apply, and we can both go about our days.
Why fake it when an integration test tests the real thing?
I’ve seen what you clearly have. Mocked ResultSets, mocked JDBC templates. “When you get SQL, it should be this string. These parameters should be set. Blah blah.”
It’s so much work. And it’s useless. Where does that SQL to check come from? Copy and paste, so it won’t catch a syntax error.
Test data is faked in each result. You can’t test foreign keys.
Just a bad idea. You’re so right. I find it odd some people are so anti-mock. Yeah it gets abused but that’s not the tool’s fault.
But DB calls are not a good spot to mock out.
What you're describing is a very limited subset of testing, which presumably is fine for the projects you work on, but that experience does not generalise well.
Integration testing is of course useful, but generally one would want to create unit tests for every part of the code, and by definition it's not a unit test if it hits multiple parts of the code simultaneously.
Apart from that, databases and file access may be fast but they still take resources and time to spin up; beyond a certain project and team size, it's far cheaper to mock those things. With a mock you can also easily simulate failure cases, bad data, etc. - how do you test for file access issues, or the database server being offline?
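A hedged sketch of the failure-simulation point, with a hypothetical ReportStore dependency standing in for the database or filesystem:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.io.IOException;
import org.junit.jupiter.api.Test;

interface ReportStore {
    String load(String key) throws IOException;
}

class ReportService {
    private final ReportStore store;
    ReportService(ReportStore store) { this.store = store; }

    String render(String key) {
        try {
            return "report: " + store.load(key);
        } catch (IOException e) {
            return "report unavailable"; // graceful degradation
        }
    }
}

class ReportServiceTest {
    @Test
    void degradesGracefullyWhenStorageIsOffline() throws IOException {
        ReportStore store = mock(ReportStore.class);
        // One line simulates "the server is offline" - no real outage needed:
        when(store.load("q3")).thenThrow(new IOException("storage offline"));

        assertEquals("report unavailable", new ReportService(store).render("q3"));
    }
}
```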
Using mocks properly is a sign of a well-factored codebase.
You seem to say it's not worth writing a lot of tests, but then you talk about tests breaking due to bad changes - if you don't write those tests in the first place, then how do you get into that situation?
I didn't word my earlier comment very well - I don't mean to advocate for 100% coverage, which I personally think is a waste of time at best, and a false comfort at worst. Is this what you're talking about? What I wanted to say is that unit tests should be written for every part of the code that you're testing, i.e. break it into bits rather than test the whole thing in one lump, or better, do both - unit tests and integration tests.
This is what I mean about a well-factored code base; separate things are separate, and can be tested independently.
> Unit tests have proven more harmful than helpful
Why? I find them very useful. If I change a method and inadvertently break it somehow, I have a battery of tests testing that method, and some of them fail. Nothing else fails because I didn't change anything else, so I don't need to dive into working out why those other tests are failing. It's separation of concerns.
I have concluded a unit needs to be large. Not a single class or function but a large collection of them. When 'architecture astronauts' draw their boxes they are drawing units. Often thousands of functions belong to a unit. Even then, though, it is often easier to use the real other unit than a test double.
If your unit is thousands of functions wide then you have a monolith, and there are widely discussed reasons why we try to avoid those.
The common pitfall with this style of testing is that you end up testing implementation details and couple your tests to your code and not the interfaces at the boundaries of your code.
I prefer the boundary between unit and integration tests to be the process itself. Meaning, if I have a dependency outside the main process (eg database, HTTP API etc) then it warrants an integration test where i mock this dependency somehow. Otherwise, unit tests test the interfaces with as much coverage of actual code execution as possible. In unit tests, out of process dependencies are swapped with a fake implementation like an in-memory store instead of a full of fledged one that covers only part of interface that i use. This results in much more robust tests that I can rely on during refactoring as opposed to “every method or function is a unit, so unit tests should test these individual methods”.
Well-factored codebase doesn’t need mocks.
Now I do take care to avoid writing tests that depend on other code's results that would be likely to change. This hasn't proven to be the problem I've so often been warned about, though.
...or, you tested each thing individually, so only related tests break and you can quickly zero in on the problem. Isn't that easier?
More broadly, suppose foo() has an implementation that depends on Bar, but Bar is complicated to instantiate because it needs to know about 5 external services. Fortunately foo() only depends on a narrow sliver of Bar's functionality. Why not wrap Bar in a narrow interface—only the bits foo() depends on—and fake it?
I'm not a maximalist about test doubles. I prefer to factor out my I/O until it's high-level enough that it doesn't need unit tests. But that's not always an option, and I'd rather be flexible and use a test double than burden all my unit tests with the full weight of their production dependencies.
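A sketch of the narrow-interface idea (Bar is the hypothetical heavyweight from above; FooSource is the invented sliver):

```java
import java.util.List;

// The heavyweight dependency that needs five external services to exist:
class Bar {
    List<String> loadFoos() {
        return List.of(); // stand-in for the expensive real work
    }
}

// The narrow sliver foo() actually depends on:
interface FooSource {
    List<String> currentFoos();
}

// Thin production adapter:
class BarFooSource implements FooSource {
    private final Bar bar;
    BarFooSource(Bar bar) { this.bar = bar; }
    @Override public List<String> currentFoos() { return bar.loadFoos(); }
}

class Demo {
    public static void main(String[] args) {
        // In tests, the fake is a one-liner; no Bar required:
        FooSource fake = () -> List.of("a", "b");
        System.out.println(fake.currentFoos());
    }
}
```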
You could extend this to say 85% of the time just write the code directly to prod and don't have any tests. If you broke something, an alarm will go off.
To easily simulate failure cases, a range of possible inputs, bad data etc.
To make the testing process faster when you have hundreds or thousands of tests, running on multiple builds simultaneously across an organisation.
Off the top of my head :-)
You can do all that without mocks as well.
Making the tests run faster at the expense of better tests seems counterproductive.
Now you should think of reasons why you should not isolate.
OK; it's your choice to do what you think is right.
> and comparing it to scientific experiments doesn’t really apply.
Why not? I think it's a fairly apt comparison; you have a theory ("this piece of code does the following things"), and write tests to prove it.
> You can do all that without mocks as well.
OK, but mocks make it easier and cleaner - so why wouldn't I do that?
> Making the tests run faster at the expense of better tests seems counterproductive.
Smaller, more focused, cleaner tests are better in my opinion; speed is a beneficial side effect.
> Now you should think of reasons why you should not isolate.
Why? That's your argument - it's not on me to prove it for you. If you can give me some good reason why mocking out the interfaces you are not testing is a bad idea, and some better alternative, then we can have a discussion about it.
Your position is reasonable and I do think isolation can be beneficial, but I still wouldn’t use mocking to do it.
>Smaller, more focused, cleaner tests are better in my opinion.
Cleaner is subjective. I can write “small” and “focused” functional tests that are also quick to run.
I am of the opinion that functional tests provide more value. They are testing more of the actual code than an approximation, which in turn gives a better indicator that it works. Functional tests are less likely to change unless the input/output changes.
Now let’s say you mock something in your function. Let’s say you make a change to that but the input and output are the exact same. Now you have to update your test.
Not to labour the point here, but no, the primary reason you isolate variables in a scientific experiment is that you want to ensure you're only testing the thing you intend to test. A medical study is a good example - you want to be sure that the effect you observed was due to the drug you're testing, and not some unrelated lifestyle factor.
Thanks for sharing your views on the rest; there was just one thing I wanted to expand on:
> Now let’s say you mock something in your function. Let’s say you make a change to that but the input and output are the exact same. Now you have to update your test.
I think the scenario you're describing here is: a function's dependencies have changed, but the inputs and outputs of that function have not; therefore even though the behaviour is the same, the tests still need to be updated. Is that right? In which case I would say: of course you need to update the tests - the dependencies have changed and therefore the behaviour of the function depends on different things and you need to model the behaviour of those new things in order to properly test the original function. To me this objection only holds if you are mainly focussed on code coverage; however, to me, good testing exercises the same code paths in multiple different ways to stress the code and ensure that the results are correct given all possible inputs. The dependencies of a function are also inputs of a kind.
>Is that right? In which case I would say: of course you need to update the tests.
That is right. I think it is bad for you to need to update a test where the input and output are the same. Your mock is there for you to essentially ignore, but now you need to update the test. You now do not know if you introduced a bug.
You are losing out on encapsulation, the test should not know about the internals, generally speaking.
>The dependencies of a function are also inputs of a kind.
Typically that should not be a concern to the caller of the function.
I've been using Mockito for about 4 years, all in Kotlin. I always found it to be "plenty good" for like 99% of the cases I needed it; and things more complicated or confusing or messy were usually my fault (poor separation of concerns, etc).
I regularly found it quite helpful in both its spy() and mock() functionality.
I never found it meaningfully more or less useful than MockK, though I have heard MockK is the "one that's better for Kotlin". It's mostly just vocabulary changes for me, the user.
I'm going to have to monitor Mockito's future and see if I'll need to swap to MockK at some point if Mockito becomes unmaintained.
Pretty understandable.
Here are some examples that have hit the graveyard: It's been 2 years since exception handling in switch was proposed [0], 3 years since null-restricted types were proposed [1], 4 years since string templates [2], 8 years since concise method bodies [3].
I really hope that is just resource drain from Valhalla and that they'll be able to pick these back up afterwards.
[0] https://inside.java/2023/12/15/switch-case-effect/
[1] https://openjdk.org/jeps/8303099
[2] https://openjdk.org/jeps/430
[3] https://openjdk.org/jeps/8209434
PS: personally I really like Swift, I would suggest giving it a try for fun
I don’t think it’s for me though. It’s very Apple centric and I’m a Windows gamer and PowerShell user at heart.
Based on your other comment you prefer "fat" languages like kotlin and C# - and that's fair. I find languages with less, but more powerful features far more elegant and while Java has its fair share of historic warts, modern additions are made in a very smart way.
E.g. switch expressions seamlessly support sum and product types, meanwhile kotlin's `when` really is just syntactic sugar.
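For instance, a minimal Java 21 sketch of that (the Shape hierarchy is invented): a sealed interface as the sum type, records as product types, and an exhaustive switch the compiler checks.

```java
sealed interface Shape permits Circle, Rect {}
record Circle(double radius) implements Shape {}
record Rect(double w, double h) implements Shape {}

class Geometry {
    static double area(Shape s) {
        return switch (s) {
            case Circle(double r) -> Math.PI * r * r;
            case Rect(double w, double h) -> w * h;
            // no default branch: adding a new Shape breaks compilation here
        };
    }
}
```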
All in all, with too many features you have to support and understand the complete matrix of their interactions as well, and that gets complicated quickly. And Kotlin is growing towards that.
What do you mean by broken exception system?
Java is not a small language where you can just throw features around; the designers have to take into consideration its final goals and decades of development.
Yes, even Mark Reinhold admitted that in the last "Ask the Architects" interview.
>null restricted type will come with valhalla
Will they? It's been 10+ years of Valhalla. Why is a compiler construct even gated behind Project Valhalla? Kotlin has shown you don't need it to do them.
> string templates had a try
Yes they were over engineered and they failed to deliver a basic feature.
> Java is not a small language where you can just throw features around; the designers have to take into consideration its final goals and decades of development.
Yes I agree, but it shouldn't take YEARS to ship anything. 11 years for a JSON api?! Come on.
Anything? Or maybe something that you specifically want?
It added records when I wanted them, and it added streams and lambdas. I don't care for string templates (I think those are ugly in any language).
null-restricted types would be nice, but you have to understand that designing a language is not throwing every possible feature on top of it (like Kotlin, with its resulting lack of readability and hard time updating to newer JDKs); you have to design it, think through the possibilities, what users really want, etc.
Valhalla type system results in addition of null-restricted types, as a natural evolution of the language.
String templates are valuable syntactic sugar, but that's it.
There are a bazillion JSON libraries with at least 3-4 absolutely stellar ones. I don't really see it that big of a limiter.
And if you mean checked exceptions, that's controversial to claim it's all bad. But some ergonomics improvements would be better.
When used as a different way of writing an if/else chain, it can be a code smell. When used with an exhaustive list of an enum or sealed class, it prevents bugs in production.
Java was stagnant and ripe for being kicked off the top.
Scala was the hack that showed why Java was starting to suck but Scala suffered from no direction. Every single idea a PhD ever had was implemented in the language with no thought other than it seems cool to be able to do that too. Which is why all the Scala codebases fell apart, because you could write anything, anyway you want, and it was impossible to maintain without extremely strict guidelines on what parts of the language you were allowed to use. Also the build times were atrocious.
Kotlin designers evaluated language features across the ecosystem and chose what made sense to make Java better when it came out and now has easily surpassed Java. They are still thoughtful in what they choose to add to the language and it is now very powerful. However they smartly imitate Python by having clear guidelines on what choices you should be making when writing code so that other developers can easily dive in.
I'd really like to see a Python framework that embraces class-based controllers to do away with this problem; beyond that writing a dependency injection system is not too difficult of a task (the last time I did it for a hobby project many years ago, it was around ~150 lines of code).
Well, I absolutely disagree with this take. Scala actually builds on top of a couple of its powerful primitives. Especially Scala 3 is a beautiful language.
> Kotlin designers evaluated language
Well, I like scala, and most of the time I understand it, but I can't say the same with Kotlin. I would say quite the opposite is true.
If I were OP I'd retire happy knowing that a very thankless job is well done! Given what it does: the more outrage the better. Projects like Mockito call out the lazy and indolent for who they are, and the hissing and spitting in return can simply be laughed at.
10 years this bloke has given his time and effort to help people. He states: nearly a third of his life.
I'll raise a glass and say "was hale" or perhaps wassail as an Englander might.
So funny, he essentially works for free for 10 years, then finally burns out because he doesn't want to put up with a bunch of annoying work? This is why you shouldn't work on open source unless you have a business strategy to get paid. Tons of stuff in life is 100x more annoying and exhausting if you aren't making any money. If he was making $1 million per year from this I doubt his energy would be drained.