OOP is shifting between domains, not disappearing
Story posted Nov 20, 2025 at 3:15 PM EST; first comment 37 minutes after posting; 127 comments loaded.
Are we talking about using classes at all? Are we arguing about Monoliths vs [Micro]services?
I don't really think about "OOP" very often. I also don't think about microservices. What some people seem to be talking about when they say they use "OOP" seems strange and foreign to me, and I agree we shouldn't do it like that. But what _other_ people mean by "OOP" when they say they don't use it seems entirely reasonable and sane to me.
Why even comment in an article about those topics then?
I think in terms of language features and patterns which actually mean something. OOP doesn't really mean anything to me, given that it doesn't seem to mean anything consistent in the industry.
Of course I work with classes, inheritance, interfaces, overloading, whatever quite frequently. Sometimes, I eschew their usage because the situation doesn't call for it or because I am working in something which also eschews such things.
What I don't do is care about "OOP" as a concept in and of itself.
For example, I hate Java because of OOP, but strong typing can make a lot of a language's flaws tolerable. Does the writer of the article agree with me? They don't seem to know whether they do.
Using classes hasn't been a part of the definition of OOP since the Treaty of Orlando. Pre-ECMAScript-2015 JS is a mainstream OOP language that doesn't have classes, just prototypes. (Arguably ECMAScript 2015 classes aren't really classes either.)
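To make the "OOP without classes" point concrete, here's a toy sketch of prototype-style delegation, roughly how pre-ES2015 JavaScript works, translated into Python. All the names here are made up for illustration; this is not a real library.

```python
# Prototype-style object system without classes: objects are plain dicts
# that delegate missing lookups to a parent prototype.

def make_object(proto=None, **slots):
    return {"__proto__": proto, **slots}

def lookup(obj, name):
    # Walk the prototype chain until the slot is found.
    while obj is not None:
        if name in obj:
            return obj[name]
        obj = obj.get("__proto__")
    raise AttributeError(name)

def send(obj, name, *args):
    # A "method call": look up a function slot and pass the receiver first.
    return lookup(obj, name)(obj, *args)

animal = make_object(speak=lambda self, *_: f"{lookup(self, 'name')} makes a sound")
dog = make_object(proto=animal, name="Rex")

print(send(dog, "speak"))  # Rex makes a sound
```

Dispatch and inheritance both fall out of the delegation chain; no class declaration anywhere.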
I see what you did there
Anecdotally, I've replaced OOP with plain data structures and functions.
I think this is why FP is becoming more popular these days but I'm not sure some people get why. The problem with OOP is you take a data set and spread it all over a 'system' of stateful (mutable) objects and wonder why it doesn't/can't all fit back into place when you need it to. OOP looks great on paper and I love the premise but...
With FP you take a data set and pass it through a pipeline of functions that give back the same dataset or you take a part of that data out, work on it and put it straight back. All your state lives in one place, mutable changes are performed at the edges, not internally somewhere in a mass of 'instances'.
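The pipeline shape described above can be sketched in a few lines (a toy illustration; the step names are made up):

```python
# State lives in one place and flows through a pipeline of pure functions;
# each step returns a new dataset instead of mutating objects in place.
from functools import reduce

def drop_cancelled(orders):
    return [o for o in orders if not o.get("cancelled")]

def normalize(orders):
    return [{**o, "total": o["qty"] * o["price"]} for o in orders]

def pipeline(data, *steps):
    # Thread the dataset through each step in order.
    return reduce(lambda acc, step: step(acc), steps, data)

orders = [
    {"qty": 2, "price": 5.0},
    {"qty": 1, "price": 3.0, "cancelled": True},
]
result = pipeline(orders, drop_cancelled, normalize)
print(result)  # [{'qty': 2, 'price': 5.0, 'total': 10.0}]
```

Mutation, if any, happens only at the edges (loading `orders`, persisting `result`), never inside the steps.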
I think microservices et al. try to alleviate this by spreading the OO system's instances into silos, but that just moves the problems elsewhere.
Agreed. I think objects/classes (C++) should be for software subsystems and not so much for user data. Programs manipulate data, not the other way around - polymorphism and overloading can be bad for performance.
Outside that use case, I think polymorphism via inheritance is generally a mistake.
Programs manipulate data and datastructures organize that data in a way that's algorithmically efficient.
The main issue with OOP is that without a very clear abstraction, it can be almost impossible to reason about code as you end up needing to know too much about the hierarchy of code to correctly understand what will happen next. As it turns out, most programmers are pretty bad at managing that abstraction boundary.
In moderation, an object is a data structure with associated functions (methods) that acts as a kind of namespace. If your data structure and functions are separate, you might start having function name collisions.
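The namespacing point looks like this in practice (a minimal sketch; `Stack` and the function names are invented for the example):

```python
# With free functions, the name must carry the type to avoid collisions;
# as a method, "push" is scoped by the class and needs no prefix.
from dataclasses import dataclass, field

def stack_push(items, x):
    return items + [x]          # prefix needed: "push" alone would clash

def queue_push(items, x):
    return [x] + items          # ...with this "push"

@dataclass
class Stack:
    items: list = field(default_factory=list)

    def push(self, x):          # namespaced by Stack, no prefix required
        self.items.append(x)
        return self

s = Stack().push(1).push(2)
print(s.items)  # [1, 2]
```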
Hopefully we won't see a prohibition against OOP.
Though OOP is just one step - structured programming works on the same problem.
Even with plain objects though, the point isn't the inheritance! The point is to put an interface on the data. Inheritance is sometimes useful, but there is a reason we keep screaming "prefer composition over inheritance" (even though few listen).
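For anyone who hasn't seen the slogan cashed out, a minimal sketch (the `Logger`/`OrderService` names are hypothetical):

```python
# Composition over inheritance: instead of inheriting a logger's behaviour,
# hand the object a logger it delegates to.

class Logger:
    def log(self, msg):
        return f"[log] {msg}"

# Inheritance couples OrderService to Logger's entire surface:
class OrderServiceInheriting(Logger):
    def place(self, order_id):
        return self.log(f"placed {order_id}")

# Composition keeps the dependency explicit and swappable:
class OrderService:
    def __init__(self, logger):
        self.logger = logger

    def place(self, order_id):
        return self.logger.log(f"placed {order_id}")

svc = OrderService(Logger())
print(svc.place(42))  # [log] placed 42
```

Both versions behave the same today; the composed one lets you replace the logger without touching the class hierarchy.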
Sounds like C.
When was the last time you did OO against a .h file without even needing access to the .c file?
> And so, the process/network boundary naturally became that highest and thickest wall
I've had it both ways. Probably everyone here has. It's difficult to make changes with microservices. You gotta open new routes, and wait for people to start using those routes before you close the old ones. But it's impossible to make changes to a monolith: other teams aren't using your routes, they're using your services and database tables.
Data hiding is just one of the concepts of OOP. Polymorphism is another one.
How that's implemented is another question. You can do OOP in plain C, several libraries kinda did that, like GTK. Other languages tried to support these concepts with less boilerplate, giving rise to classes and such. But OOP is not about language features, it's fundamentally a way of designing software.
C is capable enough to program in a way that is basically OOP and without using non-idiomatic code. The C++ object system wasn't created in a vacuum.
As a total beginner to the functional programming world, something I've never seen mentioned at length is that OOP actually makes a ton of sense for CRUD and database operations.
I get not wanting crazy multi tier class inheritance, that seems like a disaster.
In my case, I wanted to do CRUD endpoints which were programmatically generated based on database schema. Turns out - it's super hard without an ORM or at least some kind of object layer. I got halfway through it before I realized what I was making was actually an ORM.
Please feel free to let me know why this is all an awful idea, or why I'm doing it wrong, I genuinely am just winging it.
Not at all. OOP is great at simulations, video games, and emergent behaviour in general. If you do CRUD with OOP you will complain about overengineering.
If I have to build an app, I'm going for rails. If I'm building a back end, I'm reaching for Go. If I need to integrate with Python libraries, Django is great.
But ask me again when I get to the other side of some OCaml projects.
I was mainly doing this in Go, posted more in a side post.
I've heard this a lot in my career. I can agree that most object-oriented languages have had to do a lot of work to make CRUD and database operations easy to do, because they are common needs. ORM libraries are common because mapping between objects and relations (SQL) is a common need.
It doesn't necessarily mean that object-oriented programming is the best for CRUD because ORMs exist. You can find just as many complaints that ORMs obfuscate how database operations really work/think. The reason you need to map from the relational world to the object world is because they are different worlds. SQL is not an object-oriented language and doesn't follow object-oriented ideals. (At least, not out of the box as a standardized language; many practical database systems have object-oriented underpinnings and/or present object-oriented scripting language extensions to SQL.)
> it's super hard without an ORM or at least some kind of object layer
This seems like you might have gotten caught in something of a tautological loop: because you were working in a language with "object layers", it seemed easiest to work with objects, and thus with an ORM.
It might also be confusing the concepts of "data structure" and "object". Which most object-oriented languages generally do, and have good reason to. A good OOP language wants every data structure to be an object.
The functional programming world still makes heavy use of data structures. It's hard to program in any language without data structures. FP CRUD can be as simple as four functions `create`, `read`, `update`, and `delete`, but still needs some mapping to data structures/data types. That may still sound object-oriented if you are used to thinking of all data structures as "objects". But beyond that, it should still sound relatively "easy" from an FP perspective: CRUD is just functions that take data structures and make database operations, or make database operations and return data structures.
A difference between FP and OOP's view of data structures is where "behaviors" live. An object is a data structure with "attached" behaviors which often modify a data structure in place. FP generally relies on functions that take one data structure and return the next data structure. If you aren't using much in the way of class inheritance, if your "objects" out of your ORM have few methods of their own, you may be closer to FP than you think. (The boundary is slippery.)
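To make the four-function shape concrete, a toy sketch with plain dicts standing in for rows and an in-memory "table" standing in for the database (all names invented for illustration):

```python
# FP-flavoured CRUD: each function takes a data structure and returns the
# next one; nothing is mutated in place, and there is no object layer.

def create(table, row_id, row):
    return {**table, row_id: row}

def read(table, row_id):
    return table.get(row_id)

def update(table, row_id, changes):
    return {**table, row_id: {**table[row_id], **changes}}

def delete(table, row_id):
    return {k: v for k, v in table.items() if k != row_id}

db = {}
db = create(db, 1, {"name": "Ada"})
db = update(db, 1, {"name": "Ada Lovelace"})
print(read(db, 1))  # {'name': 'Ada Lovelace'}
db = delete(db, 1)
print(db)  # {}
```

Against a real database, each function would issue a query instead of rebuilding a dict, but the shape is the same: data in, data out.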
I mean, I think this is likely the case. So, I tried this, for example in Go, which is not really a proper functional programming language as I understand it, but is definitely not object-oriented.
So for my use case, I wanted to be able to take a database schema and programmatically create a set of CRUD endpoints in a TUI. Based on my pretty limited knowledge of Go, I found this to be pretty challenging. At first, I built it with Soda / Pop, the ORM from Buffalo framework. It worked fairly well.
Then I got frustrated with using Soda outside Buffalo, and yoinked the ORM to try and remove a layer. Using vanilla Go, it seems like the accepted pattern is that you create separate functions for C R U and D, as you referred to. However, it seems like this is pretty challenging to do programmatically, particularly without sophisticated metaprogramming, and even if you had a language which had complex macros or something, that is objectively significantly harder than object.get() and object.save().
Finally, I put GORM back in, and it worked fine. GORM is a nice library, even though I think having an ORM is not the "Go" way of doing things in the first place. But also, GORM is basically using function magic to feel like OOP. And maybe the problem with this idea is that it's not "proper Go" to make a thing like this; it would be better to just code it. There's an admin panel in the Pagoda Go stack which relies on the ent ORM to function as well. I can only guess at the developer's motivations, but I assume they are along the same lines as my experience.
I certainly don't think any of this requires insane class inheritance, and maybe that's all people are talking about with OOP. But I still think methods go a long way in this scenario.
In the real world, in business logic, objects do things. They aren't just data structures.
To summarize, CRUD seems pretty easy in any language, programmatically doing CRUD seems super hard in FP. Classes make that a lot easier. Maybe we shouldn't do that ever, and that's fine, but I'm a Django guy, I love my admin panels. Just my experience.
Methods at all make a language OOP. Class inheritance is almost a side quest in OOP. (There are OOP languages with no class inheritance.)
Go seems quite object-oriented to me. I would definitely assume it is easier to use an ORM in Go than to not use an ORM.
I don't use a lot of Go, so I can't speak to anything about what the "proper Go" way of doing things is.
I could try to describe some of the non-ORM, functional programming ways of working with databases as I've seen in languages like F#, Haskell, or Lisp, but I'm not sure how helpful that would be to show that CRUD is not "super hard" in FP especially because you won't be familiar with those languages.
The thing I'm mostly picking up from your post here is that you like OOP and are comfortable with it, and that's great. Use what you like and use what you are comfortable with. OOP is great in that a lot of people also like it and feel comfortable with it.
I think Go is pretty much an OOP-like programming language. While it maybe does not "look like" an OOP language, it seems to me to allow a wide range of constructs and concepts from OOP.
I am not a Go programmer just reading about it, so I could be wrong.
OOP is nothing but trouble when you try to do some advanced database operations. Select some columns, aggregate them. That is hard in OOP. Throw in window functions and OOP just decides you don't exist.
It's fashionable to dunk on OOP (because most examples - like employee being a subtype of person - are stupid) and ORM (because yes you need to hand write queries of any real complexity).
But there's a reason large projects rely on them. When used properly they are powerful, useful, time-saving and complexity-reducing abstractions.
Code hipsters always push new techniques and disparage the old ones, then eventually realise that there were good reasons for the status quo.
Case in point: the arrival of NoSQL and the wild uptake of MongoDB and the like last decade. Today people have re-learned the value of the R part of RDBMS.
You don’t want lazy loading. You don’t want to load 1 thing. You don’t want to update 1 thing.
You want to actually exploit RETURNING and not have the transaction fail on a single element in batch.
If you care about performance you do not want ORM at all. You want to load the response buffer and not hydrate objects.
If you ignore ORM you will realize CRUD is easy. You could even batch the actual HTTP requests instead of processing them 1 by 1. Try to do that with a bunch of objects.
I would personally never use ORM or dependency injection (toposort+annotations). Both approaches in my opinion do not solve hard problems and in most cases you don’t even want to have the problems they solve.
Business logic ran fine on ancient mainframes. It can run fine on Raspberry Pis.
CRUD is super easy. It's also not super resource intensive.
I know that's the path that led us all down into Java OOP / start menu is a react native component, but it is actually true.
An ORM adds a convenience layer. It also adds some decent protection against SQL injection OOTB, and other dev comforts.
Is that trade off worth it? Probably not. But sometimes it's the best tool for the job
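Worth noting that the injection protection an ORM gives you out of the box is really just parameterized queries, which the plain driver offers too. A minimal sketch using Python's `sqlite3` as a stand-in for any DB-API driver:

```python
# Parameterized queries vs. string interpolation: the placeholder keeps
# user input as data, so it can't rewrite the SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

evil = "alice' OR '1'='1"

# Unsafe: the input becomes part of the query text.
unsafe = conn.execute(
    f"SELECT count(*) FROM users WHERE name = '{evil}'"
).fetchone()[0]

# Safe: the driver binds the input as a value.
safe = conn.execute(
    "SELECT count(*) FROM users WHERE name = ?", (evil,)
).fetchone()[0]

print(unsafe, safe)  # 1 0
```

The interpolated query matches every row because the injected `OR '1'='1'` is parsed as SQL; the bound query matches none because no user is literally named that string.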
The only thing that matters is what your users feel when using your product. Everything else is a waste of time. OOP, FP, language choice, it's all just fluff.
1. Migration
2. Validation when inserting
3. Validation when loading
3.1. Serialization
4. Joins
5. Abstracts away the DB fully if you want, so you don't use DB-specific features
6. Lazy loading as an encapsulation-promoting mechanism
None of these things are especially hard, and I'd argue query builders that compose, plus some other tools, deal with these points in a simpler and more efficient manner. Migrations in most cases require careful consideration with multiple steps. Simple cases are simple without an ORM.
I'm pretty confident most users of ORMs are dealing with problems inflicted by ORM behavior, not by the DB. The biggest is the natural push towards single-entity logic that is prevalent in OOP and ORM design.
You can go further and ask me if I imply lazy loading is mandatory.
Imagine what happens when lazy loading turns off. You lose encapsulation. How will OOP work now if you have to reason about your whole call stack and know what exactly has to load up front?
Why can lazy loading be turned off? What if I write my code BAU and then realize I need to turn it off?
That's because you are wrong. There's nothing in relational database mapping that makes objects a better target than even the normal data structures you see in functional languages.
> In my case, I wanted to do CRUD endpoints which were programmatically generated based on database schema.
Which is a pure transformation.
The problem is that CRUD applications are a poorly explored area. The only mature projects out there are the OOP ORM ones. That's not because OOP is inherently better suited to the application; it's because there are simply not a lot of people willing to take the risk of working on that problem.
(And the reason people are not willing can be because developers don't choose their tools through rational evaluation, or may be some irrational one IDK. Mine is certainly because I know if I built an amazing system, nobody would come.)
I don't know! Can you explain it, or how I would use it for this application?
> And the reason people are not willing can be because developers don't choose their tools through rational evaluation, or may be some irrational one IDK.
I think "do the tools exist in the world" is a pretty rational evaluation. I'd love to see the FP equivalent!
A pure function is a function whose output depends only on its parameters and which has no internal state. Which in my opinion is a somewhat useless distinction, because you can make your state/instance/whatever the first parameter (which is what C and Python do) and, tada, every function that doesn't use globals (or statics in C) is a pure function.
A pure function doesn't change its inputs; it simply uses them to craft its outputs.
Bringing in an object or function via parameter to perform these side effects still leaves the function impure.
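The distinction the two replies are drawing can be shown in a few lines (toy example):

```python
# A pure function's output depends only on its arguments, and it doesn't
# mutate them; passing state as a parameter is not enough if you mutate it.

def pure_append(xs, x):
    return xs + [x]          # builds a new list; the input is untouched

def impure_append(xs, x):
    xs.append(x)             # mutates the caller's list: a side effect
    return xs

a = [1, 2]
b = pure_append(a, 3)
print(a, b)   # [1, 2] [1, 2, 3]

c = [1, 2]
d = impure_append(c, 3)
print(c, d)   # [1, 2, 3] [1, 2, 3]
```

Both functions take their "state" as the first parameter, but only the first is pure: the second changed something its caller can observe.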
Microservices are more about making very concrete borders between components, with an actual network in between them... and really a contract that has to be negotiated across teams. I feel the best thing this did was force a real conversation around the API boundary and contract. Monoliths turn into a big ball of mud once a change slips through that passes in an entire object when just a field is needed, and after a few of these everything is fairly tightly coupled. Modern practices with PRs could prevent a lot of this, but there is still a lot of rubber stamping going on and they don't catch everything.
Objects themselves are fine ideas, and I think OOP is great when you focus on composition over inheritance, with bonus points if the objects map cleanly into a relational database schema; once you start getting inheritance hierarchies, they often do not.
If I had to guess, your experience with OOP is mostly using ORMs where you define the data and it spits out a table for you and some accessor methods, and that works... until it doesn't. At a certain level of complexity the ORM falls apart, and what I have seen in nearly every place I have worked is that at some point some innocuous change gets included and now all of a sudden a query does not use an index properly; it works fine in dev, but then you push it to prod and the DB lights on fire and it's really difficult to understand what happened.
The style of programming you are talking about would be derided by some old heads as "C with objects" and not "really" OOP. But I do think you are onto something by taking the best parts and avoiding the bad.
"Micro" services aren't great when they are taken to their utmost tiny size, but the idea of a problem domain being well constrained into a deployable unit usually leads to better long term outcomes than a monolith, though its also very true that for under $10k you can get 32 cores of xeons and about 256 gigs of ram, and unless you are building something with intense compute requirements, that is going to get you a VERY long way in terms of concurrent users.
OMG! As an interviewer I would have asked you to elaborate on that response. Perfect opportunity to see if and how the candidate thinks and how deep some pocket of understanding goes.
They are a solution to communication and organizational challenges, not a technical one.
As every other solution they have cons, some of which you have outlined.
“We can move faster” (but at the cost of our product being slower).
Dependency injection has to be the most successful one, but there's at least another dozen good ideas that came from the OO world and have been found to be solid.
What has rarely proven to be a good idea instead is inheritance at behavior level. It's fine for interfaces, but that's it. Same for stateful classes, beyond simple data containers like refs.
You can even have classes in functional programming word, it's irrelevant, it's an implementation detail, what matters is that your computations are pure, and side effects are implemented in an encoded form that can be combined in a pure way (an IO or Effect data type works, but so can a simple lazy function encoding).
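One way to read "side effects in an encoded form": represent an effect as a plain description and compose those descriptions purely, so nothing runs until an interpreter executes them. A minimal sketch using zero-argument thunks as the encoding (a simplification of what IO/Effect types do):

```python
# Effects as values: building the program performs nothing; only calling
# the composed thunk (the "interpreter" step) makes effects happen.

def effect(f, *args):
    return lambda: f(*args)          # a lazy, not-yet-performed action

def sequence(*effects):
    # Pure composition: returns a new description, runs nothing.
    def run():
        return [e() for e in effects]
    return run

log = []
program = sequence(
    effect(log.append, "connect"),
    effect(log.append, "send"),
)

assert log == []      # composing the program had no side effects
program()             # effects happen only here
print(log)            # ['connect', 'send']
```

Real effect systems add error handling, typing, and combinators, but the core move is the same: side effects become data you can combine purely.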
OOP can be just about structuring code, like the Java OOP fundamentalism, where even a function must be a Runnable object (unless it's changed since Oracle took over). If there's anything that is not an object, it's a function!
Some things are not well-suited to OOP, like linear processing of information in a server. I suspect this is where the FP excitement came from. In transforming information and passing it around, no state is needed or wanted, and immutability is helpful. FP in a UI or a game is not so fun (witness all the hooks in React, which in anything complicated is difficult to follow), since both of those require considerable internal state.
Algorithms are a sort of middle ground. Some algorithms require keeping track of a bunch of things, others more or less just transform the inputs. OOP (internal to the algorithm) can make the former much clearer, while it is unhelpful for that latter.
The author is complaining about bloat.
The thing is, in this case, the bloat has highly tangible costs: Spreading an application across multiple computers unnecessarily adds both operation costs and development costs.
The problem is that if you grow your monolith to the point that it becomes necessary to split it up, you generally have a long, expensive and error-prone migration on your hands. If you architect your system with this eventual transition in mind from the start, though, it can be much less painful.
Personally I'm against the practice of enforced state-hiding. I prefer the convention that state ought not to be depended upon, but if you do want to depend on it (sometimes important for composition or inheritance) you have tests to catch changes in the behavior of that state.
And in practice there's often a state-machine of some kind hidden away inside various object types that you cannot avoid depending on.
Then you are against message passing paradigms. Message passing paradigm implies you can't just go and set state/create state directly, you need to send a message and then the receiver decides what to do with that message.
Its like saying that you like functional programming with mutable state, mutable state makes it no longer functional programming even if you use functions here and there.
Similarly message passing allows an actor to decide how to respond to an event, to forward a message, to ignore it, etc. That's possible but more difficult to achieve with Java/C++ "straightjacket" style OOP. Important patterns for GUIs like the Observer pattern are just simpler with a message passing paradigm.
I rather liked the old post "Object Oriented Programming is an Expensive Disaster that Must End" written over 10 years ago.
https://medium.com/@jacobfriedman/object-oriented-programmin...
Many complained the post was too long, and then debated all kinds of things brought up in the article (such is the way of the internet).
But the one thing I really liked is how it laid out that everyone has a different definition of what OOP is and so it is difficult to talk about.
You can't make the all-generic implementation; then you would just get a more complicated formulation of a Turing machine. Software is useful in that it narrows down the expressiveness of computation to a single problem. A generic implementation is able to express anything and thus nothing, i.e. it no longer contains the information of the problem.
Why, functions have state -- their closure.
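A closure-as-state example, for anyone who hasn't seen it (toy code):

```python
# A counter whose state lives in the enclosing scope, no class in sight:
# the closure captures and updates `count` across calls.

def make_counter():
    count = 0
    def counter():
        nonlocal count
        count += 1
        return count
    return counter

c = make_counter()
print(c(), c(), c())  # 1 2 3
```

Each `make_counter()` call produces an independent bit of mutable state, which is exactly the "objects are a poor man's closures" half of the old koan linked above.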
https://people.csail.mit.edu/gregs/ll1-discuss-archive-html/...
One principle I pursue aggressively in UI design is NOT hiding state!
A client had written a script to calibrate an embedded system across several operating parameters. The program consisted of a TUI to capture setup parameters, then ran 5-layers of nested FOR loop for several hours. This original program lacked even a basic progress bar. You never knew what it was doing or when it would be done - it hid a lot of internal state!
You might say, "the TUI program was hiding relevant state; your program should only hide the irrelevant state." But as I iterated on the program I realized that literally all of the program state was relevant! So I meticulously crafted a GUI to display every operational detail on screen at all times. This included the state of each FOR loop, a plot of the intermediate results, the state of each TCP connection, the state of the unit under test, the user-selected test parameters, and the full path of the calibration files the test relied upon.
While the program had some "magic" (it would auto-recall the default parameters associated with the model under test, and would auto detect equipment types with a network scan) I future-proofed it by ensuring that ALL of the parameters were both visible (not hidden!) and human-overridable AND resettable (A reset-to-default option would appear when the user changed defaults, in case the selection was unintended.) The GUI is also wire silent (no network packets transmitted) until directed by the user to scan for/connect to equipment, and it gives a visual indicator while waiting for network responses.
The GUI also tracked test progress as a first-class file-saveable object, so not only was there a progress bar, but tests could be resumed if equipment connectivity was lost mid-test. And a 95% confidence interval estimate, shown as "between 9 and 13 minutes remaining" to help production techs plan their day.
Even when state is read-only, I still don't hide it. Once a test has commenced, the selectable test parameters cannot be changed, but displaying them on screen confirms to users that their 5-hour test was not started with the wrong parameters.
My big takeaway from this effort is that ANY hidden/inaccessible state is a liability. A user should be able to observe at all times what their program is up to.
OOP is not better suited for user-interfaces than the alternatives.
> Some things are not well-suited to OOP, like linear processing of information in a server. I suspect this is where the FP excitement came from
'Hiding' isn't good enough.
OO sells the dream of 'black-box' objects, where you only need to be concerned with the inputs and outputs, and only the object cares about its internals.
But what do programmers stick inside objects? The world! Files are outside the black-box, but inside a FileWriter object. Likewise with external webservers and databases.
That's where the FP hype comes from - or at least Haskell. I can encapsulate functions perfectly (and actually stop caring about the internals!)
If I try to stuff an Airplane inside flight-recorder, I get a compile error.
But they come at great cost. If you don't actually HAVE the problems they solve, do everything in your power to avoid them. If you can just throw money at larger servers, you should not use microservices.
I have been programming since 1967. Early in my college days, when I was programming in FORTRAN and ALGOL-W, I came across structured programming. The core idea was that a language should provide direct support for frequently used patterns. Implementing what we now call while loops using IFs and GOTOs? How about adding a while loop to the language itself? And while we're at it, GOTO is never a good idea, don't use it even if your language provides it.
Then there were Abstract Datatypes, which provided my first encounter with the idea that the interface to an ADT was what you should program with, and that the implementation behind that interface was a separate (and maybe even inaccessible) thing. The canonical example of the day was a stack. You have PUSH and POP at the interface, and the implementation could be a linked list, or an array, or a circular array, or something else.
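The canonical stack example above can be sketched like this (illustrative names; the point is that the client sees only PUSH/POP):

```python
# Two implementations behind one interface: the backing representation
# (dynamic array vs. linked cells) is hidden and freely swappable.

class ArrayStack:
    def __init__(self):
        self._items = []
    def push(self, x):
        self._items.append(x)
    def pop(self):
        return self._items.pop()

class LinkedStack:
    def __init__(self):
        self._head = None          # chain of (value, rest) cells
    def push(self, x):
        self._head = (x, self._head)
    def pop(self):
        x, self._head = self._head
        return x

# Client code depends only on push/pop, never on the representation.
for stack in (ArrayStack(), LinkedStack()):
    stack.push(1); stack.push(2)
    print(stack.pop(), stack.pop())  # 2 1
```

Swapping one implementation for the other changes nothing for the client, which is exactly the ADT promise that OOP later baked into the language.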
And then the next step in that evolution, a few years later, was OOP. The idea was not that big a step from ADTs and structured programming. Here are some common patterns (modularization, encapsulation, inheritance), and some programming language ideas to provide them directly. (As originally conceived, OOP also had a way of objects interacting, through messages. That is certainly not present in all OO languages.)
And that's all folks.
All the glop that was added later -- Factories, FactoryFactories, GoF patterns, services, microservices -- that's not OOP as originally proposed. A bunch of often questionable ideas were expressed using OO, but they were not part of OO.
The OOP hatred has always been bizarre to me, and I think mostly motivated by these false associations. The essential OOP ideas are uncontroversial. They are just programming language constructs designed to support programming practices that are pretty widely recognized as good ones, regardless of your language choices. Pick your language, use the OO parts or not, it isn't that big a deal. And if your language doesn't have OO bits, then good programming often involves reimplementing them in a systematic way.
These pro- and anti-OOP discussions, which can get pretty voluminous and heated, seem a lot like religious wars. Look, we can all agree that the Golden Rule is a pretty good idea, regardless of the layers of terrible ideas that get piled onto different religions incorporating that rule.
Good luck with that if you're a C programmer.
Likewise, I see these patterns as equivalent style choices. The problem fundamentally dictates the required organization and data flow, and the same optimal solution will be visible to any skilled developer; these weak style choices of implementation are the only freedom they actually have.
For example, these three are exactly the same:

    state = concept_operation(state, *args)

and

    class Concept:
        def operation(self, *args):
            self.state = ...  # whatever with self.state

and an API call to https://url/concept/operation with a session ID where the state is held.

I suspect people who get emotional about these things haven't spent too much time in the others, to understand why they exist with such widespread use.
It's like food. If you go anywhere and see the common man eating something, there's a reason they're eating it, and that reason is that it's probably pretty ok, if you just try it. It's not that they're idiots.
This comment i posted in an earlier thread rehashing Inheritance vs. Composition for the gazillionth time is highly relevant here - https://news.ycombinator.com/item?id=45943135
OOD/OOP has not gone away, has not shifted etc. but is alive and well; just packaged under different looking gloss. The meta-principles behind it are fundamental to large scale systems development, namely; Separation-Of-Concerns, Modularization, Reuse and Information-Hiding.
I've seen code from people who got the FP "buzz" and suddenly everything had to be FP. It was a nightmare.
The reality is as it always has been, the best programmers write simple, understandable code, regardless of the paradigm they choose.
This has nothing to do with OOP, and can be made out of structured-programming components, or even purely functional components. In fact, stateless services are a staple of horizontal scaling, and could be a poster child of FP taking over the real world (along with React).
What made OOP problematic was mostly shared and concealed mutable state, and the ill-conceived idea of inheritance. Both of these traits are being actively eschewed in most modern software: mutable state is largely separated into databases, inheritance is often rejected in favor of composition. These are all practical, non-ideological choices, ways to relieve well-known pains. In this regard, OOP is on its way out, even in strongholds where it's ingrained into the very fabric, like JVM languages.
The complaints of the author are mostly about component-based architecture, with limited compile-time trust (know thy vendor) and large run-time distrust (every network request should be seen as a potentially malicious request). This is the price that we pay for having so many ready-made building blocks to choose from, and for the ability to make our services available (or access someone else's services) anywhere on the planet, 24/7.
Highly scalable architectures with their complexity were invented by companies who needed them, like, well, Google or Amazon. There is no simple way to serve billions of requests daily. If you serve mere millions, you may not need all that, and can make do with a few beefier boxes. It has nothing to do with OOP, again.
Fake history; the term "software crisis" was coined in 1968:
https://en.wikipedia.org/wiki/Software_crisis
I get that the writing is tongue-in-cheek, but telling just-so stories about how things were so much better in the old days doesn't help anyone.
I also resent our modern problems, but I don't kid myself that I'd enjoy vintage problems any better.
Silly link, though. I highly suggest going back and clicking on the link.
I think the author is correctly picking up on how messy changes in best common practice can be. Also, different communities / verticals convert to the true religion on different schedules. The custom enterprise app guys are WAAAAY different than games programmers. I'm not sure you'll ever get those communities to speak the same language.
OOP is dead. Long live OOP.
And this has tangible costs, too. I saved more than $10k a month in hosting costs for a small startup by combining a few microservices (hosted on separate VMs) into a single service. The savings in development time from eliminating all of the serialization layers are also appreciable.
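The serialization layers being eliminated look roughly like this (a toy sketch, not the commenter's actual system):

```python
import json

# Microservice style: every call crosses a serialization boundary,
# even for trivial logic.
def price_service_handler(request_body: str) -> str:
    data = json.loads(request_body)
    return json.dumps({"total": data["qty"] * data["unit_price"]})

def order_via_http(qty, unit_price):
    body = json.dumps({"qty": qty, "unit_price": unit_price})
    response = price_service_handler(body)   # imagine an HTTP hop here
    return json.loads(response)["total"]

# Merged service: the same logic as a direct in-process call,
# with no encoding, decoding, or network round-trip.
def order_direct(qty, unit_price):
    return qty * unit_price

print(order_via_http(3, 5))  # 15
print(order_direct(3, 5))    # 15
```

Same result either way; the merged version just drops the per-call JSON and transport overhead, which is where both the hosting and the development-time savings come from.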
I read it twice; twice I got nothing.
It is incoherent. Vaguely attempting to take a swing at ... modularity???
Cloud: Separating resources from what gets deployed is a classic separation of concerns.
I don’t miss the days where I had to negotiate with the IT team on hardware, what gets run, and so on.
Personally, I believe the next evolution is a rebalkanization into private clouds. Mid-to-large companies have zero reason to tie their entire computing to, and expose their information to, third-party hosts.
OpenAPI: The industry went through a number of false starts on formal remote-call standards (CORBA, DCOM, SOAP). Those days sucked.
RESTful APIs caught on, and of course at some point the need for a formal contract was recognized.
But note how decoupled it is from the underlying stack: It forces the engineers to think about the contract as a separate concern.
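One way to see that decoupling is that the contract is just data, inspectable without any particular server stack. A minimal sketch in Python (the contract shape follows OpenAPI conventions, but the paths and helper are made up for illustration):

```python
# A minimal OpenAPI-style contract expressed as plain data; it describes
# the interface without referencing any implementation stack at all.
contract = {
    "openapi": "3.0.0",
    "paths": {
        "/orders/{id}": {
            "get": {
                "responses": {"200": {"description": "An order"}}
            }
        }
    },
}

# Any stack, in any language, can check itself against the contract.
def implements(contract, path, method):
    return method in contract["paths"].get(path, {})

print(implements(contract, "/orders/{id}", "get"))   # True
print(implements(contract, "/orders/{id}", "post"))  # False
```

Because the contract lives outside the code, the engineers have to treat it as its own concern, which is the point being made above.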
The problem here is how fragile the web protocol and security actually is, but the past alternatives offer no solution here.
I'd say that right now the edge is with non-oop.
But dare I say too, LLMs will make this battle mean less than it did 5 years ago. Look, I'm not looking to battle LLMs vs. not, but the bottom line is that in 10 years' time nearly all code will be written and/or managed 99% by LLMs.
Also, even those recent languages that boast about not being OOP have type systems that, from the point of view of OOP type theory in computer science, support OOP concepts (OOP is not class-based inheritance and nothing else).
Naturally, since not everyone studies type theory in a CS degree, or attends a CS degree to start with, we get all these discussions about what is or isn't OOP.
I will say that service-oriented architecture does have some advantages, and thus sometimes it's the right choice. Parallelism is pretty free and natural, you can run services on different machines, and that can also give you scalability if you need it. However, in my experience that architecture tends to be used in myriad situations where it clearly isn't needed and is a net negative. I have seen it happen.
OK, I'm out.
> To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.
Typo: dessert
Protocols have their issues, though[0]. Not exactly the same results.
[0] https://littlegreenviper.com/the-curious-case-of-the-protoco...