Cognitive Load Is What Matters
Key topics
The debate around cognitive load in coding rages on, with commenters weighing in on whether AI systems face similar challenges. While some argue that AI's "cognitive load" is fundamentally different from humans', others counter that attention mechanisms in AI are, in fact, a concrete instantiation of cognitive load. As the discussion unfolds, a consensus emerges that simplicity is key, with many pointing out that smart authors often write simpler, not more complex, code - although there's a caveat that some smart people might enjoy over-complicating things. The thread also highlights the delicate balance between keeping data structures complex and algorithms simple, with some warning that this approach can backfire if not done carefully.
Snapshot generated from the HN discussion
Discussion activity
- Very active discussion
- First comment: 1h after posting
- Peak period: 137 comments in the 0-12h window
- Average per period: 20 comments
- Based on 160 loaded comments
Key moments
- Story posted: Aug 30, 2025 at 8:58 AM EDT
- First comment: Aug 30, 2025 at 10:06 AM EDT (1h after posting)
- Peak activity: 137 comments in the 0-12h window
- Latest activity: Sep 7, 2025 at 5:04 PM EDT
If you try to do it algorithmically, you arguably won't find a simple expression. It's often glossed over how readability on one axis can drive complexity along another: when composing code into bite-size readable chunks, the actual logic easily gets smeared across many (sometimes dozens of) different functions, making it very hard to figure out what the code actually does, even though every function checks all the boxes for readability, single responsibility, etc.
E.g. userAuthorized(request) is true, but why is it true? Well, because usernamePresent(request) is true and passwordCorrect(user) is true, both of which also decompose into multiple functions and conditions. It's often a smaller cognitive load to just have all that logic in one place. Even if that's not the local optimum of readability, it may be the global one, because constantly skipping between methods or modules to figure out what is happening is also incredibly taxing.
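A toy sketch of that trade-off (function and data names are invented, not from any real auth system): the decomposed and the inline version compute the same answer, but the inline one keeps the whole decision visible at once.

```python
# Hypothetical sketch; user_authorized, PASSWORDS, etc. are invented.
PASSWORDS = {"alice": "s3cret"}

def username_present(request):
    return bool(request.get("username"))

def password_correct(request):
    # Each predicate reads well alone, but the "why" is buried here.
    return request.get("password") == PASSWORDS.get(request.get("username"))

def user_authorized_decomposed(request):
    # Readable one-liner, yet answering "why is this true?" means
    # chasing through every helper.
    return username_present(request) and password_correct(request)

def user_authorized_inline(request):
    # The same logic in one place: more to read at once, but the
    # whole decision is visible without jumping between functions.
    username = request.get("username")
    if not username:
        return False
    return request.get("password") == PASSWORDS.get(username)

ok = {"username": "alice", "password": "s3cret"}
bad = {"username": "alice", "password": "wrong"}
assert user_authorized_decomposed(ok) == user_authorized_inline(ok) == True
assert user_authorized_decomposed(bad) == user_authorized_inline(bad) == False
```

Neither version is "the" right answer; the point is that the decomposed one optimizes local readability while the inline one optimizes how quickly you can answer a question about the whole.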
Things make more sense when the data structure lives in a world where most, if not all, illegal states become unrepresentable. But given that we often end up building APIs on representations with really weak type systems, doing that becomes impossible.
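One common sketch of "illegal states unrepresentable" is a tagged union: each state carries only the fields valid for it, so the bad combinations can't be constructed (the connection-state names below are invented for illustration).

```python
from dataclasses import dataclass
from typing import Union

# Invented example: one record with both `session_id` and `error`
# fields could represent nonsense ("connected AND failed"); one type
# per state makes that combination unconstructible.

@dataclass
class Disconnected:
    pass

@dataclass
class Connecting:
    attempt: int

@dataclass
class Connected:
    session_id: str

@dataclass
class Failed:
    error: str

ConnectionState = Union[Disconnected, Connecting, Connected, Failed]

def describe(state: ConnectionState) -> str:
    if isinstance(state, Disconnected):
        return "disconnected"
    if isinstance(state, Connecting):
        return f"connecting (attempt {state.attempt})"
    if isinstance(state, Connected):
        return f"connected ({state.session_id})"
    return f"failed: {state.error}"

assert describe(Connected("abc")) == "connected (abc)"
assert describe(Failed("timeout")) == "failed: timeout"
```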
To clarify, when I say "not-that-smart-people", I don't mean "stupid people". You need to be beyond some basic level of intelligence in order to have the capability to overcomplicate a codebase. For lack of a better metric, consider IQ. If your IQ is below 80, you are not going to work day-to-day overcomplicating a codebase. You need to be slightly above average intelligence (not stupid, but also "not-that-smart") to find yourself in that position.
If you make a change at the wrong place, you add more complexity than if you put the change in the right place. You often see the same thing with junior developers, in that case due to a limited mental model of the code. You give them a task that from a senior developer would result in a 2 line diff and they come back changing 45 lines.
I suspect that we agree with each other and you misread my earlier comment.
The auth example may not be. You may need to do validatePassword(user) for passwordCorrect(user) to be true, which then forces you to open up a hole in the abstraction that is userAuthorized(request) and peek inside. userAuthorized() has leaked its logic; it has failed as an abstraction. It's a box with 3 walls and no roof that blocks visibility into important logic rather than hiding away the complexity.
Read the fine print.
Well, I say that. Clearing the read buffer that sometimes gets stuck with empty characters based on carriage-return semantics does force me in a bit.
Also effort, there are smart people who couldn't be bothered to reduce extraneous load for other people, because they already took the effort to understand it, but they don't have the theory-of-mind to understand that it's not easy for others, or can't be bothered to do so.
> I have only made this letter longer because I have not had the time to make it shorter. - Blaise Pascal
A good rule of thumb I find is: did the new change make it harder or easier to reason about the change/topic?
If we go back to the concept of cognitive load, it's fine if cognitive load goes up when the solution is necessarily complex. It's the extraneous part that we should work to minimize, or reduce where possible.
I don't like to generalize.
You are lucky, then. I've definitely worked with super smart engineers who chose incredibly complicated solutions over simpler, more pragmatic ones. As a result the code was generally hard to maintain and especially difficult to understand.
It is a real thing. And it generally happens with "the smart ones" because people who don't know how to make things complicated generally stick with simpler solutions. In my experience.
The people writing the complex code generally seem to think they're smart.
That was me, once. And I was smart, but I was also applying my smarts very, very poorly.
I'm both bothered and intrigued by the industry returning to, what I call, "pile-of-if-statements architecture". It's really easy to think it's simple, and it's really easy to think you understand, and it's really easy to close your assigned Jira tickets; so I understand why people like it.
People get assigned a task, they look around and find a few places they think are related, then add some if-statements to the pile. Then they test; if the tests fail they add a few more if-statements. Eventually they send it to QA; if QA finds a problem, another quick if-statement will solve the problem. It's released to production, and it works for a high enough percentage of cases that the failure cases don't come to your attention. There's approximately 0% chance the code is actually correct. You just add if-statements until you asymptotically approach correctness. If you accidentally leak the personal data of millions of people, you won't be held responsible, and the cognitive load is always low.
But the thing is... I'm not sure there's a better alternative.
You can create a fancy abstraction and use a fancy architecture, but I'm not sure this actually increases the odds of the code being correct.
Especially in corporate environments--you cannot build a beautiful abstraction in most corporate environments because the owners of the business logic do not treat the business logic with enough care.
"A single order ships to a single address, keep it simple, build it, oh actually, a salesman promised a big customer, so now we need to make it so a single order can ship to multiple addresses"--you've heard something like this before, haven't you?
You can't build careful bug-free abstractions in corporate environments.
So, is pile-of-if-statements the best we can do for business software?
I'm not sure that's anywhere in how we rate the quality of business software. The things that matter:
1. How fast can I or someone else change it next time to fulfill the next requirements?
2. How often does it fail?
3. How much money does the code save or generate by existing?
Good architecture can affect 1 and 2 in some circumstances but not every time and most likely not forever at the rate people are starting to produce LLM garbage code. At some point we'll just compile English directly into bytecode and so architecture will matter even less. And obviously #3 matters by far the most.
It's obviously a shame for whoever appreciates the actual art / craft of building software, but that isn't really a thing that matters in business software anyway, at least for the people paying our salaries (or to the users of the software).
You’ll enjoy the Big Ball of Mud paper[1].
Real world systems are prone to decay. You first of all start with a big ball of mud because you’re building a system before you know what you want. Then as parts of the system grow up, you improve the design. Then things change again and the beautiful abstraction breaks down.
Production software is always changing. That’s the beauty of it. Your job is to support this with a mix of domain modeling, good enough abstraction, and constructive destruction. Like a city that grows from a village.
[1] https://laputan.org/mud/mud.html
[2] my recap (but the paper is very approachable, if long) https://swizec.com/blog/big-ball-of-mud-the-worlds-most-popu...
I think anyone who thinks a mudball is OK because business is messy has never seen true mudball code.
I've had to walk away from potential work because, after looking at what they had, I simply had to tell them: I cannot help you; you need a team and probably at minimum a year to make any meaningful progress. That is what mudballs lead to. What this paper describes is competent work that was pushed too quickly to clean up rough edges but has some sort of structure.
I've seen mudballs that required 6-12 months just to do discovery of all the pieces and parts. Hundreds of different versions of things, no central source control, different deployment techniques depending on the person who coded it, even within the same project.
I’ve seen and created some pretty bad stuff. Point is not that it’s okay, but that that’s the job: managing, extending, and fixing the mess.
Yes a perfect codebase would be great, but the code is not perfect and there’s a job to do. You’re not gonna rebuild all of San Francisco just to upgrade the plumbing on one street.
Much of engineering is about building systems to keep the mess manageable, the errors contained, etc. And you have to do that while keeping the system running.
I've seen numerous places trying to hire someone to fix a 5-10 year mudball that has reached a point where progress is no longer possible without breaking something else which breaks something else and so on.
There is an endgame to the mudball, and it ends in development stopping completely and in systems that are constantly going offline and take weeks to get restarted. Most of the time the place will say, "Oh, we've already had several consultants tell us the same thing." The same thing being that the situation is hopeless and they are facing years of simply untangling the mess they made.
Usually the mudball is held together by a chain of increasingly shorter senior positions that keep jumping the sinking ship faster and faster. Finally they can no longer convince anyone sane to take on the ticking time bomb they have created and they turn to consultants.
Also, my advice is often that you should bring back person X, who was at least familiar with the system, at whatever salary they require. I am inevitably told that that person will literally not even take calls or emails from the company any more, every time. That's how bad a real-world mudball is.
https://s3.amazonaws.com/systemsandpapers/papers/bigballofmu...
It would just keep adding what it called "heuristics", which were just if statements that tested for a specific condition that arose during the bug. I could write 10 tests for a specific type of bug, and it would happily fix all of them. When I added another test with the same kind of bug, it obviously failed, because the fix that Codex came up with was a bunch of if statements that matched the first 10 tests.
I am convinced this behaviour and the one you described are due to optimising for SWE benchmarks that reward one-shotting fixes without regard for quality. Writing code like this makes complete sense in that context.
Thank you for giving a perfect example of what I was describing.
The thing is, you actually can make the software work this way, you just have to add enough if-statements to handle all cases--or rather, enough cases that the manager is happy.
The model of having a circle of ancient greybeards in charge of carefully updating the sacred code to align with the business requirements, while it seems bizarre bordering on something out of WH40K, actually works pretty well and has worked pretty well everywhere I've encountered it.
Attempts to refactor or replace these systems with something more modern have universally been expensive disasters.
Project Manager: "Can we ship an order to multiple addresses?"
Grey Beard: "No. We'd have to change thousands of random if-statements spread throughout the code."
Project Manager: "How long do you think that would take?"
Grey Beard: "2 years or more."
Project Manager: "Okay, we will break you down--err, I mean, we'll need to break the task down. I'll schedule long meetings until you relent and commit to a shorter time estimate."
Grey Beard eventually relents and gives a shorter time estimate for the project, and then leaves the company for another job that pays more half-way through the project.
Project Manager: "Can we ship an order to multiple addresses? We need it in 2 weeks and Grey Beard didn't want to do it"
Eager Beaver: "Sure"
Of course Eager Beaver didn't learn from this experience, because they left the company a few months ago thinking their code was AWESOME and bragging about this one nicely scalable service they made for shipping to multiple addresses.
Meanwhile Grey Beard is the one putting out the fires, knowing that any attempt to tell Project Manager "finding and preventing situations like this was the reason why I told my estimate back then" would only be received with skepticism.
/s
Your counter example assumes the people managing the code base are incompetent.
Wouldn't the rewrite fail for the exact same reason if the company only employs incompetent tech people?
Sales contracts with weird conditions and odd packaging and contingencies? Pile of if statements.
The other great model for business logic is a spreadsheet, which is well modeled by SQL which is a superset of spreadsheet functionality.
So piles of if’s and SQL. Yeah.
Elegant functional or OOP models are usually too rigid unless they are scaffolding to make piles of conditions and relational queries easier to run.
One would imagine that by now we would have some incredibly readable logical language to use with SQL in that context...
But instead we have people complaining that SQL is too foreign and insisting we beat it down until it becomes OOP.
To be fair, creating that language is really hard. But then, everybody seems to be focusing on destroying things more, not on constructing a good ecosystem.
If you find yourself sprinkling ifs everywhere, try to lift them up; they'll congregate in the same place eventually, so all of your variability is implemented and documented in a single place, with no need to abstract anything.
It’s very useful to model your inputs and outputs precisely. Postpone figuring out unified data types as long as possible and make your programming language nice to use with that decision.
Hierarchies of classes, patterns etc are a last resort for when you’re actually sure you know what’s going on.
I'd go further and say you don't need functions or files as long as your program is easy to manage. The only reason you'd need separate files is if your VCS is crippled, or if you're very sure that these datetime handlers need to be reused everywhere consistently.
Modern fullstack programming is filled with models, middleware, Controllers , views , … as if anyone needs all of that separation up front.
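The "lift the ifs up" advice can be sketched roughly as follows (rendering example and names invented): instead of re-checking a condition in several places, decide once near the top and pass the result down.

```python
# Scattered: every function re-checks the condition on its own.
def render_scattered(user, doc):
    header = "admin header" if user["is_admin"] else "header"
    body = doc["raw"] if user["is_admin"] else doc["redacted"]
    return header + "\n" + body

# Lifted: the branch happens once, near the top; everything below
# works off plain data.
def render_for(role, doc):
    if role == "admin":
        return "admin header\n" + doc["raw"]
    return "header\n" + doc["redacted"]

def render_lifted(user, doc):
    role = "admin" if user["is_admin"] else "viewer"
    return render_for(role, doc)

doc = {"raw": "full text", "redacted": "[redacted]"}
assert render_lifted({"is_admin": True}, doc) == render_scattered({"is_admin": True}, doc)
assert render_lifted({"is_admin": False}, doc) == "header\n[redacted]"
```

With two call sites the difference is cosmetic; with twenty, the lifted version gives you one place where the variability lives.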
If your code ever has the possibility of changing, your early wins by having no abstraction are quickly paid for, with interest, as you immediately find yourself refactoring to a higher abstraction in order to reason about higher-order concepts.
In this case, the abstraction is the simplicity, for the same reason that when I submit this comment, I don't have to include a dictionary or a definition of every single word I use. There is a reason that experienced programmers reach for abstractions from the beginning, experience has taught them the benefits of doing so.
The mark of an expert is knowing the appropriate level of abstraction for each task, and when to apply specific abstractions. This is also why abstractions can sometimes feel clumsy and indirect to less experienced engineers.
Even file interfaces in most programming languages don’t come with pipelining. Most are leaky abstractions.
Most abstractions also deal with 1 thing instead of N things. There’s no popular http server that supports batch request processing.
Async-await is a plague of an abstraction.
Abstracting something like trivial if statements is not a problem. The best abstraction of all, passing a function to a function, is underused.
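As a sketch of "passing a function to a function" (pricing names invented): a strategy argument replaces a boolean flag plus a branch inside the callee.

```python
# The caller injects the variable behaviour instead of the function
# branching on flags it was handed.
def total(prices, discount=lambda p: p):
    return sum(discount(p) for p in prices)

def half_off(p):
    return p / 2

assert total([10, 20]) == 30
assert total([10, 20], half_off) == 15.0
```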
Certainly, there are such people who simply don't care.
However I would also say that corporations categorically create an environment where you are unable to care - consider how short software engineer tenures are! Any even somewhat stable business will likely have had 3+ generations of owner by the time you get to them. Owner 1 is the guy who wrote 80% of the code in the early days, fast and loose, and got the company to make payroll. Owner 2 was the lead of a team set up to own that service plus 8 others. Owner 3 was a lead of a sub-team that split off from that team and owns that service plus 1 other related service.
Each of these people will have different styles - owner 1 hated polymorphism and everything is component-based, owner 2 wrapped all existing logic into a state machine, owner 3 realized both were leaky abstractions and difficult to work with, so they tried to bring some semblance of a sustainable path forward to the system, but were busy with feature work. And owner 3 did not get any handoff from owner 2, because owner 2 ragequit the second enough of their equity vested. And now there's you. You started about 9 months ago and know some of the jargon and where some bodies are buried. You're accountable for some amount of business impact, and generally can't just go rewrite stuff. You also have 6 other people on call for this service with you who have varying levels of familiarity with the current code. You have 2.25 years left. Good luck.
Meanwhile I've seen codebases owned by the same 2 people for over 10 years. It's night and day.
Employees aren’t fired. They leave for a 10% increase. Employees are the ones who seek always more in a short-termist way.
They are acting rationally given companies don't seem to value long term expertise.
Now, if it was a worker-owned cooperative, that would be a different thing.
I once tried to explain to a product owner that we should be careful to document what assumptions are being made in the code, and make sure the company was okay committing to those assumptions. Things like "a single order ships to a single address" are early assumptions that can get baked into the system and can be really hard to change later, so the company should take care and make sure the assumptions the programmers are baking into the system are assumptions the company is willing to commit to.
Anyway, I tried to explain all this to the product owner, and their response was "don't assume anything". Brillant decisions like that are why they earned the big bucks.
IMO a lot of (software) engineering wisdom and best practices fails in the face of business requirements and logic. In hard engineering you can push back a lot harder because it's more permanent and lives are more often on the line, but with software, it's harder to do so.
I truly believe the constraints of fast-moving business and inane, nonsensical requests for short-term gains (to keep your product going) make it nearly impossible to do proper software engineering, and actually require these if-else nests to work properly. So much so that I think we should distinguish between software engineering and product engineering.
They fail on reality. A lot of those "best" practices assume that someone understands the problem and knows what needs to be built. But that's never true. Building software is always an evolutionary process; it needs to change until it's right.
Try to build a side project that doesn't accept any external requirements, just your ideas. You will see that even your own ideas and requirements shift over time; a year (or two) later your original assumptions won't be correct anymore.
The issue is when the evolution is random and rife with special cases and rules that cannot be generalized... the unknown unknowns of reality, as you say.
Then, you just gotta patch with if elses.
You’ve just described the universe. It’s full of randomness.
Add to that the fact that they're the professor of many software engineering courses, and you start to see why so many new grads follow SOLID so dogmatically, which leads to codebases quickly decaying.
When the problem itself is technical or can be generalised then abstractions can eliminate the need for 1000s of if-statement developers but if the domain itself is messy and poorly specified then the only ways abstractions (and tooling) can help is to bake in flexibility, because contradiction might be a feature not a bug...
I'm firmly in the "DI||GTFO" camp, so I don't mean to advocate for the Factory pattern, but saying that only the abstractions you like are OK starts to generate PR email threads.
I don't think it is malice or incompetence, but this happens too often to feel good.
I don't see the problem. Okay, so we need to support multiple addresses for orders. We can add a relationship table between the Orders and ShippingAddresses tables, fix the parts of the API that need it so that it still works for all existing code like before using the updated data model, then publish a v2 of the api with updated endpoints that support creating orders with multiple addresses, adding shipping addresses, whatever you need.
Now whoever is dependent on your system can update their software to use the v2 endpoints when they're ready for it. If you've been foolish enough to let other applications connect to your DB directly then those guys are going to have a bad time, might want to fix that problem first if those apps are critical. Or you could try to coordinate the fix across all of them and deploy them together with the db update.
The problems occur when people don't do things properly, we have solutions for these problems. It's just that people love taking shortcuts and that leads to a terrible system full of workarounds rather than abstractions. Abstractions are malleable, you can change them to suit your needs. Use the abstractions that work for you, change them if they don't work any more. Design the code in such a way that changing them isn't a gargantuan task.
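The relationship-table idea above can be sketched in SQL (run here via Python's sqlite3; table and column names are invented, and the back-compat view is just one possible way to keep v1 readers working during the transition):

```python
import sqlite3

# Invented schema; a minimal shape of the "orders ship to multiple
# addresses" migration.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE orders (id INTEGER PRIMARY KEY);
CREATE TABLE shipping_addresses (id INTEGER PRIMARY KEY, addr TEXT);
-- New junction table: one order, many destinations, with quantities.
CREATE TABLE order_shipments (
    order_id   INTEGER REFERENCES orders(id),
    address_id INTEGER REFERENCES shipping_addresses(id),
    quantity   INTEGER NOT NULL
);
-- Back-compat view so v1 readers still see one address per order
-- (here: the lowest address id) until they move to the v2 endpoints.
CREATE VIEW orders_v1 AS
SELECT o.id AS order_id, MIN(s.address_id) AS address_id
FROM orders o JOIN order_shipments s ON s.order_id = o.id
GROUP BY o.id;
""")

db.execute("INSERT INTO orders VALUES (1)")
db.executemany("INSERT INTO shipping_addresses VALUES (?, ?)",
               [(1, "HQ"), (2, "Warehouse")])
db.executemany("INSERT INTO order_shipments VALUES (1, ?, ?)",
               [(1, 5), (2, 3)])

# One order, two destinations; v1 consumers still see a single row.
assert db.execute(
    "SELECT COUNT(*) FROM order_shipments WHERE order_id = 1"
).fetchone()[0] == 2
assert db.execute(
    "SELECT address_id FROM orders_v1 WHERE order_id = 1"
).fetchone()[0] == 1
```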
Which items ship to each of those locations and in which quantities?
What is the status of each of those sub-orders in the fulfillment process?
Should the orders actually ship to those addresses or should the cartons just be packed and marked for those locations for cross-docking and the shipments should be split across some number of regional DC's based on proximity to the final address?
Many things need to be updated in the DB schema+code. And if you think this isn't a very good example, it's a real life example of orders for large retailers.
The details don't really matter for my main point though. The point is you don't solve this problem with workarounds. You find a way to redesign the system to suit your new needs - assuming the business thinks that's worth it. That's like our whole job, we build systems and when they need to change we change them. The only question is how we make those changes - do we do it properly or do we add workarounds until our codebase is a big pile of workarounds?
If the abstraction doesn't fit a new problem, it should be easy to reassemble the components in a different way, or use an existing abstraction and replace some components with something that fits this one problem.
The developers shouldn't be forced to use the abstractions, they should voluntarily use them because it makes it easier for them.
An underappreciated value. I call this composability and it is one of my primary software development goals.
Of course, the disadvantage is the exponential growth. 20 ifs means a million cases (usually less because the conditions aren't independent, but still).
Then I have a flat list of all possible cases, and I can reconstruct a minimal if tree if I really want to (or just keep it as a list of cases - much easier to understand that way, even if less efficient).
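A flat list of cases can be literal data, e.g. a decision table keyed by condition tuples (the shipping rules below are invented for illustration):

```python
# Every combination of conditions is spelled out once instead of
# being implied by a nest of ifs.
CASES = {
    # (is_member, order_over_50): shipping cost
    (True,  True):  0,
    (True,  False): 3,
    (False, True):  5,
    (False, False): 8,
}

def shipping(is_member, total):
    return CASES[(is_member, total > 50)]

assert shipping(True, 60) == 0
assert shipping(False, 20) == 8
```

The table makes the exponential growth visible: with 20 independent conditions the dict would need a million rows, which is exactly the signal that the conditions are not actually independent.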
Making invalid data unrepresentable simplifies so much code. It's not always possible but it is way underused. You can do some of it with object encapsulation too, but simple enums with exhaustive switch statements enforced by the compiler (so if it changes you have to go handle the new case everywhere) is often the better option.
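In a language without compiler-enforced exhaustiveness, a type checker can approximate the "handle every case" guarantee; a hedged Python sketch using `assert_never` (the enum and its members are invented):

```python
from enum import Enum, auto

try:  # typing.assert_never ships with Python 3.11+
    from typing import assert_never
except ImportError:
    from typing import NoReturn
    def assert_never(value) -> NoReturn:
        raise AssertionError(f"unhandled case: {value!r}")

class PaymentState(Enum):  # invented example enum
    PENDING = auto()
    SETTLED = auto()
    REFUNDED = auto()

def label(state: PaymentState) -> str:
    if state is PaymentState.PENDING:
        return "pending"
    if state is PaymentState.SETTLED:
        return "settled"
    if state is PaymentState.REFUNDED:
        return "refunded"
    # A type checker (e.g. mypy) accepts this line only while every
    # member is handled above; add a member and it flags the gap.
    assert_never(state)

assert label(PaymentState.SETTLED) == "settled"
```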
That's the dream. Error handling is what crushes it :)
Instead, at least one implementer needs to get hands dirty on what the application space really is. Very dirty. So dirty that they actually start to really know and care about what the users actually experience every day.
Or, more realistically for most companies, we insist on separate silos, "business logic" comes to mean "stupid stuff we don't really care about", and we screw around with if statements. (Or, whatever, we get hip to monads and screw around with those. That's way cooler.)
Sometimes last mile software turns into these abstractions but often not.
I’ve worked with very smart devs that try to build these abstractions too early, and once they encounter reality you just have a more confusing version of if statement soup.
The way I've been thinking about it is about organization. Organize code like we should organize our house. If you have a collection of pens, I guess you shouldn't leave them scattered everywhere and in your closet, and with your cutlery, and in the bathroom :) You should set up somewhere to keep your pens, and other utensils in a kind of neat way. You don't need to spend months setting up a super-pen-organizer that has a specially sculpted nook for your $0.50 pen that you might lose or break next week. But you make it neat enough, according to a number of factors like how likely it is to last, how stable your setup is, how frequently it is used, and so on. Organizing has several advantages: it makes it easier to find pens, shows you a breadth of options quickly, and keeps other places in your house tidier and so less cognitively messy as well. And it has downsides: you need to devote a lot of time and effort, and you might lose flexibility if you're too strict, like maybe you've been labeling stuff in the kitchen, or doing sketches in your living room, and you need a few pens there.
I don't like the point of view that messiness (and say cognitive load) is always bad. Messiness has real advantages sometimes! It gives you freedom to be more flexible and dynamic. I think children know this when living in a strict "super-tidy" parent house :) (they'd barely get the chance to play if everything needs to be perfectly organized all the time)
I believe in real life almost every solution and problem is strongly multifactorial. It's dangerous to think a single factor, say 'cognitive load', 'don't repeat yourself', 'fewer lines of code', and so on is going to be the single important factor you should consider. Projects have time constraints, cost, and a need for performance; expressing programs, the study of algorithms and abstractions itself, is a very rich field. But those single factors help us improve a little on one significant facet of the craft if we're mindful about them.
Another factor I think is very important as well (and maybe underestimated) is beauty. Beauty for me has two senses: one in an intuitive sense that things are 'just right' (which capture a lot of things implicitly). A second and important one I think is that working and programming, when possible, should be nice, why not. The experience of coding should be fun, feel good in various ways, etc. when possible (obviously this competes with other demands...). When I make procedural art projects, I try to make the code at least a little artistic as well as the result, I think it contributes to the result as well.
[1] a few small projects, procedural art -- and perhaps a game coming soon :)
These individual functions are easier to reason about since they have specific use cases, you don't have to remember which combinations of conditions happen together while reading the code, they simplify control flow (i.e. you don't have to hack around carrying data from one if block to the next), and it uses no "abstraction" (interfaces) just simple functions.
It's obviously a balance, you'll still have some if statements, but getting rid of mutually exclusive conditions is basically a guaranteed improvement.
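A minimal sketch of that refactor, with invented pricing rules: mutually exclusive if-branches become separate functions plus a single dispatch point.

```python
# One function with mutually exclusive branches...
def price_with_branches(customer_type, amount):
    if customer_type == "retail":
        return round(amount * 1.2, 2)
    elif customer_type == "wholesale":
        return round(amount * 0.9, 2)
    return amount

# ...versus one function per case plus a single dispatch point.
def retail_price(amount):
    return round(amount * 1.2, 2)

def wholesale_price(amount):
    return round(amount * 0.9, 2)

PRICERS = {"retail": retail_price, "wholesale": wholesale_price}

def price(customer_type, amount):
    # Each pricer is readable in isolation; the branching lives here.
    return PRICERS.get(customer_type, lambda a: a)(amount)

assert price("retail", 100) == price_with_branches("retail", 100) == 120.0
assert price("unknown", 7) == 7
```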
You're right that the business logic is gonna be messy, and that's because nobody really cares, and they can offload the responsibility to developers, or anyone punching it in.
On the other hand, separating "good code" and "bad code" can have horrible outcomes too.
One "solution" I saw in a fintech I worked at, was putting the logic in the hands of business people itself, in the form of a decision engine.
Basically it forced the business itself to maintain its own ball of mud. It was impossible to test, impossible to understand, and even impossible to simulate. Eventually software operators were hired, basically junior-level developers using a graphical interface for writing the code.
It was rewritten a couple times, always with the same outcome of everything getting messy after two or three years.
It also depends how big the consequences of failures/bugs are. Sometimes bugs just aren't a huge deal, so it's a worthwhile trade-off to make development easier in exchange for potentially increasing the chance of them appearing.
There is also a discussion between the author of Clean Code and APOSD:
https://github.com/johnousterhout/aposd-vs-clean-code
Microsoft had three personas for software engineers that were eventually retired for a much more complex persona framework called people in context (the irony in relation to this article isn’t lost on me).
But those original personas still stick with me and have been incredibly valuable in my career to understand and work effectively with other engineers.
Mort - the pragmatic engineer who cares most about the business outcome. If a "pile of if statements" gets the job done quickly and meets the requirements, Mort ships it. Mort became a pejorative term at Microsoft, unfortunately. VB developers were often Morts; Access developers were often Morts.
Elvis - the rockstar engineer who cares most about doing something new and exciting. Being the first to use the latest framework or technology. Getting visibility and accolades for innovation. The code might be a little unstable - but move fast and break things right? Elvis also cares a lot about the perceived brilliance of their code - 4 layers of abstraction? That must take a genius to understand and Elvis understands it because they wrote it, now everyone will know they are a genius. For many engineers at Microsoft (especially early in career) the assumption was (and still is largely) that Elvis gets promoted because Elvis gets visibility and is always innovating.
Einstein - the engineer who cares about the algorithm. Einstein wants to write the most performant, the most elegant, the most technically correct code possible. Einstein cares more if they are writing “pythonic” code than if the output actually solves the business problem. Einstein will refactor 200 lines of code to add a single new conditional to keep the codebase consistent. Einsteins love love love functional languages.
None of these personas represent a real engineer - every engineer is a mix, and a human with complex motivations and perspectives - but I can usually pin one of these 3 as the primary within a few days of PRs and a single design review.
- Mort wants to climb the business ladder.
- Elvis wants earned social status.
- Einstein wants legacy with unique contributions.
- Amanda just wants group cohesion and minimizing future unpredictability.
I have spent most of my career working with C++. As you all may know, C++ can be as complex - and as clever - as you want.
Unless I really need it, and that is rare, I always ask myself: will this code be easy for others to understand? And I avoid the clever way.
I think the personas have some validity but I don't agree with the primary focus/mode.
For example, I tend to be a mort because what gets me up in the morning is solving problems for the enterprise and seeing that system in action and providing benefit. Bigger and more complex problems are more fun to solve than simpler ones.
I think if I were to make three strawmen like this I would instead talk about them as maximizing utility, maintainability, and effectiveness. Utility because the "most business value" option doesn't always make the software more useful to people. (And I will tend to prioritize making the software better over making it better for the business.) Maintainability because the thing that solves the use case today might cause serious issues that makes the code not fit for purpose some time in the future. Effectiveness because the basket of if statements might be perfect in terms of solving the business problem as stated, but it might be dramatically slower or subtly incorrect relative to some other algorithm.
Mort is described as someone who prioritizes present business value with no regard to maintainability or usefulness.
Elvis is described as someone who prioritizes shiny things, he's totally a pejorative.
Einstein is described as someone who just wants fancy algorithms with no regard for maintainability or fitness to the task at hand. Unlike Elvis I think this one has some value, but I think it's a bit more interesting to talk about someone who is looking at the business value and putting in the extra effort to make the perfectly correct/performant/maintainable solution for the use case, rather than going with the easiest thing that works. It's still possible to overdo, but I think it makes the archetype more useful to steelman the perspective. Amanda sounds a bit more like this, but I think she might work better without the other three but with some better archetypes.
If there is no inherent complexity, a Mort will come up with the simplest solution. If it's a complex problem needing trade-offs the Mort will come up with the fastest and most business centric solution.
Or would you expect Amanda to refactor a whole system to keep it simple above all, whatever the deadlines and stakes?
For example, suppose you have an application that connects to 17 queues and processes a different type of request from each. You could write 17 near-identical blocks of code, one per queue, or you could drive all 17 from a single loop over a queue-to-handler mapping. The former - despite being "warts and all" - is more prone to bugs getting missed in development and review. To quote OP: "None of these personas represent a real engineer - every engineer is a mix, and a human with complex motivations and perspectives"
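A minimal sketch of that contrast (the queue names and handler functions here are invented for illustration; a real application would pull messages from an actual queue client):

```python
# Repetitive style: one near-identical block per queue.
#   consume("orders", handle_order)
#   consume("refunds", handle_refund)
#   ... 15 more copies, each one a chance for a typo ...
#
# Table-driven style: one mapping, one dispatch path.
# Queue names and handler bodies are hypothetical.

HANDLERS = {
    "orders": lambda msg: {"kind": "order", "body": msg},
    "refunds": lambda msg: {"kind": "refund", "body": msg},
    # ... one entry per queue, 17 in total ...
}

def dispatch(queue_name, message):
    """Route a message to the handler registered for its queue."""
    try:
        handler = HANDLERS[queue_name]
    except KeyError:
        raise ValueError(f"no handler registered for queue {queue_name!r}")
    return handler(message)
```

The table-driven version trades a small amount of indirection for having exactly one place where routing logic can be wrong, which is the kind of trade the comment is weighing.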
The originating example for an Amanda is someone who used her brain to recognize that the existing code was clumsily modeling a state machine and clarified the code by reframing it in terms of well-known vocabulary. It's technically an abstraction but because every dev is taught in advance how they work it's see-through and reduces cognitive load even when you must peel back the abstraction to make changes.
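That kind of reframing can be sketched minimally (the states and events below are invented; the point is that an explicit transition table replaces a tangle of boolean flags and if statements with a structure every dev already knows how to read):

```python
# Hypothetical sketch: flag-based logic reframed as an explicit
# state machine. States and events are invented for illustration.

TRANSITIONS = {
    ("idle", "start"): "running",
    ("running", "pause"): "paused",
    ("paused", "start"): "running",
    ("running", "stop"): "idle",
}

def step(state, event):
    """Return the next state; ignore events with no defined transition."""
    return TRANSITIONS.get((state, event), state)
```

Because "state machine" is shared vocabulary, the abstraction is see-through: to change behavior you edit one row of the table rather than reverse-engineering which combination of flags encodes which state.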
I appreciate both for different reasons.
There will be people who look at pure Green and pure Blue and ask for an Emerald color, to get RGBE instead - but that's not how the RGB framework works. And I can't get rid of the feeling that Amanda is that Emerald color people are clamoring for.
I also kinda get why Microsoft got rid of the system for something more abstract.
MFC may have been a steaming pile of doodoo, but at least the tools for developing on the OS were generally free and had decent documentation
Later, Xcode (or Project Builder) became pretty much free with the first release of MacOS X. You could buy a Mac and install all the tools to develop software. Very much in the spirit of NeXT. I am sure something similar happened for Microsoft around the same time.
And now of course all the tools - both native vendor tools and a large selection of additional third-party tools - are basically free for all major platforms.
(Disregarding things like 'app store fees' or 'developer accounts' which exists for both Apple and Microsoft but are not 100% required to build stuff.)
Elvis: A famous rock star
Einstein: A famous physicist
Amanda: ???
Mort, Elvis, and Einstein reference things I've heard of before. What is Amanda referencing? Is there some famous person named Amanda? Is it slang I'm unaware of?
Am I missing a reference? If not, may I suggest “Ada”?
https://en.wikipedia.org/wiki/Ada_Lovelace
Or even better, “Grace”. Seems to fit your description better.
https://en.wikipedia.org/wiki/Grace_Hopper
https://www.youtube.com/watch?v=gYqF6-h9Cvg
We would all like our coworkers to never make bad decisions. :)
The kind of psycho-bullshit that we should stay away from, and wouldn't happen if we respected each other. Coming from Microsoft is not surprising though.
Anyway, their sibling comment told me what I wanted to know, so in that way I'm wasting more of my time contributing to this
By not putting reductionist labels on them.
For one, many studies of identical twins raised in separate households show they have the same personality type at a much higher rate than chance.
Two, there are incredibly strong correlations in the data. In different surveys of 100k+ people, the highest earning type has twice the salary of the lowest type. This is basically impossible by chance.
The letters (like ENTJ) correlate highly to the variables of Big 5, the personality system used by scientists. It's just that it's bucketed into 16 categories versus being 5 sliding scales.
Scientific studies are looking for variables that can be tracked over time reliably, so Big 5 is a better measure for that.
But for personal or organizational use, the category approach is a feature, not a bug. It is much more help as a mental toolkit than just getting a personality score on each of the 5 categories.
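The "bucketing" point above can be sketched concretely. The trait-to-letter pairing below is a rough illustration of how continuous scores become categories, not a validated psychometric mapping:

```python
# Hypothetical sketch: thresholding four Big Five-style scores
# (0.0-1.0) into an MBTI-style four-letter code. The pairing of
# traits to letters is illustrative only.

def bucket(scores):
    """Collapse sliding scales into one of 16 category labels."""
    letters = [
        "E" if scores["extraversion"] >= 0.5 else "I",
        "N" if scores["openness"] >= 0.5 else "S",
        "F" if scores["agreeableness"] >= 0.5 else "T",
        "J" if scores["conscientiousness"] >= 0.5 else "P",
    ]
    return "".join(letters)
```

The bucketing loses information near each threshold (0.49 and 0.51 land in different boxes), which is exactly the trade-off between a memorable category system and the sliding scales researchers prefer.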
1) As I mentioned, it has a lot of statistically significant correlations, including to all the variables of the Big 5. Example: Surveys show that % of the overall population that is each type (like INFJ) is very consistent across time and populations.
2) Beyond that, you're right, there are a lot of personality systems with pros and cons. But Myers-Briggs has by far the most supporting materials, tools, ease of use, and so on. I think it's the quickest to make useful to the average person.
3) I've found it really helpful as a lens for self analysis in my own life.
But since nobody has mentioned the alternative yet, the framework used by anyone in any scientific capacity is the Big Five: https://en.wikipedia.org/wiki/Big_Five_personality_traits
The link between programming and conscientiousness seems fairly straightforward. To fully translate Mort/Elvis/Einstein into some kind of OCEAN vector would take a little more effort.
And you can improve everything with a system. A team of morts forced into a framework where testers/qa/code review find and make them fix the problems along the way before the product is shipped is an incredibly powerful thing to behold.
"so, there's 3 boxes. no more, no less. why? i have a gut feeling. axis? on a case by case basis. am i willing to put my money where my mouth is? heallnaw!"
I see the ideal as a combination of Mort and Einstein that want to keep it simple enough that it can be delivered (less abstraction, distilled requirements) while ensuring the code is sufficiently correct (not necessarily "elegant" mind you) that maintenance and support won't be a total nightmare.
IMO, seek out Morts and give them long term ownership of the project so they get a little Einstein-y when they realize they need to support that "pile of if statements".
As an aside, I'm finding coding agents to be a bit too much Mort at times (YOLO), when I'd prefer they were more Einstein. I'd rather be the Mort myself to keep it on track.
Sometimes teams are quite stuck in their ways because they don’t have the capacity or desire to explore anything new.
For example, an Elvis would probably introduce containers, which would eliminate a class of dependency and runtime-environment issues while making CI simpler, even though previously using SCP and Jenkins and deploying things into Tomcat mostly worked. Suddenly even the front end components can be containers, as can testing and development databases; everyone can easily have the correct version locally, and so on.
An unchecked Elvis will eventually introduce Kubernetes in the small shop to possibly messy results, though.
Elvis and Einstein joined forces to create 14 new JavaScript package managers over a handful of years while Mort tore his hair out.
I spent time at Microsoft as well, and one of the things I noticed was folks who spent time in different disciplines (e.g. dev, test, pgm) seemed to be especially great at tailoring these qualities to their needs. If you're working on optimizing a compiler, you probably need a bit more Einstein and Mort than Elvis. If you're working on a game engine you may need a different combination.
The quantities of each (or whether these are the correct archetypes) is certainly debatable, but understanding that you need all of them in different proportions over time is important, IMHO.
Also, many developers are suffering from severe cognitive load incurred by technology and tooling tribalism. Every day on HN I see complaints about things like 5 RPS scrapers crippling my web app, error handling, et al., and all I can think about is how smooth my experience is from my particular ivory tower. We solved (i.e., completely and permanently) 95% of the problems HN complains about decades ago, and you can find a nearly perfect vertical of these solutions with 2-3 vendors right now. Your ten man startup not using Microsoft or Oracle or IBM isn't going to make a single fucking difference to these companies. The only thing you win is a whole universe of new problems that you have to solve from scratch again.
362 more comments available on Hacker News