Lessons From 14 Years at Google
Key topics
As the tech industry buzzes with the latest trends and innovations, a trove of insights from 14 years at Google is sparking a lively discussion. Commenters are raving about the collection of heuristics; many agree that principles such as "Novelty is a loan you repay" are timeless and apply not just to engineers but also to designers and product managers. Not everyone is convinced, though: some argue that true learning comes from experience, and that these lessons may not translate for those without the same opportunities or job satisfaction. The debate highlights the tension between embracing new technologies and sticking with familiar, tried-and-true approaches, with several commenters noting that Google's own success was built on innovative, outside-the-box thinking.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 37m after posting
- Peak period: 50 comments in 0-3h
- Avg / period: 14.5
Based on 160 loaded comments
Key moments
- 01 Story posted: Jan 4, 2026 at 10:23 AM EST (4d ago)
- 02 First comment: Jan 4, 2026 at 11:01 AM EST (37m after posting)
- 03 Peak activity: 50 comments in 0-3h, the hottest window of the conversation
- 04 Latest activity: Jan 5, 2026 at 9:17 PM EST (2d ago)
The two that stand out are
> Novelty is a loan you repay in outages, hiring, and cognitive overhead.
and
> Abstractions don’t remove complexity. They move it to the day you’re on call.
as a warning against being too, too clever.
But at the same time, lessons aren't learned by reading what someone else has to say. They're learned by experience, and everyone's is different. Having "14 years at Google" hardly makes an engineer an expert at giving career advice, but they sure like to write as if it does.
This type of article reads more like a promotion piece from a self-involved person than heartfelt advice from someone knowledgeable. This is evident from the author's "bio" page: written in the third person, full of aggrandizing claims about their accomplishments, and padded with photos of famous people they've met. I'm conditioned to tune out most of what these characters have to say.
If this is the type of person who excels in Big Tech, it must be an insufferable place to be.
15 years' worth of jobs and none gelled. I'm a contractor now, which feels more like me. I have a contract length and don't have to deal with red-tape political bullshit.
Turn up, do the work, and leave when the contract has ended.
The only difference is you don't get job security, a pension, or any perks. But you do get a lump sum, and you can then decide what's best.
Not just engineers, but basically everyone involved in creating products, including designers and PMs.
Every single bullet point here is gold.
Very correlated with the quality of the message, I'd imagine.
In the first item, LLMs don't use incomplete sentence fragments?
> It’s seductive to fall in love with a technology and go looking for places to apply it. I’ve done it. Everyone has. But the engineers who create the most value work backwards: they become obsessed with understanding user problems deeply, and let solutions emerge from that understanding.
I suppose it can be prompted to take on one's writing style. AI-assisted, OK, sure, but does the mere existence of an em-dash automatically expose text as AI slop? (Ironically, I don't think there are any dashes in the article.)
EDIT: OK, the thread below does expose tells (https://news.ycombinator.com/item?id=46490075). Yep, there are definitely some AI tells. I still think it's well written and structured, though.
> It's not X... it's Y.
That one I can't unsee.
> His story isn’t just about writing code, but about inspiring a community to strive for a better web. And perhaps the most exciting chapter is still being written, as he helps shape how AI and the web will intersect in the coming decade. Few individuals have done as much to push the web forward while uplifting its developers, and that legacy will be felt for a long time to come.
> 3. Bias towards action. Ship. You can edit a bad page, but you can’t edit a blank one.
I have my own version of this where I tell people that no amount of good advice can help you make a blank page look better. You need to have some published work before you can benefit from any advice.
If the teammates have a different mindset, they see it as half-baked or hacky. And if there is ever some bad feedback, they just use it as an "I told you so" and throw you under the bus.
Also, the one person who has to review it before check-in needs to be resilient too.
In general, I think the "ship fast and break things" mentality assumes a false dilemma, as if the alternative to shipping broken software is to not ship at all. If that's the mentality, no wonder software sucks today. I'd rather teams shipped working, correct, and performant software, even if it meant delaying additional features or shipping a constrained version of their vision. The minimalism of the resulting software would probably end up being a net benefit, instead of stuffing it full of half-baked features anyway.
It really sucks when the first mover / incumbent is some crappy, half-assed solution.
But unfortunately we live in a world where quality is largely irrelevant and other USPs matter more. For example, these little weekend projects became successful despite their distinct lack of quality:
Linux kernel - free Unix.
JavaScript - scripting in the browser.
Python - a sane "perl".
Today on GitHub alone you can probably find 100 projects more fully featured and of higher quality than any of these were when they launched, but nobody cares.
Someone once talked about "solving the right problem wrong" vs. "solving the wrong problem right".
That's a really useful framing!
HP-UX and AIX were already legacy.
Linux 2.4 was when it hit critical mass, thanks to the publicity of the dotcom boom; it was what was left after the "tide went out and the market found out who was swimming naked".
The desktop took longer, with less well-defined transition points, and, arguably, MacOS with its BSD foundations (and command-line option) ended up being a good alternative for a lot of the non-Windows crowd, though Windows is still dominant as a desktop/laptop OS. (Windows/Azure are, of course, still major in backend corporate environments as well.)
The trick is they just complain about the last thing they remember being bad, so it's a good sign when that doesn't change, and it's bad if they start complaining about a new thing.
> Figuring out what is useful for people is not some difficult problem that requires shipping half baked slop
What have you shipped? Paying engineers literally hundreds of thousands of dollars a year to ship fully fledged software that no one wants is exactly why Stadia both lasted way too long and got cancelled anyway.
Figuring out what is useful is the hardest problem. If anything, that's Google's biggest problem: not shipping fast enough, not iterating fast enough.
I've heard all the truisms listed in that post in my 14+ years at many companies that aren't Google and in all cases there's a major gap between the ideal and the reality.
This entire list reads to me as "I got paid 10s of millions of dollars to drink the Kool Aid, and I must say, the Kool Aid tastes great!"
Starting right is important.
Typing that first character on the page reveals the problems you didn't even know existed. You don't have a keyboard. You do, but it's not plugged in, and you have to move an unexpectedly heavy bookcase to reach the USB port. You need to learn Dvorak. You don't have page-creation privileges and need to open a ticket that will take a week to resolve. You can create the page, but nobody else can read it, because their machines aren't allowed to install the version of the page-reader plugin your page requires. And so on.
All these are silent schedule killers that reveal themselves only once you've shipped one full development (and deployment!) cycle. And as ridiculous as these example problems seem, they're not far from reality at a place as big and intricate as Google.
> At scale, even your bugs have users.
Something I discovered the hard way over many years of maintaining rclone. Fixing a bug has consequences and there are sometimes users depending on that bug!
xkcd: https://xkcd.com/1172/
"With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody."
https://www.hyrumslaw.com/
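A minimal sketch of how a bug acquires users, in the spirit of Hyrum's Law (all names and behavior here are hypothetical):

```python
def parse_size(text: str) -> int:
    """Documented contract: accepts strings like '10MB' and returns bytes."""
    units = {"KB": 1024, "MB": 1024 ** 2}
    # Accidental, undocumented behavior: surrounding whitespace is
    # tolerated only because of this .strip() call.
    text = text.strip()
    number, unit = text[:-2], text[-2:]
    return int(number) * units[unit]

# Somewhere in a user's codebase, the accident is now load-bearing:
assert parse_size("10MB\n") == 10 * 1024 ** 2

# Deleting the .strip() to "clean up" the parser is, to this caller,
# a breaking change, even though the documented contract never moved.
```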
Something that seems lost on those using LLMs to augment their textual output.
It may not be just that people don't edit LLM output. It may be that the stylistic blandness is so pervasive, it's just too much work to remove. (Yeah, maybe you could do it. But if you were willing to spend that kind of effort, you probably wouldn't have an LLM write it in the first place.)
Likewise, "Abstractions don’t remove complexity. They move it to the day you’re on call." made me think of this 23 year old classic from Joel Spolsky, the Law of Leaky Abstractions: https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a...
I’ve followed that rule for decades and always regretted it when I couldn’t: projects were either too boring or too stressful except at the magic level of novelty.
I don't think this is consistently true. In particular, a lot of current well-known coding practices result in code that implicitly relies on assumptions elsewhere in the system that can change without warning; novelty is sometimes necessary to make those assumptions more solid, and ultimately to produce software that is less likely to break unexpectedly.
What did you mean?
Learn what's happening a level or two lower, look carefully, and you'll find VAST unnecessary complexity in most modern software.
Similarly, if I factor out a well-named function instead of repeating the same sequence of actions in multiple places, the work to be done is just as complex, and I haven't even removed the complexity from my code; but I have traded the complexity of N different pieces of code for one such piece plus N function calls. Granted, that tradeoff isn't always the right thing to do, but one could still claim that this often _does_ reduce the complexity of the code.
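A toy sketch of that tradeoff (the helpers are hypothetical stand-ins for whatever the repeated steps actually are):

```python
# Hypothetical stand-ins for the repeated steps at each call site.
def load(path):
    return {"path": path, "name": " Alice "}

def normalize(record):
    return {**record, "name": record["name"].strip()}

def save(record, path):
    print(f"saved {record['name']!r} to {path}")

# Before: load -> normalize -> save repeated verbatim N times, each copy
# free to drift out of sync. After: one well-named function plus N calls.
def refresh_record(path):
    record = load(path)
    record = normalize(record)
    save(record, path)

for p in ("users.json", "orders.json", "invoices.json"):
    refresh_record(p)
```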
But plenty of projects add quite a lot of incidental complexity, especially with technology choices. E.g., Resume Driven Development encourages picking impressive or novel tools, when something much simpler would do.
Another big source of unneeded complexity is code for possibilities that never come to fruition, or that are essentially historical. Sometimes that's about requirements, but often it's about addressing engineer anxiety.
Simple example: you are not dealing with the complexity of OS process management every time you start an application. Sometimes you might need to, if you are developing software, or if your application hangs and you need to kill it via some task manager. Most users, however, never deal with that, because it is abstracted "away". That's the whole point. Nevertheless, the actual complex work is always done, behind the scenes.
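A minimal sketch of that abstraction at work, assuming a Unix-like system: one library call hides process creation, pipe wiring, waiting, and reaping, though all of it still happens underneath.

```python
import subprocess

# One line of "abstracted" process management. Behind it, the OS and the
# standard library still do the complex work: spawning the child process,
# wiring up stdout, blocking until it exits, and reaping it so it never
# lingers as a zombie.
result = subprocess.run(["echo", "hello"], capture_output=True, text=True)
print(result.stdout.strip())  # -> hello
```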
The first place I worked right out of college had a big training seminar for new hires. One day we were told the story of how they'd improved load times from around 5 minutes to 30 seconds; this improvement happened in the mid '90s. The negative responses from clients were instant. The load-time improvement had destroyed their company culture: instead of everyone coming into the office, turning on their computers, and spending the next 10 minutes chatting and drinking coffee, the software was ready before they'd even stood up from their desks!
The moral of the story, and the quote, isn't that you shouldn't improve things. It's a reminder that the software you're building doesn't exist in a PRD or a test suite. It's a system that people will interact with out in the world. Habits will form, workarounds will be developed, bugs will be leaned on for actual use cases.
This makes it critically important that you, the software engineer, understand the purpose and real-world usage of your software. Your job isn't to complete tickets that fulfill a list of asks from your product manager. Your job is to build software that solves users' problems.
Yeah that's not gonna work nowadays.
>DOWNLOADWEBCAM.COM
Is that like Download More RAM?
>BROWSEHN.COM
Hey, I'm browsing that place right now!
>MUZICBRAINZ.COM
This sounds 100% legit no virus softpedia guaranteed.
Plus, that's for higher-stature, service-based roles, not warehouse logistics.
It's also mostly bullshit.
Teams work because they have the right combination of skills, both personal and technical, high EQ and IQ, leadership and ownership.
Whether or not you fall backwards into a team's arms or have to participate in childish games is not relevant.
Ignoring the customers becomes a habit, which doesn’t lead to success.
But then, caving to each customer demand will make the solution overfit.
Somewhere in there one has to exercise judgement.
But how does one make judgment a repeatable process? Feedback is rarely immediate in such tradeoffs, so promotions go to people who are capable of showing some metric going up, even if the metric is shortsighted. The repeatable outcome of this process is mediocrity. Which, surprisingly enough, works out on average.
Some person or small team needs to have a vision of what they are crafting and have the skill to execute on it even if users initially complain, because they always do. And the product that is crafted is either one customers want or don’t. But without a vision you’re just a/b testing your way to someone else replacing you in the market with something visionary.
This is not a repeatable process. It's pretty hard to tell a visionary apart from a lunatic until after they deliver an outsized success.
Principles can help scale decision-making.
One of those higher levels of maturity that some people never reach is to realize that when your model becomes incorrect, that doesn't necessarily mean the world is broken, or that somebody is out to get you, or perhaps most generally, that it is the world's responsibility to get back in line with your internal model. It isn't.
This is just people complaining about the world not conforming to their internal model. It may sound like they have a reason, but the given reason is clearly a post hoc rationalization for what is basically just that their world model doesn't fit. You can learn to recognize these after a while. People are terrible at explaining to each other or even themselves why they feel the way they feel.
The solution is to be sympathetic, to consider their input for whether or not there is some deeper principle or insight to be found... but also to just wait a month or three to see if the objection just dissolves without a trace because their world models have had time to update and now they would be every bit as upset, if not more so, if you returned to the old slow loading time. Because now, not only would that violate their updated world models, but also it would be a huge waste of their time!
Thoughtful people should learn what a world model violation "feels like" internally so they can short-circuit the automatic rationalization circuits that seem to come stock on the homo sapiens floor model and run such feelings through conscious analysis (System 2, as it is sometimes called, though I really hate this nomenclature) rather than the default handling (System 1).
Second, define what the real problem is.
Third, define a solution that solves 80 percent of their problem.
None of this is intuitive or obvious. It may not even be technically feasible or profitable.
If a manager wants to structure a morning break into their employees’ day, they can do that. It doesn’t require a software fix.
https://www.hyrumslaw.com/
> With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody.
I also believe an off-the-shelf example of how to use the library correctly will save everyone a lot of pain later.
This is pretty easy to understand, IMO. About 70% of the time I hear my machine's fans speed up, I silently wish the processing had just been slower. This is especially true for very short bursts of activity.
I did solve this problem once upon a time by running the process in a cgroup with limited CPU, though I later rewrote my dwm config and lost the command, without caring enough to maintain the fix.
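For anyone wanting to reconstruct that lost fix, here is a rough sketch, assuming cgroup v2, root privileges, and the cpu controller enabled in the parent cgroup (the cgroup name and command are hypothetical):

```python
import os
import subprocess

CGROUP = "/sys/fs/cgroup/slowburn"  # hypothetical cgroup name
os.makedirs(CGROUP, exist_ok=True)

# "50000 100000" = 50ms of CPU time per 100ms period, i.e. half a core.
with open(os.path.join(CGROUP, "cpu.max"), "w") as f:
    f.write("50000 100000")

# Start the process, then move it into the cgroup so the cap applies.
# (There is a brief window before the move where it runs uncapped.)
proc = subprocess.Popen(["my-noisy-command"])  # hypothetical command
with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(proc.pid))
proc.wait()
```

On systemd machines, `systemd-run --scope -p CPUQuota=50% <command>` should achieve much the same effect without poking the filesystem by hand.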
Very important with this: not every workplace sees your job that way, and you might get hired for the former while believing it to be the latter. Figuring out what is actually expected of you is best done during the interview, or, worst case, on your first day as a new hire.
The overwhelming majority of organizations will say they want you focused on real user problems, but actually want you to make your boss (and their boss) look good. This usually looks more like clearing tasks from a list than creating new goals.
At Google there are both kinds of teams.
One of my early tasks as a junior engineer involved some automation work in a warehouse. It got assigned to me, the junior, because it involved a lot of time working in the warehouse instead of at a comfortable desk.
I assumed I’d be welcomed and appreciated for helping make their work more efficient, but the reality was not that simple. The more efficient I made the technical part of the job, the more time they had to spend doing the manual labor part of the job to keep up. So the more I reduced cycle times, the less time they had to sit around and chat.
Mind you, the original process was extremely slow and non-parallel, so they had a lot of time to wait. The job was still very easy. I spent weeks doing it myself to test and optimize, and to this day it's the easiest manual-labor job I've ever worked. Yet I was the anti-hero for ruining the good thing they had going.
How could that possibly work?
At some point I could see white-collar work trending down fast, in a way that radically increases the value of blue-collar work. Software gets cheaper much faster than hardware.
But then the innovation and investments go into smart hardware, and robotics effectiveness/cost goes up.
If you can see a path where AI isn't a one-generation transition to the (economic) obsolescence of most humans, I would certainly be interested in the principle or mechanism you see.
The economy should be a tool for society, benefiting everyone. Instead, it's becoming more and more a playground for the rich to extract wealth, where the proletariat's only purpose is to serve the bourgeois, lest they be discarded to the outskirts of the economy, and often to the literal slums of society, while their peers shout "you're just not working hard enough".
Automation is a game of diffuse societal benefit at the expense of a few workers. Well, I guess owners also benefit but in the long term that extra profit is competed away.
Housing is only a part of the basket used to measure inflation. Housing's price rose faster than the weighted basket average, some other goods and services rose slower or even fell.
As long as accommodation isn't 100% of your basket of goods and services you use to measure inflation, accommodation can rise in price faster (or slower) than the basket. This ain't exactly rocket science.
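A toy illustration of that basket arithmetic, with made-up weights and price changes:

```python
# Made-up numbers: housing outpaces the overall index even though the
# basket as a whole rises much more slowly.
weights = {"housing": 0.35, "food": 0.15, "transport": 0.15, "other": 0.35}
change  = {"housing": 0.08, "food": 0.03, "transport": 0.02, "other": 0.01}

basket_inflation = sum(weights[k] * change[k] for k in weights)
print(f"housing: +{change['housing']:.0%}, basket: +{basket_inflation:.1%}")
# housing: +8%, basket: +3.9%
```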
Samsung TV purchasing power has skyrocketed, though, so there's that.
Also any comparison of wage growth vs corporate profit growth over the last 30 years shows that wages have not kept pace with the increase in productivity.
So incomes are only just barely keeping up, when they should be booming.
https://fred.stlouisfed.org/series/WFRBST01122
About 18% is owned by foreign entities.
The USA is rather unique in its low pensions compared to countries in the EU or Australia (notable for its high contribution rates).
It's not greater profits but lower costs (and prices) that matter here.
Would you rather sell one widget for $1000 or 1000 widgets for $10? Does the answer depend on costs?
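A tiny worked example of why the answer depends on costs (numbers made up): revenue is $1000 in both scenarios, but margin decides the winner.

```python
def profit(units, price, unit_cost):
    # Same $1000 of revenue either way; per-unit cost sets the profit.
    return units * (price - unit_cost)

print(profit(units=1, price=1000, unit_cost=400))  # -> 600
print(profit(units=1000, price=10, unit_cost=9))   # -> 1000
```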
I'm all in favour of lowering barriers to entry, too. We need more competition.
Be that from startups, from foreign companies (like from China), or from companies in other sectors branching out (e.g., Walmart letting you open bank accounts).
If you want to spin up some conspiracy theory about elites snatching up productivity gains, you should focus on top managers.
(Though honestly, it's mostly just land. The share of GDP going to capital has been roughly steady over the decade. The share going to land has increased slightly at the cost of the labour share.
The labour share itself has seen some shake up in its distribution. But that doesn't involve shareholders.)
The oligarchy of CxOs and boards, and their cross-pollination, has concentrated the rewards of companies in their hands compared to 40 years ago.
The productivity gains have not gone to labor; they have predominantly gone to equity and then been extracted via options and buybacks to avoid tax, which means public services and investment have gone down.
The craziness of the USG borrowing to fund tax cuts is the ultimate example.
What's your evidence for that? See https://www.brookings.edu/wp-content/uploads/2016/07/2015a_r... for a good account.
> [...] and then extracted via options and buy backs to avoid tax which means public service and investment has gone down.
You seem very confused about how capital markets work. Are you also suggesting buybacks are morally different from dividends?
In any case, the whole point of investing (at least to the investor) is to eventually get more money back than you put in. Returning money to investors is not a bug; it's the point.
> The craziness of the USG borrowing to fund tax cuts is the ultimate example.
Blame voters.
The faster the LLM spits out garbage code, the more time I get to spend reviewing slop and dealing with it gaslighting me, and the less time I get to spend on doing the parts of the job I actually enjoy.
Couldn't help imagining Darryl getting mad at you.
Thanks for the story!
Developers misunderstand what the users want, and then aren't able to accurately implement their own misunderstanding either. Users, in turn, don't understand what the software is capable of, nor what developers can do.
> Good intentions, hopes of correctness, wishful thinking, even managerial edict cannot change the semantics of the code as written or its effect when executed. Nor can they after the fact affect the relationship between the desires, needs, and requirements of users and the program […] implementation; nor between any of these and operational circumstances – the real world.
https://entropicthoughts.com/laws-of-software-evolution
525 more comments available on Hacker News