AWS CEO Says Replacing Junior Devs with AI Is 'One of the Dumbest Ideas'
Key topics
The debate around replacing junior developers with AI has sparked a lively discussion, with the AWS CEO's comments being met with a resounding "well, yeah" from commenters who point out the obvious consequence: who will become senior engineers in 10-15 years? As some wryly noted, AI might become the senior dev, and possibly even get promoted to management. The conversation quickly pivoted to the challenges of career progression, with many sharing their own experiences of being pushed towards management roles despite their preference for staying in engineering. A consensus emerged that separating leadership and people management is a tricky problem for corporations, with some arguing that promoting top engineers to management can be a loss for the team.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 12m
Peak period: 97 comments in 0-3h
Avg / period: 12.3
Based on 160 loaded comments
Key moments
- Story posted: Dec 17, 2025 at 12:08 PM EST (16 days ago)
- First comment: Dec 17, 2025 at 12:20 PM EST (12m after posting)
- Peak activity: 97 comments in 0-3h (hottest window of the conversation)
- Latest activity: Dec 19, 2025 at 5:51 AM EST (14 days ago)
I'm a big fan of the "staff engineer" track as a way to avoid this problem. Your 10-15 year engineers who don't vibe with management should be able to continue earning managerial salaries and having the biggest impact possible.
https://staffeng.com/about/
If you want to complain about tech companies ruining the environment, look towards policies that force people to come into the office. Pointless commutes are far, far worse for the environment than all data centers combined.
Complaining about the environmental impact of AI is like plastic manufacturers putting recycling labels on plastic that is inherently not recyclable and making it seem like plastic pollution is everyday people's fault for not recycling enough.
AI's impact on the environment is so tiny it's comparable to a rounding error when held up against the output of say, global shipping or air travel.
Why don't people get this upset at airport expansions? They're vastly worse.
It helps when you put yourself in the shoes of people like that and ask yourself, if I find out tomorrow that the evidence that AI is actually good for the environment is stronger, will I believe it? Will it even matter for my opposition to AI? The answer is no.
You don't know that, and the environment is far from being the only concern.
Then you would be the exception, not the rule.
And if you find yourself attached to any ideology, then you are also wrong about yourself. Subscribing to any ideology is by definition lying to yourself.
Being able to place yourself into the shoes of others is something evolution spent 1000s of generations hardwiring into us, I'm very confident in my reading of the situation.
Having beliefs or values is not lying to oneself. Keeping beliefs despite being confronted with evidence that goes against them is.
And yes, of course I'm attached to some ideologies. I assume everybody is, consciously or not.
The lie is that you adopted "beliefs, principles or values" which cannot ever serve your interests, you have subsumed yourself into something that cannot ever reciprocate.
> Citation needed
I will not be providing one, but that you believe one is required is telling. There is no further point to this discussion.
> I will not be providing one, but that you believe one is required is telling
Telling what? That you have the burden of proof?
> There is no further point to this discussion.
I'm afraid I agree with you here. Good day, good night.
We do too, don't worry.
I'm glad people are grabbing the reins of power back from some of the most evil people on the planet.
> The juniors working this way compress their ramp dramatically. Tasks that used to take days take hours. Not because the AI does the work, but because the AI collapses the search space. Instead of spending three hours figuring out which API to use, they spend twenty minutes evaluating options the AI surfaced. The time freed this way isn’t invested in another unprofitable feature, though, it’s invested in learning. [...]
> If you’re an engineering manager thinking about hiring: The junior bet has gotten better. Not because juniors have changed, but because the genie, used well, accelerates learning.
That's why "copy page" buttons are increasingly showing on manual pages eg. https://platform.claude.com/docs/en/get-started
They’re best for data translation tasks: given X, format it as Y; basically flexible data pipeline operations.
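A rough illustration of that "given X, format it as Y" use (not from the thread; this sketch assumes the official OpenAI Python client, and the model name is just a placeholder):

    from openai import OpenAI  # assumes the official OpenAI Python client is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    csv_row = "id,name,signup\n42,Ada Lovelace,2025-11-03"

    # "Given X, format it as Y": translate one record format into another.
    # The model name is a placeholder; substitute whatever you have access to.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Convert the CSV record you are given into a JSON object. Reply with JSON only."},
            {"role": "user", "content": csv_row},
        ],
    )

    print(response.choices[0].message.content)
    # Expected shape (actual output may vary):
    # {"id": 42, "name": "Ada Lovelace", "signup": "2025-11-03"}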
https://substack.com/@kentbeck
What software projects is he actively working on?
In many cases he helped build the bandwagons you're implying he simply jumped onto.
The fact that I cannot tell if you mean this satirically or not (though I want to believe you do!) is alarming to me.
Of course he can be wrong; he's human. That wasn't my point.
The thrust of the issue is that, when used suitably, AI tools can increase the rate of learning such that it changes the economics of investments in junior developers - in a good way, contrary to how these tools have been discussed in the mainstream. That is an interesting take, and worthy of discussion.
Your appeal to authority is out of place here and clearly uninformed, thus the downvotes.
What I did not know and what the Wikipedia page revealed is that he worked for a YCombinator company. Thus the downvotes.
What does any of that have to do with having a valid opinion?
Since desktop computers became popular, there have been thousands of small to mid-size companies that could benefit from software systems.. A thousand thousand "consultants" marched off to their nearest accountant, retailer, small manufacturer or attorney office, to show off the new desktop software and claim the ability to make new, custom solutions.
We know now, this did not work out for a lot of small to mid-size businesses and/or consultants. No one can build a custom database application that is "good enough" .. not for lack of trying.. but pace of platforms, competitive features, stupid attention-getting features.. all of that outpaced small consultants .. the result is giant consolidation of basic Office software, not thousands of small systems custom built for small companies.
What now, in 2025? "Junior" devs do what? Design and build? No. Cookie-cutter procedures at AWS lock-in services far, far outpace small and interesting designs of software.. Automation of AWS actions is going to be very much in demand.. is that a "junior dev"? Or what?
This is a niche insight and not claiming to be the whole story.. but..
Lotus Notes is an example of that custom software niche that took off and spawned a successful consulting ecosystem around it too.
TIL Notes is still a thing. I had thought it was dead and gone some time ago.
I did not write "all software" or "enterprise software" but you are surprised I said that... hmmm
That's a joke, but the pattern for most 10x coders is 1-2 insane days per month (like 25 hours a day kinda thing) and then nothing but answering emails about the monstrosity they created.
But for many Jr engineers it’s the hard part. They are not (yet) expected to be responsible for the larger issues.
But these are the things people learn through experience and exposure, and I still think AI can help by at least condensing the numerous books out there around technology leadership into some useful summaries.
Doing backend and large distributed systems is (it seems to me) much deeper: types of consistency and their tradeoffs in practice, details like implementing and correctly using Lamport clocks, good API design, endless details about reworking, on and on.
And then for both, a learned sense of what approaches to system organization will work in the long run (how to avoid needing to stage a re-write every 5 years).
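To make one of those details concrete, here is a minimal Lamport clock sketch (an illustration in Python, not from the thread; all names are made up):

    class LamportClock:
        """Minimal logical clock: orders events across nodes without synchronized wall clocks."""

        def __init__(self):
            self.time = 0

        def tick(self):
            # Local event: advance the logical clock.
            self.time += 1
            return self.time

        def send(self):
            # Stamp an outgoing message with the current logical time.
            return self.tick()

        def receive(self, msg_time):
            # On receipt, jump ahead of both our own clock and the sender's stamp.
            self.time = max(self.time, msg_time) + 1
            return self.time

    # Usage: two nodes exchanging one message.
    a, b = LamportClock(), LamportClock()
    a.tick()          # a's clock: 1
    stamp = a.send()  # a's clock: 2, message stamped 2
    b.receive(stamp)  # b's clock: max(0, 2) + 1 = 3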
There is such a thing as software engineering skill and it is not domain knowledge, nor knowledge of a specific codebase. It is good taste, an abstract ability to create/identify good solutions to a difficult problem.
How does a junior acquire engineering skills except through experience, as you said?
In a long-term enterprise the point is building up a long-term skillset in the community. Bolstering your team's hive mind, so to speak.
But work has evolved and the economy has become increasingly hostile to long term building, making it difficult to get buy in for efforts that don't immediately get work done or make money.
Good luck maintaining that.
Beware, your ego may steer you astray.
Gatekeeping?
Why couldn't a backend team have all tasks be junior compatible, if uncoupled from deadlines and time constraints?
Not at all. Just trying to understand a POV I think I see here, and in other discussions that I can't quite place / relate to.
The person I replied to seemed to be saying that there is no role for experience beyond knowing the language, tools, and the codebase; there is no real difference between someone with 5 years of experience and 15 years. This may not be what they think, or meant to say; I'm extrapolating a bit (which is why I asked for clarification).
That attitude (which I have run into in other places) seems totally alien to me, my experience, and that of my friends and colleagues. So, I think there must be some aspect that I'm missing or not understanding.
But dyou know what's really great at taking a bunch of tokens and then giving me a bunch of probabilistically adjacent tokens? Yeah exactly! So often even if the AI is giving me something totally bonkers semantically, just knowing all those tokens are adjacent enough gives me a big leg up in knowing how to phrase my next question, and of course sometimes the AI is also accidentally semantically correct too.
I would argue a machine that short circuits the process of getting stuck in obtuse documentation is actually harmful long term...
I would argue a machine that short-circuits the process of getting stuck in obtuse books is actually harmful long term...
Conversations like this are always well-intentioned, and friction truly is super useful to learning. But the ‘…’ in these conversations always seems to be implying that we should inject friction.
There’s no need. I have peers who aren’t interested in learning at all. Adding friction to their process doesn’t force them to learn. Meanwhile adding friction to the process of my buddies who are avidly researching just sucks.
If your junior isn’t learning it likely has more to do with them just not being interested (which, hey, I get it) than some flaw in your process.
Start asking prospective hires what their favorite books are. It’s the easiest way to find folks who care.
So in principle Gen AI could accelerate learning with deliberate use, but it's hard for the instructor to guide that, especially for less motivated students.
Please read it as: "who knows what you'll find if you take a stop by the library and just browse!"
It’s not as if today’s juniors won’t have their own hairy situations to struggle through, and I bet those struggles will be where they learn too. The problem space will present struggles enough: where’s the virtue in imposing them artificially?
Books often have the "scam trap" where highly-regarded/praised books are often only useful if you are already familiar with the topic.
For example: I fell for the scam of buying "Advanced Programming in the UNIX Environment", and a lot of concepts are only shown but not explained. Wasted money, really. It's one of those books I regret not pirating before buying.
At the end of the day, watching some youtube video and then referencing the OS-specific manpage is worth much more than reading that book.
I suspect the case to be the same for other "highly-praised" books as well.
AI, on the other hand...
</resurgent-childhood-trauma>
Eventually we will have to somehow convince AI of new and better ways of doing things. It’ll be propaganda campaigns waged by humans to convince God to deploy new instructions to her children.
And this outcome will be obvious very quickly for most observers, won't it? So the magic will occur by pushing AI beyond another limit, or by having people go back to specializing in what will eventually become boring and procedural until AI catches up.
Two things:
1 - I agree with you. A good printed resource is incredibly valuable and should be perfectly valid in this day and age.
2 - many resources are not in print, e.g. API docs, so I'm not sure how books are supposed to help here.
And yet I wouldn’t trust a single word coming out of the mouth of someone who couldn’t understand Hegel so they read an AI summary instead.
There is value in struggling through difficult things.
https://en.wikipedia.org/wiki/Mastery_learning
Maybe. The naturally curious will also typically be slower to arrive at a solution due to their curiosity and interest in making certain they have all the facts.
If everyone else is racing ahead, will the slowpokes be rewarded for their comprehension or punished for their poor metrics?
It's always possible to go slower (with diminishing benefits).
Or I think putting it in terms of benefits and risks/costs: I think it's fair to have "fast with shallow understanding" and "slower but deeper understanding" as different ends of some continuum.
I think what's preferable somewhat depends on context & attitude of "what's the cost of making a mistake?". If making a mistake is expensive, surely it's better to take an approach which has more comprehensive understanding. If mistakes are cheap, surely faster iteration time is better.
The impact of LLM tools? LLM tools increase the impact of both cases. It's quicker to build a comprehensive understanding by making use of LLM tools, similar to how stuff like autocompletion or high-level programming languages can speed up development.
Any task has “core difficulty” and “incidental difficulty”. Struggling with docs is incidental difficulty, it’s a tax on energy and focus.
Your argument is an argument against the use of Google or StackOverflow.
Complaining about docs is like complaining that research articles aren't written like elementary school textbooks.
1995: struggling with docs and learning how and where to find the answers part of the learning process
2005: struggling with stackoverflow and learning how to find answers to questions that others have asked before quickly is part of the learning process
2015: using search to find answers is part of the learning process
2025: using AI to get answers is part of the learning process
...
XML oriented programming and other stuff was "invented" back then
To the extent that learning to punch your own punch cards was useful, it was because you needed to understand the kinds of failures that would occur if the punch cards weren't punched properly. However, this was never really a big part of programming, and often it was off-loaded to people other than the programmers.
In 1995, most of the struggling with the docs was because the docs were of poor quality. Some people did publish decent documentation, either in books or digitally. The Microsoft KB articles were helpfully available on CD-ROM, for those without an internet connection, and were quite easy to reference.
Stack Overflow did not exist in 2005, and it was very much born from an environment in which search engines were in use. You could swap your 2005 and 2015 entries, and it would be more accurate.
No comment on your 2025 entry.
I thought all computer scientists heard about Dijkstra making this claim at one time in their careers.
> A famous computer scientist, Edsger Dijkstra, did complain about interactive terminals, essentially favoring the disciplined approach required by punch cards and batch processing.
> While many programmers embraced the interactivity and immediate feedback of terminals, Dijkstra argued that the "trial and error" approach fostered by interactive systems led to sloppy thinking and poor program design. He believed that the batch processing environment, which necessitated careful, error-free coding before submission, instilled the discipline necessary for writing robust, well-thought-out code.
Seriously, the laments I hear now have been the same throughout my entire career as a computer scientist. Let's just look forward to 2035, when someone on HN will complain that some old way of doing things is better than the new way because it's harder and wearing hair shirts is good for building character.
The position you've attributed to Dijkstra is defensible – but it's not the same thing at all as punching the cards yourself. The modern-day equivalent would be running the full test suite in CI, after you've opened a pull request: you're motivated to program in a fashion that ensures you won't break the tests, as opposed to just iterating until the tests are green (and woe betide there's a gap in the coverage).
I would recommend reading EWD1035 and EWD1036: actually reading them, not just getting the AI to summarise them. While you'll certainly disagree with parts, the fundamental point that E.W.Dijkstra was making in those essays is correct. You may also find EWD514 relevant – but if I linked every one of Dijkstra's essays that I find useful, we'd be here all day.
I'll leave you with a passage from EWD480, which broadly refutes your mischaracterisation of Dijkstra's opinion (and serves as a criticism of your general approach):
> This disastrous blending deserves a special warning, and it does not suffice to point out that there exists a point of view of programming in which punched cards are as irrelevant as the question whether you do your mathematics with a pencil or with a ballpoint. It deserves a special warning because, besides being disastrous, it is so respectable! […] And when someone has the temerity of pointing out to you that most of the knowledge you broadcast is at best of moderate relevance and rather volatile, and probably even confusing, you can shrug out your shoulders and say "It is the best there is, isn't it?" As if there were an excuse for acting like teaching a discipline, that, upon closer scrutiny, is discovered not to be there.... Yet I am afraid, that this form of teaching computing science is very common. How else can we explain the often voiced opinion that the half-life of a computing scientist is about five years? What else is this than saying that he has been taught trash and tripe?
The full text of much of the EWD series can be found at https://www.cs.utexas.edu/~EWD/.
Now get back to work.
The arguments were similar, too: What will you do if Google goes down? What if Google gives the wrong answer? What if you become dependent on Google? Yet I'm willing to bet that everyone reading this uses search engines as a tool to find what they need quickly on a daily basis.
Of course no-one's stopping a junior from doing it the old way, but no-one's teaching them they can, either.
Microsoft docs are a really good example of this where just looking through the ToC on the left usually exposes me to some capability or feature of the tooling that 1) I was not previously aware of and 2) I was not explicitly searching for.
The point is that the path to a singular answer can often include discovery of unrelated insight along the way. When you only get the answer to what you are asking, you lose that process of organic discovery of the broader surface area of the tooling or platform you are operating in.
As a senior dev, I generally have a good idea of what to ask for because I have built many systems and learned many things along the way. A junior dev? They may not know what to ask for and therefore may never discover those "detours" that would yield additional insights to tuck into the manifolds of their brains for future reference.
If you're looking to find something simple on google you're in good hands. But if you want technical information in a space rife with misinformation, you're quickly out of luck, and traditional authoritative sources are irreplaceable.
And the google danger was that the incentive for maintaining such authoritative sources might be deemed to be less, allowing eventual dilution of reliable information. And AI takes this principle and puts it on steroids.
As an example, type "is a vegan diet protective against cholecystitis" on google, and see how many hits it takes before you reach one that correctly informs you that a vegan diet significantly increases your risk of cholecystitis. Chances are it will be buried under multiple (presumably well-meaning but otherwise completely unauthoritative) vegan blogs which will claim plant based diets are not only not risky, but in fact probably the best and only definitive cure for gallbladder disease.
Yet look it up on medline and you will instantly get a different picture.
It's no different now, just the level of effort required to get the code copy is lower.
Whenever I use AI I sit and read and understand every line before pushing. It's not hard. I learn more.
So if AI gets you iterating faster and testing your assumptions I would say that's a net win. If you're just begging it to solve the problem for you with different words then yeah you are reducing yourself to a shitty LLM proxy.
If you read great books all the time, you will find yourself more skilled at identifying good versus bad writing.
If you can just get to the answer immediately, what’s the value of the struggle?
Research isn’t time spent coding. So it’s not making the developer less familiar with the code base she’s responsible for. Which is the usual worry with AI.
Yes. And now you can ask the AI where the docs are.
The struggling is not the goal. And rest assured there are plenty of other things to struggle with.
There's a lot of good documentation where you learn more about the context of how or why something is done a certain way.
This is "the kids will use the AI to learn and understand" level of cope
no, the kids will copy and paste the solution then go back to their preferred dopamine dispenser
There might be value in learning from failure, but my guess is that there's more value in learning from success, and if the LLM doesn't need me to succeed my time is better spent pushing into territory where it fails so I can add real value.
This is an example of a book on Common Lisp
https://gigamonkeys.com/book/practical-a-simple-database
What you usually do is follow the book instructions and get some result, then go do some exploration on your own. There’s no walking in the dark trying to figure out your own path.
Once you learn what works, and what does not, then you’ll have a solid foundation to tackle more complex subjects. That’s the benefit of having a good book and/or a good teacher to guide you on the path to mastery. Using a slot machine is more tortuous than that.
Also, for a lot of things, that is how people learn because there aren't good textbooks available.
I was helping a few people get started with an Android development bootcamp, and just being able to run the default example and get their bearings around the IDE was interesting to them. And I remember when I was first learning Python: just doing basic variable declaration and arithmetic was interesting. Same with learning C and being able to write tic-tac-toe.
I think a lot of harm is being done by giving beginners expectations that would befit people with years of experience. Like telling someone who doesn't even know Linux exists, or has never encountered the word POSIX, that they can learn Docker in 2 months.
Please do read the following article: https://www.norvig.com/21-days.html
I would argue you're learning less than you might believe. Similarly to how people don't learn math by watching others solve problems, you're not going to learn to become a better engineer/problem solver by reading the output of ChatGPT.
Regarding leveling up as an engineer, at this point in my career it's called management.
Just as some might pull the answers from the back of the textbook, the interesting ones are the kids who want to find out why certain solutions are the way they are.
Then again I could be wrong, I try hard to stay away from the shithose that is the modern social media tech landscape (TikTok, Insta, and friends) so I'm probably WAY out of touch (and I prefer it that way).
How are we defining "learning" here? The example I like to use is that a student who "learns" what a square root is can calculate the square root of a number on a simple 4-function calculator (×, ÷, +, -), if only iteratively. Whereas the student who "learns" that the √ key gives them the square root is "stuck" when presented with a 4-function calculator. So did they 'learn' faster when the "genie" surfaced a key that gave them the answer? Or did they just become more dependent on the "genie" to do the work required of them?
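For the curious, the iterative route really does need nothing beyond those four keys; a minimal sketch (an illustration, not from the thread) using the Babylonian/Newton method:

    def sqrt_four_function(x, iterations=20):
        """Approximate sqrt(x) using only +, -, *, / (the Babylonian/Newton method)."""
        if x < 0:
            raise ValueError("square root of a negative number is not real")
        if x == 0:
            return 0.0
        guess = x if x >= 1 else 1.0   # any positive starting guess converges
        for _ in range(iterations):
            guess = (guess + x / guess) / 2   # average the guess with x/guess
        return guess

    print(sqrt_four_function(2))   # ~1.4142135623730951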
Because that makes the most business sense.
I hate to be so negative, but one of the biggest problems junior engineers face is that they don't know how to make sense of or prioritize the glut of new-to-them information to make decisions. It's not helpful to have an AI reduce the search space because they still can't narrow down the last step effectively (or possibly independently).
There are junior engineers who seem to inherently have this skill. They might still be poor in finding all necessary information, but when they do, they can make the final, critical decision. Now, with AI, they've largely eliminated the search problem so they can focus more on the decision making.
The problem is it's extremely hard to identify who is what type. It's also something that senior level devs have generally figured out.
368 more comments available on Hacker News