Opus 4.5 Is the First Model That Makes Me Fear for My Job
Key topics
The release of Opus 4.5 has sparked a lively debate about the potential impact of AI on software development jobs, with some commenters sounding the alarm while others remain skeptical. As one commenter quipped, "It feels like every model release has its own little hype cycle," suggesting that the excitement around Opus 4.5 may be overblown. However, others, like giancarlostoro, report that the model has genuinely improved their workflow, allowing them to focus on higher-level problem-solving. Meanwhile, some developers are grappling with the existential implications, with pton_xd wistfully remarking, "It was fun while it lasted!"
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 2m after posting. Peak period: 69 comments in 0-6h. Avg / period: 15.4. Based on 77 loaded comments.
Key moments
- Story posted: Dec 14, 2025 at 3:40 PM EST (22 days ago)
- First comment: Dec 14, 2025 at 3:42 PM EST (2m after posting)
- Peak activity: 69 comments in the 0-6h window, the hottest stretch of the conversation
- Latest activity: Dec 16, 2025 at 7:17 PM EST (19 days ago)
Crypto was just that, a pure grift where they were creating something out of nothing and rugpulling when the hype was highest.
AI is actually creating something: it's generating replacements for artists, for creatives, for musicians, for writers, for programmers. It's literally capable of generating something from _almost_ nothing. Of course, you have to factor in energy usage, etc., but the end user sees none of that. They type a request and it generates an output.
It may be easily identifiable slop today, but it's getting better and better at a RAPID rate. We all need to recognize this.
I don't know what to do with the knowledge that it's coming for our jobs. Adapt or die? I don't know...
I see what you're saying, that's a bit of a different aspect entirely. I don't know how much people are making from viral posts on Twitter (or fb?) from that kind of thing.
But outside of those specific platforms, there's quite a bit of discussion on it on reddit, and some of the best has been on here. The good tech sites like Ars, Verge, Wired, and the Register all have excellent, realistic coverage of what's going on.
I think if you're only seeing hype, I'd ask where you're looking. And on the flip side, there's the very anti-AI crowd who I'm sure might be getting that same kind of reach with their target audience, preaching the evils & immorality of it.
Pick anything else and you have a far better likelihood of falling back on a manual process, a legal wall, or something else that AI cannot easily replace.
Good job boys and girls. You will be remembered.
Testing AI output just doesn't have the same feeling, unfortunately.
Maybe it's just because my side projects are fairly elementary.
And I agree that AI is pretty good at code review, especially if the code contains complex business logic.
The document is human-crafted and human-reviewed, and it primarily targets humans. The fact that it works for machines is a (pretty neat) secondary effect, but not really the point. And the document sped up the act of doing the refactors by around 5x.
It's not really vibe coding at that point. It's closer to old-school waterfall-style development, though with much quicker iteration cycles.
It brings the “what to build” question front and center, while “how to build it” has become much, much easier and more productive.
https://www.bloodinthemachine.com/s/ai-killed-my-job
Good job AI fanboys and girls. You will be remembered when this fake hype is over.
I don't really see why anywhere near the number of great jobs this industry has had will be justifiable in a year. The only comfort is that all the other industries will be facing the same issue, so accommodations will have to be made.
What that means for a society where there are extremely rich people who own resources and capital, and everyone else is only valued for their dexterity and physical labor (vs. skills), I can only guess.
I do think the AI labs have potentially unleashed a society-changing technology that ironically penalizes meritocracy and/or intelligence by making it less scarce. The jobs left will be the ones people avoided for a reason (health, risk, etc.).
Damn it, I'm only 40+, so I still need to work more or less 15 more years even if we live frugally.
The commonality of people working on AI is that they ALL know software. They make a product that solves the thing that they know how to solve best.
If all lawyers knew how to write code, we'd see more legal AI startups. But lawyers and coders are not a common overlap, certainly nowhere near as common as SWEs and coders.
> do not know what's coming for us in the next 2-3 years, hell, even next year might be the final turning point already.
What is this based on? Research? Data? Gut feeling?
> but how long will it be until even that is not needed anymore?
You just answered that. 2 to 3 years, hell, even next year, maybe.
> it also saddens me knowing where all of this is heading.
If you know where this is heading why are you not investing everything you have in these companies? Isn't that the obvious conclusion instead of wringing your hands over the loss of a coding job?
It invents a problem, provides a timeline, immediately questions itself, and then confidently prognosticates without any effort to explain the information used to arrive at this conclusion.
What am I supposed to take from this? Other than that people are generally irrational when contemplating the future?
Because unlike previously:
Combined with the fact that many are reliant on their income to pay the bills and don't have enough capital to invest in these things, and yes: https://futurism.com/the-byte/startup-spams-reddit-slop
Right: if you expect your job as a software developer to be effectively the same shape in a year or two, you're in for a bad time.
But humans can adapt! Your goal should be to evolve with the tools that are available. In a couple of years' time you should be able to produce significantly more, better code, solving more ambitious problems and making you more valuable as a software professional.
That's how careers have always progressed: I'm a better, faster developer today than I was two years ago.
I'll worry for my career when I meet a company that has a software roadmap that they can feasibly complete.
Otherwise, with all due respect, there's very little of value to learn in that subreddit.
They have several billion dollars of annual revenue already.
If OpenAI is only going to be profitable (aka has an actual business model) if other companies aren't training a competitive model, then they are toast. Which is my point. They are toast.
In principle, I mean. Obviously there's a sense in which it doesn't matter if they only get fined for cross-subsidising/predatory pricing/whatever *after* OpenAI et al run out of money.
But as a gut-check, even if all the people not complaining about it are getting use out of any given model, does this justify the ongoing cost of training new models?
If you could delete the ongoing training costs of new models from all the model providers, all of them look a lot healthier.
I guess I have a question about your earlier comment:
> Google is always going to be training a new model and are doing so while profitable.
While Google is profitable, or while the training of new models is profitable?
> Taking longer than usual. Trying again shortly (attempt 1 of 10)
> ...
> Taking longer than usual. Trying again shortly (attempt 10 of 10)
> Due to unexpected capacity constraints, Claude is unable to respond to your message. Please try again soon.
I guess I'll have to wait until later to feel the fear...
qwen3-coder blew me away.
If I was only writing code, the fear would be completely justified.
In threads where I see an example of what the author is impressed by, I'm usually not impressed. So when I see something like this, where the author doesn't give any examples, I also assume Claude did something unimpressive.
It's definitely more useful than I was in the first 5 years of my professional career though, so for people who don't improve fast, or for average new grads, this can be a problem.
That’s a reason why I can’t believe the benchmarks, and why I also believe open-source models (claiming 200k context but realistically struggling past 40k) aren’t only a bit but very far behind SOTA in actual software dev.
This is not true for all software, but there are types of systems or environments where it’s abundantly clear that Opus (or anything with a sub-1M context window) won’t cut it, unless it has a very efficient agentic system to help.
I’m not talking about dumping an entire code base into the context; I’m talking about clear specs, some code, library guidelines, and a few elements that allow the LLM to be better than a glorified autocomplete that lives in an Electron fork.
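To make that concrete, here is a minimal sketch (not from the thread) of what assembling that kind of focused context might look like. The file paths and the build_prompt helper are hypothetical; the resulting prompt would be handed to whatever model or agent you actually use.

```python
# Hypothetical sketch: build a focused prompt from a spec, a few code
# excerpts, and library guidelines instead of dumping a whole code base.
from pathlib import Path


def build_prompt(spec: str, code_files: list[str], guidelines: str, task: str) -> str:
    """Concatenate only the material the model actually needs for this task."""
    parts = [
        "## Spec\n" + Path(spec).read_text(),
        "## Library guidelines\n" + Path(guidelines).read_text(),
    ]
    for path in code_files:
        parts.append(f"## Code excerpt: {path}\n" + Path(path).read_text())
    parts.append("## Task\n" + task)
    return "\n\n".join(parts)


if __name__ == "__main__":
    # All file names below are placeholders for your own project layout.
    prompt = build_prompt(
        spec="docs/refactor-spec.md",
        code_files=["src/billing/invoice.py"],
        guidelines="docs/library-guidelines.md",
        task="Refactor invoice rounding per the spec without changing public APIs.",
    )
    print(prompt)  # pipe this into whichever model or CLI you use
```

The point of the sketch is simply that the model sees a spec, guidelines, and a handful of excerpts rather than the whole repository, which is what the commenter is arguing matters more than raw context size.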
Sonnet still wins easily.
Rinse and repeat.
At this point both AI doomers and boomers are just as wrong as each other, just in opposite directions.
"The overwhelming consensus in this thread is that OP's fear is justified and Opus represents a terrifying leap in capability. The discussion isn't about if disruption is coming, but how severe it will be and who will survive."
My fellow Romans, I come here not to discuss disruption, but to survive!
1) it’s not impartial
2) it’s useless hype commentary
3) it’s literally astroturfing at this point
I'm honestly not complaining about the model releases, though. Despite their shortcomings, they are extremely useful. I've found Gemini 3 to be an extremely useful learning aid, as long as I don't blindly trust its output; if you're trying to learn, you really ought not do that anyways. (Despite what people and benchmarks say, I've already caught some random hallucinations; it still feels like you're likely to run into them on a regular basis. Not a huge problem, but, you know.)