Claude Code Is Down
Mood: calm
Sentiment: negative
Category: tech_discussion
Key topics: AI, Service Outage, Claude
Discussion activity
Very active discussion. First comment: 36m after posting. Peak period: 25 comments in Hour 3. Average per period: 7. Based on 63 loaded comments.
Key moments
1. Story posted: Nov 23, 2025 at 8:24 AM EST (18h ago)
2. First comment: Nov 23, 2025 at 9:00 AM EST (36m after posting)
3. Peak activity: 25 comments in Hour 3, the hottest window of the conversation
4. Latest activity: Nov 23, 2025 at 11:11 PM EST (3h ago)
Zero moat.
I got annoyed at CCR's bloat and flakiness, though, and replicated it in about 50 lines of Python. (Well, I asked Frankenstein to build it for me, obviously... what a time to be alive.)
There is a part of me thinking that my initial thoughts on LLMs were not accurate (like humanity's long-term reaction to their impact).
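For context, the core of a tool like CCR (a router that redirects Claude Code requests to other model backends) really can be small. Here is a minimal sketch of the idea; the route table, provider names, and model ids below are all hypothetical, not CCR's actual configuration:

```python
# Hypothetical minimal request router in the spirit of CCR: map an incoming
# model name to a backend base URL and a rewritten model id. Real routers
# also forward the HTTP request; this sketch only shows the routing decision.

ROUTES = {
    "claude-sonnet": ("https://api.example-provider-a.com", "provider-a/sonnet-equivalent"),
    "claude-haiku": ("https://api.example-provider-b.com", "provider-b/small-model"),
}
DEFAULT = ("https://api.anthropic.com", None)  # unknown models pass through untouched

def route(request: dict) -> tuple[str, dict]:
    """Return (base_url, rewritten_request) for an incoming chat request."""
    base_url, mapped_model = ROUTES.get(request.get("model"), DEFAULT)
    rewritten = dict(request)
    if mapped_model is not None:
        rewritten["model"] = mapped_model
    return base_url, rewritten
```

Everything else (reading the config, proxying the HTTP call, streaming the response back) is plumbing, which is why a stripped-down version can stay in the tens of lines.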
Wait a minute. Did I bring Claude Code down?
Have you seen their status page? Every single month is littered with yellow and red.
For us old-school programmers it makes little difference; only the vibe coders throwing away $200 a month on Claude subscriptions will be the ones crying!
For greenfield projects it’s absolutely faster to churn out code I’ve written 100 times in the past. I don’t need to write another RBAC system, I just don’t. I don’t need to write another table implementation for a frontend data view.
Where Claude helps us is speed and breadth. I can do a lot more in a shorter time, and depending on what your goals are this may or may not be valuable to you.
Very often they want to own all the code, so you cannot just abstract things in your own engine. It then very easily becomes the pragmatic choice to just use existing libraries and frameworks to implement these things when the client demands it.
Especially since every client wants different things.
At the same time, even though there are libraries available, it’s still work to stitch everything together.
For straightforward stuff, AI takes all that work out of your hands.
If I were to do it, I would have most of the reusable code (e.g. of a RBAC system) written and documented once and kept unpublished. Then I would ask an AI tool to alter it, given a set of client-specific properties. It would be easier to review moderate changes to a familiar and proven piece of code. The result could be copied to the client-specific repo.
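That template-plus-overrides workflow can be sketched concretely. Assuming the reusable piece is something like a role-to-permissions map that gets specialized per client, a minimal version (all role and permission names hypothetical) might look like:

```python
# Hypothetical reusable RBAC core, the kind of module you would keep in a
# private template repo and specialize per client rather than rewrite.

BASE_ROLES = {
    "viewer": {"read"},
    "editor": {"read", "write"},
    "admin": {"read", "write", "manage_users"},
}

def specialize(base: dict[str, set], client_overrides: dict[str, set]) -> dict[str, set]:
    """Produce a client-specific role map; overrides extend the base roles."""
    roles = {name: set(perms) for name, perms in base.items()}  # copy, keep base intact
    for role, extra in client_overrides.items():
        roles.setdefault(role, set()).update(extra)
    return roles

def can(roles: dict[str, set], role: str, permission: str) -> bool:
    """Check whether a role grants a permission."""
    return permission in roles.get(role, set())
```

For example, `specialize(BASE_ROLES, {"editor": {"export"}})` yields a client repo's role map where editors can also export, while the proven base stays untouched and easy to diff against.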
I would place myself on the bridge between pre-internet coders and the modern generation. I use these kinds of tools and don't consider myself a vibe coder.
Hardware dependencies: GPUs and TPUs and all that are not equal. You will have to have code and caches that only work with Google's TPUs, and other code and caches that only work with CUDA, etc.
Data workflow: you will have huge LLM models that need to be loaded at just the right time.
Oh wait, your model uses MoE? That means the 200GB model that’s split over 10 “experts” only needs maybe 20GB of that. So then it would be great if we could somehow pre-route a request to the right GPU that already has that specific model loaded.
But wait! This is a long conversation, and the cache was actually on a different server. Now we need to reload the cache on the new server that actually has this particular expert preloaded in its GPU.
etc.
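The routing problem described above can be sketched as a scheduling heuristic: prefer a server that already has both the needed expert and the conversation's cache in memory, then one with just the expert, and only then pay the reload cost. All names and cost weights below are illustrative, not any provider's actual scheduler:

```python
# Illustrative cost model for MoE-aware request routing. A server is "warm"
# for a request if the needed expert is already in GPU memory, and warmer
# still if it also holds the conversation's KV cache.

def pick_server(servers: list[dict], expert: str, conversation: str) -> dict:
    """Choose the cheapest server under a simple cost heuristic."""
    def cost(s: dict) -> int:
        c = 0
        if expert not in s["loaded_experts"]:
            c += 100  # loading a ~20GB expert from scratch: very expensive
        if conversation not in s["cached_conversations"]:
            c += 10   # re-transferring/recomputing the KV cache: cheaper but real
        return c
    return min(servers, key=cost)

servers = [
    {"name": "gpu-1", "loaded_experts": {"e3"}, "cached_conversations": {"conv-a"}},
    {"name": "gpu-2", "loaded_experts": {"e7"}, "cached_conversations": set()},
    {"name": "gpu-3", "loaded_experts": {"e7"}, "cached_conversations": {"conv-a"}},
]
```

With this toy fleet, a request for expert "e7" in conversation "conv-a" lands on gpu-3 (both warm), while the same conversation needing "e3" lands on gpu-1, matching the trade-off the comment describes: cache locality matters, but expert locality matters more.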
It's very different, mostly because it's new tech, it's very expensive, and cost optimizations are difficult but impactful.
Do you then think it'll improve to reach the same stability as other kinds of infra, eventually, or are there more fundamental limits we might hit?
My intuition is that as the models do more with less and the hardware improves, we'll end up with more stability just because we'll be able to afford more redundancy.
Probably 20% of the code I produce is generated by LLMs, but all of the code I produce at this point is sanity checked by them. They’re insanely useful.
Zero of my identity is tied to how much of the code I write involves AI.
For people whose identities and sense of self have been bolstered by being a member of that high-status group, AI is a big threat: not because of the impact on their work, but because of the potential to remove their status, and if their status slips away they may realise they have nothing much else left.
When people feel threatened by new technology they shout loud and proud about how they don’t use it and everything is just fine. Quite often that becomes a new identity. Let them rail and rage against the storm.
“Blow winds, and crack your cheeks! Rage! Blow!”
The image of Lear “a poor, infatuated, despised old man” seems curiously apt here.
I’m reminded of that South Park episode with the goths. “I’m so much of a non-conformist I’m going to non-conform with the non-conformists”.
In the end it all doesn’t matter.
Who are you trying to convince here?
You're absolutely right! Let me rewrite that comment for you.
Claude slips offline
Storms of code can’t halt the tide
Again, still they bide
One 9 of uptime?
More like “nine minutes of sheen”
Cloud gods need better scenes

I seem to have access to Gemini CLI due to one Google sub or other, but each time I have a reason to try it, it underwhelms me.
I asked Claude Code to simplify the code. It spent ten minutes spinning, making countless edits. They all turned out to be superficial. It reduced the code by 3%.
Then I asked the same model (Sonnet) in my web chat UI to do the same thing, and it reduced it by 50% — the program remaining otherwise identical, in terms of appearance and behavior.
I love the agents but they are explicitly designed not to rewrite entire files, and sometimes doing that gives you way, way better results. 15x better, in this case!
(Even better might be to first rewrite it into user stories, instead of incidental implementation details... hmm...)
Anthropic's API is not your only choice for a Claude Code workflow.
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...