Claude Code 2.0
Posted 3 months ago · Active 3 months ago
npmjs.com · Tech · story · High profile · Sentiment: calm/mixed · Debate: 60/100
Key topics: AI Coding Tools, Claude Code, LLM CLI Tools
The release of Claude Code 2.0, Anthropic's AI coding tool, has sparked mixed reactions from the HN community, with some praising its new features and others criticizing its changes and limitations.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 6m after posting · Peak period: 84 comments in 0-6h · Avg per period: 20
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
1. Story posted: Sep 29, 2025 at 1:12 PM EDT (3 months ago)
2. First comment: Sep 29, 2025 at 1:18 PM EDT (6m after posting)
3. Peak activity: 84 comments in 0-6h (hottest window of the conversation)
4. Latest activity: Oct 1, 2025 at 10:01 PM EDT (3 months ago)
ID: 45416228 · Type: story · Last synced: 11/22/2025, 11:17:55 PM
1: https://block.github.io/goose/
I think I lack the social skills to drive a fix through the community (probably some undiagnosed disorder or something), so I've been trying to soldier on alone with some issues I've had for years.
The issues are things like focus jacking in the window manager I'm using on Xorg, where the keyboard and the mouse end up with separate focuses.
Goose has been somewhat promising, but still not great.
I mean, overall I don't think any of these coding agents have given me useful insight into my long-vexing problems.
I think there has to be some type of perception gap or knowledge asymmetry to be really useful - for instance, with foreign languages.
I've studied a few but just in the "taking classes at the local JC" way. These LLMs are absolutely fantastic aids there because I know enough to frame the question but not enough to get the answer.
There's some model for dealing with this I don't have yet.
Essentially I can ask the right question about a variety of things but arguably I'm not doing it right with the software.
I've been writing software for decades, is it really that I'm not competent enough to ask the right question? That's certainly the simplest model but it doesn't check out.
Maybe in some fields I've surpassed the point where LLMs are useful?
It all circles back to an existential fear of delusional competency.
They seem autonomous but often aren’t.
I've hit this point while designing developer UX for a library I'm working on. LLMs can nail boilerplate, but when it comes to dev UX they seem to not be very good. Maybe that's because I have a specific vision and some pretty tight requirements? Dunno. I'm in the same spot as you for some stuff.
For throwaway code they're pretty great.
cl --version 1.0.44 (Claude Code)
as expected … liar! ;)
cl update
Wasn't that hard, sorry for bothering.
[1] https://github.com/marckrenn/cc-mvp-prompts/compare/v1.0.128...
[2] https://x.com/CCpromptChanges/status/1972709093874757976
The bot is based on Mario Zechner's excellent work[1] - so all credit goes to him!
[1] https://mariozechner.at/posts/2025-08-03-cchistory/
I wrote about one tool for doing that here: https://simonwillison.net/2025/Jun/2/claude-trace/
Why do you think these aren't legit?
Interesting. This was in the old 1.x prompt, removed for 2.0. But CC would pretty much always add comments in 1.x, something I never requested, and I would often have to tell it to stop (and it would still do it sometimes even after being told).
- like all documentation, they are prone to code rot (going out of date)
- ideally code should be obvious; if you need a comment to explain it, perhaps it's not as simple as it could be, or perhaps we're doing something hacky that we shouldn't
An example of this: assume you live in a world where the formula for the circumference of a circle has not been derived. You end up deriving the formula yourself and write a function which returns 2 * pi * radius. This is as simple as it gets, not hacky at all, and you would /definitely/ want to include a comment explaining how you arrived at your weird and arbitrary-looking "3.1415" constant.
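A minimal TypeScript sketch of that hypothetical, with invented names, just to make the point concrete:

    // Derived by measuring circles: circumference / diameter always comes out
    // to roughly this same ratio, so we hard-code it here.
    const CIRCLE_RATIO = 3.1415;

    function circumference(radius: number): number {
      return 2 * CIRCLE_RATIO * radius;
    }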
I've considered just leaving the comments in, considering maybe they provide some value to future LLMs working in the codebase, but the extra human overhead in dealing with them doesn't seem worth it.
It's cognitively taxing, but it's beneficial for juniors and for developers new to the codebase, just as it is for senior developers, because it reduces the mental overhead for the reader.
It's always good to spend an extra minute thinking how to avoid a comment.
Of course there are exceptions, but the mental exercise trying to avoid having that exception is always worth it.
Comments are instant technical debt.
Junior developers especially will be extremely confused and slowed down by having to read both the comment and the code, when the code has been refactored in the meantime and now does the opposite of what the comment says.
I think a happy medium of "comment brevity, and try thinking of a clearer way to do something instead of documenting the potentially unnecessary complexity with a comment" would be good.
I don't know where this "comments are instant technical debt" meme came from, because it's frankly fucking stupid, especially now that you can ask the LLM "please find any out-of-date comments in this code and update them". Even the AI-averse would probably not object to it commenting code more correctly than the human did.
Docstring comments are even worse, because it's so easy for someone to update the function and not the docstring, and that's very easy to miss in PR review.
Good and up-to-date comments are good and up to date. Bad and outdated comments are bad and outdated. If you let your codebase rot, then it rots. If you don't, then it doesn't. It's not the comment's fault you didn't update it. It's yours.
Guard rails should be there to prevent inexperienced developers (or overworked, tired ones) from committing bad code.
"Try to think how to refactor functions into smaller ones and give them meaningful names so that everyone knows immediately what's going on" is a good enough guard rail.
That's exactly what I wrote, phrased slightly differently.
We both agree at the core.
I'm wondering if tsdoc/jsdoc tags like @link would help even more for context.
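For instance, a rough TSDoc sketch of what that could look like (the function names are made up for illustration):

    /** Validates a normalized config; throws if required keys are missing. */
    export function validateConfig(config: Record<string, unknown>): void {
      if (!("name" in config)) throw new Error("missing name");
    }

    /**
     * Normalizes a raw config object before it is validated.
     * See {@link validateConfig} for the rules applied afterwards.
     */
    export function normalizeConfig(raw: Record<string, unknown>): Record<string, unknown> {
      // Trim string values; validation itself lives in validateConfig.
      return Object.fromEntries(
        Object.entries(raw).map(([k, v]) => [k, typeof v === "string" ? v.trim() : v])
      );
    }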
So far Claude Code's comments on my code have been completely useless. They just repeated what you could figure out from the names of the called functions anyway.
Edit: an obvious exception is public libraries to document public interfaces, and use something like JavaDoc, or docstrings, etc.
I assume it comes from the myriad tutorial content on medium or something.
gpt-oss is the most egregious emoji user: it uses emoji for numbers in section headings in code, which was clearly a stylistic choice finetuned into the model and it fights you on removing them.
I’ve noticed Claude likes to add them to log messages and prints and with 4.5 seems to have ramped up their use in chat.
what in the world?
Here's how it works in detail: https://mariozechner.at/posts/2025-08-03-cchistory/
I should probably include that in my Claude.md instead I guess?
I hope this is the case.
Your tools should work for you, and git is no exception. Commit early and commit often. Before you (or an LLM) go on a jaunt through the code, changing whatever, commit the wip to git as you go along. That way, if something goes awry, it's an easy git reset HEAD^ to go backwards just a little bit and undo your changes.
Later on, when it's time to share your work, run git rebase -i main (or wherever your branching-off point was). This will bring up your editor with a list of commits. Reorder them to make more sense, and merge commits together by changing the first word on a line to "fixup". Exit your editor and git will rewrite history for outside consumption. Then you can push and ask someone else to review your commits, which hopefully are now a series of readable smaller commits and not one giant commit that does everything, because those suck to review.
That said, having a single option that rewinds LLM context and code state is better than having to do both separately.
- you DO want your prompts and state synced (going back to a point in the prompt <=> going back to a point in the code).
Git is a non-starter then, at least the repo's own git.
Plus, you probably don't want the agent to run mutating git commands, just in case it decides to hallucinate a push --force.
> Our new checkpoint system automatically saves your code state before each change, and you can instantly rewind to previous versions by tapping Esc twice or using the /rewind command.
https://www.anthropic.com/news/enabling-claude-code-to-work-...
Lots of us were doing something like this already with a combination of WIP git commits and rewinding context. This feature just links the two together and eliminates the manual git stuff.
> Checkpoints apply to Claude’s edits and not user edits or bash commands, and we recommend using them in combination with version control
Hey Claude... uh... unlaunch those
https://news.ycombinator.com/item?id=45426787
Avoids having to do any jj command at all!
https://news.ycombinator.com/item?id=45426787
Avoids even having to do "jj new"!
Some pretty neat jj tricks I just learned about!
Though I will see how this pans out.
That's generally my workflow, and I have the results saved into a CLAUDE-X-plan.md. Then I review the plan and incrementally change it if the initial plan isn't right.
To be honest, Claude is not great about moving cards when it's done with a task, but this workflow is very helpful for getting it back on track if I need to exit a session for any reason.
### Development Process
All work must be done via TODO.md. If the file is empty, then we need to write our next todo list.
When TODO.md is populated:
1. Read the entire TODO.md file first
2. Work through tasks in the exact order listed
3. Reference specific TODO.md sections when reporting progress
4. Mark progress by checking off todos in the file
5. Never abbreviate, summarize, or reinterpret TODO.md tasks
A TODO file is done when every box has been checked off due to completion of the associated task.
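For illustration, a hypothetical TODO.md in that style (the tasks themselves are invented):

    # TODO.md
    - [x] Add retry logic to the sync job
    - [x] Write unit tests for the retry path
    - [ ] Update the README to document the new config flag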
WTF. Terrible decision if true. I don't see that in the changelog, though.
They just changed it so you can't set it to use Opus in planning mode... it uses Sonnet 4.5 for both.
Which makes sense if it really is a stronger and cheaper model.
If you have run your own benchmarks or have convincing anecdotes to the contrary, that would be an interesting contribution to the discussion.
If I hit shift-Tab twice I can still get to plan mode
This isn't true, you just need to use the usual shortcut twice: shift+tab
I use Opus to write the planning docs for 30 min, then use Sonnet to execute them for another 30 min.
I do like the model selection with opencode though
- supports every LLM provider under the sun, including Anthropic
- has built-in LSP support https://opencode.ai/docs/lsp
This is pretty funny, given that Cursor shipped their own CLI.
https://news.ycombinator.com/item?id=45377734
https://www.reddit.com/r/ClaudeAI/comments/1mlhx2j/comment/n...
Pardon my ignorance, but what does this mean? It's a terminal app that has always expanded to the full terminal, no? I've not noticed any difference in how it renders in the terminal.
What am I misunderstanding in your comment?
I just downgraded to v1 to confirm this.
Wonder what changed that I'm not seeing? Do you think it's a regression or intentional?
Pretty sure your old behavior was the broken one, though - I vaguely remember fiddling with this to "fullscreen correctly" for a claude-in-docker-in-cygwin-via-MSYS2 setup a while ago.
Sonnet 4.5 is beating Opus 4.1 on many benchmarks. Feels like it's a change they made not to 'remove options', but because it's currently universally better to just let Sonnet rip.
I've always been curious. Are tags like that one: "<system-reminder>" useful at all? Is the LLM training altered to give a special meaning to specific tags when they are found?
Can a user just write those magic tags (if they knew what they are) and alter the behavior of the LLM in a similar manner?
You can just make them up, and ask it to respond with specific tags, too.
Like “Please respond with the name in <name>…</name> tags and the <surname>.”
It’s one of the approaches to forcing structured responses, or making it role-play multiple actors in one response (having each role in its tags), or asking it to do a round of self-critique in <critique> tags before the final response, etc.
Okay, I know I shouldn't anthropomorphize, but I couldn't prevent myself from thinking that this was a bit of a harsh way of saying things :(
I haven’t fully tested it yet, but I found it because its supports JetBrains IDE integration. It has MCPs as well.
I wish it was maintained by a larger team though. It has a single maintainer and they seem to be backlogged or working on other stuff. If there was an aider fork that ran forward with capabilities I'd happily switch.
That said, I haven't tried Claude Code firsthand, only saw friends using it. I'm not comfortable letting agents loose on my production codebase.
Why?
'This project aims to be compatible with upstream Aider, but with priority commits merged in and with some opportunistic bug fixes and optimizations'
* New native VS Code extension
* Fresh coat of paint throughout the whole app
* /rewind a conversation to undo code changes
* /usage command to see plan limits
* Tab to toggle thinking (sticky across sessions)
* Ctrl-R to search history
* Unshipped claude config command
* Hooks: Reduced PostToolUse errors of the form "'tool_use' ids were found without 'tool_result' blocks"
* SDK: The Claude Code SDK is now the Claude Agent SDK
* Add subagents dynamically with --agents flag
[1] https://github.com/anthropics/claude-code/blob/main/CHANGELO...
253 more comments available on Hacker News