Claude Code On-the-Go
Key topics
Developers are buzzing about Claude Code, a tool that lets them code on-the-go, and sharing their favorite setups and workarounds to maximize its potential. Some swear by using Tailscale for seamless connectivity, while others rave about git worktrees or alternative tools like Conductor and Vibe Kanban. A lively debate erupted around the "planning mode" feature, with some users finding creative ways to replicate it in the web version, such as prompting Claude to write detailed plans to a markdown file. As the multi-agent AI space continues to evolve, users are exploring innovative ways to harness Claude Code's capabilities.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 1h after posting
- Peak period: 87 comments in 0-6h
- Avg / period: 20 comments
- Based on 160 loaded comments
Key moments
- Story posted: Jan 4, 2026 at 2:48 PM EST (3d ago)
- First comment: Jan 4, 2026 at 4:00 PM EST (1h after posting)
- Peak activity: 87 comments in 0-6h (the hottest window of the conversation)
- Latest activity: Jan 6, 2026 at 6:38 PM EST (22h ago)
I've been using the simpler but not as flexible alternative: I'm running Claude Code for web (Anthropic's version of Codex Cloud) via the Claude iPhone app, with an environment I created called "Everything" which allows all network access.
(This is moderately unsafe if you're working with private source code or environment variables containing API keys and other secrets, but most of my stuff is either open source or personal such that I don't care if the source code leaks.)
Anthropic run multiple ~21GB VMs for me on-demand to handle sessions that I start via the app. They don't charge anything extra for VM time which is nice.
I frequently have 2-3 separate Claude Code for web sessions running at once, often prompted from my phone, some of them started while I'm out walking the dog. Works really well!
My current setup: Tailscale + Terminus (iPad) + home machine (code base)
Need to look into how to work on multiple features at the same time next.
https://www.youtube.com/watch?v=up91rbPEdVc
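A minimal sketch of the Tailscale + Terminus setup described above, assuming Tailscale SSH on a Linux home machine and Terminus (or any SSH client) on the iPad; the hostname, user, and tmux session name are illustrative:

```sh
# On the home machine (where the code base lives): join the tailnet
# and enable Tailscale SSH so the iPad can reach it without port forwarding.
tailscale up --ssh

# From the iPad's SSH client: connect over the tailnet, then attach to a
# persistent tmux session so Claude Code survives dropped connections.
ssh dev@home-machine                        # illustrative tailnet hostname
tmux attach -t claude || tmux new -s claude
claude                                      # run Claude Code inside the tmux session
```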
Pair worktrees with the ralph-wiggum plugin and I can have Claude work for hours without needing any input:
https://looking4offswitch.github.io/blog/2026/01/04/ralph-wi...
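A minimal sketch of the worktree side of this, with illustrative branch and directory names (the ralph-wiggum plugin itself is configured separately; see the post above):

```sh
# One worktree per feature, so parallel Claude Code sessions never share a working tree.
git worktree add -b feature-a ../myrepo-feature-a
git worktree add -b feature-b ../myrepo-feature-b

# Start an agent in each worktree, e.g. one per terminal or tmux window.
(cd ../myrepo-feature-a && claude)
(cd ../myrepo-feature-b && claude)

# Clean up once a feature has been merged.
git worktree remove ../myrepo-feature-a
git branch -d feature-a
```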
conductor -> multiple claude codes/codexes -> multiple agents -> multiple tools/skills/sub-agents -> LLMs
I spend most of my time updating the memory files and reviewing code and just letting a ton of tasks run in parallel
I like that it ends up in the repo as it means it survives compaction or lets me start a fresh session entirely.
This can be customized via a shell env variable that I cannot remember ATM.
The downside (upside?) is that the plan will not end up in your repo. Which sometimes I want. I love the native plan mode though.
I also wrote my own tool to extract and format the complete transcript, it gives me back things like this where I can see everything it did including files and scripts it didn't commit. Here's an example: https://gistpreview.github.io/?3a76a868095c989d159c226b7622b...
What about running services locally for manual testing/poking? Do you open ports on the Anthropic VM to serve the endpoints, or is manual testing not part of your workflow?
If something is too fiddly to test within the boundaries of a cloud coding agent I switch to my laptop. Claude Code for web has a "claude --teleport" command for this, or I'll sometimes just do a "gh pr checkout X" to get the branch locally.
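A minimal sketch of that hand-off, assuming the "claude --teleport" behaviour described above; the PR number and dev-server command are illustrative:

```sh
# Option 1: pull the cloud session down to the laptop with the teleport command mentioned above.
claude --teleport

# Option 2: just check out the branch the agent pushed and poke at it locally.
gh pr checkout 1234        # illustrative PR number
npm run dev                # or whatever starts the service locally; project-specific
```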
Even with GitHub CI, it all of a sudden wasted $50 on a few days of CI actions. I should have everything run on my home server. But I think I may need a more powerful home server; I have a cheap refurbished Dell now.
I don't want to ever have to touch a UI again, except in places like Hacker News or the like and the ones I specially built (read: vibecoded) for myself.
They'll include screenshots on your PRs etc.
I like using them a lot when I can.
I have a project where I've made a rule that no code is written by humans. It's been fun! It's a good experience to learn how far even pre-Opus 4.5 agents can be pushed.
It's pretty clear to me that in 12 months time looking at the code will be the exception, not the rule.
Absolutely - for me, that's already true. I just wouldn't want to give up the ability to _ever_ look at the code before I submit it!
I could imagine this working for a small number of branches/changes.
.. with a valid SSH key unless I’m reading it wrong?
Ofc if you have demo deployments etc on branches that you could open on mobile it works for longer.
Another issue is that I often need to sit down and think about the next prompt, going back and forth with the agent on a plan. Try out other product features, do other research before I even know what exactly to build. These days I'm often doing some sample implementations with Claude Code and clicking around. Doing this on a phone feels... limiting.
I also can't stand the constant context switching. Doing multiple features in parallel already feels dumb because every time I come from feature B to A, or worse from feature G to E, it takes me some time to adjust to where I was, what Claude last did and how to proceed from here. Doing more than 2, max 3, tasks in parallel often ends up slowing me down. Now you add ordering coffee and small talk to the mix and I definitely can't effectively prompt without rereading all the history for minutes before sending the next prompt. At which point I might have also opened up my laptop.
Ofc if you truly vibe code and just add feature on feature and pray nothing breaks, the validation overhead and bar for quality go down a lot, so it works a lot better, but the output is also just slop by then.
I typed this on my phone and it took 20 minutes, a laptop might have been faster.
It won't matter if I'm washing the dishes, walking the dog, driving to the supermarket, picking up my kids from school. I'll always be switched on, on my phone, continuously talking to an LLM, delivering questionable features and building meaningless products, destroying in the process the environment my kids are going to have to grow in.
I'm a heavy LLM user. On a daily basis, I find LLMs extremely useful both professionally and personally. But the cognitive dissonance I feel when I think about what this means over a longer time horizon is really painful.
Is this still accurate?
(I'm not really sure LLMs will make it that much worse here, but all those things have been harmful to workers already.)
When you saw 996 being talked about, it should have set a few alarm bells off, because it started a countdown timer until such a work culture surpasses the rather leisurely attitude of the West in terms of output and velocity. The West cannot compete against that no matter how many “work smarter, not harder” / “work to live, don’t live to work” aphorisms it espouses. This should be obvious by now (in hindsight).
You can blame LLM or capitalism or communism but the hard matter is, it’s a money world and people want to have as much of it as they possibly can, and you and your children can’t live without it, and every day someone is looking to have more of it than you are. This isn’t even getting into the details of the personality types that money and power attracts to these white collar leadership roles.
Best of luck to you.
It's the power imbalance. Shitty managers still control your means to eat.
I really don't get it -- is it that people think these technologies will be so transformative that it is most moral to race toward them? I don't see much evidence of that, it's just future promises (especially commensurate with the benefit / cost ratio). When I do use this tech it's usually edutainment kind of curiosity about some subject matter I don't have enough interest in to dive into--it's useful and compelling but also not really necessary.
In fact, I don't really think the tech right now is at all transformative, and that a lot of folks are unable to actually gauge their productivity accurately when using these tools; however, I do not believe that the technology will stay that way, and it will inevitably start displacing people or degrading labor conditions within the only economically healthy remaining tranche of people in America: the white collar worker.
1. Like most labor organizing, I think this would be beneficial for software engineers, but not long-term beneficial for the world at large. More software that is easier to make is better for everybody.
Would you still want to live in a world where your elevator stops working when the elevator operator is sick, or where overseas Whatsapp calls cost $1 per minute, because they have to be connected by a chain of operators?
2. Software engineering is a lot easier to move than other professions. If you want to carry people from London to New York, you need to cater to the workers who actually live in London or New York. If you want to make software... Silicon Valley is your best bet right now, but if SV organizes and other places don't, it may not be your best bet any more. That would make things even worse for SV than not organizing. Same story applies to any other place.
Sure, companies won't move overnight, but if one place makes it too hard for AI to accelerate productivity, people will either go somewhere else, or that place will just end up completely outcompeted like Europe did.
> your elevator stops working when the elevator operator is sick
Can you point somewhere outside of US where this is the case with unions?
When dockworker's unions are able to prevent port automation, is that beneficial to society?
With LLMs, my productivity suddenly went up 25x and I was able to produce at a speed I had never known. I'm not a developer any more; instead it feels like being a project manager with dedicated resources always delivering results. It isn't perfect, but when you are used to managing teams it isn't all that different, albeit the results are spectacularly better.
My 25x isn't just measured on development; it covers brainstorming, documentation, testing, deployment. It is transformative, in fact: I think software is dead. For the first time I've used neither a paper notebook nor even an IDE to build complex and feature-complete products. Software isn't what matters; what matters is the product, and this is what the transformation is all about. We all here can write products in languages we never had contact with and completely outperform any average team of developers building the same product.
Does it replace the experts and domain-specific topics? Not yet. Just observe that the large majority of products are boringly simple cases of an API, a UI and some business logic inside. For that situation, it has "killed" software.
The code is written in Dart, and I had never written a line of Dart in my life; I'm a veteran expert in Java and C++. The reason for choosing Dart is simply that it is way readier for multi-platform contexts than Java/C++. The same code base now runs on Linux, Android, iOS, macOS, Windows and the web (as static HTML), plus the companion code in C++ for ESP32 microcontrollers. It also includes a CLI for running as a Linux server.
Don't ask me for a hard analysis and data proving a 25x performance increase. What I know is that an off-grid product previously took me two years of research and effort to build in Android/Web and get a prototype running. Now, in about a month, it went far above all previous expectations (cached maps with satellite imagery, Bluetooth mesh, WebRTC, whatever), and I was able to release implementations of the apps that work as envisioned, then iterate quickly and release several times per day.
The repository: https://github.com/geograms/geogram
Overview of the apps being written: https://github.com/geograms/geogram/tree/main/docs/apps
Codex is far superior at the moment for complex tasks; Claude is cheaper and still good enough quality for most tasks. In addition to keeping several terminals with tasks running in parallel, this leaves time throughout the day for other things, and it acts like a coding buddy that motivates me to try different routes and quickly implement a prototype. For example, it added an offline GPT bot that wasn't what was needed, so I could quickly discard it too.
These tools get lost on API interactions and the documentation folder is mostly there to provide the right context when needed. I've learned to use simple markdown documents with things to keep in mind like "reusable.md" or "API.md" to make sure it won't reinvent them.
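As a purely illustrative sketch of that pattern (the entries and paths below are hypothetical, not taken from the geogram repo), such a context document might look like:

```sh
# Hypothetical example: a small context file the agent is told to read
# before touching networking or storage code, so it reuses existing helpers.
cat > docs/reusable.md <<'EOF'
# Reusable helpers - check here before writing new ones
- fetchWithRetry(url): HTTP GET with retry/backoff, lib/net/http_client.dart
- LocalStore.put(key, value): on-device cache abstraction, lib/storage/local_store.dart
- MeshMessage: canonical message envelope for the Bluetooth mesh, lib/mesh/message.dart
EOF
```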
You can try the Android or Linux versions if you are so inclined. Never in my life would I ever be able to build so much in 5 weeks.
Quite ambitious.
Is this an LLM hallucinating? taking a break from coding? or leaking your personal desktop session?
https://github.com/geograms/geogram/blob/main/.cli_history
Ha! In any case, I'm happy to see I'm not the only one compulsively "ls-ing" all over the place in every terminal I open :)
Had some fun and added some CLI dungeon and dragon games inside. Will put that file on the .ignore list. Basically the games are based on markdown text files: https://github.com/geograms/geogram/blob/main/games/azurath-...
I've answered in more detail on the other reply below on the conversation. Thank you for spotting that.
> Would you describe this product as a whole application suite
The rabbit hole goes even further. The reason why callsigns are used is because geogram can happily communicate using radio-waves on walkie-talkies without internet at all. On the previous iterations (before AI) it was sending free SMS using walkie-talkies and satellites (APRS), this current incarnation should soon be doing the same things too. A presentation from two months ago: https://www.youtube.com/watch?v=Nb_VUSaNw8k
This is a niche app, written for our community in Portugal to connect with each other.
I don't believe people will spend time looking at the code beyond the small blurbs they can read from the command line while talking with the AI, so I agree with you that it ends being treated as a blackbox.
I did an experiment for a server implementation in Java (my strong language): gave the usual instructions and built up the server. When I went to look at the code, it was a far smaller and more concise code base than what I would write myself. AI treats the programming language the way a compiler treats JavaScript: it makes the instructions super efficient and uses techniques that, even with my 30 years of experience, I'm not able to peer-review, because we tend to have our own patterns of programming while these tools use everything; no matter how exotic, they will use it to their advantage.
After that experience I don't look at generated source code any longer. For me it is becoming the same as trying to look at compiled binary data.
You'll get a lot further and faster than you'd expect.
Things will probably plateau as you master the new tech, but it's possible you'll not write a ton of code manually along the way.
Oh, your general software development experience should help with debugging the weird corner cases.
I imagine it's really hard to do this with 0 software dev experience, for example. Yeah, you'll build some simple things, but you'll need an entire tech education to put anything complex in prod.
Either way it’s been a fun ride.
I wrote this up a bit ago in my essay fragments collection. It's rough and was just a thought I wanted to get down, I'm unsure of it, but it's at least somewhat relevant to the discussion here:
LLM or LLM-adjacent technology will never take over the execution of work in a way that approaches human performance while humans merely continue to guide (like PMs or the C-suite just "managing" LLMs).
The reason is that spoken language is a poor medium for describing technical processes, and a well-enumerated specification in natural language describing the process is at least synonymous with doing the work in skilled applications.
For example, if someone says to an LLM: Build a social media app that is like Tinder but women can only initiate.
... this is easily replicable and therefore has little real business value as a product. Anything that can be described tersely that is novel and therefore valuable unfortunately has very little practical value, because the seed of the short descriptor is sort of a private key of the idea itself: it will seed the idea into reality through the labor of LLMs, but all that is needed for that seed's maturation is the original phrase. These would be like trade secrets, but also, by virtue of something existing out there, its replication becomes trivial since that product's patterns are visible and copyable.
In this way, the only real outcomes here are that LLMs either entirely replace human labor, including decision making, or remain tools for real human operators rather than replacements.
Consider "Uber, but for X"
This wasn't a thing you could deploy as a term pre-Uber.
I'm not sure what this means for your analogy, but it does seem important. Somehow branding an idea reifies it into a ... callable function?
Maybe something like (just spitballing)
The specification-length needed for a given idea isn't fixed - it's relative to available conceptual vocabulary. And that vocabulary expands through the work of instantiation and naming things?
Which maybe complicates the value story... terseness isn't intrinsic to the idea, it's earned by prior reification work?
Hmm
Basically it seems that "Like Tinder but" is doing a lot of lifting there... and as new patterns get named, the recombination space just keeps expanding?
Yeah, this feels right. It's like a process of condensing: new ideas brought to life condense metaphors into more compact forms and so make language more dense and expressive. This idea reminds me of Julian Jaynes's description of metaphor condensation in Origin of Consciousness.
A lot of hard work goes into novel products, but once that work has been proven, it is substantially more trivial for human or machine to copy. Groping around in the darkness of new, at the edge of what-could-be is difficult work that looks simple in hindsight to others who consider that edge a given now.
> The specification-length needed for a given idea isn't fixed - it's relative to available conceptual vocabulary. And that vocabulary expands through the work of instantiation and naming things?
Yeah, I think that naming and grouping things, then condensing them (through portmanteau construction or other means) is an underrated way to learn. I call this "personal taxonomy," and it's an idea I've been working on for a little bit. There is just tremendous value in naming patterns you personally notice, not taking another person's or group's name for things, and most importantly: allow those names to move, condense, fall away, and the like.
I left out a piece of my fragment above wherein I posit that a more constrained form of natural language for LLMs would likely lead to better results. Constraining interaction with an LLM to a series of domain-specific metaphors, potentially even project-specific givens, might allow for better outcomes. A lot of language is unspecific, and the technical documents that would truly detail a novel approach to an LLM require a particularly constrained kind of language to be successful, where ambiguity is minimized and expressiveness maximized (legal documents attempt minimal ambiguity). I won't go into details there; I'm likely poorly reiterating a lot of the arguments that Dijkstra made here:
https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...
Personally I don't think they're a great fit for the software industry where the nature of the job and the details are continuously changing as technology evolves.
The fundamental point of the union is to be able to negotiate as a group. That is valuable regardless of the industry.
- paternity leave
- overtime
- not having to answer a call or email outside of work hours
- workman’s comp / short/long-term disability for issues with my back or wrists or eyes or…
- about 100 more things
It's not just the labor regulations holding Europe back, it's the lack of funding due to not having a unified European digital market.
Netflix Europe needs to have 20+ licensing deals. Selling across Europe at a large scale requires interactions with 20+ legal teams. Language and cultural barriers kill a lot of things.
How do US giants thrive in Europe, then?
Because they come in directly giant-sized based on growth in the US. They either ignore European legal compliance until sued or pay peanuts for them to handle all the legal aspects.
People could be directly in the middle of losing their own job or taking on the responsibilities of 5 other laid-off coworkers, and they would still ask "what could a labor union possibly do for me??"
“I don’t need a union, I can negotiate my wages and working conditions just fine on my own”
I've never lived in the US, where I assume you are from. It's the same country that, contrary to most countries, does not have May 1st as a holiday. The same country that has states with at-will employment, etc. etc.
unreal? nope, totally coherent and expected.
https://workerorganizing.org/
Excluding work (where granted, some companies are dictating the use of llms) and trying not to sound uncaring or disrespectful, but have you thought about not using llms for everything and using the old grey cells? Not having answers to every whimsical thought might be a good thing.
It's very easy to relax the brain (and be lazy tbh) with llms and it's scary to think what will happen in the next 4 years in terms of personal cognitive ability (or as a society).
e.g. I've noticed (and probably most have here) that the world is full of zombies glued to their phones. Looking over their shoulder (e.g. on a train, yeah it's a bit rude but I'm the curious type), they are doom scrolling or playing waste-time games (insert that boomer meme in Las Vegas with slot machines [0]). I try to use my phone as little as possible (especially for dog walks) and feel better for it, allowing me to daydream and let boredom take over.
Maybe I'm fortunate to be able to do this (gen-x: having grown up before cell phones/internet), but worth stating in case anyone wants to try.
[0]: https://tenor.com/view/casino-oldpeople-oldpeopleonslots-slo...
Anyways if we do get to the point where you need to use LLMs to write code, I can make a decision then, but for now I don't feel the need to adopt agentic workflows and I think the people who don't will be better cognitively positioned in the future.
Why is that?
Where we're going, there are no "white collar workers" anymore.
Only white collar Claude agents.
The best we can do is wrestle the control away from hyperscalers and get as much of this capability into the open as possible.
Stop using Anthropic products and start using weights-available models. (I'm not talking about ICs - I mean the entire startup / tech ecosystem.)
Either "we" create models better than commercial state of the art (by using whatever means).
Or we use open models AND fund organisations building such models (could be by purchasing service from these orgs or donations - in which case would these orgs be different than hyperscalers?).
But I don't see how just hosting the models on some private servers would give us an edge?
I do use Claude code for my personal projects and ping at them from coffee shops and micro moments during my free time.
It’s possible to engineer your own life boundaries and not be a victim of every negative trend in existence.
The only reason we can't expect this is that we live under a system that is arranged for the sole benefit of the owners of capital, and have been convinced that this is an immutable state of affairs or that our own personal advantage can be found in making a Faustian bargain with it.
It sounds like you have not read Harrison Bergeron by Kurt Vonnegut.
What alternative do you propose?
Now what?
See for example https://en.wikipedia.org/wiki/May_68#Slogans_and_graffiti
Realistically, if you have 300M, you and your direct family are settled for life. So I want to propose a 1B cap on net worth: if it's more than that for 12 months straight, the surplus goes to the government; if your net worth drops after that, the government is obliged to return part of it to bring you back to 1B.
People who are eager to build things and innovate will keep building regardless; the power-hungry will try to find other ways to enrich themselves (e.g. having 10 kids, each with a 1B net worth), but eventually they will give up.
Some reasons why it would still happen:
1. Assuming Elon is a curious person, he will still build it out of curiosity.
2. Assuming Elon is not a curious person, just power-hungry, he will probably think it's not worth building, but someone else who is curious will build it eventually. This is even better, because when a power-hungry person owns such a thing, they might use it for bad things as well (e.g. to gain more power, eventually interfering with elections; oh wait, that already happened).
3. The government will build it, because the government will have more money now, but then we should be even more careful about who gets to the top. Assuming people won't have more than 1B, maybe there will be less lobbying, because it's not worth as much as it was before?
Ah, so your idea is the good old “only the emperor who controls the violence apparatus should have a lot of money and power”?
It’s not a very original idea, and it has been tried many times, and it failed many times.
> but then we should be even more careful who gets to the top
Right, so “for some reason only the greedy power hungry psychopaths get to the top in the current system — let’s fix it so that there can’t be many of them, only one government who has power to take away other people’s wealth and concentrate it immensely, surely we will figure out how to make sure it’s not filled with greedy power hungry psychopaths as we go”
One example:
* 300k vs 300M - it doesn't matter if I said 100M, 200M or 550M; if you think 300M is not enough for you and your family to afford anything, I'm not sure how other people are surviving on even less.
Here is why I think this is good:
1. Ambitious people will still be ambitious; it's rare that some genius kid says: I know this is a 100B idea, but I won't build it because I will only own 1B of it.
2. It limits power. When power is really limited, people will be forced to focus on different things. For example, if you had plans to take over the world by making $10T and creating an army to kidnap the president of another state you don't like, then you would know it is not possible to make 10T. It's not only about how much; it's about suppressing the hungry animal in you by capping your limits.
3. There is a chance "bad" ambitious people will be converted into real philanthropists, because they know owning more than 1B doesn't matter anyway, and they can't own it.
I can agree with that idea, to an extent. If something is near impossible (not saying this is), then it does become not worth it.
The other questions the parent posed are more interesting to me:
> How would you determine the worth of rare, illiquid or intangibles? What about wealth held in trusts or companies? How does the accounting work if I borrow against my wealth? What happens when things change value dramatically in a short period of time?
Another thing I wonder about (ignore the specifics of the values; just the concepts matter here): let's say you own a private business that then becomes valued at 1.5 billion dollars, and you have 20 million dollars liquid. How do you tax that? The government can't take one third of the business, at least not without a lot of issues (in business dealings and individual rights), and the 20 million liquid wouldn't come close to what this plan would assess. What do we do then? Plenty of billionaires don't really have liquid cash, and forcing liquidation of assets in such a way seems like it would be very difficult.
I'm all for more taxes on higher net worth individuals, but I think there's a lot of talk to be had on how one can implement this. It's going to be really difficult to find a way that makes sense.
For example, say an individual has 20M in liquid cash, 2 houses each valued at 5M, and 1.5B in company shares (based on the averaged company value over the last 6 or 12 months):
* whatever you can immediately spend is prioritised first, so you keep your 20M + 2 houses; the surplus is then $530M of your company shares
* that equivalent number of shares is moved to a government trust; the individual doesn't have any control over it, and if the person dies the next day, the government keeps the money (let's simplify for now and keep voting rights as a separate question)
* let's say that after the shares are moved to the gov. trust, the company's value halves over the next 6 months: the gov. returns all your shares; if the stock dropped only 10%, you get the equivalent back to bring your net worth up to 1B
* regarding taxation, I would keep it as it is today and tax on "realization event"
There are around 3,000 billionaires in the world; even hiring 10 dedicated people for each billionaire to calculate all this stuff on a quarterly basis is not expensive.
This is not at all to say that more conservative or reactionary theorists are wrong about how the world works. In fact, I think they're usually more right about what's really going on abstractly.
But, the working man doesn't need to know what's really going on. They need to win the war, and there's a ton of tactical advice written down—hard won lessons by those who built the modern world through the labor movement.
The place to start is with the usual suspects. Verso Books, The New Centre for Social Research, histories of the labor movement, and new political commentators like Josh Citarella.
Peer competition is what makes everything work. You need scarcity of necessities to force people in to the system. Recent rulings allowing the criminalisation of homelessness are pushing this further. Your existence is default-illegal unless you work to outbid your peers for housing.
You'll likely get used to this new thing too.
168 more comments available on Hacker News