Parallel AI Agents Are a Game Changer
Posted 4 months ago · Active 4 months ago
morningcoffee.io · Tech story
controversial / mixed · Debate: 80/100
Key topics
AI Coding Agents
Parallel Processing
Software Development
The article discusses the potential benefits of using parallel AI agents in software development, but the discussion is divided on the practicality and effectiveness of the approach.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 34m after posting
- Peak period: 70 comments in the 0-6h window
- Avg / period: 16.2 comments
- Comment distribution: 81 data points (based on 81 loaded comments)
Key moments
1. Story posted: Sep 2, 2025 at 6:44 PM EDT (4 months ago)
2. First comment: Sep 2, 2025 at 7:19 PM EDT (34m after posting)
3. Peak activity: 70 comments in the 0-6h window, the hottest stretch of the conversation
4. Latest activity: Sep 5, 2025 at 8:29 AM EDT (4 months ago)
ID: 45110075 · Type: story · Last synced: 11/20/2025, 2:27:16 PM
LLMs write so much code in such a short time that the bottleneck is already the human having to review, correct, rewrite.
Parallel agents working on different parts of the application only compound this problem; it's impossible to catch up.
The only far-fetched use case I can see is swarming hundreds of solutions against properly designed test cases and spec documents, and having an agent select the best solutions.
Still, I'm quite convinced humans would be the bottleneck.
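For what it's worth, a minimal sketch of that swarm-and-select idea, assuming the project's tests run with plain pytest; the repo path and the agent call are placeholders, not any particular tool's API:

```python
import shutil
import subprocess
import tempfile

def generate_candidate(spec: str, workdir: str) -> None:
    """Placeholder: ask an agent (whatever CLI or API you use) to implement `spec` in `workdir`."""
    raise NotImplementedError

def passes_tests(workdir: str) -> bool:
    # Assumption: the project's test suite runs with plain `pytest`.
    return subprocess.run(["pytest", "-q"], cwd=workdir).returncode == 0

def swarm(spec: str, repo: str, n: int = 100) -> list[str]:
    """Generate n candidate solutions and keep only the ones that pass the tests."""
    survivors = []
    for i in range(n):
        workdir = tempfile.mkdtemp(prefix=f"candidate-{i}-")
        shutil.copytree(repo, workdir, dirs_exist_ok=True)  # fresh copy per candidate
        generate_candidate(spec, workdir)
        if passes_tests(workdir):
            survivors.append(workdir)
    # A judge agent (or the human bottleneck) still has to pick among the survivors.
    return survivors
```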
The sweet spot for me tends to be running one of these slower projects on a worktree in the background, and one more active coding project.
https://www.claudelog.com/mechanics/you-are-the-main-thread/
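A minimal sketch of that background-worktree setup, assuming `git worktree`; the path and branch name are made up:

```python
import subprocess

# Give the slow background project its own working directory on its own
# branch, while the main checkout stays free for the active coding project.
subprocess.run(
    ["git", "worktree", "add", "../app-background", "-b", "agent/slow-project"],
    check=True,
)
# Point the background agent session at ../app-background; clean up later
# with `git worktree remove ../app-background`.
```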
This seems like an important caveat the author of the article failed to mention when they described this:
> you can have several agents running simultaneously - one building a user interface, another writing API endpoints, and a third creating database schemas.
If these are all in the same project, then there has to be some required ordering, or you get a frontend written against backend endpoints that don't exist, and a backend written against a different database schema than the separately generated one.
If we're going to say "who cares, with LLMs we'll never need 20-year-old codebases, we'll just keep writing new stuff", then OK, you do you.
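A toy sketch of that ordering constraint, treating the three agent tasks from the quote as a dependency graph; the task names are invented for illustration:

```python
from graphlib import TopologicalSorter

# The schema has to exist before the API that uses it, and the API before
# the UI that calls it; only tasks with no unmet dependencies can actually
# run in parallel.
tasks = {
    "database-schema": set(),
    "api-endpoints": {"database-schema"},
    "user-interface": {"api-endpoints"},
}

for task in TopologicalSorter(tasks).static_order():
    print("kick off agent for:", task)
```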
Sure would be great if AI agents could learn from conversations. That would really make things better. I tell Claude to capture things in the Claude.md file, but I have to manually tend to that quite a lot.
> 1. Prepare issues with sufficient context
> Start by ensuring each GitHub issue contains enough context for agents to understand what needs to be built and how it integrates with the system. This might include details about feature behavior, file locations, database structure, or specific requirements such as displaying certain fields or handling edge cases.
> You can’t do half-hearted prompts and fix as you go, because those fixes come an hour later.
> Skills That Become More Important
> Full-stack understanding
> Problem decomposition
> Good writing skills
> QA and Code Review skills
This is just software engineering?!?
edit: On the other hand, maybe I can convince people in my org to get better at software engineering by telling them it's for the AI to work better.
Absolutely. The existence of vibe coding does not mean production code is going to be engineered without the same principles we've always used - even if we're using AI to generate a lot more code than before.
Any crowd suggesting that this was not the case has lost the plot, imo.
When the AI does it, it’s being polite and stuff. /s, kinda.
/s but not really?
I wonder how much better humans would be at generating code given the same abundance of clearly-written design documentation?
I judge which decisions I make and which ones I bring up to my team/PO/whatever. Most of the time I just do what I think is best; sometimes I'll do something and then bring it up later, like "I did this this way but if that doesn't work I can change it", typically for things that will be easy to change later. Some things I ask about before I do them because they won't be easy to change later.
I'll often take technical liberties with frontend designs. For example, I'll use an HTML select rather than reinventing the drop-down just to be able to put rounded corners on the options. I'll style scrollbars using the very limited options CSS provides rather than reinvent scrollbars just to follow the design exactly. Most of the time nobody cares, and we can always go back later and do these types of things if we really want a certain aesthetic.
I have never had the impression that my questions bother people, rather the opposite. I've had multiple designers say they appreciate the way I interact with them, I respect their work and their designs but I ask them if something looks like an oversight or I'm not exactly sure what their intention is. POs and such are always happy to answer simple questions, I make it easy for them: here's a decision we need to make, I want you to make it. Maybe I have a suggestion for what I would prefer and some reasons why I prefer that solution.
I don't expect them to think of everything and answer all my potential questions in advance, that's just unnecessary and difficult work.
Indeed yes. Although most places have been shipping software in a "software development" and/or "programming" fashion for many years.
Many, many places certainly do not do the engineering part, even though the resulting product is software.
Really good engineering practices are fundamental to get the most out of AI tooling. Convoluted documentation, code fragmentation, etc all pollute context when working with AI tools.
In my experience, just having one outdated word (especially if it's a library name) anywhere in code or documentation can create major ongoing headaches.
The worst part of it is trying to avoid negative assertions. When the AI tooling keeps trying to do "the wrong thing", it's sometimes a challenge to rephrase the instruction so that "the right thing" is framed as a positive assertion.
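A toy illustration of that rephrasing; the module and class names below are invented:

```python
# The negative assertion keeps the unwanted name in the agent's context and
# invites it to keep reaching for it; the positive version names only the
# behavior you actually want.
NEGATIVE_RULE = "Never use the deprecated `legacy_http` module for API calls."
POSITIVE_RULE = "Make all outbound API calls through `ApiClient` in `net/client.py`."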
Two hours of Web Dev Cody.
If somebody producing code of that low quality worked with me, I could see myself spilling coffee or acid on them or their laptop.
We gotta stop the endless production of characters, at least for a bit, so we and the git servers get some time to breathe free from slop. A tactical coffee spill is a small price to pay.
I (author) sometimes stream my work here as well https://www.youtube.com/@operatelybackstage.
Briefly mentioned in the article, but async agents really thrive on small, scoped issues. Imagine hooking them up to your feedback tool (e.g. Canny) and automatically having a PR ready as you review the customer feedback. This would likely not work for large asks, but for smaller asks you can just accept the PR and ship it really fast!
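A rough sketch of what that hookup could look like; the categories, field names, and the `create_issue` helper are all hypothetical:

```python
# Small, well-scoped feedback items become "agent-ready" issues; an async
# agent picks them up and opens a PR, while large asks still go through
# normal planning.
SMALL_SCOPE = {"typo", "copy-change", "small-ui-tweak"}

def create_issue(title: str, body: str, label: str) -> None:
    """Placeholder for your issue tracker's API (e.g. GitHub's REST API)."""
    raise NotImplementedError

def handle_feedback(item: dict) -> None:
    if item.get("category") not in SMALL_SCOPE:
        return  # too big for an unattended agent run
    body = (
        f"Customer feedback: {item['title']}\n\n"
        f"{item['details']}\n\n"
        "Scope: single small PR, include tests."
    )
    create_issue(title=item["title"], body=body, label="agent-ready")
```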
If people were to actually read beyond the first sentence, it would become clear very quickly that this was meant to be tongue in cheek.
(1) I feel like most people call these async agents, though maybe "parallel" is the term that will stick.
(2) Async is great for reasons other than concurrent execution.
(3) Concurrent execution is tricky, at least for tightly defined projects, because the PRs will step on each other, and (maybe this is just me) I would rather rewrite an entire project than try to pick through a complicated merge conflict.
The idea of having multiple parallel agents merge pull requests or resolve issues in parallel is still just an idea.
Please don’t post or upvote attention seeking crap like this. It gives a very exciting and promising technology a bad name.
And even if OP also can't, this is a good place to discuss possible problems and solutions for parallel development using coding agents.
Please refrain from gatekeeping.
A quote from the post. No, I think my post is calibrated quite well considering what OP's post does to our industry.
It can mean, for example, that 2 agents worked for some time through a list of 20 TODO features and produced 20 PRs to be reviewed. They could even have worked overnight.
You're seemingly judging from the least generous interpretation, which is not constructive and is also against HN guidelines fyi.
Even if parallel agents are not something easily done currently, debating ways to do it is constructive enough for me.
9-10am: I comb through our issue list, figuring out which issues are well defined and which need more input or design decisions. => I pick a handful, let's say 10, that I kick off to run in the background, and let's say another 10 for further specification.
10am-2pm: I tinker with the 10 issues to figure out the exact specs and to expand the requirement list.
2pm-6pm: I review the code written by the agents, one by one. I kick off further work on things that need more input, or merge things that look good.
I say this as someone who uses them every day for programming and is also excited about the current and future possibilities. The blatant lying needs to stop, though, and needs to be called out.
Because in real life, one agent tries to fix a build issue with rm -rf node_modules while the other is already running a server (i.e. an npm server), and they conflict with each other nearly all the time. (Even if it's not a destructive action, the second npm server will most likely fail due to port-allocation conflicts!)
Meanwhile, what I found helpful is:
1. Clone the same repo twice or three times.
2. In each terminal or whatever, `cd` into it.
3. Create a branch, run your ~commands~ prompts (each is its own session with its own repo).
4. Commit/push, then merge/rebase (also resolve conflicts if needed; use the LLM again if you want).
Any other way multiple agents work in harmony in a single repo (filesystem/directory) at the same time is a pipe-dream with the current status of the MCP and agents.
Let alone being aware of each other (agents), they don't even have a proper locking mechanism. As soon as you make a new change, most of the editing (search/replace) functionality in the MCPs fails miserably. Then they re-read the entire file, just creating context-rot or over-filling the context with already-existing stuff. Soon you run out of tokens (or just pay extra for no reason).
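A rough sketch of that clone-per-agent workflow; the repo URL, task names, and the agent launch step are placeholders:

```python
import pathlib
import subprocess

REPO = "git@github.com:example/app.git"  # placeholder repo
TASKS = ["fix-login-redirect", "add-csv-export", "bump-node-deps"]  # placeholder tasks

# One clone and one branch per agent session, so concurrent edits never
# touch the same working directory (or the same node_modules).
for task in TASKS:
    workdir = pathlib.Path(f"../agent-{task}")
    subprocess.run(["git", "clone", REPO, str(workdir)], check=True)
    subprocess.run(["git", "-C", str(workdir), "checkout", "-b", task], check=True)
    # Launch your agent session inside `workdir` here (separate terminal,
    # tmux pane, etc.); merge/rebase the branches afterwards.
```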
> edit: comments mentioned that each agent runs in a VM isolated from others, kinda makes sense but still, there will be massive merge-conflicts unless each agent runs in a completely different set of service/code-base (ie frontend vs backend or couple of micro-services)
---
> Any other way multiple agents work in harmony in a single repo (filesystem/directory) at the same time is a pipe-dream with the current status of the MCP and agents.
Every agent runs in a separate VM on GitHub.
> Let alone being aware of each other (agents), they don't even have a proper locking mechanism.
Never claimed this. Feels like a strawman argument.
Or a UML tool that generates code?
I mean, they might be changing the game into producing more hard-to-maintain software, faster, but if that is the game you are playing, I don't wanna participate.
Has anyone set up a local-only autonomous agent, using an open-source model from somewhere like Hugging Face?
Still a bit confused on the details of implementing the technique. Would appreciate any explanations (thanks in advance).
PS: I have no affiliation with Parallel the company
Comes across as someone who just wants to shill for AI for some reason.
The reason for having dependencies in the same repo as the trunk of the project code is precisely that the dependencies aren't sufficiently generic: they are too dependent on the project's business domain, and so they require constant maintenance.
This tight coupling means that the agent requires more context to solve problems and implement simple features. The need for more context is a problem for agents, not a benefit. Agents benefit from modularization, loose coupling and well-chosen abstractions. These attributes do not correspond to the kinds of complex, tightly integrated codebases which benefit from having a monorepo structure.
Dependencies should be like tools. If you think of a hammer, you can do a lot of different jobs with the same hammer... You can debate whether or not a hammer is the right tool for any given job, but for those jobs where a hammer is the right tool, how often do you need to tweak the hammer itself? A hammer solves a very specific problem but that problem can be generalized to countless different use cases. A hammer would make a good module.
Now if you did a project for a candle factory and let the project's business domain leak into the design of your tools/modules, you may build a hammer out of wax to straighten out candles... Then in your next project, building a house, you will find that this hammer doesn't work for that case and needs to be modified. This is a failure of separation of concerns. The tool was not originally optimized for the specific task of applying blunt force to a limited area; it couldn't do that narrow job very well, and that's why it needs to be changed. Had you built a hammer out of steel, it would likely have solved both problems even though it's a completely different use case.
The idea of having multiple instances working in parallel sounds like a nightmare to me. I find this technology works best when it is guided, ideally in real time.