A PM's Guide to AI Agent Architecture
Posted 4 months ago · Active 4 months ago
productcurious.com · Tech story · High profile
Sentiment: skeptical / mixed · Debate · 80/100
Key topics: AI, Product Management, LLM
The article provides a guide for product managers to understand AI agent architecture, but the discussion reveals skepticism about the current state of AI technology and its applications.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 1h after posting · Peak period: 44 comments (Day 1) · Avg per period: 16
Comment distribution chart: 48 data points (based on 48 loaded comments)
Key moments
01. Story posted: Sep 4, 2025 at 12:45 PM EDT (4 months ago)
02. First comment: Sep 4, 2025 at 2:04 PM EDT (1h after posting)
03. Peak activity: 44 comments in Day 1, the hottest window of the conversation
04. Latest activity: Sep 13, 2025 at 3:49 PM EDT (4 months ago)
ID: 45129237 · Type: story · Last synced: 11/20/2025, 5:45:28 PM
With current technology (LLMs), how can an agent ever be sure about its confidence?
Calibrated Language Models Must Hallucinate
https://arxiv.org/abs/2311.14648
https://www.youtube.com/watch?v=cnoOjE_Xj5g
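For what "calibrated" means here, concretely: a model is calibrated if, among answers it gives with confidence p, roughly a fraction p are correct. A minimal sketch of how that is usually measured (expected calibration error over binned confidence scores); the (confidence, correct) pairs below are made-up illustration data, not from the paper:

```python
# Expected calibration error (ECE): bin predictions by stated confidence,
# then compare each bin's average confidence with its actual accuracy.
# A calibrated model has low ECE; the paper's argument is that staying
# calibrated on rarely-seen facts forces some confident-but-wrong outputs.

def expected_calibration_error(preds, n_bins=10):
    """preds: list of (confidence in [0, 1], answer_was_correct) pairs."""
    bins = [[] for _ in range(n_bins)]
    for conf, correct in preds:
        bins[min(int(conf * n_bins), n_bins - 1)].append((conf, correct))
    ece, total = 0.0, len(preds)
    for bucket in bins:
        if not bucket:
            continue
        avg_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = sum(1 for _, ok in bucket if ok) / len(bucket)
        ece += (len(bucket) / total) * abs(avg_conf - accuracy)
    return ece

# Made-up illustration data: (stated confidence, was the answer correct).
preds = [(0.95, True), (0.9, False), (0.6, True), (0.3, False), (0.8, True)]
print(f"ECE: {expected_calibration_error(preds):.3f}")
```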
These models are tools, LLM products bundle them with other tools, and 90% of the UX amounts to bundling them well. The article here gives a great sense of what that takes.
Ok, but can you please make your substantive points without putting others down? Your comment would be fine without this bit.
https://news.ycombinator.com/newsguidelines.html
I agree with that.
But what you originally wrote was, "The AI bundling problem is over. The user interface problem is over." It would probably make more sense to say "...will be over."
People tend to be sensitive to those kinds of claims because there's a lot of hype around all this at the moment. So when people seem to imply that what we have right now is much more capable than it actually is, there tends to be pushback.
Why do you even have KDE installed if AI has replaced GUIs?
In my book, they ideally focus on understanding scope, user needs, and how to measure success, while implementation details, such as orchestration strategies, evaluation, and generally making sure the system delivers the capabilities you want, are engineering responsibilities.
In short: nice industry roadmap, but we are nowhere near robust, trustworthy multi-agent systems yet.
Even assuming you've correctly auth'd the user contacting you (big assumption!), allowing that user to very literally prompt a 'semi-confident thing with tools' - however many layers of abstraction away the tool is - feels very, very far away from a real-world, sensible implementation right now.
Just shoot the tool prompts over to a human operator, if it's so necessary! Sense-check!
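A minimal sketch of that sense-check pattern, purely as an illustration: the agent only proposes tool calls, and nothing executes until a human operator approves. The tool names and the propose_tool_calls stub are hypothetical stand-ins for whatever your agent framework emits:

```python
# Human-in-the-loop gate: the agent only *proposes* tool calls; a human
# operator approves or rejects each one before anything runs.
# TOOLS and propose_tool_calls are hypothetical stand-ins.

TOOLS = {
    "refund_order": lambda order_id: f"refunded {order_id}",
    "reset_password": lambda user_id: f"reset link sent to {user_id}",
}

def propose_tool_calls(user_message: str) -> list[dict]:
    """Stand-in for the LLM step that turns a message into tool-call requests."""
    return [{"tool": "refund_order", "args": {"order_id": "A-1234"}}]

def run_with_approval(user_message: str) -> list[str]:
    results = []
    for call in propose_tool_calls(user_message):
        print(f"Agent wants to run {call['tool']}({call['args']})")
        if input("Approve? [y/N] ").strip().lower() == "y":
            results.append(TOOLS[call["tool"]](**call["args"]))
        else:
            results.append(f"{call['tool']}: rejected by operator")
    return results

print(run_with_approval("I was double charged, please refund order A-1234"))
```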
My view is that you need to transition slowly and carefully to AI first customer support.
1. Know the scope of problems an AI can solve with high probability. Related prompt: "You can ONLY help with the following issues."
2. Escalate to a human immediately if it's out of scope: "If you cannot help, escalate to a human immediately by CCing bob@smallbiz.co"
3. Have an "unlocked agent" that your customer service person can use to answer a question and evaluate how well the agent performs in helping. Use this to drive your development roadmap.
4. If the "unlocked agent" becomes good at solving a problem, add that to the in-scope solutions.
Finally, you should probably have some way to test existing conversations when you make changes. (It's on my TODO list)
I've implemented this for a few small businesses, and the process is so seamless that no one has suspected they were talking to an AI. For one client, there's not even a visible escalation step: they get pinged on their phone and take over the chat!
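A sketch of how steps 1 and 2 above might look in code, assuming a generic chat-completion backend; call_llm and notify_human are hypothetical stand-ins, not any particular library's API:

```python
# Scoped support agent: the system prompt pins the agent to a known-good
# problem list (step 1) and forces an explicit escalation path (step 2).
# call_llm and notify_human are hypothetical stand-ins for your stack.

IN_SCOPE = ["password resets", "order status lookups", "billing updates"]
SCOPE_LIST = ", ".join(IN_SCOPE)

SYSTEM_PROMPT = (
    "You are a customer support agent. "
    f"You can ONLY help with the following issues: {SCOPE_LIST}. "
    "If you cannot help, reply with exactly ESCALATE and nothing else."
)

def call_llm(system: str, user: str) -> str:
    """Stand-in for a chat-completion call; always escalates in this sketch."""
    return "ESCALATE"

def notify_human(conversation: str) -> str:
    """Stand-in for the escalation hook, e.g. CCing a real support inbox."""
    return "A teammate has been looped in and will reply shortly."

def handle_message(user_message: str) -> str:
    reply = call_llm(SYSTEM_PROMPT, user_message)
    return notify_human(user_message) if reply.strip() == "ESCALATE" else reply

print(handle_message("My parcel never arrived and the courier lost it"))
```

The "unlocked agent" of steps 3 and 4 would be the same loop without the scope restriction in the prompt, run only in front of a human agent rather than a customer.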
It's pretty simple: when a non-tech person sees faked demos of what it can do, it looks epic, and everyone extrapolates the results and thinks AI is that good.
LLMs' ability to give convincing-sounding answers is like catnip for service desk managers who have never actually been on the desk itself.
Using GenAI is a huge breakthrough in this field, because it is a socially acceptable way to tell someone you don't care about their issue.
The purpose has been achieved, in that there is a large drop-off rate. The product manager has met their goals, cut costs, and might be looking forward to their bonus.
It would be far more expensive to make the LLM behave effectively than it would be to do nothing. Any product manager that sincerely cared about customer support wouldn't be inflicting a personalised callous disregard for service. Instead they'd be focusing on improving documentation, help, and processes. But that's not innately quantifiable in a way that leads to bonuses, and therefore goes unnoticed.
'Routing through increasingly specialised agents' was my approach, and the only thing that would've done the job (in MVP form) at the time. There weren't many models that would fit our (v good) CS & Product teams' dataset of "probable queries from customers" into a single context window.
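A minimal sketch of that routing approach, with hypothetical names: a cheap classifier step picks a branch, and each specialised agent carries only the prompt and context for its slice of query types, so no single call has to fit the whole query taxonomy:

```python
# Router over specialised agents: a classifier picks a branch, and each
# branch carries only the prompt/context for its slice of query types,
# so no single call needs the full dataset of probable queries.
# classify is a hypothetical stand-in; in practice it could itself be
# a small LLM call, or several layers of them for an intricate tree.

SPECIALISTS = {
    "billing": "You handle billing and refund questions. Context: ...",
    "shipping": "You handle delivery and tracking questions. Context: ...",
    "account": "You handle login and account questions. Context: ...",
}

def classify(query: str) -> str:
    """Keyword stand-in for the classification step."""
    keywords = {"refund": "billing", "charge": "billing",
                "parcel": "shipping", "tracking": "shipping"}
    for word, branch in keywords.items():
        if word in query.lower():
            return branch
    return "account"

def route(query: str) -> str:
    branch = classify(query)
    # Only the chosen specialist's prompt accompanies the query downstream.
    return f"[{branch} agent] would answer: {query!r}"

print(route("Where is my parcel? Tracking says it's stuck"))
```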
I never personally got my MVP beyond sitting with it beside the customer support inbox, talking to customers. And AFAIK it never moved beyond that after I left.
Nor should it have been, probably - there are (wild, & mostly ineffable) trade-offs that you make the moment you stop actually talking to users at the very moment they get in touch. I don't remember ever making a trade-off like that where it was worthwhile.
I _do_ remember it as perhaps the most worthwhile time I ever spent doing product-y work.
I say that because: To consider a customer support query type that might be 0.005% of all queries received by the CS team, even my trash MVP had to walk a path down a pretty intricate tree of agents and possible query types.
So, if you believe that 'solving the problems users have with your product' = 'making a better product', then talking to an LLM that was an advocate for a tiny subset of users, and that knew very intimately the details of their issue with your product, felt really good. It felt like a very pure version of what _I_ should be to devs, as any kind of interface between them and our users.
It was very hard to stay a believer in the idea of a 'PM' after seeing that, at least. As a person who preferred to just let people get on with things.
I enjoyed the linked post; it's really interesting to see how far things have come. I'm surprised nobody has built 'talk to your customers at scale', yet - this feels like a far more interesting problem than 'avoid talking to your customers at scale'.
I'm also not surprised, I guess, since it's an incredibly bespoke job to do properly, I imagine, for most products.
This sounds hard to pull off in a very similar way to getting good data through surveys.
I generally don't want to talk to my tools. If I'm motivated to talk to you, it's probably because something went wrong. And even if I talked to you when not annoyed, I'd struggle to articulate more than "it's working good" at any given moment, when what you really want as a product person is to know "it's working good, but I had to internalize a workaround for my use case that I don't even think about now, though originally I found it off-putting and almost bounced because of it" or whatever.
I get the feeling there's going to be either 1) a great revert of the features, 2) a bunch of hurried patches, or 3) a bunch of legacy systems operating on MCP v0.00-beta (metaphorically speaking)
:lol_sob:
Far better to focus on enhancing human capabilities with agents.
For example while a human talks to a customer on the phone, AI is fetching useful context about the customer and suggesting talking points to improve the human conversation.
One example of a direct benefit for businesses using AI this way is reducing onboarding times for new employees.
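A sketch of that agent-assist pattern, with hypothetical helpers: while the rep talks to the customer, a background step pulls CRM context and drafts rep-facing talking points; the AI never speaks to the customer directly:

```python
# Agent-assist: the AI augments the human rep rather than replacing them.
# As a call connects, a background step fetches customer context and
# drafts rep-facing talking points. Both helpers are hypothetical stubs.

from dataclasses import dataclass

@dataclass
class CustomerContext:
    name: str
    plan: str
    open_tickets: list

def fetch_customer(phone_number: str) -> CustomerContext:
    """Stand-in for a CRM lookup keyed on the caller's number."""
    return CustomerContext("Dana", "Pro", ["Ticket #88: invoice mismatch"])

def suggest_talking_points(ctx: CustomerContext) -> list:
    """Stand-in for an LLM call that turns context into suggestions."""
    points = [f"Greet {ctx.name}; they are on the {ctx.plan} plan."]
    points += [f"Acknowledge open issue: {t}" for t in ctx.open_tickets]
    return points

# Shown on the rep's screen as the call connects, never to the customer.
for point in suggest_talking_points(fetch_customer("+1-555-0100")):
    print("-", point)
```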
14 more comments available on Hacker News