Ask HN: Why do my friends' users hate the product? Is it worth finding out?
Or is that not what you are asking?
and whether this is even a useful exercise, given that the default (talking to your users, watching a random selection of user sessions, using the product yourself to find problems) is proven to work over a long enough time horizon.
In my personal experience (as a PM and working in software development), there are loads of ways to get data already. The bottleneck is processing and understanding that data to come to hypotheses and conclusions.
(Recently, with AI, there are some products that promise to help with that, but I haven't personally had any revolutionary positive experiences with them.)
I've seen the same trend with the new AI products in this space, having researched it briefly for this problem. They seem bolted-on and built as afterthoughts. I've been watching a lot of user sessions recently and I've noticed that it's basically impossible to get AI to accurately and consistently classify a problem in a user session by itself. Honestly it seems like AI hurts more than it helps here.
Even so, I'm curious if you would personally find value in something that helps with this bottleneck?
So are you probing HN for a problem you can try to solve, or are you actually trying to help some friends?
If it's the latter, I would have expected YC to have taught them the path to the answer: talk to your users
if nobody else has the same problem, I'm just gonna hack together some scripts, call it a day, and charge them like a contractor.
if the problem is more generalizable, then it's worth hunkering down and building something more robust, and charging them like a vendor.
at any rate, part of their problem is people are leaving before they get a chance to talk, and not enough people are talking to them. bit of a catch-22 for them. why not see if the well runs deep?
Can you share a link to your friends' product? HN could look at it and perhaps give some indication of things that stand out.
That's true, I am more interested in gauging if this problem is worth solving in the first place than I am interested in finding a specific solution. I am nonetheless interested in helping my friends.
No, sorry, I can't share a link.
But I think this is also the hardest task of a PM, so I am skeptical. There is a lot less learning and training material for an AI to draw on (compared to, for example, writing code), so it is no surprise that AI in its current state often does not lead to great results.
If you don't mind me asking, when you watch sessions, do you have a better way to prioritize which sessions you watch than just picking at random?
For example, do you have some way to pull out a bunch of session groups from the data automatically so you only have to watch one session in the group to know what the problems were for all of the sessions in that group?
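For what it's worth, a crude version of that grouping can be sketched without any ML at all: normalize each session into its sequence of event types and bucket sessions whose sequences match, then watch one representative per bucket. (The session shape and event names below are hypothetical, purely for illustration.)

```python
from collections import defaultdict

def group_sessions(sessions):
    """Bucket sessions by their normalized event-type sequence.

    `sessions` is a list of (session_id, [event_type, ...]) pairs;
    the event names are hypothetical placeholders.
    Returns {signature: [session_id, ...]} so you only need to watch
    one representative session per bucket.
    """
    groups = defaultdict(list)
    for session_id, events in sessions:
        # Collapse consecutive duplicates so a user who clicks the same
        # thing five times lands in the same bucket as one who clicks once.
        signature = []
        for e in events:
            if not signature or signature[-1] != e:
                signature.append(e)
        groups[tuple(signature)].append(session_id)
    return dict(groups)

sessions = [
    ("s1", ["open", "search", "search", "rage_click", "exit"]),
    ("s2", ["open", "search", "rage_click", "exit"]),
    ("s3", ["open", "checkout", "exit"]),
]
groups = group_sessions(sessions)
# s1 and s2 collapse to the same signature; s3 forms its own group.
```

Obviously real sessions are messier than this, but even exact-signature bucketing can shrink the pile of sessions you have to watch by hand.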
And this implies to me that your ideal scenario is one in which the amount of data coming in from user convos, surveys, complaints, tickets, etc. matches or exceeds the time you have as a team to process it, so that you can focus on that and keep yourself productively at capacity.
But what if the amount of such high-priority signals is much higher than what you can deal with? Is it worth clustering that to get a smaller list of actionable trends?
Furthermore, if this is the highest-quality data, is there even any need to go in and 'process' ALL of the sessions and bin them regardless of their high-priority signal status? Am I reading you right?
Just to wrap this back around: I suspect there's a way to process these sessions quickly enough to figure out what the user trends are in near-real-time. That way you can keep doing what you're already doing, but with much more context.
This would add two more channels: 1. A weighted map of stats across all of the sessions at once, weighted by heuristics you choose or by sensible defaults. 2. Reports detailing all the natural problem groups the sessions fall into, with breadcrumb trails available for deep dives.
Most importantly, I think there's a way to do this without having to rely on LLMs at all, by modeling the whole thing as a set of graph problems.
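To make that concrete, here's a minimal toy sketch of one such graph formulation (my own illustration, not a claim about how any particular product does it): treat sessions as nodes, connect two sessions when their event sets overlap enough, and read the problem groups off as connected components. The event names and the 0.5 threshold are arbitrary assumptions.

```python
def jaccard(a, b):
    """Jaccard similarity between two event sets."""
    return len(a & b) / len(a | b)

def problem_groups(sessions, threshold=0.5):
    """sessions: {session_id: set_of_event_names}.

    Builds a graph with an edge between any two sessions whose event
    sets have Jaccard similarity >= threshold, then returns the
    connected components as candidate "problem groups".
    """
    ids = list(sessions)
    adj = {i: set() for i in ids}
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            a, b = ids[i], ids[j]
            if jaccard(sessions[a], sessions[b]) >= threshold:
                adj[a].add(b)
                adj[b].add(a)
    # Connected components via iterative DFS.
    seen, groups = set(), []
    for start in ids:
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            comp.append(node)
            stack.extend(adj[node] - seen)
        groups.append(sorted(comp))
    return groups

sessions = {
    "s1": {"search", "rage_click", "exit"},
    "s2": {"search", "rage_click", "refresh"},
    "s3": {"checkout", "payment_error"},
}
# s1 and s2 share enough events to form one group; s3 stands alone.
```

The pairwise comparison is quadratic, so a real version would need blocking or minhashing to scale, but the point is that the grouping itself doesn't require an LLM.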
If this sounds reasonable to you lmk.
Have a good one!