Bitter Lessons Building AI Products
Mood: calm
Sentiment: mixed
Category: other
Key topics
The article discusses lessons learned from building AI products, sparking a discussion on the challenges and best practices in AI development and product management.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
First comment: 1h after posting
Peak period: 9 comments in Hour 3
Avg / period: 4 comments
Based on 20 loaded comments
Key moments
- 01 Story posted: Oct 10, 2025 at 10:56 PM EDT (about 2 months ago)
- 02 First comment: Oct 11, 2025 at 12:16 AM EDT (1h after posting)
- 03 Peak activity: 9 comments in Hour 3 (hottest window of the conversation)
- 04 Latest activity: Oct 11, 2025 at 5:42 AM EDT (about 2 months ago)
Now the AI is good enough. People are still saying we hit a wall. Are you guys sure?
He learned lessons about building a product with AI that was incapable. What happens when AI is so capable that it negates all these specialized products?
AI is not in a bubble. This technology will change the world. The bubble is people like this guy trying to build GUIs around AI to smooth out the rough parts, which are constantly getting better and better.
> He learned lessons about building a product with AI that was incapable. What happens when AI is so capable that it negates all these specialized products?
I don't know, ask me again in 50 years.
But you have to realize: before AI was capable of doing something like NotebookLM, nobody bought into it. And they were wrong. They failed to extrapolate.
Now that AI CAN do NotebookLM, people hold on to the same sentiment. You guys were wrong.
1. We're not all the same person, to be clear.
2. It's also not the same argument as before. It's not the same extrapolation.
3. And being right or wrong in the past has no bearing on the present.
NotebookLM doesn't need new AI. It's tool use and context. Tool use is awesome, I've been saying that for ages.
It's wrong to extrapolate we're seamlessly going to go from tool use to "AI replaces humans"
No. But you all run under the same label. This is common, if you didn't know: a certain group of people with certain beliefs can be called Republican, Democrat, or Catholic. I didn't name the label explicitly, but you are all in that group. I thought it was obvious I wasn't talking about one person. I don't think you're so stupid as to actually think that, so don't pretend you misinterpreted what I said.
>2. It's also not the same argument as before. It's not the same extrapolation.
Seems like the same argument to me: you thought LLMs were stochastic parrots, inherently and forever limited by their very nature (a statement made with no proof).
The extrapolation is the same since the dawn of AI: upwards. We may hit a wall, but nobody can know this for sure.
>3. And being right or wrong in the past has no bearing on current
It does. Past performance is a good predictor of current performance. It's also common sense; why else do we have resumes?
You were wrong before, chances are... you'll be wrong again.
>It's wrong to extrapolate we're seamlessly going to go from tool use to "AI replaces humans"
You just made this statement without any supporting evidence? It's just wrong because you say so?
This is my statement: the trendline points to an eventual future that remains an open possibility,
versus your conclusion, which is "it's wrong."
They tend to poke the free ChatGPT with ill-defined requests and come away disappointed.
The nice part is I can spend an hour or so writing specs, start 3 or 4 tasks, and come back later to review the result. It's hard to be totally objective about how much time it saves me, but it generally feels worth the $200/month.
One thing I'm not impressed by is its ability to review code changes; that's been mostly a waste of time, regardless of how good the prompt is.
If you're not using AI for 60–70 percent of your code, you are behind. And yes, $200 per month for AI is required.
That's a low bar.
The technology can change the world, and still be a bubble.
Just because neural networks are legit doesn’t mean it’s a smart decision to build $500 billion worth of datacenters.
You're dropping that line as if it's absurd. Be realistic. Dark conclusions are not automatically illogical. If the logic points to me being replaced, then that's just reality.
Right now we don't know if I (aka you) will be replaced, but trendlines point to it as a possibility.
Longer analysis:
https://youtu.be/21EYKqUsPfg?t=47m28s
To (try and) summarize those in the context of TFA: builders need to distinguish between policy optimisations and program optimisations.
I guess a related question to ask (important for both startups and Big Tech) might be: "should one focus on doing things that don't scale?"