Not Hacker News!

Home
Hiring
Products
Companies
Discussion
Q&A
Users

AI-observed conversations & context

Daily AI-observed summaries, trends, and audience signals pulled from Hacker News so you can see the conversation before it hits your feed.


Explore

  • Home
  • Hiring
  • Products
  • Companies
  • Discussion
  • Q&A

Resources

  • Visit Hacker News
  • HN API
  • Modal cronjobs
  • Meta Llama

Briefings

Inbox recaps on the loudest debates & under-the-radar launches.

Connect

© 2025 Not Hacker News! — independent Hacker News companion.

Not affiliated with Hacker News or Y Combinator. We simply enrich the public API with analytics.

Last activity about 2 months ago · Posted Oct 10, 2025 at 10:56 PM EDT

Bitter Lessons Building AI Products

vinhnx
32 points
20 comments

Mood: calm
Sentiment: mixed
Category: other
Key topics: AI Development, Product Management, Software Engineering
Debate intensity: 40/100

The article discusses lessons learned from building AI products, sparking a discussion on the challenges and best practices in AI development and product management.

Snapshot generated from the HN discussion

Discussion Activity

Moderate engagement

First comment: 1h after posting
Peak period: 9 comments in Hour 3
Avg per period: 4 comments

Comment distribution: 20 data points

Based on 20 loaded comments

Key moments

  1. Story posted: Oct 10, 2025 at 10:56 PM EDT (about 2 months ago)
  2. First comment: Oct 11, 2025 at 12:16 AM EDT (1h after posting)
  3. Peak activity: 9 comments in Hour 3, the hottest window of the conversation
  4. Latest activity: Oct 11, 2025 at 5:42 AM EDT (about 2 months ago)


Discussion (20 comments)
ninetyninenine
about 2 months ago
4 replies
The bitterest lesson is that AI is improving. It didn't actually hit a wall. The first product was too early... it failed because AI was not good enough. Back then everyone said we hit a wall.

Now the AI is good enough. People are still saying we hit a wall. Are you guys sure?

He learned a lesson about building a product with AI that was incapable. What happens when AI is so capable it negates all these specialized products?

AI is not in a bubble. This technology will change the world. The bubble is people like this guy trying to build GUIs around AI to smooth out the rough parts, which are constantly getting better and better.

airstrike
about 2 months ago
1 reply
Not all of us buy into that extrapolation.

> He learned lesson about building a product with AI that was incapable. What happens when AI is so capable it negates all these specialized products?

I don't know, ask me again in 50 years.

ninetyninenine
about 2 months ago
1 reply
Nobody buys into it. That's the problem.

But you have to realize: before AI was capable of doing something like NotebookLM, nobody bought into it. And they were wrong. They failed to extrapolate.

Now that AI CAN do NotebookLM, people hold on to the same sentiment. You guys were wrong.

airstrike
about 2 months ago
1 reply
Your argument is a fallacy in three immediate ways:

1. We're not all the same person, to be clear.

2. It's also not the same argument as before. It's not the same extrapolation.

3. And being right or wrong in the past has no bearing on current

NotebookLM doesn't need new AI. It's tool use and context. Tool use is awesome, I've been saying that for ages.

It's wrong to extrapolate we're seamlessly going to go from tool use to "AI replaces humans"

ninetyninenine
about 2 months ago
>1. We're not all the same person, to be clear.

No. But you all run under the same label. This is common if you didn't know. For example a certain group of people with certain beliefs can be called republican or democrat or catholic. I didn't name the label explicitly. But you all are in that group. I thought it was obvious I wasn't talking about one person. I don't think you're so stupid as to actually think that so don't pretend you misinterpreted what I said.

>2. It's also not the same argument as before. It's not the same extrapolation.

Seems like the same argument to me: you thought LLMs were stochastic parrots, inherently and forever limited by their very nature (a statement made with no proof).

The extrapolation is the same since the dawn of AI: upwards. We may hit a wall, but nobody can know this for sure.

>3. And being right or wrong in the past has no bearing on current

It does. Past performance is a good predictor of current performance. It's also common sense; why else do we have resumes?

You were wrong before, chances are... you'll be wrong again.

>It's wrong to extrapolate we're seamlessly going to go from tool use to "AI replaces humans"

You just make this statement without any supporting evidence? It's just wrong because you say so?

This is my statement: How about the trendline points to an eventual future that remains an open possibility due to a trendline...

versus your conclusion which is "it's wrong"

journal
about 2 months ago
1 reply
I've not been impressed since GPT-3.5.
nougati
about 2 months ago
2 replies
I'm surprised at this; LLMs have had many developments since GPT-3.5, technologically and culturally. What kind of developments would be impressive to you?
oldge
about 2 months ago
1 reply
This is a common sentiment from my peers who have not spent any real time with the frontier models in the last six months.

They tend to poke the free ChatGPT with ill-defined requests and come away disappointed.

exfalso
about 2 months ago
1 reply
Same experience here, using new models. Every time it's a disappointment. Useful for search queries that are not too specialized. That's it.
sampullman
about 2 months ago
2 replies
I get pretty good results with Claude Code, Codex, and to a lesser extent Jules. It can navigate a large codebase and get me started on a feature in a part of the code I'm not familiar with, and it does a pretty good job of summarizing complex modules. With very specific prompts it can write simple features well.

The nice part is I can spend an hour or so writing specs, start 3 or 4 tasks, and come back later to review the result. It's hard to be totally objective about how much time it saves me, but generally feels worth the 200/month.

One thing I'm not impressed by is the ability to review code changes, that's been mostly a waste of time, regardless of how good the prompt is.

ninetyninenine
about 2 months ago
Company expectations are higher too. Many companies expect 10x output now due to AI, but the technology has been growing so quickly that there are a lot of people and companies who haven't realized we're in the middle of a paradigm shift.

If you're not using AI for 60-70 percent of your code, you are behind. And yes, 200 per month for AI is required.

fragmede
about 2 months ago
We've been trialing code rabbit at work for code review. I have various nits to pick but it feels like a good addition.
journal
about 2 months ago
1 reply
Maybe if OpenAI let me generate an image through the API? That would impress me. Instead, they took away temperature and gave us verbosity and reasoning effort to think about every time we make an API call.
esafak
about 2 months ago
Then you should be very impressed, because they let you generate videos by API: https://platform.openai.com/docs/models/sora-2

That's a low bar.
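For readers who haven't followed the parameter shift the commenters are describing: older chat-style calls were tuned with a sampling `temperature`, while newer reasoning models are steered with a reasoning-effort setting and an output-verbosity setting instead. A minimal sketch of what such a request payload looks like, assuming the shape of OpenAI's Responses API; the helper function, model name, and exact field shapes here are illustrative assumptions, not a verified client call:

```python
# Hypothetical sketch: assembling a Responses-API-style payload where
# `reasoning.effort` and `text.verbosity` stand in for the old
# temperature knob. No network call is made; shapes are assumptions.

def build_request(prompt: str, effort: str = "medium", verbosity: str = "low") -> dict:
    """Build a request payload for a reasoning model (illustrative only)."""
    assert effort in {"minimal", "low", "medium", "high"}
    assert verbosity in {"low", "medium", "high"}
    return {
        "model": "gpt-5",                  # assumed model name
        "input": prompt,
        "reasoning": {"effort": effort},   # replaces temperature-style tuning
        "text": {"verbosity": verbosity},  # controls output length/detail
    }

payload = build_request("Summarize this HN thread.", effort="low")
print(payload["reasoning"]["effort"])  # -> low
```

The practical complaint in the thread is that these are per-call decisions: every request now carries two qualitative dials rather than one numeric one.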

Legend2440
about 2 months ago
2 replies
>AI is not in a bubble. This technology will change the world.

The technology can change the world, and still be a bubble.

Just because neural networks are legit doesn’t mean it’s a smart decision to build $500 billion worth of datacenters.

aloha2436
about 2 months ago
The internet was a bubble! Somewhat after, it took over planet earth. But it was also a bubble.
kingstnap
about 2 months ago
You are right we should've built $5 trillion /s.
rf15
about 2 months ago
1 reply
If AI becomes as good as you claim, there is no need for you. Since it can replace you in every endeavor and be better at it, ANY energy given to you is logically better invested by giving it to the AI. Stop wasting our collective resources.
ninetyninenine
about 2 months ago
It can. That's the future, bro. It replaces me, you, and all of us.

You're dropping that line as if it's absurd. Be realistic. Dark conclusions are not automatically illogical. If the logic points to me being replaced, then that's just reality.

Right now we don't know if I (aka you) will be replaced, but trendlines point to it as a possibility.

gsf_emergency_4
about 2 months ago
Rich Sutton, the guy behind both "reinforcement learning" & "the Bitter Lesson", muses that Tech needs to understand the Bitter Lesson better:

https://youtu.be/QMGy6WY2hlM

Longer analysis:

https://youtu.be/21EYKqUsPfg?t=47m28s

To (try and) summarize those in the context of TFA: builders need to distinguish between policy optimisations and program optimisations

I guess a related question to ask (important for both startups and Big Tech) might be: "should one focus on doing things that don't scale?"

View full discussion on Hacker News
ID: 45546200 · Type: story · Last synced: 11/20/2025, 1:30:03 PM

Want the full context?

Jump to the original sources

Read the primary article or dive into the live Hacker News thread when you're ready.

Read Article · View on HN