Say Farewell to the AI Bubble, and Get Ready for the Crash
Posted 5 months ago · Active 5 months ago
latimes.com · Tech · story · High profile
Sentiment: skeptical/mixed · Debate · 80/100
Key topics: AI Bubble, GPT-5, Tech Industry Trends
The article claims the AI bubble is bursting, citing the disappointing release of GPT-5, while commenters debate the validity of this assertion and discuss the future of AI development.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 12m after posting
Peak period: 76 comments in 0-2h
Avg per period: 9.5
Comment distribution: 104 data points (based on 104 loaded comments)
Key moments
- 01 Story posted: Aug 20, 2025 at 2:12 PM EDT (5 months ago)
- 02 First comment: Aug 20, 2025 at 2:24 PM EDT (12m after posting)
- 03 Peak activity: 76 comments in the 0-2h window, the hottest stretch of the conversation
- 04 Latest activity: Aug 21, 2025 at 8:51 PM EDT (5 months ago)
ID: 44964548 · Type: story · Last synced: 11/20/2025, 6:24:41 PM
Are we there yet?
/s
Nobody has said they're simple databases, they would obviously be complex databases.
In engineering school, it was easy to spot the professors who taught this way. I avoided them like the plague.
Those talking heads haven't had to mea culpa for: hype about Hadoop, hype about blockchain, hype about no-code, hype about the previous AI bubble, hype about "agile", hype about whatever JS framework is popular this week, etc.
you could say "it's just matrix multiplication"; but then quantum mechanics (and thus chemistry, biology, and everything on top of that) is just linear algebra
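The "just matrix multiplication" quip can be made concrete. Here is a toy sketch (plain NumPy, with made-up shapes, not any particular model's architecture) of a single attention step, which really does reduce to a few matmuls plus a softmax:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(X, Wq, Wk, Wv):
    # Every step here is an ordinary matrix multiplication.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
    return scores @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                      # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = attention(X, Wq, Wk, Wv)
print(out.shape)                                 # (4, 8)
```

Which is of course the commenter's point: "just linear algebra" describes the mechanism, not the emergent behavior.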
There are a lot of businesses discovering its benefits now; companies will continue building things on top of it.
Also, what was the deal with all those mysterious Star Wars pictures?
So I assume he thought hype would work again, but people are beginning to scrutinize the real capabilities of "AI".
[1] https://www.wsj.com/tech/ai/sam-altman-seeks-trillions-of-do...
But they are clearly on their way to build 20 data centers[1]. OpenAI raising $500B over 10-15 years to build inference capacity isn’t really that hard to believe or that impressive at this point tbh. Like that could just be venture debt that is constantly serviced.
[1]: https://builtin.com/articles/stargate-project
And he's not the only one, a handful of companies are well aware that we're nearing the peak of a hype cycle and making sure they can survive the burst.
Crashes come when there was no real business value.
I use AI all day and I’m sure I’m not the only one.
Not so fun.
https://m.youtube.com/watch?v=1H3xQaf7BFI
State of AI in Business 2025 [pdf] - https://news.ycombinator.com/item?id=44941374 - August 2025
https://web.archive.org/web/20250818145714/https://nanda.med...
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return. The outcomes are so starkly divided across both buyers (enterprises, mid-market, SMBs) and builders (startups, vendors, consultancies) that we call it the GenAI Divide. Just 5% of integrated AI pilots are extracting millions in value, while the vast majority remain stuck with no measurable P&L impact. This divide does not seem to be driven by model quality or regulation, but seems to be determined by approach.
https://venturebeat.com/ai/why-do-87-of-data-science-project...
A lot of AI investment right now is hinged on promises of "AGI" that are failing to materialize, and models themselves are seeing diminishing returns as we throw more hardware at them.
that didn't stop the housing bubble in the 2000s.
likewise, if I argue that Dutch "Tulip mania" [0] was a bubble, "but tulips are pretty" is not an effective counter-argument. tulips being pretty was a necessary precondition for the bubble to form.
the existence of a foo bubble does not mean that foo has zero value - it means that the real-world usefulness of foo has become untethered from market perceptions of its monetary value.
0: https://en.wikipedia.org/wiki/Tulip_mania
> There's 2 AI conversations on HN occurring simultaneously.
> Convo A: Is AI actually reasoning? does it have a world model? etc..
> Convo B: Is it good enough right now? (for X, Y, or Z workflow)
The internet reshaped the entire global economy, yet the dot com crash occurred all the same.
Convo A leads to questioning whether the insane money being poured into AI makes sense. The fact that many people are finding utility doesn't preclude things from being overvalued and overhyped.
a) AI is an extremely useful productivity tool to accomplish tasks that other programming paradigms can't do.
b) Investment in AI is disproportionate to the impact of (a), leading to a low probability of sufficient ROI.
You fall into all-or-nothing logic. That's a thinking failure.
If the real business value is 10% of the price, there will be a massive crash and years of slow advance.
The dot-com bust was like that. The internet clearly had value, but not as much and not as quickly as people thought.
Evidence is emerging that the former could be twenty times the latter, or more.
The value you perceive has been much, much more expensive than investors would like, I suspect.
Indeed. That's why we don't have trains or the internet anymore; once they had their big crashes we knew there was no business value, so they went away.
... I mean, what? You generally can't get a big bubble without _some_ business value, so bursting bubbles almost always have _something_ behind them (the crypto one may be the exception).
Even if AI valuations have a sharp correction, there will still be a great need—and demand—for compute.
For example, Nouriel Roubini calling out the risks of the 2008 recession before it happened, Michael Pettis calling out the risks of a real estate balance sheet crisis in China before Evergrande happened, and Arvind Subramanian calling out the risks of a shadow bank crisis in India before the IL&FS collapse in 2018.
For AI/ML, I'd tend to trust Emily Bender, given her background in NLP, the field from which LLMs originated.
Both of them give off "influencer" vibes. They're meaningless without more context. We used to just call people "experts", but now that's an arbitrarily bad word.
Bubbles are a lot easier to visualise from the outside.
The employment numbers, the inflation numbers, government austerity, the gpt-5 disappointment... the valuations are all more like meme stocks and not based on reality.
If enough articles about the crash start appearing, and enough people believe the crash is coming, then congratulations: the crash will occur.
gpt5 has always been about making a "collection of models" work together and not about model++. This was announced what, a year ago? And they delivered. Capabilities ~90-110% of their top tier old models at 4-6x lower price. That's insane!
gpt5-mini is insane for its price, in agentic coding. I've had great sessions with it, at 0.x$ / session. It can do things that claude 3.5/3.7 couldn't do ~6 months ago, at 10-15x the price. Whatever RL they did is working wonders.
It's an op-ed. It's supposed to be biased.
One way I leverage opinion pieces for things with which I disagree, is to treat it as a sort of "devil's advocate". What argument are they making? Is that really the strongest one they have? Does my understanding of that domain effectively counter those arguments? etc.
In this case, the main argument is that ChatGPT is not the miraculous genie it was hyped up to be. That's a fair statement, but extrapolating that into "the AI bubble is crashing now" overlooks a host of other facts about its usefulness. Yes, we'll eventually hit the trough of disillusionment, but I don't think we're there yet.
That is revisionist history. Look at Altman's hype statements in the weeks and months leading up to gpt5, some of which were quoted in the article. He never proposed gpt5 as what you're saying and indeed he claimed a massive leap in model performance.
> After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.
> In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.
6 months ago.
There's also another one, earlier that says gpt5 will be about routing things to the appropriate model, and not necessarily a new model per se. Could have been in a podcast. Anyway, receipts have been posted :)
No, it wasn’t. Have you read and listened to Altman’s hype around GPT-5 from a year ago? They changed the narrative after the 4.1 flop, which they thought would be GPT-5, and it seems some people fell for it.
> Capabilities ~90-110% of their top tier old models at 4-6x lower price
Maybe they finally implemented the DeepSeek paper.
I replied below in this thread with the specific post, 6 months ago.
> After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks.
> In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model.
Obviously it's not.
1. https://lexfridman.com/sam-altman-2-transcript/
This is hard to quantify exactly since very few benchmarks have the kind of scale where comparing two deltas would be meaningful. But if we pick the Artificial Analysis composite score[0] as the baseline, GPT-3.5 Turbo was at 11, GPT-4 at 25, and GPT-5 at 69. It's just that most of the post-GPT-4 improvement was with o1 and o3.
Feels like a pretty fair statement.
[0] https://artificialanalysis.ai/#frontier-language-model-intel...
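Taking the commenter's numbers at face value, the GPT-4 to GPT-5 delta is actually larger in absolute terms than the GPT-3.5 to GPT-4 one; a trivial check:

```python
# Composite scores as quoted in the comment above (Artificial Analysis baseline).
scores = {"GPT-3.5 Turbo": 11, "GPT-4": 25, "GPT-5": 69}

names = list(scores)
for a, b in zip(names, names[1:]):
    print(f"{a} -> {b}: +{scores[b] - scores[a]}")
# GPT-3.5 Turbo -> GPT-4: +14
# GPT-4 -> GPT-5: +44
```

Whether a composite score behaves linearly enough for that comparison to mean anything is exactly the caveat the commenter raises.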
OpenAI's CEO says he's scared of GPT-5
https://www.techradar.com/ai-platforms-assistants/chatgpt/op...
Sam Altman Compares OpenAI To The Manhattan Project—And He's Not Joking About the Risks
https://finance.yahoo.com/news/sam-altman-compares-openai-ma...
This is Altman after the release:
Sam Altman says ‘yes,’ AI is in a bubble
https://www.theverge.com/ai-artificial-intelligence/759965/s...
Source? Others are calling out this as being incorrect, so a source would help people evaluate this claim. Personally I'm much more likely to believe that AI companies are moving the goalposts rather than making significant leaps forward.
It's only "bias" when you don't like it.
That rings true and I suspect the bubble won't burst until something else comes along to steal the show.
One could argue the same was true of blockchain until AI came along.
As it happens, LLMs work comparatively well with code. Is this because code does not refer (much) to the outside world and fits well with the workings of a statistical machine? In that case, the LLM's output can also be verified more easily by expert inspection, compiling, typechecking, linting, and running. Although there might be hidden bugs that only show up later.
what else would they be good at
also
>ChatGPT is already the fifth biggest website in the world, according to Altman, and he plans for it to leapfrog Instagram and Facebook to become the third, though he acknowledged: “For ChatGPT to be bigger than Google, that’s really hard.”
>“We have better models, and we just can’t offer them, because we don’t have the capacity,” he said. GPUs remain in short supply, limiting the company’s ability to scale.
So Altman wants to keep building. Whether investors will pay up for that remains to be seen I guess.
Anyone looked at buying S&P sector-specific ETFs? For people who want to keep their portfolio spread as widely as possible, but are frightened by how tech-heavy the S&P index is, these seem a good option. But they all seem to have high costs (the first one I pulled up is 0.39%).
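A 0.39% expense ratio sounds small but compounds over a holding period. A quick back-of-envelope (hypothetical numbers: $10,000 invested, 7% gross annual return, 20-year horizon, fee modeled as a simple drag on the annual return) shows the difference versus a cheap broad-market fund:

```python
def final_value(principal, gross_return, expense_ratio, years):
    # Model the fee as reducing the effective annual return.
    return principal * (1 + gross_return - expense_ratio) ** years

cheap = final_value(10_000, 0.07, 0.0003, 20)   # broad index fund at 0.03%
sector = final_value(10_000, 0.07, 0.0039, 20)  # sector ETF at 0.39%
print(round(cheap - sector))                    # fee drag in dollars
```

On these assumptions the drag works out to roughly $2,500, so the ratio is worth weighing against the diversification benefit.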
If the answer is "no" for all above, then you should expect some bubble to keep going. At most, they will change the bubble subject.
Nvidia will be the Cisco of this era. Cisco was the world's most valuable company when the dot-com bubble peaked, then went down almost 90% in 2 years. There was lots of "dark fiber" all around (fiber optic cable already installed but not used).
I think OpenAI and most small AI companies go down. Microsoft, Google, Meta scale down, write down losses but keep going and don't stop research.
I hope the AI bubble leaves behind massive amounts of cloud compute that companies are forced to sell at the price of electricity and upkeep for years. New startups with new ideas can build on it.
Investors will feel poor, crypto market will crash and so on.
"What I had not realized," Weizenbaum wrote in 1976, "is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people." Weizenbaum warned that the "reckless anthropomorphization of the computer" - that is, treating it as some sort of thinking companion - produced a "simpleminded view of intelligence."
https://www.theguardian.com/technology/2023/jul/25/joseph-we...
Weizenbaum's 1976 book: https://news.ycombinator.com/item?id=36875958
HN commenter rates this "greatest tech book of all-time":
https://news.ycombinator.com/item?id=36592209
If you ask it to list all 50 states or all US presidents it does it no problem. Asking it to generate the text of the answer in an image is a piss poor way of testing a language model.
I heavily dislike GPT-5 but at least have a fair review of it.
Overstates things a bit. It seems unlikely OpenAI will release human level AI in the next year or two, but the march of AI improving goes on.
Also re the AI Con book saying AI is a marketing term, I'm more inclined to go with Wikipedia and "a field of research in computer science".
Though there is a bit of a dot com bubble feel to valuations.