The AI Bubble Is 17 Times Bigger Than the Dot-Com Bust
Posted 3 months ago · Active 3 months ago
cnn.com · story
Sentiment: skeptical / negative · Debate · 80/100
Key topics: AI, Market Bubble, Investment
The article claims that the current AI bubble is 17 times larger than the dot-com bust, sparking debate among commenters about the validity of this comparison and the potential consequences of an AI market correction.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 21m after posting
Peak period: 67 comments (0-6h)
Avg / period: 13.4
Comment distribution: 94 data points (based on 94 loaded comments)
Key moments
1. Story posted: Oct 19, 2025 at 2:36 PM EDT (3 months ago)
2. First comment: Oct 19, 2025 at 2:57 PM EDT (21m after posting)
3. Peak activity: 67 comments in 0-6h, the hottest window of the conversation
4. Latest activity: Oct 21, 2025 at 9:05 PM EDT (3 months ago)
ID: 45636708 · Type: story · Last synced: 11/20/2025, 4:02:13 PM
Name one
But still that is the ultimate survivorship bias. Is each new customer that Cursor has bringing in more money than they cost Cursor?
https://medium.com/@Arakunrin/the-post-ipo-performance-of-y-...
Yes, it is.
> Many companies are already getting direct value out of AI.
Many companies were already getting direct value out of the internet during the dotcom bubble. Bubbles do not require the absence of real value being delivered by the bubble industry; they require levels of investment that anticipate more real value than can actually be delivered on a time frame that would make existing valuations across the industry sustainable.
> The Dot com burst happened because there were lots of unsustainable business models.
There are lots of unsustainable business models in the AI space, too.
If you are looking at OpenAI, Google, and Anthropic (even though they, too, may be somewhat inflated), you are making the same mistake as looking at Google (ironically) during the dotcom bubble.
> I don't see them as equal.
They aren't equal, the AI bubble is much bigger.
Source that immediately refutes this claim: https://www.artificialintelligence-news.com/wp-content/uploa...
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return
There's always some "value" in a bubble, but how does one confirm that it's enough "direct value" that the investments are proportionate?
Enormous investments should go with enormous benefits, and by now a very measurable portion of the expected benefit should have arrived...
Hasn't the performance been asymptotic?
Before computers came along, we really couldn't fit curves to data much beyond simple linear regression. Too much raw number crunching to make the task practical. Now that we have computers—powerful ones—we've developed ever more advanced statistical inference techniques and the payoff in terms of what that enables in research and development is potentially immense.
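The point about computers unlocking curve fitting can be seen in miniature: even a cubic least-squares fit, which takes one line today, means solving a linear system that was impractical to grind out by hand at scale. A small NumPy sketch (the data, noise level, and polynomial degrees are made up purely for illustration):

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Noisy samples from a cubic trend, a shape a straight line cannot capture.
x = np.linspace(-3, 3, 200)
y = 0.5 * x**3 - x + rng.normal(scale=1.0, size=x.size)

lin = Polynomial.fit(x, y, deg=1)   # plain linear regression
cub = Polynomial.fit(x, y, deg=3)   # higher-order least-squares fit

def rss(model):
    """Residual sum of squares of a fitted model on the data."""
    return float(np.sum((y - model(x)) ** 2))

print(f"linear RSS: {rss(lin):.1f}")
print(f"cubic  RSS: {rss(cub):.1f}")
```

The cubic fit costs essentially nothing on modern hardware but would have been a day of hand computation per dataset before computers.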
Realistically, timing is the issue. "This is a bubble" is worth ~nothing. "This is a bubble and it will pop in late December" is worth a lot if you're correct.
Why? What does that tell you?
But that is not what is happening here, is it?
If you were able to predict a lotto number that has a high probability of appearing within the next 24 months, but each ticket cost $2000 to buy, would you still be suspicious?
I find that people who hold the opinion "If you think this is a bubble, why aren't you shorting it?" don't really have much grounding in statistics, especially with regard to expected value (EV).
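One way to make the EV point concrete: a bubble call can be correct ("it will pop eventually") and the trade can still have negative expected value once the carrying cost of the bet is counted. A toy Monte Carlo with entirely made-up numbers (the pop probability, premium, and payoff are illustrative assumptions, not market data):

```python
import random

random.seed(1)

# Made-up numbers: the bubble pops in any given month with probability 5%;
# a one-month put costs $2,000 and pays $30,000 if the pop lands that month.
POP_PROB, PREMIUM, PAYOFF = 0.05, 2_000, 30_000

def short_until_pop():
    """Buy a fresh put every month until the pop arrives; return net P&L."""
    cost = 0
    while True:
        cost += PREMIUM
        if random.random() < POP_PROB:
            return PAYOFF - cost

results = [short_until_pop() for _ in range(100_000)]
mean_pnl = sum(results) / len(results)
loss_rate = sum(r < 0 for r in results) / len(results)
print(f"mean P&L: ${mean_pnl:,.0f}; losing trades: {loss_rate:.0%}")
```

The pop arrives with certainty, yet the expected wait is 20 months at $2,000 each, so the expected cost ($40,000) exceeds the payoff. Being right about the bubble is not enough; you also have to be right about the timing.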
I also find it odd that so many people saying "Why don't you short it" have never heard "The market can remain irrational longer than you can remain solvent."
If they did, the articles would look less like “wow, numbers are really big,” and more like, “disclaimer: I am short. Here’s my reasoning”
They don’t even have to be short for me to respect it. Even being hedged or on the sidelines I would understand if you thought everything was massively overvalued.
It’s a bit like saying you think the rapture is coming, but you’re still investing in your 401k…
Edit: sorry to respond to this comment twice. You just touched on a real pet peeve of mine, and I feel a little like I’m the only one who thinks this way, so I got excited to see your comment
Heck, just look at yesterday: Myself and several million other people wouldn't have needed to march if smart people reliably ended up in charge.
I think it's more valuable to flip the lens around, and ask: "If you're so rich, why aren't you smart?"
To simplify: Yes.
While it seems foolish to discount all effect from individual agency or merit, we do know that random chance is sufficient to lead to the trends we see. [0] Much like how an iceberg always has some ~10% portion above the water: The top water molecules probably aren't special snowflakes (heh) compared to the rest, we're mostly just seeing What Ice Does.
Combine that with how humans seem hardwired to dislike/ignore random chance, and it's reasonable to think we overestimate the importance of personal qualities in getting rich. Consider how basically anyone flipping a coin starts thinking up causal stories like "hot streaks" or "cold streaks" or "now I'm overdue for a different outcome", even when they already know it's 50/50.
________________
A simple trading simulation of equally-smart equally-lucky agents still demonstrates oligarchic outcomes [0]. When you also add a redistributing effect (like taxing the rich to keep the poor alive) it generates outcomes that resemble real-world statistics for different countries.
> If you simulate this economy, a variant of the yard sale model, you will get a remarkable result: after a large number of transactions, one agent ends up as an “oligarch” holding practically all the wealth of the economy, and the other 999 end up with virtually nothing.
> It does not matter how much wealth people started with. It does not matter that all the coin flips were absolutely fair. It does not matter that the poorer agent's expected outcome was positive in each transaction, whereas that of the richer agent was negative. Any single agent in this economy could have become the oligarch—in fact, all had equal odds if they began with equal wealth.
[0] https://www.scientificamerican.com/article/is-inequality-ine...
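The quoted dynamic is easy to reproduce. Below is a quick sketch of a yard-sale-style exchange; the 20%/17% stakes follow the description in the cited article, while the agent count and round count are arbitrary choices, and this is a simplification of the published model rather than a reimplementation of it:

```python
import random

random.seed(42)

N_AGENTS, N_ROUNDS = 1000, 200_000
WIN_FRAC, LOSE_FRAC = 0.20, 0.17  # stakes, as fractions of the poorer party's wealth

wealth = [100.0] * N_AGENTS  # everyone starts perfectly equal

for _ in range(N_ROUNDS):
    a, b = random.sample(range(N_AGENTS), 2)
    stake_base = min(wealth[a], wealth[b])  # the poorer party sets the stake
    winner, loser = (a, b) if random.random() < 0.5 else (b, a)
    # The poorer party gains 20% of its wealth on a win but loses only 17%
    # on a loss, so its expected outcome per flip is positive.
    frac = WIN_FRAC if wealth[winner] <= wealth[loser] else LOSE_FRAC
    transfer = frac * stake_base
    wealth[winner] += transfer
    wealth[loser] -= transfer

wealth.sort(reverse=True)
top_decile_share = sum(wealth[: N_AGENTS // 10]) / sum(wealth)
print(f"wealth share of the top 10%: {top_decile_share:.0%}")
```

Even with a fair coin and a per-flip expected gain for the poorer party, the multiplicative dynamics concentrate wealth sharply over time, starting from perfect equality.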
First, you have to show up at a game in person. No one watching the game on TV or ignoring it altogether is catching a ball.
Next, you have a greater chance at catching a ball if you bring a glove.
Then, it also helps your chances if you've practiced catching balls.
However, all of that preparation is for naught if a ball is never hit to you.
For every person who strikes it rich, there are hundreds if not thousands of people who were just as smart, worked just as hard, and did all the same right things, but they simply didn't make it.
tl;dr - it is really tiring, reading these "clever" quips about "why won't you short then?", mainly because they are neither clever nor in any way new. We have heard that for a decade about "why won't you short BTC then?". You are not original.
The bubble is the manifestation of this concept. Things should be falling apart, yet they keep going up, for longer than is reasonable; at some point, bearish investors lose so much money they decide it's better just to ride the wave up, growing the bubble even further, until it bursts and everybody loses.
There is a reason investors flock to gold during these times. The best move is not to play (though you don't want to hold too much cash either)
and
"I believe this is a bubble and it will pop and I believe I can time it well enough to be worth putting money on when it will pop"
Are...not the same belief.
Reminds me of the "everybody knows Tether doesn't have the dollars it claims and its collapse is imminent" that was parroted here for years.
This seems to be the disconnect.
Ever heard of gold or real estate?
https://en.wikipedia.org/wiki/Store_of_value
But to reiterate: yes, there are real and massive use cases for the tokens. No one would argue against that :) . We just think that it is bad.
It was a good lesson for me personally, to always check wider picture and consider unknown factors.
“One of the reasons is the way they were built. The original large language model AI was built using vectors to try and understand the statistical likelihood that words follow each other in the sentence. And while they’re very clever, and it’s a very good bit of engineering required to do it, they’re also very limited.
The second thing is the way LLMs were applied to coding. What they’ve learned from — the coding that’s out there, both in and outside the public domain — means that they’re effectively showing you rote learned pieces of code. That’s, again, going to be limited if you want to start developing new applications.”
Frankly kind of amazing to be so wrong right out of the gate. LLMs do not predict the most likely next token. Base models do that, but the RLed chat models we actually use do not — RL optimizes for reward and the unit of being rewarded is larger than a single token. On the second point, approximately all commercial software consists of a big pile of chunks of code that are themselves rote and uninteresting on their own.
They may well end up at the right conclusion, but if you start out with false premises as the pillars of your analysis, the path that leads you to the right place can only be accidental.
This is a reasonable explanation, though as a non-expert I can’t vouch for the formal parts: https://www.harysdalvi.com/blog/llms-dont-predict-next-word/
There are other posttraining techniques that are not strictly speaking RL (again, not an expert) but it sounds to me like they are still not teaching straightforward next token prediction in the way people mean when they say LLMs can’t do X because they’re merely predicting the most likely next token based on the training corpus.
I'm definitely not an expert, but to me, RL and other techniques look like a guide or a constraint on top of the same 'next token prediction' concept. What I don't get is: is this all about training, or about inference?
In any case, this is still an eye opener and I need to study this a bit more.
When talking inference, models from huggingface are composed of what, then? Because they can do agentic stuff, no?
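The "unit of reward is larger than a token" point from upthread can be made concrete with a toy policy-gradient sketch. This is vanilla REINFORCE on a made-up Bernoulli policy, nothing resembling a real LLM training stack; the sequence length, reward rule, and learning rate are all arbitrary assumptions for illustration:

```python
import math
import random

random.seed(0)

SEQ_LEN, LR, EPOCHS = 5, 0.5, 500

# One Bernoulli "policy" per position: p(token=1) = sigmoid(logit).
logits = [0.0] * SEQ_LEN  # start at p = 0.5 everywhere

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sample_sequence():
    return [1 if random.random() < sigmoid(l) else 0 for l in logits]

def reward(seq):
    # The reward only exists at the sequence level: at least four 1s.
    # No individual token is "correct" or "incorrect" on its own.
    return 1.0 if sum(seq) >= 4 else 0.0

for _ in range(EPOCHS):
    seq = sample_sequence()
    r = reward(seq)
    # REINFORCE: the single sequence-level reward scales the gradient
    # of every token's log-probability at once.
    for i, tok in enumerate(seq):
        p = sigmoid(logits[i])
        logits[i] += LR * r * (tok - p)

print([round(sigmoid(l), 2) for l in logits])
```

Because the reward is only defined over the whole sequence, every position's distribution shifts together toward sequences that score well, which is a different objective from independently predicting the most likely next token at each step.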
Market Analyst, perhaps?
Less if inflation adjusted.
Amazon's market cap is $2.27 trillion [0] and $5 trillion was wiped off the NASDAQ over two years [1].
[0] https://www.macrotrends.net/stocks/charts/AMZN/amazon/market...
[1] https://web.archive.org/web/20191218173143/https://www.latim...
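Comparing dollar figures across 25 years calls for the inflation adjustment mentioned above. A rough sketch (the CPI values here are approximate assumptions, the 2025 figure especially is an assumed round number, used only to show the order of magnitude):

```python
# Approximate CPI-U annual averages (assumed values, for illustration only).
CPI_2001 = 177.1
CPI_2025 = 320.0

nasdaq_loss_then = 5.0e12  # ~$5T wiped off the NASDAQ over 2000-2002
loss_in_todays_dollars = nasdaq_loss_then * CPI_2025 / CPI_2001

print(f"~${loss_in_todays_dollars / 1e12:.1f}T in today's dollars")
```

In other words, the dot-com wipeout was roughly $9T in today's money, which is the figure to put next to today's market caps.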
And there are legitimate applications beyond search. I don't know how big those markets are, but it doesn't seem that odd to suggest they might be larger than the search market.
They compete legitimately with Google Search as I compete legitimately with Jay-Z over Beyonce :)
Is there a reason why AI cannot be far better than Google at providing results to queries?
Inherently, they are in the same business, but I am not aware of any AI aimed squarely at Google's core business... though it is completely logical that they would be.
Furthermore, it appears that Google just sells off placement to the highest bidder, and these AIs could easily beat that by giving free AI access in exchange for nibbling at the queries and adding a tab of 'sponsored results'
There isn't, just like there wasn't any reason why Bing or DuckDuckGo couldn't have overtaken Google all these years.
> but it is completely logical that they would.
Logical, yes. There are a lot of things that are logical; actually making something logical practical is a whole other thing...
> it appears that Google just sells off placement to the highest bidder
They have been doing this for years while their search and other business has been growing and growing...
> these AIs could easily beat that by giving free AI access in exchange for nibbling at the queries and adding a tab of 'sponsored results'
agree 100% on everything here except easily part :)
Meanwhile you have Lebron who is already the highest scoring player of all time, and he's still going out every night and putting up another 20 points.
Comparing potential to actual at 1:1 ratio is insane.
No, but there is a reason to suspect that other AI companies face a big challenge there: Google is a trillion-dollar company that leads in both search and AI, and its investment in AI has always been significantly about both improving its ability to respond to queries and removing the need for queries by proactively supplying information.
And also that the most productive means of using AI to respond to queries about material facts continues to rely on something search-like supporting an LLM for grounding.
> Furthermore, it appears that Google just sells off placement to the highest bidder, and these AIs could easily beat that by giving free AI access in exchange for nibbling at the queries and adding a tab of 'sponsored results'
Google already provides free AI access, uses it by default to respond to most queries, and puts it above the sponsored results, with a link to go into a more focussed AI interface for further exploration.
Is that all? It really is that easy huh.
Building even a tiny fraction of those moats is mind-bogglingly difficult. Building a third of that moat is insanely hard. To claim that the AI industry's "expected endgame moat size" is one-third of Google's current moat is a ludicrous prediction. You'd be better off playing the lottery than making that bet.
I would be happy to bet against this if I could do it without making a Keynes-wager (that I can remain solvent longer than markets remain irrational), but I see no way to do so. Put options expire, futures can be force-liquidated by margin calls, and short sales have unlimited downside risk.
Here is the report
https://www.youtube.com/watch?v=uz2EqmqNNlE
https://news.ycombinator.com/item?id=45465969
and https://news.ycombinator.com/item?id=45465969 111 comments
both on "AI bubble is 17 times bigger"
By the way the 17 times refers to an interest rate model and is largely unrelated to AI. Explained here https://www.youtube.com/watch?v=uz2EqmqNNlE&t=306