Fears Over AI Bubble Bursting Grow in Silicon Valley
Posted 3 months ago · Active 3 months ago
bbc.com · Tech story
Key topics
- AI Bubble
- Tech Investment
- AI Adoption
The article discusses growing concerns about an AI bubble in Silicon Valley, with some commenters questioning the sustainability of AI investments while others point to the technology's rapid adoption and potential value.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 6m after posting
- Peak period: 45 comments in 0-2h
- Avg per period: 6.8 comments
- Comment distribution: 68 data points
Based on 68 loaded comments
Key moments
1. Story posted: Oct 10, 2025 at 10:28 PM EDT (3 months ago)
2. First comment: Oct 10, 2025 at 10:33 PM EDT (6m after posting)
3. Peak activity: 45 comments in 0-2h, the hottest window of the conversation
4. Latest activity: Oct 12, 2025 at 4:19 AM EDT (3 months ago)
ID: 45546069 · Type: story · Last synced: 11/20/2025, 1:42:01 PM
If there were a bubble right now, would it be suicide for a professor at the Stanford business school to be quoted by a reporter saying that?
There's no penalty for anyone calling out what might seem like a bubble. If it turns out to be an actual bubble, they can claim to be prophetic visionaries.
If it turns out it's not a bubble, nobody bats an eye.
> "It is very hard to time a bubble," Prof Admati told me. "And you can't say with certainty you were in one until after the bubble has burst."
This statement is very true. Even if we are in a bubble, you should not make the mistake of trying to time it.
For example, look at Nvidia during the last cryptocurrency hype cycle. If you had predicted that was a bubble and shorted their stock, you would have lost: the stock didn't drop at all, as they successfully jumped from crypto to AI and continued their rise.
I am not saying crypto wasn't a bubble, and I am not saying AI isn't a bubble; I am saying it would be a mistake to try to time it. Just VT and chill.
Which prompted the question of whether there would be a strong disincentive for any Stanford business school professor to give a non-innocuous response that would throw a wet blanket on the AI ambitions surrounding them.
I think there would be a lot more pushback if you said that this will go to 10x or 100x or 10,000x in a few years... That might be suicide...
People literally outsource thinking to it.
People have killed themselves because there were no safeguards in the conversational side of it.
Rural towns are getting overly fucked by water and electricity usage, on top of the fumes produced.
All of this for what, so we can make a video of Michael Jackson high-fiving Gandhi?
The implementation was haphazard and some could say felonious, but altm*n is richer than 99% of us, and would never actually see the inside of a courtroom.
... if you could talk to it like a human and have google search hold a conversation with you - sure. That distinction is a big big big difference though
If you're limiting yourself to simple fact retrieval questions like this then you are...limiting yourself.
Wait till you see how much pollution was involved in producing the computing device that you're viewing this comment from, all so you can handwring about AI while on the toilet.
According to Sustainability by Numbers, "[s]ome of our best estimates are that one query emits around 2 to 3 grams of CO2. That includes the amortised emissions associated with training."
That means 32,000 queries equal one iPhone. If you keep a phone three years, that's 29 queries a day for AI to be equivalent.
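The arithmetic behind that equivalence is easy to check. A back-of-envelope sketch (the ~80 kg embodied-carbon figure for one iPhone is an assumption here; the 2.5 g per-query figure is the midpoint of the estimate quoted above):

```python
# Sanity-check the "32,000 queries = one iPhone" claim.
G_PER_QUERY = 2.5        # midpoint of the 2-3 g CO2 estimate, incl. training
IPHONE_KG_CO2 = 80       # assumed embodied carbon of one iPhone, in kg

queries_per_phone = IPHONE_KG_CO2 * 1000 / G_PER_QUERY
print(round(queries_per_phone))                    # 32000

years, days_per_year = 3, 365
print(round(queries_per_phone / (years * days_per_year)))  # 29 queries/day
```

So the comment's numbers are internally consistent, given those two input figures.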
I've said it before, and I'll say it again: the only meaningful critiques of AI are that
1. it's making us stupider, and
2. it's taking the meaningful parts of being alive away from people (i.e., replacing human artistic expression with machine "artistic" expression).
Courts have consistently ruled it's fair use and therefore not copyright infringement. Anthropic did get dinged for piracy to collect training data, but it can hardly be extended to the entire industry.
If all new media is just endless stuff produced by AI I’m kind of not interested. Which is interesting because it made me realize I don’t actually care about the media itself, but the connection to another human (through movies or writing or whatever).
It’s also made me take a step back and take a hard look at technology and I truly believe that at least 60% of software is just useless garbage that really doesn’t need to exist. I’ve been slowly cutting “smart” things, time trackers, just any random app out of my life and I honestly don’t miss it.
Not sure what direction I’m going with this but there has to be a better use of our best minds than more ads or more entertainment.
It's also super clear that this is a technology that most of Big Tech totally missed and seems unable to catch up on (Apple, Amazon, Microsoft; Google is doing fine). There's a seriously possible Microsoft-vs-IBM-v2 play in the next couple of years.
I'm a cynic, but I'm not convinced this is a bubble in the traditional sense. I'd argue that it's startlingly asinine to point at a product that went from nothing to being used by 10% of the planetary population in two years, accidentally (they internally thought it was a stupid idea and still think it's a stupid idea), and say "nah, they got nothing".
Crypto was / is a true bubble: lots of paper value but hardly any adoption.
Some iffy figures are maybe 560 million holders globally and about 28% of Americans including of course the president and family.
Trading volume in the last 24 hours was $1.37tn which is a lot for something with hardly any adoption.
The Bitcoin bubble of course crashed in 2013 from $1k, 2018 from $19k, 2021 from $65k and has just fallen to $111k and will probably crash further.
Sora is one of the fastest growing apps in history.
When they add ads to ChatGPT, they'll make bank. I use ChatGPT way more than I use Google now.
The fear of an AI bubble isn't that AI companies will fail, it's that a downturn in the AI "bubble" will lay bare the underlying bear market that their growth is occluding. What happens then? Nobody knows. Probably nothing good. Personally, I think much of the stock market growth in the last few years that seems disconnected from previous trends (see the parallel growth between gold and equities) is based on retail volume and unorthodox retail patterns (Robinhood, WSB, et al.) that conventional historical market analysis is completely unprepared for. At this point everything may go to the moon forever, or it may collapse completely. Either way, we live in a time of little precedent.
It isn't? What is stopping companies from building on GPT-OSS or other local models for cheaper? The AI services have no moat.
I agree with your second paragraph. The boom in the AI market is occluding a general bear market.
Right now there is an efficiency/hardware moat. That's why the Stargate in Abilene and corresponding build outs in Louisiana and elsewhere are some of the most intense capex projects from the private sector ever. Hardware and electric production is the name of the game right now. This Odd Lots podcast is really fresh and relevant to this conversation: https://www.youtube.com/watch?v=xsqn2XJDcwM
Local models, local agents, local everything, and the commodification of LLMs, at least for software engineering, are inevitable IMO, but a lot of the tooling for that commodified experience hasn't been built yet. For companies rapidly looking to pivot to AI force multiplication, the hyperscalers are the answer for now. I think it's a highly inefficient approach for technical orgs, but time will create efficiency. For your joe on the street feeding data into an LLM, I don't think those orgs (think your local city hall, or state DMV) are going to run local models. So there is a captured market, to some degree, for the current hyperscalers.
It doesn't give good recommendations because the training set is so out of date, and I find it unusual that anyone would use it for product recommendations.
Lastly, "fastest growing" in the short term does not amount to much in the long term.
I would say the exact same for Google.
What isn't SEO'd to hell is crowded out by half a page of paid search ads.
I'm not a typical consumer though. I don't think I've ever bought anything via an ad.
The only search ads I click are the ones where Google FOMO-extorted the brand into buying ads for their own trademark. (That shit ought to be illegal given the monopoly levels of search capture Google has. Everyone having to buy up search clicks for their own brands and company names is comical. A protection racket.)
If I would have asked an LLM, it would have told me to buy an all-metal hotend that costs $70 or more, based on outdated advice.
I don't trust LLMs for out of the box thinking because they have no idea what is going on in the real world. They are fine for discussing general aspects of well-documented and often-discussed things like taxes, cooking, or gardening.
Here's something I implore readers to think about, just to help ground your reality in the numbers we're talking about: the Super Bowl gets ~120M viewers. During the four-hour event this year, they turned ~$800M in advertising revenue. What you're thinking is "I see where you're going with this, but that's higher-value advertising, it's not the same on..." and I'll stop you right there and ask: how much revenue do you think Meta makes every day? The answer: ~$450M. Not far off.
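Taking both of the commenter's figures at face value (neither is verified here), the comparison works out like this:

```python
# Rough comparison of Super Bowl ad revenue vs Meta's daily revenue,
# using the figures cited in the comment above (not independently verified).
super_bowl_revenue = 800e6    # ~$800M over the ~4-hour broadcast
super_bowl_viewers = 120e6    # ~120M viewers
meta_daily_revenue = 450e6    # ~$450M per day across Meta's platforms

print(super_bowl_revenue / super_bowl_viewers)   # ~$6.67 of ad revenue per viewer
print(meta_daily_revenue * 365 / 1e9)            # ~$164B/year run rate for Meta
```

In other words, Meta earns roughly half a Super Bowl's worth of ad revenue every single day, which is the point the comment is making about the scale an ad-supported ChatGPT could reach.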
OpenAI will make so much money if they figure out advertising. Incomprehensible, eye-watering amounts of money. They will spend all of it and more building data centers. If your mental model of these companies is still based on some religious belief that they need to achieve AGI to be profitable, and AGI isn't possible/feasible: you need to update your model. AGI doesn't matter. People want to stop thinking and sext their ChatGPT robot therapist. You can hate it, but get with the program and stop crystallizing that legitimate hate for what the future looks like into some weird fantasy that OpenAI is unsuccessful and going to implode.
https://www.pcmag.com/news/angry-birds-shares-your-data-far-...
I suppose the game should be valued at $200 billion.
The better comparison is Fortnite & Epic Games, which has ~1.5M concurrent players [1] and commands an estimated valuation of ~$18B [2]. You can also look at Meta, which runs platforms materially similar to OpenAI, with user counts within the same order of magnitude, and they command a valuation of ~$1.7T.
[1] https://www.playerauctions.com/player-count/fortnite/
[2] https://finance.yahoo.com/markets/private-companies/highest-...
250M for Angry Birds is not all-time users. They had 260 million monthly active players:
https://yourstory.com/2025/02/rise-fall-angry-birds
Who got bored later, as is the case for "AI" already now. I agree though that monetization of people entering their private data is much more promising for a company with ex-NSA directors on the board.
Still, I believe that "Open" "AI" is completely overvalued.
https://www.bondcap.com/report/pdf/Trends_Artificial_Intelli...
So nah.
If you’re claiming “decades” for the transformation of the internet, by the same measure one could argue that AI started in the 60s. If you’re saying “less than 5 years”, what exactly are you considering “AI”?
They're essentially just throwing VC money at training to do free work for all of us. I have a similar attitude towards facebook today or bell labs a long time ago: they are doing all of this R&D work that will benefit everyone except themselves.
People are spending real money on the product, it's not just the companies spending on infrastructure.
So yes there is a market but that doesn't match the level of investment. A bubble doesn't mean the product is useless.
If this is true it means there is a ton of growth available once people understand that it's much more than this.
My point was: many or most people still think "AI" is limited to the summary at the top of Google search results. In Jon Stewart's recent podcast with Geoffrey Hinton he said that he thinks of AI as "polite Google".
So most people haven't tried applying this tech yet.
Also, FYI, NVDA is 7% of the S&P 500, so if you own the market you own it too. I don't own it directly either.
Also, what to do with the shovels that are not sold if there are no buyers?
But given the amount of change, and competition, it is just as obvious that there will be many sub-market bubbles, of varying size, with unpredictable thresholds, timing and resolution.
And large companies, tech and non-tech (they all depend on information tech), will burn fortunes defensively. Which, if understood as a hedge, isn't a bubble.
--
For now, real demand for higher quality AI is insane. What will be interesting will be the dynamics as it passes the "average" person. (That will appear to happen at different times to different people, because these models are not ever going to be "just like us".)
I can imagine AI getting smarter fast enough to overshoot contemporary needs or ability to leverage. Needs and new leverage adoption are limited in rate of change by legal, political, social, economic, and adjacent/supporting tech adaptation rates. Any overshoot would completely commodify AI for as long as it took for use to catch up with potential.
That would result in a temporary but still market-wide AI bubble burst, when nobody (or only a small minority) needs the best AI, and open-source and low-margin models will clean the clocks of the high-investment-burning overshooters.
It is easy to underestimate the adaptations a new tech needs in order to deliver very different kinds of value: they look small, unimportant, irrelevant, independent, and easy, but are actually huge, important, inherent, and non-obvious.
An analogy: Give a bright adaptable 10x (relative return/time vs. the norm) developer a billion dollars. See how long it takes them to re-orient themselves to their new scale and challenges, and get even 2x the returns on their new wealth relative to the norm. They may do much worse than 1x.
Achieving superintelligence "too fast" would have a similar effect. It will take almost every actor more time to adjust than we think. And many capable and historically successful enterprises, including some at the forefront of AI, will die of adaptation-overload shock.
From that viewpoint, OpenAI looks like they are doing the right things. Because they are going as vertical as they can, they will be confronting new-value-proposition frictions much earlier than others might. That is a very wise move given all the uncertainties. (Besides the obvious motivation of wanting it all.)
You speak very factually of this.
What is this enormous number of people? What's the source of those numbers?
HN was supposed to be above low-effort knee-jerk reactions such as this one.
Bank of England flags risk of 'sudden correction' in tech stocks inflated by AI
https://news.ycombinator.com/item?id=45516265
OpenAI, Nvidia fuel $1T AI market with web of circular deals
https://news.ycombinator.com/item?id=45521629
AMD signs AI chip-supply deal with OpenAI, gives it option to take a 10% stake
https://news.ycombinator.com/item?id=45490549
Without data centers, GDP growth was 0.1% in the first half of 2025
https://news.ycombinator.com/item?id=45512317
"It's going to drag down the rest of the economy."