After the Bubble
Key topics
The debate rages on: is there an AI bubble about to burst, or is it a revolutionary technology with staying power? Some commenters, like kevin061, argue that there's no AI bubble, drawing parallels with Bitcoin, which has persisted despite being labeled a bubble since its inception. Others, such as alextingle, counter that the lack of a clear path to profitability for companies like OpenAI is a red flag, while thunderfork notes that even if the AI bubble pops, it could still leave a lasting impact, much like the dot-com bubble. As the discussion unfolds, it becomes clear that the conversation is not just about AI, but also about the broader implications of emerging technologies and their potential to disrupt various industries.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 5h after posting
- Peak period: 41 comments in the 6-9h window
- Average per period: 8.5
- Based on 85 loaded comments
Key moments
- Story posted: Dec 9, 2025 at 7:02 AM EST (about 1 month ago)
- First comment: Dec 9, 2025 at 11:45 AM EST (5h after posting)
- Peak activity: 41 comments in the 6-9h window, the hottest period of the conversation
- Latest activity: Dec 10, 2025 at 11:48 PM EST (about 1 month ago)
People have been calling Bitcoin a bubble since it was introduced. Has it popped? No. Has it reached the popularity and usability crypto shills said it would? Also no.
AI on the other hand has the potential to put literally millions of individuals out of work. At a minimum, it is already augmenting the value of highly-skilled intellectual workers. This is the final capitalism cheat code. A worker who does not sleep or take time off.
There will be layoffs and there will be bankruptcies. Yes. But AI is never going to be rolled back. We are never going to see a pre-AI world ever again, just like Bitcoin never really went away.
This has been true since, say, 1955.
> This is the final capitalism cheat code. A worker who does not sleep or take time off.
That's the hope driving the current AI bubble. It has never been true, nor will it be with the current state of the art in AI. This realization is what is deflating the bubble.
The technology will remain, of course, just like we still have railways, and houses.
But, and this is key, AI is not going away for as long as the potential to replace human labour remains.
Renewed interest by the Trump clan, with Lutnick's Cantor Fitzgerald handling Tether collateral in Nayib Bukele's paradise, wasn't easy to predict either.
Neither was the recent selloff. It would be hilarious if it was for a slush fund for Venezuelan rebels or army generals (bribing the military was the method of choice in Syria before the fall of Assad).
Agree with you that it would be different: crypto is global, while most of the accessible alternative methods are localized to varying degrees.
(And putting masses of people out of work and thereby radically destabilizing capitalist societies, to the extent it is a payoff, is a payoff with a bomb attached.)
AI companies are releasing useful things right this second; even if those tools still require human oversight, they can significantly accelerate many tasks.
The AI bubble involves a lot of that, too.
> AI companies are releasing useful things right this second; even if those tools still require human oversight, they can significantly accelerate many tasks.
So were the Googles and other leading firms in the dotcom bubble era, and if you said the dotcom bubble wasn’t a bubble because of that, you’d obviously have been wrong.
1. The infra build-out bubble: this is mostly the hyperscalers and Nvidia.
2. The AI company valuation bubble: this includes the hyperscalers, pure-play AI companies like OpenAI and Anthropic, and the swarm of startups that are either vaporware or just wrappers on top of the same set of APIs.
There will probably be a pop in (2), especially among the random startups that got millions in VC funding just because they have ".ai" in their domain name. This is also why OpenAI and Anthropic are getting into the infra game by trying to own their own datacenters; that may be the only moat they have.
However, when people talk about trillions, it's mostly (1) that they are thinking of. Given the acceleration of demand that is being reported, I think (1) will not really pop, maybe just deflate a bit when (2) pops.
However, I think the AI datacenter craze is definitely going to experience a shift. GPUs become obsolete really fast, especially now that we are moving toward specialised neural chips. Within a few years, all those datacenters with thousands of GPUs will be outcompeted by datacenters with 1/4th the power demand and 1/10th the physical footprint, thanks to improved efficiency. And if the valuations do collapse and investors pull out of these companies, where are those datacenters supposed to go? Would you buy a datacenter chock full of obsolete chips?
However, I've come across a number of articles that paint a very different picture. E.g. this one is from someone in the GPU farm industry and is clearly going to be biased, but by the same token seems to be more knowledgeable. They claim that the demand is so high that even 9-year old generations still get booked like hot cakes: https://www.whitefiber.com/blog/understanding-gpu-lifecycle
What does this prove? Demand is inflated in a bubble. If the AI company valuation bubble pops, demand for obsolete GPUs will evaporate.
The article you're linking here doesn't say what percentage of those 9-year-old GPUs already failed, nor does it say when they were first deployed, so it's hard to draw conclusions. In fact their math doesn't seem to consider failure at all, which is highly suspicious.
In another subthread, you pointed to the top comment here about a 5-year MTBF as supposedly contradicting the original article's thesis about depreciation. 5 years is obviously less than the 9 years here, so clearly something doesn't add up. (Besides, a 5-year MTBF is rather poor to begin with, not a smoking gun which contradicts anything in Tim Bray's original article.)
Is it? The dot-com fiber bubble for instance was famous for laying far more fiber than would be needed for the next decade even as the immediate organic demand was tiny.
In this case however, each and every hyperscaler is bemoaning / low-key boasting that they have been capacity constrained for the past multiple quarters.
The other data point is the climbing rate of AI adoption as reported by non-AI affiliated sources, which also lines up with what AI companies report, like:
https://www.stlouisfed.org/on-the-economy/2025/nov/state-gen...
That article is a little crazy. Not only are 54% of Americans using AI, that is 10 percentage points over last year... and usage at work may even be boosting national-level metrics!
> In fact their math doesn't seem to consider failure at all, which is highly suspicious.
That's a good point! If I had to guess, that may be because Burry et al don't mention failure rates either, and seem to assume a ~2 year obsolescence based on releases of new generations of GPUs.
As such, everybody is responding to those claims. The article I linked was making the point that even 9-year old generations are still in high demand, which also explains the 5 years vs 9 years difference -- two entirely different generations and models, H100 vs M4000.
And while MTBF is not directly related to depreciation, it's Bray who brings up failure rates in a discussion about depreciation. This is one reason I think he's just riffing off what he's heard rather than speaking from deep industry knowledge.
I've been trying to find any discussion that mentions concrete failure rates without luck. Which makes sense, since they're probably heavily-NDA'd numbers.
Yes, demand is absolutely inflated in a bubble. We're talking about GPUs, so look at hardware sales for the comparison, not utility infrastructure. Sun Microsystems' revenue during and after the dotcom bubble, for example. Or Cisco's, for a less extreme but still informative case.
> it's Bray who brings up failure rates in a discussion about depreciation
Yes, I understood his point to be that depreciation schedules for GPUs are overly optimistic (too long) relative to their apparently short MTBF. Implying what is on the books as assets may be inflated compared to previous normal practices in tech.
In any case, at this point I agree with the other commenter who said you're just trying to confirm your existing opinion, so not really much sense in continuing this discussion.
Bitcoin/crypto doesn't have earnings reports, but many crypto-adjacent companies have crashed down to earth. It would have been worse but regulation, or sometimes lack thereof, stopped them from going public so the bleeding was limited.
The Bitcoin bubble, if anything, deflated. But I'd still disagree with this characterisation because the market capitalisation of Bitcoin only seems to be going up.
Going by the logic of supply and demand, as more and more Bitcoin is mined, the price should drop because there's more availability. But what I've observed is that the value has been climbing over the past few years while remaining relatively stable.
In any case, it's hard to deny that more people are using Bitcoin and crypto now compared to 5 years ago. Sure, NFTs ended up fizzling out, but, to be honest, they were a stupid idea from the beginning, anyway.
I mean, to one degree or another, this is correct. Some things are not going back into the genie's bottle.
Except for the physical buildings, permitting, and power grid build-out.
Those are extremely localized at a bunch of data centers, and how much of that will see further use? And how much grid work has really happened? (There are a lot of announcements about plans to maybe build nuclear reactors etc., but those projects take a lot of time, if they get done at all.)
nVidia managed to pivot their customer base from crypto mining to AI.
As much as there is market for somewhat-less-expensive data centers. (Data centers where somebody else already paid the cost of construction.)
And where they are doesn't matter. The internet is good at shipping bits to various places.
This is how "serverless" became a thing btw.
Of course, we didn't call it "serverless" back then. If you are referring to the name rather than the technology, I'd credit Ruby on Rails. It is what brought servers into fashion, and it was that which enabled us to make "serverless" something brandable again.
I think the first part of this is probably true, but I don’t think everyone knows it. A lot of people are acting like they don’t know it.
It feels like a bubble to me, but I don’t think anyone can say to a certainty that it is, or that it will pop.
It's more accurate to say that bubbles rely on most people being blind to the bubble's nature.
Or they're acting like they think there's going to be significant stock price growth between now and the bubble popping. Behaviors aren't significantly different between those two scenarios.
Putting your statement another way, if you and I can see the bubble, then it's almost a certainty that the average tech CEO also sees a bubble. They're just hoping that when the music stops, they won't be the one left holding the bag.
I own some NVDA. I've sold a good portion of it, so I've "locked in the profit" on it. If it doubles again, I'll sell some more. If it crashes I won't be too disappointed -- I've locked in my profit, and now I own more reasonably priced NVDA shares.
Note that if you have index funds, you probably already own a surprising amount of NVDA.
When the bubble pops, do you fire _even more_ people? What does that look like given the decimation in the job market already?
Equivocating about what YOU comfortably would prefer to call it is wasted effort that I don't care to engage in.
Wouldn't AI largely be a race to the bottom? As such, even if expensive employees get replaced, the cost of replacing them might not be that big. It might only barely cover the cost of inference, for example. So might it be that profits will actually be a lot lower than the cost of the employees being replaced?
To the second point, the race to the bottom won't be evenly distributed across all markets or market segments. A lot of AI-economy predictions focus on the idea that nothing else will change or be affected by second and third order dynamics, which is never the case with large disruptions. When something that was rare becomes common, something else that was common becomes rare.
The Meta link does not support the point. It's actually implying an MTBF of over 5 years at 90% utilization, even if you assume there's no bathtub curve. Pretty sure that lines up with the depreciation period.
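For anyone who wants to sanity-check that arithmetic, here is a back-of-envelope sketch. The inputs are my recollection of the widely cited Llama 3 training figures (roughly 16,384 H100s, a 54-day window, 419 unexpected interruptions), so treat them as approximate and verify against Meta's own report:

```python
# Back-of-envelope MTBF from a fleet-wide interruption count.
# All inputs are approximate, recalled from Meta's Llama 3 training report;
# check the primary source before relying on them.
num_gpus = 16_384
run_days = 54
utilization = 0.90        # assume GPUs are busy ~90% of wall-clock time
interruptions = 419       # all unexpected interruptions, not only confirmed GPU faults

gpu_hours = num_gpus * run_days * 24 * utilization
mtbf_hours = gpu_hours / interruptions
print(f"MTBF ~ {mtbf_hours:,.0f} GPU-hours ~ {mtbf_hours / (24 * 365):.1f} years")
# -> roughly 45,600 GPU-hours, i.e. a bit over 5 years of continuous use per failure
```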
The Google link is even worse. It links to https://www.tomshardware.com/pc-components/gpus/datacenter-g...
That article makes a big claim, does not link to any source. It vaguely describes the source, but nobody who was actually in that role would describe themselves as the "GenAI principal architect at Alphabet". Like, those are not the words they would use. It would also be pointless to try to stay anonymous if that really were your title.
It looks like the ultimate source of the quote is this Twitter screenshot of an unnamed article (whose text can't be found with search engines): https://x.com/techfund1/status/1849031571421983140
That is not merely an unofficial source. That is made-up trash that the blog author lapped up despite its obviously unreliable nature, since it confirmed his beliefs.
But you can see that it works: go to colab.research.google.com. Type in some code ... "!nvidia-smi" for instance. Click on the down arrow next to "connect", and select change runtime type. 3 out of 5 GPU options are nVidia GPUs.
Frankly, unless you rewrite your models you don't really have a choice but to use nVidia GPUs, thanks, ironically, to Facebook (the authors of pytorch). There is pytorch/XLA automatic translation to TPU, but it doesn't work for "big" models.
In other words, Google rents out nVidia GPUs to their cloud customers (with the hardware physically present in Google datacenters).
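A minimal sketch of the same check from inside a notebook cell, assuming a standard Colab runtime with PyTorch preinstalled (the exact device name will vary by runtime type):

```python
# Inspect which accelerator the current Colab runtime exposes.
# Assumes PyTorch is preinstalled, which is standard on Colab runtimes.
import subprocess
import torch

if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))   # e.g. "Tesla T4"
    print(subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout)
else:
    print("No Nvidia GPU attached to this runtime (CPU or TPU runtime).")
```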
I don't understand what you mean. Most models aren't anywhere near big in terms of code complexity; once you have the efficient primitives to build on (an efficient hardware-accelerated matmul, backprop, flash attention, etc.), these models are in sub-thousand-LoC territory, and you can even vibe-convert from one environment to another.
That's kind of a shock to realize how simple the logic behind LLMs is.
I still agree with you, Google is most likely still using Nvidia chips in addition to TPUs.
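To illustrate the sub-thousand-LoC point above, here is a toy pre-norm transformer block in PyTorch. It is a sketch for scale, not any particular model's architecture, and it leans on the library's fused attention primitive rather than hand-written kernels:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyBlock(nn.Module):
    """Bare-bones pre-norm transformer block: attention + MLP, nothing else."""
    def __init__(self, d_model=256, n_heads=4):
        super().__init__()
        self.n_heads = n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)

    def forward(self, x):                      # x: (batch, seq, d_model)
        b, t, d = x.shape
        q, k, v = self.qkv(self.ln1(x)).chunk(3, dim=-1)
        # split into heads: (batch, heads, seq, head_dim)
        q, k, v = (z.view(b, t, self.n_heads, d // self.n_heads).transpose(1, 2)
                   for z in (q, k, v))
        # dispatches to a fused (flash-style) kernel when the hardware supports it
        att = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        x = x + self.proj(att.transpose(1, 2).reshape(b, t, d))
        return x + self.mlp(self.ln2(x))

x = torch.randn(2, 16, 256)
print(ToyBlock()(x).shape)                     # torch.Size([2, 16, 256])
```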
Too bad that doesn't work. Transformers won't perform well without an endless series of tricks, so endless that you can't write that series of tricks yourself. You can't initialize the network correctly when starting from scratch. You can't do the basic training that makes the models good (i.e. the trillions of tokens). Flash attention, well, that's from 2022, it's CUDA assembly, and it only works on nVidia. Now there are 6 versions of flash attention, all of which are written in CUDA assembly and only fast on nVidia.
So what do you do? Well you, as they say "start with a backbone". That used to always be a llama model, but Qwen is making serious inroads.
The scary part is that this is what you do for everything now. After all, llama and Qwen are text transformers. They answer "where is Paris?". They don't do text-speech, speech recognition, object tracking, classification, time series, image-in or out, OCR, ... and yet all SOTA approaches to all of these can be only slightly inaccurately described as "llama/qwen with a different encoder at the start".
That even has the big advantage that mixing becomes easy. All encoders produce a stream of tokens. The same tokens. So you can "just" have a text encoder, a sound encoder, an image encoder, a time series encoder and just concatenate (it's not quite that simple, but ...) the tokens together. That actually works!
So you need llama or Qwen to work, not just the inference but the training and finetuning, with all the tricks, not just flash attention, half of which are written in CUDA assembly, because that's what you start from. Speech recognition? SOTA is taking sounds -> "encoding" into phonemes -> have Qwen correct it. Of course, you prefer to run the literal exact training code from ... well, from either Facebook or Alibaba, with as few modifications as possible, which of course means nvidia.
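A toy sketch of what "different encoder, same token stream" looks like in practice. The dimensions and projection layers below are illustrative placeholders, not any specific model's design:

```python
import torch
import torch.nn as nn

d_model = 512  # shared embedding width the text backbone expects

# Each modality has its own encoder; all of them project into the same space.
text_emb   = nn.Embedding(32_000, d_model)   # token ids -> embeddings
image_proj = nn.Linear(768, d_model)         # e.g. ViT patch features -> "tokens"
audio_proj = nn.Linear(128, d_model)         # e.g. log-mel frames -> "tokens"

text_ids    = torch.randint(0, 32_000, (1, 20))   # 20 text tokens
image_feats = torch.randn(1, 64, 768)             # 64 image patches
audio_feats = torch.randn(1, 100, 128)            # 100 audio frames

# "Just concatenate the tokens together" (real models also add positional
# and modality embeddings, but the principle is this simple).
sequence = torch.cat(
    [text_emb(text_ids), image_proj(image_feats), audio_proj(audio_feats)], dim=1
)
print(sequence.shape)   # torch.Size([1, 184, 512]), fed to the backbone as one stream
```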
You're assuming this is normal, for the MTBF to line up with the depreciation schedule. But the MTBF of data center hardware is usually quite a bit longer than the depreciation schedule right? If I recall correctly, for servers it's typically double or triple, roughly. Maybe less for GPUs, I'm not directly familiar, but a quick web search suggests these periods shouldn't line up for GPUs either.
This means that society as a whole is perhaps significantly poorer than if LLMs had been properly valued (i.e. not a bubble), or had simply never happened at all.
Unfortunately it will likely be the poorest and most vulnerable in our societies that will bear the brunt. 'Twas ever thus.
A small price to pay for erotic roleplay
Somewhere around 1999, my high school buddy worked overtime shifts to afford a CPU he had waited forever to buy. Wait for it: it was a 1 GHz CPU!
> When companies buy expensive stuff, for accounting purposes they pretend they haven’t spent the money; instead they “depreciate” it over a few years.
There's no pretending. It's accounting. When you buy an asset, you own it; it is now part of your balance sheet. You incur a cost when the value of the asset falls, i.e. it depreciates. If you spend 20k on a car, you are not pretending not to have spent 20k by considering it an asset: you spent money, but now you have something of similar value as an asset. Your cost is the depreciation as the years go by and the car becomes less valuable. That's a very misleading way to put it.
> Management gets to pick your depreciation period, (...)
They don't. GAAP, IFRS, or whatever other accounting rules apply to the company do. There is some degree of freedom in certain situations, but it's not "whatever management wants". And it's funny that the author thinks companies in general are interested in defining longer useful lives, when in most cases (this depends on other tax considerations) it's the opposite: while depreciation is a non-cash expense, you can get real cash by reducing your taxable income, and the sooner you get that money the better.
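To make the mechanics concrete, here is the straight-line arithmetic behind the useful-life debate; the purchase amount is a made-up round number, not any company's actual figure:

```python
# Straight-line depreciation: the same purchase expensed over different useful lives.
# The $12B figure is purely illustrative.
purchase = 12_000_000_000

for useful_life_years in (3, 4):
    annual_expense = purchase / useful_life_years
    print(f"{useful_life_years}-year life: ${annual_expense / 1e9:.1f}B expense per year")

# 3-year life: $4.0B per year; 4-year life: $3.0B per year.
# Stretching the schedule by one year pushes ~$1B/year of expense into the future,
# which flatters near-term reported earnings without changing the cash spent.
```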
> It’s like this. The Big-Tech giants are insanely profitable but they don’t have enough money lying around to build the hundreds of billions of dollars worth of data centers the AI prophets say we’re going to need.
Actually they do. Meta is the one that has the least, but it could still easily raise that money. Meta in this case just thinks it's a better deal to share risk with investors who at the moment have a very strong appetite to own these assets. Meta is actually paying a higher rate through these SPVs compared to funding them outright. Now, personally, I don't know how I would feel about that deal in particular if I were an investor, just because you need to dig a little deeper into their balance sheet to get a good snapshot of what is going on, but it's not a trick; arguably it can make economic sense.
Actually the author has worked for Google and Amazon (VP-level), as well as Sun and DEC, and was a co-creator of XML.
2. That level of seniority does, on the other hand, expose them to a lot of the shenanigans going on in those companies, which could credibly lead them to develop a "big tech bad" mindset.
On the other side, we have the sole comment ever by a pseudonymous HN user with single-digit karma.
Personally I'll trust the former over the latter.
I mean, per the top comment in this thread, he cites an article -- the only source with official, concrete numbers -- that seems to contradict his thesis about depreciation: https://news.ycombinator.com/item?id=46208221
I'm no expert on hardware depreciation, but the more I've dug into this, the more I'm convinced people are just echoing a narrative that they don't really understand. Somewhat like stochastic parrots, you could say ;-)
My only goal here is to get a real sense of the depreciation story. Partially out of financial interest, but more because I'm really curious about how this will impact the adoption of AI.
It doesn't look like your goal is to get a real sense of it, or at least your strategy is really poor, since you already have an opinion and want to confirm it.
I think we are actually in two bubbles -- see sibling thread: https://news.ycombinator.com/item?id=46211400 -- 1) AI infra + hyperscalers, and 2) pure-play AI company / startup valuations.
My take is only the second one will pop and cause a temporary deflation in the first one, and the GPU depreciation story is going to influence when it happens, how painful it will be, and how long it will last.
However, I'm convinced this will be a temporary blip. By all the data points I can find -- academic studies, quarterly earnings, government reports, industry surveys, not to mention my daily experiences with AI -- the AI growth story looks very real and very unstoppable. To the extent I'm concerned for my kids' prospects when they say they're interested in software engineering!
BUT (my point) is that the article is terrible at reflecting all of that and makes wrong and misleading comments about it.
The idea that companies depreciating assets is them "pretending they haven't spent the money" or that "management gets to pick your depreciation period" is simply wrong.
Do you think any of those two statements are accurate?
On the "management gets to pick your depreciation period" one in particular, despite being a massive over-simplification, there is substantial underlying truth to the statement. The author's comment about "I can remember one of the big cloud vendors announcing they were going to change their fleet depreciation from three to four years and that having an impact on their share price" is specifically referring to Alphabet's 2021 Q3 earnings statement. See the HN discussion I linked in reply to the sibling comment.
Notice that I never disagreed with his underlying take. Everyone should have concerns about investments on the order of 1 or 2 percentage points of GDP that serve novel and unproven business models. He even cites a few people I love to read - Matt Levine. I just think that the way he is representing things is grossly misleading.
If I had little financial knowledge, my takeaway from reading his article would be (and _this_ is a simplification for comedic purposes): all of these big tech companies are "cooking the books", they all know these investments are bad and they just don't care (for some reason), and they hide all of these costs because they don't have money... And this is not a fair representation.
I think we are too forgiving of these type of "simplifications", if you think they are reasonable, ok. I just shared my take, probably I should have stuck with just the observations on the content and left out the subtle ad hominem, so that's a fair point.
aaaaaaaaHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA
> Anyhow, there will be a crash and a hangover. I think the people telling us that genAI is the future and we must pay it fealty richly deserve their impending financial wipe-out. But still, I hope the hangover is less terrible than I think it will be.
Yup. We really seem to be at a point where everyone has their guns drawn under the table and we're just waiting for the first shot—like we're living in a real-world, global version of Uncut Gems.
[citation needed]
I thought there was a US IRS law that was changed sometime in the past 10-15 years that made companies depreciate computer hardware over 1 year. Am I misremembering?
I thought that law was the reason why many companies increased the lifetime of employee laptops from 3 to 5 years.
I think people need to realize that if the bubble gets bad enough, there will absolutely, positively, 100% be a bailout. Trump doesn't care who you are or what you did, as long as you pay enough (both money and praise) you get whatever you want, and Big Tech has already put many down payments. I mean, they ask him "Why did you pardon CZ after he defrauded people? Why did you pardon Hernandez after he smuggled tons of cocaine in?" and he plainly says he doesn't know who they are. And why should he? They paid, there's no need to know your customers personally, there's too many of them.
I always appreciate the generic, boring terminology. "Special Purpose Vehicles" reminds me of "Special Purpose Entities" from the 90s and 00s, e.g., for synthetic leases.