Just How Bad Would an AI Bubble Be?
Posted 4 months ago · Active 4 months ago
theatlantic.com · Tech · story
Sentiment: skeptical/mixed · Debate · 80/100
Key topics: AI Bubble, Productivity Gains, Tech Investment
The article discusses the potential for an AI bubble and its implications, with commenters debating whether AI is overhyped and whether its productivity gains will materialize.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 4m after posting
Peak period: 43 comments in 0-3h
Avg / period: 6.3 comments
Comment distribution: 69 data points (based on 69 loaded comments)
Key moments
01 · Story posted: Sep 7, 2025 at 3:59 PM EDT (4 months ago)
02 · First comment: Sep 7, 2025 at 4:03 PM EDT (4m after posting)
03 · Peak activity: 43 comments in 0-3h, the hottest window of the conversation
04 · Latest activity: Sep 9, 2025 at 7:37 AM EDT (4 months ago)
ID: 45161618 · Type: story · Last synced: 11/20/2025, 4:11:17 PM
Or, even worse: perhaps the productivity gains will be high.
This is probably what is going on when we see McD executives say there are basically two economies in the US at the moment, one doing very well for itself and the other experiencing a lot of hardship.
Your phrasing there is misleading. The article says, "In the first half of this year, business spending on AI added more to GDP growth than all consumer spending combined," and the key is the "added more to GDP growth" part.
Growth in consumer spending was sluggish, while growth in business AI spending was insane, so in terms of how much the economy grew, the rise in AI spending exceeded the rise in consumer spending. Which is pretty amazing, actually. But, as far as the total amount of spending, business AI is at most only a few percentage points of GDP (and that's if you interpret "business AI spending" very broadly), while consumer spending is somewhere around 67-70% of GDP.
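The distinction between contributing to GDP growth and being a large share of GDP can be made concrete with toy numbers. Every figure below is assumed purely for illustration (not actual BEA data):

```python
# Toy figures (assumed, not real data), in trillions of dollars.
consumer_prev, consumer_now = 18.50, 18.60   # huge share, sluggish growth
ai_prev, ai_now = 0.30, 0.45                 # tiny share, explosive growth
gdp_now = 27.3                               # assumed total GDP now

consumer_contrib = consumer_now - consumer_prev   # ~0.10T of new growth
ai_contrib = ai_now - ai_prev                     # ~0.15T of new growth

# AI spending "added more to GDP growth" than consumer spending...
print(ai_contrib > consumer_contrib)              # True
# ...yet consumer spending still dominates the *level* of GDP.
print(round(consumer_now / gdp_now, 2))           # 0.68
```

The point is that a small, fast-growing sector can outpace a giant, stagnant one in growth contribution while remaining a sliver of the total economy.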
My comment is really a bit off-topic: it's that if AI really does work well, it'll put half of us out of work, leaving us to do the non-knowledge work that the robots can't do yet.
The effects of the technology would have to rival those of, I don't know, soap and the locomotive combined in order for us not to be in a bubble.
It has swallowed nearly all discourse about technology and almost all development; nearly every area of technology is showing markers of recession, except AI, which has massively inflated salaries and valuations.
I can’t even think of a scenario where we don’t look back on this as a bubble. What does the tech need to be able to do in order to cover its investment?
Can you quantify this, or are you going based on vibes? If I were to compare this to previous bubbles (e.g., crypto, the dot-com bubble), in valuation-multiple terms we haven't even gotten started. And a number of tech companies eventually grew into and surpassed their bubble valuations.
Locomotives, unlike soap, had a definite upfront cost, and we invested enormous sums in them (up to 3% of GDP in some years of the 1800s), but the economic benefits were ridiculous.
Spending on AI data centers is so massive that it's taken a bigger chunk of GDP growth than shopping! (according to Yahoo!: https://consent.yahoo.com/v2/collectConsent?sessionId=3_cc-s...)
I mean, it’s really apples to oranges, but I can’t imagine us getting the returns anywhere close to what rail gave us in the 1800s.
> Spending on AI data centers is so massive that it's taken a bigger chunk of GDP growth than shopping
You haven't compared this to previous bubbles like the internet, smartphones, and personal computing in general. Your argument isn't convincing.
But AI investment is outpacing all the things you mentioned.
I mentioned soap not because of capital investment, but because of the GDP growth that followed its invention. If AI drives qualitatively less GDP growth than soap did, then we're going to see a lot of unemployment. So it's a bubble: nothing could possibly match soap.
This is not the case with AI. It is not even clear if these tools are useful. This really is much more an “Emperor has no clothes” situation, which is why this bubble is perhaps more dangerous.
1) Overhype can cause a “winter” in which the word itself becomes toxic once the bubble pops, leading to significant underinvestment in the period that follows. This has actually already happened twice with AI, and it's part of why we say “Machine Learning” for concepts that used to be called AI: AI became a toxic word that investors ran from.
2) A bubble sucks all the oxygen out of the room for all other technological endeavours. It’s strictly a bad thing as it can crush the entire technology sector (or potentially even the economy).
3) Bubbles might cause a return to form, but the internet was more like the railroads: once built, the infrastructure largely remained, and its being sold “for cheap” led to a resurgence. AI has fewer of these core bits of infrastructure that will become cheap in a bust.
When that happens, they will end up overcrowding other job sectors, pushing salaries down for people already in those fields. Once that happens, the loss of purchasing power will hit every sector and drag down economies anyway.
So, if I have to summarize my thoughts, we are either in a bubble that will pop and drag down AI related stocks, or it is not a bubble because the tech will actually deliver on what CEOs are touting and there will be high unemployment/lower salaries for many which in turn will mess up other parts of the economy.
Happy to be wrong, but given the hype on AI the winning conditions have to be similarly high, and millions losing their jobs will for sure have huge repercussions.
Look at history: so many jobs have been automated, yet we still have 95%+ employment in most countries, and we've effectively “doubled” the workforce by making dual-income households the standard.
I’m not sure how our obscenely wealthy overlords think things will play out when we’re all wage-slaves barely able to scrape by. It hasn’t worked out for any society historically.
Now, what happens if there are no jobs available someplace else? Would the sort of leadership we have in power these days consider a New Deal esque plan for mass public work projects and employment opportunities when the private sector has none available? Or would they see it as an opportunity to bring back a sort of feudalism or plantation economy where people aren’t really compensated at all and allowed to starve if not immediately economically useful?
soap + water = bubble
That said: In principle, if the tech did what the strongest proponents forecast it would do, it would change "the economy" in a manner for which we don't even have a suitable metaphor, let alone soap and locomotives.
Now, I do not believe the AI on the horizon at the moment[0] will do any of that. The current architectures make up for being too stupid to live[1] by doing their signal processing faster than anything alive, by the same degree to which jogging is faster than continental drift[2].
As for "minimum" needed for this to not be a bubble, it needs to provide economic value in the order of 0.1 trillion USD per year, but specifically in a way that the people currently investing the money can actually capture that as profit.
The first part of that, providing economic value, I can believe: being half-arsed with basic software development, being first-line customer support, supplying tourist-grade translation services, etc. I can easily believe this adds up to a single percentage point of world GDP, 10x what I think it needs to be.
Capturing any significant fraction of that, though? Nah. I think this will be like spreadsheets (Microsoft may be able to charge for Office, but Google Docs and LibreOffice are free) or Wikipedia. It takes an actual business plan to turn any of this into income streams, and there are too many people all competing for the same space, so any profit margin is going to be ground to nothing until some of them actually differentiate themselves properly.
[0] Not that this says very much, given how fast the space is moving; it's quite plausible a better architecture has already been found but isn't famous yet, and nobody has scaled it up enough to make it look shiny and interesting; after all, Transformers took years before anyone cared about them.
[1] Literally: no organic brain could get away with being this slow vs. number of examples needed to learn anything
[2] Also literally
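The arithmetic behind that comment can be sketched directly. Every figure here is the commenter's rough estimate or an assumed placeholder, not a measured number:

```python
# Rough, illustrative arithmetic (all inputs are assumed estimates).
world_gdp = 100e12                  # ~$100T world GDP, round number
value_created = 0.01 * world_gdp    # 1% of world GDP ≈ $1T/year of value
required_profit = 0.1e12            # ~$0.1T/year to justify the investment

# The catch is capture: if competition grinds margins down, only a sliver
# of the value created shows up as profit for the firms funding the buildout.
capture_rate = 0.05                 # assumed: 5% of value captured as profit
profit = value_created * capture_rate
print(profit >= required_profit)    # False: the value exists, the profit doesn't
```

Under these assumptions the technology can create 10x the needed value and still fail the investors, which is exactly the spreadsheet/Wikipedia analogy above.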
AI is such a thing.
Either AI is a total fraud, completely useless (at least for programming), or it's a deus ex machina.
But reality has more than one bit with which to answer questions like "Is AI hype?"
Despite only recently becoming a father and feeling like I am in my prime, I've seen many hypes.
And IT is an eternal cycle of hypes. Every few years a new holy cow is sent through the village to bring salvation to all of us and rid us of every problem (un)imaginable.
To give a few examples:
Client-Server, SPAs, Industry 4.0, Machine Learning, Agile, Blockchain, Cloud, Managed Languages
To me LLMs are nice, though no revelation.
I can use them fine to generate JS or Python code, because apparently the training sets were big enough, and they help me by writing boilerplate code I was gonna write anyway.
When I try to use them for Rust or Zig, though, they fall extremely short.
LLMs are extremely overhyped. They made a few people very rich by promising too much.
They are not AI by any means; that label is marketing.
But they are a tool, and they should be treated as such. Use them when appropriate, but don't hail them...
You were previously talking about AI being both a bubble and useful. For reference, Wikipedia defines a bubble as "a period when current asset prices greatly exceed their intrinsic valuation". I find that hard to reason about.

One way to think about it is that all AI does is create economic value, and for it to be useful it would have to create more economic value than it destroys. But that's hard to reason about without knowing the actual economics of the business, which the labs are not very transparent about.

On the other hand, all that infrastructure building by the big players suggests some confidence that we are well past "is it going to be useful enough?". That is not what reasonable people do when they think there's a bubble; at least, doing so would be unprecedented.
And that's why I was asking.
It is genuinely a useful technology. But it can't do everything and we will have to figure out where it works well and where it doesn't
For myself, I am not a huge user of it. But on my personal projects I have:
1) built graphing solutions in JavaScript in a day despite not really knowing the language or the libraries. This would have taken me weeks (elapsed) rather than one Saturday.
2) finished a large test suite, again in a day that would have been weeks of elapsed effort for me.
3) figured out how to intercept messages to alter default behaviour in a Java Swing UI, where Googling didn't help.
So I have found it to be a massive productivity boost when exploring things I'm not familiar with, or automating boring tests. So I'm surprised that the study says developers were slower using it. Maybe they were holding it wrong ;)
I prefer working with AI, but it ain't perfect, for sure.
AI is currently being treated as if it's a multi-trillion dollar market. What if it turns out to be more of a, say, tens of billions of dollars market?
If it is being treated as a multi-trillion-dollar market, and that scale is necessary to justify the current investments, then its turning out to be a tens-of-billions-of-dollars market would make it not useful.
We can go to the most extreme example: human life, which is presumably invaluable. That would mean that, no matter what, if we have an effective treatment for a life-threatening disease, it's useful. But that clearly isn't so: if a single treatment cost the GDP of an entire country, we should clearly not administer it, even if we technically could. The treatment is simply not useful.
For AI the case is much simpler: if the AI that we are currently building will in effect have destroyed economic value, then it will not have been useful (because, as far as I can tell, the minimum promise of AI is positive economic value).
What else could it possibly do for us?
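The usefulness criterion being argued here is just net value. A trivial sketch, with hypothetical numbers:

```python
# A technology is "useful", in this comment's sense, only if the value it
# creates exceeds its total cost. All numbers below are hypothetical.
def net_value(value_created: float, total_cost: float) -> float:
    return value_created - total_cost

# A life-saving treatment priced at an entire country's GDP:
print(net_value(value_created=1e9, total_cost=2e13) > 0)   # False
# The same treatment at a sane price:
print(net_value(value_created=1e9, total_cost=1e5) > 0)    # True
```

The same test applied to AI is the comment's point: if total cost exceeds the value created, the answer is negative regardless of how impressive the technology is.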
I had a professor decades ago who was near retirement who related how when he was an undergraduate he had to write a paper about what humans would do with all their newfound free time since they would only need to work a dozen hours a week. I’m sure similar conversations were had at the onset of steam power and electricity; we’ve been crafting the same pipe dream for generations.
My point is that we should care about quality of life as the main measure. Economic output is a proxy for that, and sometimes a poor one.
Counterpoint: the AI fanboys and AI companies, with all their insane funding, couldn't come up with a better study or a bigger sample size, because LLMs simply don't help experienced developers.
What follows is that the billion-dollar companies just couldn't produce a better study: either they tried and didn't like the productivity numbers not being in their favor (very likely), or they are so sloppy and vibey that they don't know how to run a proper study (I wouldn't be surprised; see ChatGPT's latest feature, "study mode", which got its own blog post! You know the level is not very high).
Again, until there is a better study, the consensus is that LLMs are a 19% productivity drain for experienced developers, and if they help certain developers, then most likely those developers are not experienced.
How's that for an interpretation?
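For what it's worth, the 19% figure translates into time like this. The 19% comes from the METR study linked elsewhere in the thread; the baseline hours are made up for illustration:

```python
# The METR study reported that experienced developers took ~19% longer
# on tasks when using AI tools. Baseline hours below are illustrative.
baseline_hours = 10.0
with_ai_hours = baseline_hours * 1.19             # tasks took 19% longer
print(round(with_ai_hours, 2))                    # 11.9

# Equivalent throughput: work completed per hour relative to baseline.
print(round(baseline_hours / with_ai_hours, 2))   # 0.84, i.e. ~16% less
```

So "19% slower" means getting roughly 84% of the baseline work done per hour, which is the gap the commenter is calling a productivity drain.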
There seems to be an attitude, or at least a pretended attitude, amongst the true believers that the heretics are dooming themselves, left behind in a glorious AI future. But the AI coding tools du jour are completely different from the ones a year ago! And in six months they'll be different again!
- Cause currently what we see in OSS is LLM trash. https://www.reddit.com/r/webdev/comments/1kh72zf/open_source...
- And a large majority of users don't want that copilot trash in their default github experience: https://www.techradar.com/pro/angry-github-users-want-to-dit...
At what point will that trash become gold? Five more years? And if it doesn't, at what point does trash just stay trash?
- When there is a study showing that trash is actually sapping 19% of your performance? https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
- When multiple studies show using it makes you dumb? https://tech.co/news/another-study-ai-making-us-dumb
Cause I am pretty sure NFT still has people who swear by them and say "just give it time". At what point can we confidently declare that NFTs are useless without the cultist fanbase going hurr durr? What about LLMs?
My experience is that, in many cases, people who are very good at doing something by hand are excellent at the process of doing that thing by hand, not generally excellent at doing that thing or talking about that thing or teaching others how to do that thing. And I've found it to be true (sometimes particularly true) for people who have done that thing for a long time, even if they're actively interested in finding new and better ways to do the work. Their skills are fine-tuned for a manual way of working.
Working with AI feels very, very different from writing software by hand. When I let myself lean into AI's strengths, it allows me to move much faster and often without sacrificing quality. But it takes constant effort to avoid the trap of my well-established comfort with doing things by hand.
The people who are best positioned to build incredible things with AI do not have or do not fall into that comfortable habit of manual work. They often aren't expert engineers (yet) in the traditional sense, but in a new sense that's being worked out in realtime by all of us. Gaps in technical understanding are still there, but they're being filled extremely fast by some of the best technology for learning our species has ever created.
It's hard for me to watch them and not see a rapidly approaching future where all this AI skepticism looks like self-soothing delusion from people who will struggle to adjust to the new techniques (and pace) of doing work with these tools.
What do you consider the strengths you're leaning into?
> Many knowledge-work tasks are harder to automate than coding, which benefits from huge amounts of training data and clear definitions of success.
which implies that "coding" is not knowledge work. If "coding" is understood as the mere typing of requirements into executable code, then that simply is not the bottleneck, and the gains to be had there are marginal (check out Amdahl's law). And if "coding" is to be understood as the more general endeavour of software development, then the people making these statements have no fucking idea what they're talking about and cannot be taken seriously.
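Amdahl's law makes the bottleneck point concrete. A minimal sketch, with assumed numbers for how much of development is "mere typing" and how much an LLM speeds that part up:

```python
# Amdahl's law: the overall speedup when only a fraction p of the work
# is accelerated by a factor s. The inputs below are assumptions for
# illustration, not measurements.
def amdahl_speedup(p: float, s: float) -> float:
    """Overall speedup when fraction p of the work runs s times faster."""
    return 1 / ((1 - p) + p / s)

coding_fraction = 0.2   # assumed: typing code is ~20% of development
llm_speedup = 5.0       # assumed: an LLM makes that part 5x faster

print(round(amdahl_speedup(coding_fraction, llm_speedup), 2))   # 1.19
```

Even a generous 5x speedup on the typing itself yields only ~19% overall under these assumptions, because the other 80% of the work (design, review, debugging, communication) is untouched.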
Yeah, there could be a sell-off of tech, but the people who sold will reinvest elsewhere.
Also, 401(k)s still exist. Every week people get paychecks and automatically buy stocks, no matter what the market situation. The only thing that can bring that down is fewer people having jobs with retirement plans. If AI busts, there will be some amount of re-hiring somewhere to cover it.
Could be scary for people with tech stocks, but less scary with index funds going long term.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
(@dang, why can't I link it under https://news.ycombinator.com/item?id=45161656 ?)
This feels like confusing the economy with the markets. Now, a market crash would cause economic fallout, but the markets are not the economy.
I think what many people are missing is that everybody values AI companies as if they're special, while I see customers approaching it as a commodity: once a task is done, it's done. It has about as much lock-in as a brand of toilet paper, and the economics look more like steel production than VPSes or social media.
There are some qualitative differences, but none that seem to last for more than six months. On the flip side, the costs are energy and location, and scaling won't bring those down.
In a twist of irony, big tech might structurally reduce their high profit margins because of AI.
I'm bullish on AI, and competition is great for consumers in the end.
But first the bubble has to pop, and it is going to hurt.