The (economic) AI Apocalypse Is Nigh
Posted 3 months ago | Active 3 months ago
pluralistic.net | Story | High profile | Heated, negative | Debate | 85/100

Key topics: AI Bubble, Market Crash, Tech Investment

The article discusses the potential economic apocalypse caused by the AI bubble, with many commenters debating the likelihood and potential consequences of a market crash.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 22m after posting. Peak period: 93 comments (Day 1). Average per period: 32. Comment distribution: 160 data points, based on 160 loaded comments.
Key moments
1. Story posted: Sep 27, 2025 at 6:30 PM EDT (3 months ago)
2. First comment: Sep 27, 2025 at 6:52 PM EDT (22m after posting)
3. Peak activity: 93 comments in Day 1, the hottest window of the conversation
4. Latest activity: Oct 7, 2025 at 10:03 AM EDT (3 months ago)
ID: 45399893 | Type: story | Last synced: 11/20/2025, 6:27:41 PM
Check out /r/wallstreetbets for expert advice on this. /s
But also: > "So, you're saying a third of the stock market is tied up in seven AI companies that have no way to become profitable and that this is a bubble that's going to burst and take the whole economy with it?"
> I said, "Yes, that's right."
That is something different in this case, isn't it? Those seven companies making up a third of the market do not need to become profitable; they already are insanely profitable. Mostly they invest a lot in AI, but if that doesn't pay off, all but Nvidia have their day jobs to go back to.
It might be worth it just to call it now. All you really have to do is get out of the S&P, you don't have to get out of everything.
This is the crux of bubbles: timing, and where to move assets so they have protection.
1. "Time in market beats timing the market."
2. When diversifying, your profession is already part of your portfolio.
There's also the political mismanagement of the United States, but that's a whole 'nother can of worms.
I did a lot of projects with Kafka (big data) in cloud environments and companies had big, pie in the sky dreams that came crashing back to reality when they got the bill for compute services. It's happened several times on projects I was on.
Now however, the biggest limit for AI workloads is GPU memory capacity (and bandwidth). The billions invested are going into improving this aspect faster than any other. Expect GPUs with a terabyte of ultra-fast memory by the end of the decade. There are lots… and lots… of applications for something like that, other than just LLMs!
Real estate and crypto on the other hand...
If it’s just profits getting invested + some VC exuberance… I don’t actually know if it matters. If zuck simply shut off the money spigot and never spoke on ai again… would anything actually happen?
(per Bloomberg)
That is even worse. There are many so-called "AI companies" drowning in token spend, and the majority of them bake lots of assumptions into their pitches when they go to raise more money from VCs.
What if the VCs say no?
What if 90% of all these startups get competition from a frontier AI lab that undercuts them? We are seeing this with Cursor and Anthropic already.
What if early-stage startups cannot afford the $100K per H-1B hire anymore AND cannot hire remote overseas due to the HIRE Act?
Additionally, we are going to find out what does not mix well with AI, and that will inevitably cause a crash that could come unexpectedly.
> Real estate and crypto on the other hand...
At least we do *know* that both of them do not mix well together.
AI + layoffs + mortgages on the other hand...
As a corporate finance and valuation geek, I'll warn you now: don't try to time mood and momentum. That's what is driving much of the valuations being thrown around.
If this blows up big time and it is found that the Big Tech firms were operating on lies and false hope, there will be consequences, in the form of shareholders demanding cash returned and setting limits on the cash balance held by Google et al. Apple has been smart to stay out of this nonsense and avoid M&A.
Investing in projects with negative NPV destroys the wealth of shareholders.
Private companies soaring to $100m ARR in 12 months is commonplace now. That's what's driving the valuation.
Uber and Amazon had a very logical path to get there.
The reinvestment is so high that once you tack it onto the earnings, you're in a fat negative. What does that mean? You will eat into the cash balance and eventually have to go raise more.
That is a 1999-like bubble and how you get 75 - 90% of these companies crashing when the music stops.
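To make the reinvestment point concrete, here is a minimal sketch of the standard free-cash-flow-to-the-firm (FCFF) calculation the commenter is alluding to. All of the input numbers below are made up purely for illustration:

```python
def fcff(ebit, tax_rate, depreciation, capex, delta_nwc):
    """FCFF = EBIT * (1 - tax rate) + D&A - capex - change in net working capital."""
    return ebit * (1 - tax_rate) + depreciation - capex - delta_nwc

# A hypothetical company with positive accounting earnings but heavy reinvestment:
# EBIT of 500 still yields deeply negative cash flow once 900 of capex is deducted.
print(fcff(ebit=500, tax_rate=0.21, depreciation=100, capex=900, delta_nwc=50))
```

With reinvestment (capex plus working-capital growth) exceeding after-tax operating income, the firm must fund the gap from its cash balance or new capital, which is exactly the dynamic described above.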
> Private companies soaring to $100m ARR in 12 months is commonplace now. That's what's driving the valuation.
We don't even know if that is even real to begin with. Even if it is, that revenue can be lost as quickly as it is gained.
This happened to Hopin and other companies who grew extremely quickly and then their valuations crashed.
The question you should be asking yourself is: even after looking at the competition, what are the retention and switching costs of these "$100m ARR in 12 months" companies if a competitor moves into their core business?
Nobody is predicting that AI is going to do that. One thing I hadn't considered before is how much it was in google's interest to overestimate and market the impact of AI during their antitrust proceedings. For the conspiratorially minded (me), that's why the bottom is being allowed to drop out of the irrational exuberance over AI now, rather than a couple months ago.
... Now that I hear it out loud, I can't help wondering whether it's something we should be thinking about.
Subsidization to destroy competitors followed by lock-in is obvious, but is there any way these systems could turn professionals into serfs?
I am humbled by how myopic I was in 2010, cheering for a taxi-hailing smartphone app to create consumer surplus over ordering taxis by calling taxi companies.
Comparing the two makes as much sense as comparing how a $500k rolls royce and a $1k shitbox both get you from point A to point B.
This is a pattern where people have their pre-loaded criticisms of companies/systems and just dump them into any tangentially related discussion rather than engaging with the specific question at hand. It makes it impossible to have focused analytical discussions. Cached selves, but for everything.
Phew, I'm so sad I was an Uber critic from early on...
Now, it is totally possible that their behavior eventually creates a backlash which then affects their business, but that is still a different discussion from what was discussed before.
The original plan worked because in the bait-and-switch phase they were visibly cheaper, so over the years people's mental and speech model changed from "call me a taxi" to "call me an Uber". But at least in my local market, the price difference between a taxi and an Uber in 2025 is negligible.
Lyft did the same thing, got a bunch of free rides for a while with them, too.
https://jjlegal.com/blog/rideshare-vs-taxis-understanding-ac...
They found the niche and market to operate in and are running with it until the next thing “creatively destroys” their business model.
That’s a far cry from the multi-trillion dollar hype bubble surrounding AI.
Back when everybody got into website building, Microsoft released software called FrontPage, a WYSIWYG HTML editor that could help you build a website, and some of its backend features too. With it you could create a complete website with a home page, news pages, and guestbooks with ease, compared to writing "raw" code.
Nowadays, however, almost all of us are still writing HTML and backend code manually. Why? I believe it's because the tool was too slow to fit a quick-moving modern world. It took Microsoft weeks of work just to come out with something that poorly mimicked what an actual web dev invented in an afternoon.
Humans are adaptive, tools are not. Sometimes a tool can beat humans in productivity; sometimes it can't.
AI is still finding its use cases. Maybe it's good at acting like a cheap, stupid, spying secretary for everyone, and maybe it can write some code for you, but if you ask it to "write me a YouTube", it just can't help you.
The problem is, a real boss/user would demand "write me a YouTube" or "build me a Fortnite" or "help me make some money". The fact that you have to write a detailed prompt and then debug its output is exactly why it's not productive. That it can only help you write code, instead of building an actually usable product from a simple sentence like "the company has decided to move to online retail; you need to build a system to enable that", is proof of LLMs' shortcomings.
So AI has limits, and people are finding that out. After that, the bubble will shrink to fit its actual value.
I think the bubble will be defined by whether these investments pan out in the next two years, or whether we just get small incremental progress like GPT-4 to GPT-5, not by what products are made with today's LLMs. It remains to be seen.
Uber was undercutting traditional taxis either through driver incentive or cheaper pricing. Many hot takes were around the sustainability of this business model without VC money. In many places this turned out to be true. Driver incentives are way down and Uber pricing is way up.
That said, this is also conflating one company with an industry. Uber might have survived but how many ride sharing companies have survived in total? How many markets have Uber left because it couldn’t sustain?
In a bubble the destruction is often that some big companies get destroyed and others survive. For every pets.com there is one Amazon. That doesn't mean Amazon is a good example for saying the naysayers during the dot-com bubble were wrong.
Uber was undercutting traditional taxis because, at least in the US, traditional taxis were a horrible user experience. No phone app, no way to give feedback on the driver, horrible cars, unpredictable charges... This was because taxis had a monopoly in most cities, so they really did not care about customers.
The times when Uber was super-cheap have long passed, but I still never plan to ride regular taxis. It's Waymo (when available) or Lyft for me.
Uber sank, overall, less than $100 billion over nearly two decades before reaching profitability.
By analogy (which is basically anecdotal evidence but with cognitive rhyme) we should have profitable LLMs in about 320 years.
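The arithmetic behind that punchline, spelled out. The Uber figures are from the comment above; the assumed total LLM spend (roughly 16x Uber's burn, which is what the 320-year figure implies) is my own illustrative number:

```python
# Back-of-envelope behind the "320 years" joke: scale Uber's
# time-to-profitability by the ratio of total capital burned.
uber_total_burn = 100e9    # < $100B, per the comment
uber_years = 20            # ~2 decades, per the comment
llm_total_burn = 1.6e12    # assumed industry-wide LLM spend, ~16x Uber's

years_to_profit = uber_years * (llm_total_burn / uber_total_burn)
print(years_to_profit)
```

The "analogy" is of course a joke about linear extrapolation, not a forecast, but it shows how much larger the AI build-out is than the ride-sharing subsidy wars.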
They did manage to offload costs onto weaker parties, partly by simply ignoring laws and hoping it would work out for them. It did, but it was not exactly some grand inspiring victory; more the success of "some don't have to follow the law" corruption.
It makes one tempted to take the sky is falling as a buy signal.
You get a positive feedback loop where the industry hypes their thing, which causes the public to buy in, which causes stocks to go up and the industry to hype more and the public to buy more. Then people point out that prices are too high and capital misallocated, but that doesn't stop the feedback loop, so it goes on longer.
It has to stop at some point though, often because the buyers run out of money to buy with.
Then it goes into reverse: falling prices put off buyers, the industry doesn't get new cash flow to pay for its commitments to GPUs/office leases/mortgages, and some of them go bust, which puts off buyers even more. Then eventually, after around three years, it levels out.
I'm a believer in the internet, housing and AI but during the bubbles you get money misallocated on stuff like pets.com or maybe OpenAI spending zillions on data centers to provide free idle chat to the public. Money on fundamental AI research is probably good but all those data centers... dunno.
We see companies layoff workers for all sorts of short-sighted reasons. They'll mass layoff to reduce labor costs for short term profits and stock price increases, so the execs and shareholders can cash out. AI is just the current reason the executive class has decided to use for the layoffs they were going to do regardless.
Performance is difficult to measure and slow to materialise. At the same time, everyone, especially senior leadership and managers, is desperately competitive, even where that competition is on the perception rather than reality of performance. There's a very strong follow-the-herd / follow-the-leader(s) mentality, often itself driven by core investors and creditors.
A consequence is a tremendous amount of cargo-culting, in the sense of aping the manifest symbols of successful (or at least investor-favoured) firms and organisations, even where those policies and strategies end up incurring long-term harms.
Then there's the apparent winner-take-all aspect of AI, which if true would result in tremendous economic power, if not necessarily financial gains, to a very small number of incumbents. Look at the earlier fallout of the railroad, oil, automobile, and electronics industries for similar cases.
(I've found over the years various lists of companies which were either acquired or went belly-up in earlier booms, they're instructive.)
NB: you'll find fad-prone fields anywhere a similar information-theoretic environment exists: fashion, arts, academics, government, fine food, wine collecting, off the top of my head. Oh, and for some reason: software development.
Also, the DEI massacre is probably going to develop (or has developed) into a full scale HR/Social PR massacre. Instead of getting yelled at for doing the wrong thing, better to do nothing but make more money. And a side-benefit is that firing all of those people makes it even easier to fire more people. (Is that the singularity?)
I don't doubt that some industries are going to be nearly wiped out by AI, but they're going to be the ones that make sense. LLMs are basically super-google translate, and translators and maybe even language teachers are in deep trouble. In-betweeners and special effects people might be in even more trouble than they already were. Probably a lot more stuff that we can't even foresee yet. But for people doing actual thinking work, they're just a tool that feeds back to you what you already know in different words. Super useful to help you think but it isn't thinking for you, it's a moron.
Well, depends on which lesson. "The company can still run" or "we actually won't build anything new for years".
Twitter released a couple things that were being worked on before the acquisition, and then nothing else (grok comes from a different company which later was merged into it, but obviously had different employees).
That companies can be kept on KTLO mode with only a skeleton crew?
I think everybody knew that already. The hot takes that Twitter was going to disappear were always silly, probably from people butthurt that a service they liked was being fundamentally changed.
Beautiful description of AI. It’s the tech equivalent of the placebo effect. It does truly work for some, until you look closely and it’s actually a bunch of hot air.
Is a placebo worth a trillion dollars?
Not saying you're wrong, though...
Even if the big AI companies turn off their APIs, people will still be able to run local models as well as some other, new business spun up to run them as SaaS.
I also just enjoy using them for bouncing ideas off of them and doing sanity checks on all sorts of topics, personal and work-related. Sometimes they spark a better idea that I may not have had otherwise. I will still be using them after the bubble bursts.
That being said, I'm also fine if all the current AI companies implode and I'm just running an OSS model locally.
I'm sure there are others who find more value in it; however, I don't think that group of people is enough to get OAI to be free cash flow to the firm (FCFF) positive any time soon. Note this is not accounting profit: FCFF takes reinvestment into account, and is the cash profit left over after it.
And since the entire US economy is being propped up by AI investing, it’s going to be a disaster.
This doesn’t square entirely with the earlier claim that AI companies have (and will continue to have) “dogshit unit economics”.
If you have a bunch of cheap “applied statistician” labor (kind of a reductive take, btw), cheap GPUs, and powerful open source models, it is a near certainty that companies would achieve favorable unit economics by optimizing existing models to run much more efficiently on existing GPUs.
I happily pay $20/month for Google One to use Gemini 2.5 Pro. I don’t really need it to be a whole lot better. It’s a great product. If they can deliver inference of that level with positive margin (and keep it ad free), it’s a viable business.
Investors will likely lose billions, if not trillions, but I don’t think the industry is inherently unprofitable - I just don’t think anybody has been incentivized to optimize for cost yet. Why would you, when investors continue to throw money at you to scale?
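A rough sketch of what "optimizing for cost" has to achieve for a $20/month plan to carry a positive margin. Every number here (the token volume and the blended inference cost) is an assumption for illustration, not a real figure from any provider:

```python
# Hedged unit-economics sketch for a hypothetical $20/month subscription.
price_per_month = 20.00
tokens_per_month = 2_000_000        # assumed monthly consumption of a heavy user
cost_per_million_tokens = 5.00      # assumed blended inference cost

inference_cost = tokens_per_month / 1_000_000 * cost_per_million_tokens
gross_margin = (price_per_month - inference_cost) / price_per_month
print(f"inference cost ${inference_cost:.2f}/month, gross margin {gross_margin:.0%}")
```

Under these made-up inputs the subscription clears a positive gross margin, but the result flips negative if usage doubles or inference cost stays high, which is why the "nobody has optimized for cost yet" caveat matters.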
So you think those $20/month are generating profits?
Because Google is burning its own cash.
That “if” is doing tons of heavy lifting here.
I have an idea that the market may actually start to react positively to bad job numbers, as that could be taken as a signal that companies are shedding people to replace them with AI (even if that's not the actual reason for the bad numbers). If job numbers started suddenly improving and the unemployment rate dropping, it could be taken to mean that AI is not going to replace everyone after all.
I get that rationale in some bubbles, as it means people are not parking their money as cash where they can buy the dip and support the market (tell me if I'm wildly off). But I think this case is different, because there are actually VAST sums of money being spent on AI by some very big players who will need a return.
I said, "Yes, that's right."
Which companies are those?
I dispute the "no way to become profitable" claim (literally all profitable right now, it's the private ones which probably aren't). I do have other negative sentiments for most of them, but those are the seven that represent about that share of the S&P500.
The author’s thesis seems to lack rigor.
See, I think this is wrong. The unit economics of LLMs are great, and more than that, they have a fuckton of users with obvious paths to funding for those users that aren't paying per unit (https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...). The problem is the ludicrous up-front over-investment, none of which was actually necessary to get to useful foundation models, as we saw with DeepSeek.
So true.
Yes it prints whatever amount they want, even trillions. Magically(!)
> The index aims to provide a comprehensive and balanced representation of the U.S. equity market by including the largest 500 publicly traded equity securities, while specifically excluding the seven largest technology companies commonly referred to as the “Magnificent 7”.
Up 12% in the last year. Unfortunately, it's ten times as expensive (0.35%) as a straight S&P 500 ETF (e.g. VOO, 0.03%).
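The fee gap compounds, though it's small next to the return difference between the two indexes. A quick sketch of the drag from a 0.35% vs 0.03% expense ratio, assuming (purely for illustration) the same 12% gross annual return for both funds:

```python
# Hedged sketch of expense-ratio drag over a 10-year hold.
def final_value(principal, gross_return, expense_ratio, years):
    # Approximate the expense ratio as a simple deduction from the gross return.
    return principal * (1 + gross_return - expense_ratio) ** years

cheap = final_value(10_000, 0.12, 0.0003, 10)   # 0.03% expense ratio (e.g. a VOO-like ETF)
costly = final_value(10_000, 0.12, 0.0035, 10)  # 0.35% expense ratio (the ex-Mag-7 fund)
print(f"0.03% fund: ${cheap:,.0f}; 0.35% fund: ${costly:,.0f}; drag: ${cheap - costly:,.0f}")
```

Over a decade the extra 32 basis points costs a few hundred dollars per $10k invested, which is real but far smaller than the performance gap between holding and excluding the Magnificent 7.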
You're suggesting that _governments_ will bail out the AI industry? I mean, why would they do that?
Sure, the stock price wouldn't be to the moon anymore, but that doesn't materially affect operations if they're still moving product, and gaming isn't going anywhere.
The stock price of a company can crash without it materially affecting the company in any way... provided the company isn't taking on expansion operations on the basis of that continued growth. Historically, Nvidia has avoided this.
Probably the hordes of startups would be most impacted. It isn't clear the government would bail them out.
They are not going to zero, but they can lose a lot from the current price.
I am sure that you already heard this sort of argument for AI. It's a way to bait that juicy government money.
"unlimited QE"
This hits home. A lot of the supposed claims of improvements due to AI that I see are not really supported by measurements in actual companies. Or they could have been just some regular automation 10 years ago, except requiring less code.
If anything I see a tendency of companies, and especially AI companies, to want developers and other workers to work 996 in exchange for magic beans (shares) or some other crazy stupid grift.
If companies are shipping AI bots with a "human in the loop" to replace what could have been "a button with a human in the loop", but the deployment of the AI takes longer, then it's DEFINITELY not really an improvement, it's just pure waste of money and electricity.
Similarly, what I see different from the pre-AI era are way too many SV and elsewhere companies having roughly the same size and shipping roughly the same amount of features as before (or less!), but are now requiring employees to do 996. That's the definition of loss of productivity.
I'm not saying I hold the truth, but what I see in my day to day is that companies are masters at absorbing any kind of improvement or efficiency gain. Inertia still rules.
As for predicting the moment, the author has made a prediction and wants it to be wrong. He expects the system will continue to grow larger for some time before collapsing, and would prefer that this timeline be abbreviated to reduce the negative economic impacts. He is advising others on how to take economic advantage of his prediction and is likely shorting the market in his own way. It may not be options trading, but making plans for the bust is functionally similar.
His points are not backed by much evidence
The 2nd link seems reasonable to me? Why does a study about 25k workers in Denmark (11 occupations, 7k workplaces) not count as evidence? If there was a strong effect to be found globally, it seems likely to be found in Denmark too.
Also, what about the other links? The discussions about the strange accounting and lack of profitability seem like evidence as well.
If anything, this article struck me as well-evidenced.
Wile E Coyote sprints as fast as possible, realizes he zoomed off a cliff, looks down in horror, then takes a huge fall.
Specifically I envision a scenario like: Google applies the research they've been doing on autoformalization and RL-with-verifiable-rewards to create a provably correct, superfast TPU. Initially it's used for a Google-internal AI stack. Gradually they start selling it to other major AI players, taking the 80/20 approach of dominating the most common AI workflows. They might make a deliberate effort to massively undercut NVIDIA just to grab market share. Once Google proves that this approach is possible, it will increasingly become accessible to smaller players, until eventually GPU design and development is totally commoditized. You'll be able to buy cheaper non-NVIDIA chips which implement an identical API, and NVIDIA will lose most of its value.
Will this actually happen? Hard to say, but it certainly seems more feasible than superintelligence, don't you think?
Tesla is currently trading at 260x earnings, so to actually meet that valuation they need to increase earnings by a factor of 10 pretty sharpish.
They're literally not going to do that by selling cars, even if you include Robotaxis, so really it is a bet on the Optimus robots going as well as they possibly can.
If they make $25k profit per Optimus robot (optimistic), then I think they need to sell about a million per year to make enough money to justify their valuation. Of a product that is not even ready to sell, let alone having found out how much demand there truly is, ramped up production, etc.
For comparison the entire industrial robot market is currently about 500k units per year.
I think the market is pricing in absurdly optimistic performance for Tesla, which they're not going to be able to meet.
(I have a tiny short position in Tesla).
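A back-of-envelope check on the numbers in that comment. The 260x multiple, the $25k profit per robot, and the one-million-unit figure come from the comment itself; the 26x "mature company" multiple is my own assumption:

```python
# Hedged valuation arithmetic: how much earnings growth a 260x P/E implies.
pe_now = 260
pe_mature = 26             # assumed multiple for a mature company (my assumption)
earnings_growth_needed = pe_now / pe_mature   # ~10x, matching the comment

profit_per_robot = 25_000          # "optimistic", per the comment
robots_per_year = 1_000_000        # the comment's required volume
optimus_profit = profit_per_robot * robots_per_year
print(f"earnings must grow {earnings_growth_needed:.0f}x; "
      f"1M robots/year at $25k each adds ${optimus_profit / 1e9:.0f}B/year")
```

That $25B/year of hypothetical robot profit is roughly double the entire current industrial-robot market's annual unit volume times the same margin, which is the comment's point about absurdly optimistic pricing.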
Oracle's share price recently went up 40% on an earnings miss, because apart from the earnings miss they declared $455b in "Remaining Performance Obligations" (which is such an unusual term it caused a spike in Google Trends as people try to work out what it means).
Of the $455b of work they expect to do and get paid for, $300b comes from OpenAI. OpenAI has about $10b in annual revenue, and makes a loss on it.
So OpenAI aren't going to be able to pay their obligations to Oracle unless something extraordinary happens with Project Stargate. Meanwhile Oracle are already raising money to fund their obligations to build the things that they hope OpenAI are going to pay them for.
These companies are pouring hundreds of billions of dollars into building AI infrastructure without any good idea of how they're going to recover the cost.
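Back-of-envelope on those figures. The dollar amounts are from the comment above; the five-year delivery term is purely an assumption for illustration:

```python
# Hedged arithmetic on Oracle's RPO vs OpenAI's current revenue.
rpo_total = 455e9              # Oracle's declared Remaining Performance Obligations
rpo_openai = 300e9             # the portion attributed to OpenAI, per the comment
openai_annual_revenue = 10e9   # OpenAI's current (loss-making) revenue, per the comment
assumed_term_years = 5         # assumed period over which the RPO is delivered

annual_obligation = rpo_openai / assumed_term_years
revenue_multiple = annual_obligation / openai_annual_revenue
print(f"OpenAI would owe roughly ${annual_obligation / 1e9:.0f}B/year, "
      f"{revenue_multiple:.0f}x its current revenue")
```

Even with a generous term, the implied annual payment is a mid-single-digit multiple of OpenAI's entire current revenue, which is why the comment calls the gap "extraordinary".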
Enron?
https://en.wikipedia.org/wiki/Enron_scandal
The second interesting part is also the part you're assuming in your argument. Does the fact that OpenAI doesn't have $300 billion now, nor the revenue/profit to generate that much, matter? Unless there are deals in the background that have already secured funding, this seems like very shady accounting.
I guess we'll find out.
Well... to be fair, it's only really Anthropic (and the also-ran set like xAI) that runs the risk of being over-leveraged. OpenAI is backstopped by Microsoft at the macro level. They might try to screw over Oracle, but they could pay the bill. So that's not going to move the market beyond those two stocks. And the other big player is obviously Google, which has similarly deep pockets.
I don't doubt that there's an AI bubble. But it won't pop like that, given the size of the players. Leverage cycles are very hard to see in foresight; in 2008 no one saw the insanity in the derivatives market until Lehman blew up.
This is not a serious piece of writing.
For the very near term, perhaps, but the large-scale infra rollouts strike me as a 10+ year strategic bet, and on that scale what matters is whether this delivers on automation and productivity.
If it's overt then it's easily filtered out; if it's baked in too deep then it harms response quality.
As a student, you have much more freedom to protest than as an employee, and that is where the resistance must come from.
We also need to take into account that while there is a bubble, most of the insane amounts of investment that were seen in headlines have not materialized.
Nvidia will crash, Tesla will crash (Optimus robot nonsense), but Microsoft and Google should be fine. If there is a bailout, protest again, preferably in physical space and focused on economic topics rather than culture wars (which is what the politicians want you to focus on).
As for the money side, I think it'll come. There is obvious utility (but not autonomy), and the economics of it will find their equilibrium. They always do.
So many AI hucksters these days
> No of course there isn't enough capital for all of this. Having said that, there is enough capital to do this for at least a little while longer.
https://www.wheresyoured.at/openai-onetrillion/
Seems a bit pessimistic. AGI may not be here next year to keep the bubble going but will probably happen in the next decade or two and do much of the stuff advertised. It's like the dotcom bubble - much of commerce, banking and the like did move to the internet but not till a while after the financial bubble burst.
This actually sounds like a kinda cool outcome as long as you aren’t an applied statistician.
I remember having lunch with a guy who was stubbornly holding on to his Nortel stock, which was worth mere pennies by like 2005 or so. Those employees not only lost their jobs, they lost their 401(k)s, which were all in company stock. Anyway, this guy was sure it was going to bounce back. I saw in like 2008 when Nortel finally closed its doors and the stock was de-listed at $0. His dream was dead. I never worked for equity after that time period.
The enormous build out of data centers reminds me of that time period. Yeah, it's all going to collapse.
28 more comments available on Hacker News