The AI Bubble Is 17 Times the Size of the Dot-Com Frenzy and Four Times Subprime
Key topics
The article claims that the AI bubble is 17 times larger than the dot-com frenzy and four times the size of the subprime crisis, sparking a heated discussion on the potential risks and consequences of the current AI market valuation.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 2m after posting. Peak period: 138 comments in 0-6h. Average per period: 22.9. Based on 160 loaded comments.
Key moments
- Story posted: Oct 6, 2025 at 12:46 PM EDT
- First comment: Oct 6, 2025 at 12:48 PM EDT (2m after posting)
- Peak activity: 138 comments in the 0-6h window
- Latest activity: Oct 10, 2025 at 9:05 AM EDT
The AI bubble is 17 times the size of the dot-com frenzy, analyst says https://news.ycombinator.com/item?id=45465969 - 2 days ago, 94 comments
There is some higher quality discussion there, and still ongoing.
Edit: I think it's fair to say there's a fair bit of antics by companies that are actually illegal, like in every bubble, hidden by the mania. They get exposed as the tide goes out.
Suppose everything else that contributed to the GFC was straight-up fraud: prevent all of it, and the crisis is basically just as bad.
It's really a shame that it's the regular people who are going to suffer the majority of the harm when this bubble pops, while the Wall Street insiders get off mostly scot-free yet again. But I really don't want the government trying to prop up these assholes again.
The problem, of course, is that no one wants to take a lower return on investment.
Let's say you heard Trump's talk of tariffs, got spooked when he won the election, and moved your investments. The S&P 500 has gone up ~33% since election day. How much growth can you realistically miss before correctly hedging against a crash still net loses you money?
And that's assuming you pick the correct hedge. If the US economy well and truly crashes beyond return, to the point where the S&P 500 is actual garbage that will not recover in the next 5-10 years, that might bode poorly for the dollar, so that's it for your treasuries, money market funds, CDs, etc. Gold is always an option, until it's at an all time high because just as the economy is roaring along a ton of people are nervous. So, you can invest in gold, but it's expensive, which means it could actually dip if we get stability. Ok, so you invest in foreign markets - but the Global Financial Crisis was global, the Great Depression was global, everything was global.
What do you actually hedge in? Oil? Well, the rapid development of green energy might actually, finally render that an unsafe investment. Green energy? Think again, the president is on a crusade against it. There's literally nothing that's actually a safe investment, and even if you did find one, the big crash might not happen until everyone else riding the wave has doubled or tripled their investment, by which point they'll be on par with or even above you post-crash. I'm actually very happy I'm nowhere near retiring, since however else you feel about the current governing of the US, I think it would be hard to deny that they're fighting a war against conventional wisdom, and that makes investing very scary.
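The break-even arithmetic behind that question can be sketched directly (the ~33% run-up is the figure quoted in the comment above; the calculation is a simplification that ignores dividends, taxes, and interest earned on cash):

```python
# If the market rose by `gain` while you sat in cash, the subsequent
# crash must exceed this break-even drop for sitting out to have paid off.
def breakeven_drop(gain: float) -> float:
    """Fractional fall needed to bring prices back to your exit point."""
    return 1 - 1 / (1 + gain)

# ~33% run-up since election day (figure from the comment above)
drop = breakeven_drop(0.33)
print(f"the crash must exceed {drop:.1%} before the hedger comes out ahead")
```

In other words, after missing a 33% run-up, even a fall of roughly a quarter only gets the hedger back to even, which is the commenter's point about how costly an early hedge can be.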
Don't forget that bubble popping doesn't necessarily mean that the sector is dead. The Dot Com bubble popped but people are still selling things on websites. It just means the hype and unrealistic expectations come crashing down to Earth. AI is going to be with us from now on, but it won't become a God with unlimited productivity next year.
They are, number one, to stop banks collapsing and, number two, sometimes to try to stop unemployment going up too much.
The way people in the US government have been speaking and acting goes against that hope. Instead, it seems to be unanimous that AI will be the foundation of the economy, that the US can't lose its leadership on it for even a second (because when it takes over, it will take over in a second), and that it's worth breaking every rule, spending every cent, and cannibalizing every other industry if necessary to keep it.
Honestly, you have to deal with the delusional con-man in power ASAP.
This is the widespread sentiment, yes, and why the bubble is so crazy. The feeling a lot of people have is: even if shit hits the fan, rather than let stocks/assets decrease in value relative to cash, governments will just print money and keep stocks/assets "stable" while making cash worthless. The US proved they're willing to do this during COVID. What you're betting on by holding assets despite the bubble is that they're willing to do it again.
Watch what happens if a breakthrough comes out of China, Russia or a shed in Nigeria. What is happening on Wall Street or who is losing cash or jobs will drop instantly down the list of priorities. Money will be printed to the moon.
Of course not, but with the Fed always standing at the ready to print more money, the answer is yes. The downside is persistent inflation. Inflation and default are two sides of the same coin.
In short, it's companies turning to issuing debt, fuelled by an increase in M&A activity and a potential IPO of OpenAI, followed by collapse as tricks to increase revenue fail to meet expectations and companies that mismanage debt go bust.
Is that actually true? And is most of it because of the compute requirements of the models or scaling cost due to exponential growth in usage?
I hope it didn't actually cost ten times more to create ChatGPT-5 than it did ChatGPT-4.
You hear sometimes about the AI singularity and how they will continue to become smarter at an exponential pace until they're basically gods in a box, but the reality is we're already well into the top of the S curve and every advance requires more effort than the one before.
But I can't find anything directly released by OpenAI, so maybe these are all just estimates. And presumably include price of hardware.
In the case of the collapse of an AI bubble, I don't see as much of a direct relationship to effects on the average Joe. Yes all those billions spent by tech investors will get written off, the companies heavily invested in AI will shed well paid tech jobs in a sector that was craving talent anyway.
I think the biggest effect would be the fact that all that capital was spent on AI tech rather than productive assets and businesses. That's a big opportunity cost, and would hit growth, but I don't see it wiping out ordinary people in the same way. The pain will be heavily concentrated on investors and for everyone else it will just be a slow drag but not a catastrophe.
The real problem is if there are other negative economic effects that compound with it.
When you say "lost their homes", do you mean "I owned the house and somehow I now own no house and have no money for it, it just evaporated", or do you mean "I took a loan I could not afford, on a house I could not afford, while investing a tiny amount of money or none at all into the deal, and hoping to profit from ever increasing prices, and using my equity as an infinite-money ATM, and when that stopped, the bank took the house back"? If the latter, then what was lost is not "homes" but unrealistic prospects of profits from the thin air. If the former, I'd like to know how exactly a subprime crisis could cause something like that.
There were plenty of people who bought houses at reasonable prices with reasonable down payments and still lost their ass when downstream ramifications took out unrelated businesses.
Actual people are not relying on actual AI. And I doubt many actual people would be hurt by the AI crash.
And, frankly, it's literally the bank's job not to make loans that people will default on too frequently (for their own sake), so if you're not exceptionally knowledgeable about banking, it's not unreasonable to trust your bank and their advisors not to make a loan you won't be able to pay back. Like, sure, you shouldn't trust them not to screw you on the terms and with interest, but banks mostly are trying to make loans they expect to get paid back, and I would personally expect them to have a good idea of how much they can trust me with.
It's not the bank's job though to decide whether it's ok for you to treat the house as a long-term asset which consumes a part of your cash flow, or as a speculative gamble. You can find a bank that will support either, but it's on you to decide which road to take. And if somebody takes the speculative road and loses, then it's not exactly the banks' fault. The adult should take responsibility for their own actions.
> it's not unreasonable to trust your bank and their advisors not to make a loan you won't be able to pay back.
No, it's not reasonable at all. Loan officers do not have a fiduciary duty towards you. They have a fiduciary duty towards the bank, so that's what they worry about - to take care of the bank's interests. Assuming those interests would always align with yours is a dangerous naiveté. There are financial advisors who are fiduciaries - and you can hire one if you need - but you won't find them in your mortgage bank's loan office. Yes, the bank is interested, in most cases, not to produce overtly bad loans - but that doesn't mean they care how you are going to pay it, and there's a good chance they'd sell your loan to another servicer in a year or two anyway. They have no duty to figure out if taking this loan won't harm you; that's your duty.
But your respin is kind of a whopper too. While there were absolutely people cynically leveraging real estate to make a buck, the overwhelming majority of foreclosures in the wake of the '08 crisis were just regular homeowners. They needed a home (maybe they moved, or got married, grew up, downsized, etc... people need homes!). So they called a real estate agent and a bank to figure out what they could get, and everyone told them (correctly) that they could get a great home at a very reasonable price with very little down payment. Because everyone else was doing it. So they did.
Everyone who took an adjustable loan, interest only loan, etc., who didn’t have an exit strategy already in place in case of inability to refinance, had themselves to blame, regardless of whether “everyone was doing it.“ I don’t mean any criticism toward people who happened to lose their jobs and would’ve otherwise been able to continue paying on the loans they’d taken. Nor am I saying it’s OK to take advantage of people who don’t bother to read or understand the assumptions inherent in the contracts that they’re signing. But people were incredibly naïve if they accepted some broker’s verbal assertion that they’ll always be able to refinance the otherwise-unaffordable house on favorable terms in 3 or 5 years or whatever.
Come on. Median homeowners (even median HN commenters) are hard put to even define those terms, much less execute your strategy correctly. This kind of blame-the-dummies caveat emptor absolutism fails in the modern world. It's like demanding people decide on their own medical diagnoses and select treatments from a menu.
We license realtors and banks, regulate mortgage marketing and have a CFPB for a reason.
Also, I think you're underestimating the intelligence of the median person. If a doctor tells me that for $500, they can surgically implant a chip in me that will give me LeBron James-level basketball skills in 3 years, and I say "Cool, cut me open, Doc!" I am partly to blame because I should have known that isn't possible. Yes, the doctor should still be punished. But people should get multiple opinions for facts so obviously too good to be true and only commit to something when they understand the risks.
That doesn't seem like a good faith analog for "I got a 3.2% mortgage with 5% down and payments less than my last rental".
You keep pretending that the idea that the real estate market was internally overleveraged by repackaged derivatives held by investment banks was some kind of obvious thing that regular homeowners were too stupid to see. And I'm telling you it wasn't, because no one saw it, not even the bankers and regulators, until it was too late. Blaming the homeowners for not "understanding the risks" is unfair, but also frankly non-actionable. They'll never be as smart as you want them to be in hindsight, because no one is.
> And I'm telling you it wasn't, because no one saw it, not even the bankers and regulators, until it was too late
That's not true. A lot of people called it unsustainable at the time. A lot of people said there's a bubble. They were laughed at and shouted down, as doomsayers that are just too much of a buzzkill to let people just enjoy a new cheap house. A lot of people didn't buy into the bubble, because they correctly deduced it's not worth it. You don't hear about them for the same reason why the newspapers don't report there wasn't a murder - there's nothing to report. So you hear the stories of those who chose wrong and got hurt - because there's something to report there. But if a responsible family sees a loan too good to be true on a house they can't afford and walks away - you'd never know about it. But they exist. And there should be more of them.
for 3 or 5 years though. That's part of the terms. Nothing outside of that was promised to them on paper.
It's reasonable to expect someone looking at a 5/1 ARM or an 'Interest only for X years' loan to ask "What can I be guaranteed in writing will happen at the end of that period?" The right answer was "Nothing. Interest rates have historically moved between 3% and 22%. Your new payment could be 4x your old rent, or it could get even cheaper. The value of the home could go up or it could go down. By taking this loan you are betting your house, the down payment, and your credit rating on not just one but multiple assumptions: Low rates and continuing appreciation."
That's setting aside the systemic risks that I agree nobody not in the financial world ought to have been expected to understand.
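The payment-reset bet described above can be made concrete with the standard amortization formula (the $300k balance, 3% teaser, and 8% reset below are hypothetical numbers, not figures from the thread):

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortization: P * r / (1 - (1 + r)**-n), monthly compounding."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

# Hypothetical 5/1 ARM: $300k balance at a 3% teaser rate...
teaser = monthly_payment(300_000, 0.03, 30)
# ...resetting to 8% with roughly 25 years remaining
reset = monthly_payment(300_000, 0.08, 25)
print(f"teaser: ${teaser:,.2f}/mo, after reset: ${reset:,.2f}/mo")
```

Even this moderate scenario nearly doubles the payment; the historical 3%-22% rate range quoted above makes the possible spread far wider.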
Yes and no. Yes they were regular homeowners, but they also massively overbid on homes they couldn't afford because it doesn't matter, we'll refinance under new valuation in a couple of years and will only profit from it! And by overbidding, they made the situation worse for more careful buyers, and helped to feed the frenzy. They are not the sole guilty party, there is a lot of guilt to go around, but part of the guilt lies on people who entered into bad deals because they were sure home prices never go down ever, and it doesn't matter how bad the deal is. I've been on a number of realtor presentations at the time that explicitly said things like that. And people bought into it massively. And yes, "everybody else" (well, not literally everybody, but a lot of people) did it.
That's exactly my point. It's still wrong what they did, and if they had exercised more restraint and foresight, and less greed, maybe the size of the problem would have been smaller, and fewer people would have been hurt. I lived through it, and I had those doubts too - should I do what "everybody else" is doing? Should I participate in a clearly unsustainable bubble? Am I an idiot to not jump in at the chance of literally free money? The overwhelming majority faced the same questions, and a non-negligible part of them chose the irresponsible answer. And they got hurt. I feel for them. But I also do not forget it was their choice to make.
In some cases they lost their job, meaning they needed to move to find work, but that would mean selling a house worth less than they bought it for, which means they'd owe the difference, and the mortgage on the new house would be unaffordable anyway due to the increase in mortgage rates.
Then bear in mind the hyper aggressive marketing tactics, and assurances from financial institutions and politicians that this was all fine and there was no risk.
Ultimately though, this has nothing at all to do with my comment. I meant "they lost their homes" and that's all. I didn't assign any blame to anyone, nor did I try to accuse anyone of anything, all I talked about was the potential economic repercussions.
In that case, a prolonged recession may occur (that would've occurred anyway), and the effect will be felt throughout the economy.
But, again, that's just a general recession being triggered by the AI bubble bursting, i.e. AI no longer propping up the economy, so that's not a bad thing. What the results of that are in terms of severity or impact I wouldn't know, I don't think anyone knows.
I just don't see how the broader market is exposed to an AI crash in the way it was exposed to subprime loans. If OpenAI goes belly up is it really taking anyone else down with it?
Source: https://www.economist.com/finance-and-economics/2025/08/18/h...
So I think if there was an AI crash, the US economy goes with it in the short term.
Don't get me wrong; I'm no fan of the billionaires. Eat the rich, etc. But I don't want the billionaires to lose everything suddenly, because I'm 100% sure my 401k will go down with them, and 50% sure my job will.
But I agree with you, the article is too light on details for how inflammatory it is.
NVDA, MSFT, AAPL, META, and GOOG are all heavily investing in AI right now, and together make up 28% of the money tied up in S&P 500 indices. Simply investing in the S&P 500, which many people do, exposes you to meaningful downside risk of an AI bubble pop.
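A rough sketch of what that concentration means for a plain index holder (the 28% weight is the figure from the comment above; the 50% AI drawdown is a purely hypothetical scenario):

```python
# Index-level loss when one slice of the index falls and the rest moves separately.
def index_loss(ai_weight: float, ai_drop: float, rest_drop: float = 0.0) -> float:
    """Weighted average of the drawdowns of the two slices."""
    return ai_weight * ai_drop + (1 - ai_weight) * rest_drop

# 28% of the index in AI-heavy names, hypothetical 50% drawdown, rest flat
print(f"index falls {index_loss(0.28, 0.50):.0%}")
```

And that is before any knock-on effect on the other 72% of the index, which is the comment's point: "just buying the index" is not a neutral position here.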
You can see him talking about the research here https://youtu.be/uz2EqmqNNlE?t=40
The 17x refers to a macro model based on the "cumulative Wicksell spread" that suggests the stock market may be overvalued due to interest rates, nothing about AI specifically.
The youtube talk, and the slides which are from his report are quite interesting, and I think the economic analysis is quite good, though he's not a tech/AI guy.
As far as I can figure, for the Wicksell spread you calculate (annual GDP growth + 2%) - (annual interest rate), then integrate that over time, which gives a graph with bumps on it; the current bump is 17x the size of the one at the time of the dot-com bubble.
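A minimal sketch of that calculation as I read it (the growth and rate series below are made-up numbers purely to show the mechanics, not the analyst's data):

```python
# Cumulative "Wicksell spread": integrate (GDP growth + 2%) - (interest rate)
# year by year; sustained easy money makes the running total bulge upward.
def cumulative_wicksell(gdp_growth, interest_rates):
    total, series = 0.0, []
    for g, r in zip(gdp_growth, interest_rates):
        total += (g + 0.02) - r  # that year's spread
        series.append(total)
    return series

growth = [0.04, 0.03, 0.02, 0.025]  # hypothetical annual GDP growth
rates = [0.01, 0.01, 0.015, 0.02]   # hypothetical annual interest rates
print(cumulative_wicksell(growth, rates))
```

The "17x" claim then amounts to comparing the size of the current bump in this running total against the bump around 2000.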
Artificially low interest rates have stimulated investment into AI that has hit scaling limits, says research firm
He blames "low interest rates," yet interest rates have surged since 2022 to their highest levels in decades. He cannot even get the basic facts right, which kills his credibility at the start.
This also torpedos a common narrative that high interest rates are always bad for asset prices. The difference between 1% vs 5% interest rates does not factor much into VC decisions when the expectations are for 40-100+% annual returns with the hottest AI companies, which far exceeds the additional cost of borrowing. A similar pattern was seen in the '80s and the late '90s, in which high interest rates also coincided with high valuations of tech companies.
"This means a much longer effort at reflation, a bit like what we saw in the early 1990s, after the S&L crisis, and likely special measures as well, as the Trump administration seeks to devalue the US$ in an effort to onshore jobs," he says.
In an attempt to paint a negative picture of impending crisis, he gives the examples of 2001 and 1991, among the mildest recessions ever. The US stock market and economy would go on to boom in 1995, just a few years after the S&L crisis.
If there is a job that AI needs to automate, it's these overpaid and useless analysts.
The plebs can take some inflation, higher taxes, and national debt increases so the hucksters can get some yachts and Lambos. I mean, what were you going to do with that money? Feed your family? Pay rent? Jeeze, get a life...
In this round of con there is not even a way for an average pleb to get any scraps since the AI bubble is almost completely hidden behind private equity. Except maybe Nvidia stock.
The bubble is starting to froth/pop IMO. An incestuous cycle of players propping each other up with circular investments (i.e., I'll loan you money to buy my product so I can sell it to you) began last month with Nvidia "funding" OpenAI datacenters. Actions like that mean they are out of external ways to keep shuffling the debt. Like when the WeWork CEO started loaning the company its own money to buy its own product to rent.
Edit: Oh $hit, AMD just did the same thing with a circular funding deal with OpenAI. https://news.ycombinator.com/item?id=45490549 Channel stuffing. Money printer go brrr. I can't believe there is even discussion of whether there is a bubble at this point. It's literally wolf-of-Wall-Street "Never let them cash out, that makes it real" style deals all over the place.
I predict this will happen among Nations as well. US will provide money to Japan so that Japan can invest that money in US (at Trump's discretion) so that everything is kosher.
We also give out tons of subsidies and tax breaks to lure foreign investment to the US.
> the AI bubble is almost completely hidden behind private equity
How are both of these statements true?
It's complicated. Though you could argue it's all good business. But first, buy my swampland in Florida, please.
Of course I'm reminded of today's announcement about the strategic partnership between AMD and OpenAI which caused AMD's stock price to jump a whopping ~35%.
But many (many) labs around the world are working on alternative chip designs for math processing.
Once a couple open-source chip designs come online, that can compete with Nvidia, it will all come crashing down.
Think Android vs iPhone.
Stoked.
Google has been moving much of what a western user considers “Android” out of the AOSP code since 2013.
https://en.wikipedia.org/wiki/Google_Play_Services
So yes - while there are others who may yet compete in the data centre compute market, nothing else comes close to the monopolistic total vertical integration nvidia has built over the last decade.
Also I'm not sure that foreign markets would be that thrilled by a US collapse either, at least not immediately.
Assuming you're talking decades away, it usually all comes out in the wash.
Now, where you should really potentially worry is if you were retiring imminently and needed to pull out a bunch of money to make that happen. But if you're retiring in 20 years or whatever, and, say, the S&P halves next year, is it really, in the scheme of things, _that_ big a deal?
If you could time it perfectly, you could come out better by selling now and reinvesting after the crash, but bear in mind that you probably cannot time it perfectly. People were predicting the 2008 crash imminently from about 2004 on, say, whereas the dot-com crash went from dark mutterings to chaos in a year or so. These things are very hard to time.
if the upswing doesn't come our lifestyles are all screwed anyway
Instead of waiting for a warmed over post-mortem, long after the bubble pops, before even considering their options.
It's a bubble. Of course there's malfeasance, lies, corruption, etc. Assume criminality.
These endless boom-bust cycles will continue unabated until there's credible threats of doing some hard time.
Fun fact, we live in a society with rule of law. You can't just assume criminality because you don't like something, you have to actually prove it.
Public companies are outright bribing the President to get mergers passed (Paramount), and the whole affair of confiscating TikTok to give it to his buddy at Oracle on the cheap can't be ignored.
On the federal level, there really isn’t any legal standard anymore.
A cynic, though certainly not me, would argue that "bubble" is a euphemism for fraud.
It's very unclear to me whether there is actually more crime during bubbles, or if people want someone to blame so the powers that be investigate harder until they find someone.
So it's not like there isn't a product (there is) and there's no growth prospect (there is). But it is scary how much now hangs in the balance of one bet.
It almost feels like it's going to end badly either way. If the Great AI Bet succeeds, a tiny proportion of the world will own all intellectual power. But if it fails, the impact of the write-offs on the broader economy will be terrifying.
Sure, the internet was destined to be big during the dot-com era, but most companies crashed.
The bubble popping issue would be that there isn’t a good way to recover the capital used to build the AI models.
I mean, unless you go back to "tulips are a sure thing; their prices always go up!".
But buying stocks in an S&P massively invested in AI is essentially the same as "buying AI", when the bubble pops.
There is zero evidence that current "AI" (LLMs) are ever going to become "AGI".
If you want a pathological liar for an "AGI", then sure, LLMs are already there.
Currently we have:
- cross-domain competence
- composable tool usage
- constant improvement without clear signs of stagnation (some count GPT-5 as not much progress, but there are still world models with huge gains compared to the year before)
There are a bunch of things missing for sure, like mechanistic reasoning, better context lengths, determinism, better world modelling, and continuous learning.
The bet is crazy and whole world is gambling on AI, right now, because of those signs of potential. That's precisely why we landed here, the evidence was good enough for big tech to gamble on it...
It depends on who you ask. There is no absolute definition of "AGI". Sam Altman defines it in monetary terms, because that benefits him. I have a very different idea of what "AGI" means. It's really very subjective, so I don't have a definite answer for you. I'm sure you'd define "AGI" differently than I would, so having this discussion is kind of a waste of my time.
The people pouring money into "AI" are doing so in the hope that it will become more reliable someday. My educated guess is that it won't, due to the underlying mechanisms it is built on. Predicting the next word in a sentence according to grammatical rules is a long, long way off from a machine knowing and understanding how truthful the resulting sentence is.
Also I think you will like this interview with founder of Cohere AI, it's much more nuanced and doesn't say AGI is near...more like we are far away, although it's useful. https://m.youtube.com/watch?v=Sw2chzwWLbQ
AI also never meant AGI.
Artificial Intelligence is an entire discipline of the Computer Science field. It encompasses everything from how Pinky, Blinky and Clyde chase Pac-Man, to A* search and similar pure algorithms, to machine learning, computer vision, and LLMs.
It is also a term used widely in popular culture and media to mean, essentially, AGIs—Cortana, Agent Smith, C-3PO.
The problem is not that this term is very broad, and it certainly isn't that it has come to mean something that it didn't before. It is that a bunch of people with a financial interest have been busily trying to convince the world that LLMs are Cortana.
The trillion dollar question isn’t “is AI a bubble”, it’s “which of these companies are pets.com and which are Google (if any)”.
Even investing into a basket might not be a winning strategy even in a world in which AGI is imminent.
But otherwise agreed.
Every single researcher not being paid $$$ by AI companies says there is zero path between LLMs and AGI, but sure... and the next Pfizer drug might make us immortal; who knows, everything is possible after all.
Will AI of today definitely become AGI of tomorrow? No, for sure not, and anyone who claims this is at best crazy.
But is it imaginable? I think totally. Andrej Karpathy's blog post about an RNN writing Shakespeare one character at a time was 10 years ago. GPT-2 was released 6 years ago. In that time we went from something that barely speaks English, never mind any other language, to something that, on a good run, is an excellent language tutor (natural and programming languages), can play games, can write moderately complex programs, and goodness knows what else. For some people, the romance of a ChatGPT-4 was unmatched.
Even if it doesn't become "AGI", it might just get so good at being sub-AGI that the question is irrelevant. We're seriously contemplating a near future where junior devs are replaced by LLMs; and I write this as an AI sceptic who uses LLMs to write a lot of the kind of dumb code a junior dev might do instead.
I don't like AI, in that it nibbles away at my competitive advantage in life. But it's IMO crazy to pretend it is not even potentially a game changer.
I'm not saying it doesn't bring any value, I'm just saying that if you think we should give $7 gazillion to Altman because he's building skynet by 2030 you're smoking crack
I am not saying we give all money to Altman. I'm saying AI is likely overvalued. But can it evolve into something far more capable given investment? Yes, it may.
Flying cars and space travel didn't happen, but might have. And funnily enough, they might still happen, with drone taxis and Virgin Galactic / SpaceX. Might. That's how it works with speculative investment.
LLMs also came out of nowhere: a series of discrete improvements that finally got over the hurdle and achieved unprecedented functionality. Absolutely no one predicted their emergent capabilities back in 2017 when the transformer paper was published.
They didnt see scaling then, they might not see the next thing now, until its found.
So it wont be in the next 2 weeks, and it wont be from OpenAI, but it might be in 10 years from some random researcher at waterloo, or tiktok
Nothing is impossible!
> LLMs also came out of nowhere
No they didn't.
So maybe it's jealousy?
I mean I don't know.
But in all seriousness, until someone else invents AGI some other way, you can't prove that LLMs aren't the way. My intuition says you need more than LLMs, but I could very well be wrong.
The dot-com bubble, subprime crisis, and AI bubble are all based on real goods being overvalued to hell and then crashing back down to their actual capabilities. The size of the bubble is the delta between their actual value and the market's perceived value.
In that regard, I think AI will be crazy, crazy valuable in real terms, but not in the way the market is pricing it. I think the real value of AI agents is very, very low, and the market will crash to that level. I think the value of genAI as a much simpler interface for communication (RAG, translation, NLI) and for automated understanding of static systems (rather than acting in dynamical systems) is high; it will crash a bit before the market learns to use these tools right, and then it'll be a party.
The talent out there is all focused on a tiny number of tasks and benchmarks in service of the AGI cult, whilst the real gain is scattered amongst everything else with no staffing to build it. So I see a bust much like the dot-com bubble at some arbitrary point in the future, but then it corrects surprisingly quickly shortly thereafter. The engineers of that bubble will have gotten out well ahead of the bust, buying back their assets at 50% or greater discounts, and the broader base of retail investors are screwed as usual.
As for impacts on the economy. Meh. They're already starting to ignore the ramblings of the Orange in Chief (the guy shuts down the federal government and the entire stock market yawned last week). The weakly efficient market abides and it's been burned enough times already by the TACO trade. It'll get through this. But oh the whining that will ensue on the corporate media.
Last time, a lot of the companies were public and the general public saw stock losses. This time the only companies that are publicly really exposed by the AI bubble are Nvidia with all of their circular financing and Oracle. Of course Tesla has always been a meme stock.
Defined contribution plans - for now - can’t have private equity in their funds. Defined benefit plans and endowments are exposed.
Apple famously hasn’t invested that much in AI, Google is spending a lot on infrastructure. But between search and GCP and YouTube they have a real business plan and are funding based on profits. Amazon is in the same boat. Microsoft is bowing out of spending money on training and focused on inference - and they also have Azure. Meta is making money using AI for ad targeting and probably in the future to generate ads.
I can also see consulting companies being hurt (I work in cloud consulting) as businesses are throwing money at them to “AI enable” their business.
Given how investing is heavily promoted by all these neobanks, I have a feeling a lot of people will get burnt. Back in the day, not even 10 years ago, you had to research and go out of your way to invest; now you can do things like "automatically round up your transactions to buy NVIDIA" from your bank app. The only ways to get out of the middle class are: lotteries, crypto, or putting everything in the stock market for 20 years and living like a student in the meantime.
But funnily enough, most of the people I know who do invest in individual stocks, bitcoin, etc. are people in the service industry who are single and make decent money on tips. I live in a very heavy tourist town.
But since this is a site of tech-heavy participants: if you are a software developer or adjacent in the US, you are on average making twice the median local wage for your area, even as an enterprise dev 2-5 years out of school, and should be able to invest at least 15% of your income.
Conversely, no expert (prolific writer, coder, painter, photographer, videographer, or logo/web designer) is so amazed by what these models are producing (videos, pictures, logos, essays, code) that they would scream with excitement that they could never have thought of anything better themselves.
This fact alone is enough to warrant that a big bust is coming. Not a matter of if but when.
A C programmer will snub an Excel sheet, but it doesn't change the fact that it's genuinely useful for a wide number of use cases.
As an aside, the AI Art Turing Test was a bit eye-opening for me: https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-ar...
I say all of this as someone who hopes that art remains human.
RL is spiky. It produces narrow improvements on specific capabilities. They're not making the model generically smarter, they're RLing in holes in the model's capabilities. In reality we don't have one scaling curve, we have thousands of them. We're in diminishing returns in "top line smarts" but we're raising the floor in a wide variety of areas that people who don't heavily eval models for a living might not notice.
It's not a bad bet that many are making that "changes everything" is real in this case. Hitching a ride to the technology getting civilization over the hill is a reasonable goal.
But what institution, technology, play...?
Following the money shows where bets are being placed. But none are safe, the disruptive consequences of network effects make diversification appear wise. The industry is littered with the bones of giants.
Personally I am looking at ML/AI as analogous to the Great Oxygenation Event. What things look like—at least, on land—afterward, none know, least of all the cranky old oligarchs who are hell bent on consolidating control and ownership.
84 more comments available on Hacker News