Circular Financing: Does Nvidia's $110b Bet Echo the Telecom Bubble?
Mood: calm
Sentiment: mixed
Category: other
Key topics: The article compares Nvidia's $110B bet to the telecom bubble, sparking a discussion of the parallels between the two eras; some commenters question the sustainability of Nvidia's growth, while others argue the AI boom differs from the telecom buildout in important ways.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 35m
Peak period: 154 comments (Day 1)
Avg / period: 53.3
Based on 160 loaded comments
Key moments
- 01 Story posted: Oct 4, 2025 at 9:06 AM EDT (about 2 months ago)
- 02 First comment: Oct 4, 2025 at 9:41 AM EDT (35m after posting)
- 03 Peak activity: 154 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: Oct 6, 2025 at 11:00 PM EDT (about 2 months ago)
Meta commentary, but I've grown weary of how commentary by actual domain experts in our industry is underrepresented and underdiscussed on HN in favor of emotionally charged takes.
Calling a VC a "domain expert" is like calling an alcoholic a "libation engineer." VC blogs are, in the best case, mildly informative, and in the worst, borderline fraudulent (the Sequoia SBF piece being a recent example, but there are hundreds).
The incentives are, even in a true "domain expert" case (think: doctors, engineers, economists), often opaque. But when it comes to VCs, this gets ratcheted up by an order of magnitude.
By that I mean, those were the last consoles where performance improvements delivered truly new experiences, where the hardware mattered.
Today, any game you make for a modern system is a game you could have made for the PS3/Xbox 360 or perhaps something slightly more powerful.
Certainly there have been experiences that use new capabilities that you can’t literally put on those consoles, but they aren’t really “more” in the same way that a PS2 offered “more” than the PlayStation.
I think in that sense, there will be some kind of bubble. All the companies that thought that AI would eventually get good enough to suit their use case will eventually be disappointed and quit their investment. The use cases where AI makes sense will stick around.
It’s kind of like how we used to have pipe dreams of certain kinds of gameplay experiences that never materialized. With our new hardware power we thought that maybe we could someday play games with endless universes of rich content. But now that we are there, we see games like Starfield prove that dream to be something of a farce.
But the way it stayed niche shows that it's not just about new gameplay experiences.
Compare with the success of Wii Sports and Wii Fit, which I would guess managed it better, though through a different kind of hardware than the one you are thinking about?
And I kind of expect the next Nintendo console to have a popular AR glasses option, which also would only have been made possible thanks to improving hardware (of both kinds).
I could be very wrong, obviously.
The PS3 is the last console to have actual specialized hardware. After the PS3, everything is just regular ol' CPU and regular ol' GPU running in a custom form factor (and a stripped-down OS on top of it); before then, with the exception of the Xbox, everything had customized coprocessors that are different from regular consumer GPUs.
I hope that's where we are, because that means my experience will still be valuable and vibe coding remains limited to "only" tickets that take a human about half a day, or a day if you're lucky.
Given the cost needed for improvements, it's certainly not implausible…
…but it's also not a sure thing.
I tried "Cursor" for the first time last week, and just like I've been experiencing every few months since InstructGPT was demonstrated, it blew my mind.
My game metaphor is 3D graphics in the 90s: every new release felt amazing*, such a huge improvement over the previous one, but behind the hype and awe there was enough missing for us to keep that cycle going for a dozen rounds.
* we used to call stuff like this "photorealistic": https://www.reddit.com/r/gaming/comments/ktyr1/unreal_yes_th...
How much of a threat custom silicon is to Nvidia remains an open question to me. I kinda think, by now, we can say they're similar but different enough to coexist in the competitive compute landscape?
Nvidia has also begun trying to enter the custom silicon sector, but it's still largely dominated by Broadcom, Marvell, and Renesas.
Something I wanted to mention, only somewhat tangential: the Telecommunications Act of 1996 forced telecommunications companies to lease out their infrastructure. It massively reduced the prices an ISP had to pay to get a T1, because suddenly there was competition. I think a T1 went from $1,800 a month in 1996 to around $600 a month in 1999. It was a long time ago, so my memory is hazy.
But, wouldn't you know it, the telecom companies sued the FCC and the Telecommunications Act was gutted in 2003.
https://en.wikipedia.org/wiki/Competitive_local_exchange_car...
If it's true that this regulation was what helped jumpstart the internet, it's an interesting counterpoint to the apocalyptic predictions people make when these regulations are undone (net neutrality comes to mind as well).
I've never heard anyone claim before that just having these laws on the books for a small period of time is "enough".
Why would it be enough? This legislation prevents monopolies from abusing their position, therefore we will repeal it the moment it turns out to be useful?
Yeah, it takes time to consolidate power again, that does not mean the legislation is not good.
It worked out just fine? Are you saying that post-2003 internet access should have had more regulation to allow open access?
I've never heard anyone complain about that before- is there a specific issue that could have been fixed?
https://en.wikipedia.org/wiki/Kingsbury_Commitment
The answer is... nobody will ever agree on anything. You can always cherry pick some detail to bolster your case, whatever it may be.
We can never visit the alternate reality where another choice was made and so you can not win an argument.
Now, you can go and find similar circumstances. You can find other countries who did not grant a monopoly (for instance). But then, your opponent will argue all the differences between that instance and what occurred.
Also, I think it is a shame your original reply is getting voted down. I am against people voting down comments just because they disagree. Voting down should be used for comments that are low quality.
Monopolies gum up the system, reward the institutional capital rather than innovation capital, and prevent new entrants from de-ossifying and being the renewing forest fire.
We've been so lax on antitrust. Google, Apple, Meta, Amazon - they all need to be broken up. Our economy and our profession would be better for it.
Innovation should be a treadmill.
YC and a16z want this.
0: https://www.amazon.com/Goliath-100-Year-Between-Monopoly-Dem...
If you actually look at the corporations that are consuming Americans’ income in 2025 vs. the equivalents in 1975, the idea that “monopolies” have controlled the economy for 50 years is self-evidently absurd.
The fact is that the incentives of shareholder interest have led to consolidation, and we always end up in a market with fewer providers rather than more. The names may change, but since the deregulation of much of the M&A activity we have ended up in markets with monopolistic effects.
So, semantically there may be "competition" in markets, but on the ground and in real life, where it matters, the effects and actual restrictions that monopoly brings are very much a fact. It's absurd to ignore in-depth analysis that takes the context of actual economics into account (from regional facets to the actual mechanics of trade) and to claim that monopolization is not occurring and does not have an overwhelming impact. It is very evident that you yourself have not looked too deeply into this.
That said, true dictatorships rarely work in practice, but for different reasons than why communism doesn't work. Which is why, when it comes to organizations, almost all are in fact oligarchies in practice, despite whatever they're called. This is known as the iron law of oligarchy. Notice the term: oligopoly? Go look at every industry and there's nearly always an oligopoly.
I don't think so. I think they want their own monopolies. That is what Peter Thiel's book[1] recommends.
But the price war was inevitable. And the telecoms bubble was highly likely in any case.
Telecoms investment was a response to crazy valuations of dot-com stocks.
It varied a lot by region. At the mom and pop ISP I worked at, we went from paying around $1,500/month for a T1 to $500 to, eventually around $100/month for the T1 loop to the customer plus a few grand a month for an OC12 SONET ring that we used to backhaul the T1 (and other circuits) back to our datacenter.
But, all of it was driven by the Telecommunications Act requirement for ILECs to sell unbundled network facilities - all of the CLECs we purchased from were using the local ILEC for the physical part of the last mile for most (> 75%) of the circuits they sold us.
One interesting thing that happened was that for a while in the late 90’s, when dialup was still a thing, we could buy a voice T1 PRI for substantially less than a data T1 ($250 for the PRI vs $500 for the T1.) The CLEC’s theory was our dialup customers almost all had service from the local ILEC, and the CLEC would be paid “reciprocal compensation” fees by the ILEC for the CLEC accepting calls from them.
In my market, when the Telecommunications Act reforms were gutted, the ILEC just kept on selling wholesale/unbundled services to us. I think they had figured out by that point that it was a very profitable line of business if they approached it the right way.
Regarding the price of connections, it's also worth mentioning that while T1, other T-carrier, and OCx connections remained in heavy use, 1996-1999 is also the period when DSL became readily available and was a very fine choice for many needs. This certainly created significant cost pressure on other connectivity options.
Almost 90% of topline investments appear to be geared around achieving that in the next 2-5 years.
If that doesn't come to pass soon enough, investors will lose interest.
Interest has been maintained by continuous growth in benchmark results. Perhaps this pattern can continue for another 6-12 months before fatigue sets in; there are no new math olympiads left to claim a gold medal on…
What's next is to show real results: in true software development, cancer research, robotics.
I am highly doubtful the current model architecture will get there.
If you speak with AI researchers, they all seem reasonable in their expectations.
... but I work with non-technical business people across industries and their expectations are NOT reasonable. They expect ChatGPT to do their entire job for $20/month and hire, plan, budget accordingly.
12 months later, when things don't work out, their response to AI goes to the other end of the spectrum -- anger, avoidance, suspicion of new products, etc.
Enough failures and you have slowing revenue growth. I think if companies see lower revenue growth (not even drops!), investors will get very very nervous and we can see a drop in valuations, share prices, etc.
This is entirely on the AI companies and their boosters. Sam Altman literally says GPT-5 is "like having a team of PhD-level experts in your pocket." All the commercials sell this fantasy.
It is kind of like when a cop allows his gun to be stolen. Yes, the criminal is the guilty one, but the cop was also the one person supposed to guard against it.
Of course the valuation is going to be insanely inflated if investors think they are investing in literal magic.
An extraordinary claim for which I would like to see the extraordinary evidence, because every single interview still available on YT from 3 years ago had these researchers putting AGI 3 to 5 years out. A complete fairy tale, as the track to AGI is not even in sight.
If you want to colonize the Solar System, the track is clear. If you want to have fusion, the track is clear. The AGI track?
There's also plenty of argument to be made that it's already here. AI can hold forth on pretty much any topic, and it's occasionally even correct. Of course to many (not saying you), the only acceptable bar is perfect factual accuracy, a deep understanding of meanings, and probably even a soul. Which keeps breathing life into the old joke "AI is whatever computers still can't do".
You consider occasionally being correct AGI?
Given you start with that I would say yes the /s is needed.
A 4 year old isn’t statistically predicting the next word to say; its intelligence is very different from an LLM. Calling an LLM “intelligent” seems more marketing than fact based.
And now after having to dissect my attempt at lightheartedness, like a frog or a postmodern book club reading, all the fun has gone out. There's a reason I usually stay out of these debates, but I guess I wouldn't have been pointed to that delightful pdf if I hadn't piped up.
I think the main problem with AGI as a goal (other than I don't think it's possible with current hardware, maybe it's possible with hypothetical optical transistors) is that I'm not sure AGI would be more useful. AGI would argue with you more. People are not tools for you, they are tools for themselves. LLMs are tools for you. They're just very imperfect because they are extremely stupid. They're a method of forcing a body of training material to conform to your description.
But to add to the general topic: I see a lot of user interfaces to creative tools being replaced not too long from now by realtime stream of consciousness babbling by creatives. Give those creatives a clicker with a green button for happy and a red button for sad, and you might be able to train LLMs to be an excellent assistant and crew on any mushy project.
How many people are creative, though, as compared to people who passively consume? It all goes back to the online ratio of forum posters to forum readers. People who post probably think 3/5 people post, when it's probably more like 1/25 or 1/100, and the vast majority of posts are bad, lazy and hated. Poasting is free.
Are there enough posters to soak up all that compute? How many people can really make a movie, even given a no-limit credit card? Have you noticed that there are a lot of Z-grade movies that are horrible, make no money, and have budgets higher than really magnificent films, budgets that in this day and age give them access to technology that stretches those dollars farther than they ever could e.g. 50 years ago? Is there a glut of unsung screenwriters?
The first iteration produced decent code, but there was an issue: some street numbers had alpha characters in them, which it didn't treat as street numbers, so I asked it to adjust the logic so that even if the first word contains alpha characters or is numeric, it is considered a valid street number. It updated the code, and gave me both the sample code and sample output.
Sample output was correct, but the code wasn't producing correct output.
It spent more than 5 minutes on each of the iterations (significantly less time than a normal developer would take, but a normal developer would not come back with broken code).
I can't rely on this kind of behavior and this was a completely green field straight forward input and straight forward output. This is not AGI in my book.
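For reference, here is a minimal sketch of the kind of parsing rule being described, assuming addresses like "221B Baker Street"; the function name, regex, and examples are illustrative, not the commenter's actual code:

```python
import re

def split_street_number(address: str) -> tuple[str | None, str]:
    """Treat the first token as a valid street number whether it is purely
    numeric ("1600") or alphanumeric ("221B", "12-A")."""
    parts = address.strip().split(maxsplit=1)
    if not parts:
        return None, ""
    first = parts[0]
    rest = parts[1] if len(parts) > 1 else ""
    # Accept any first token that contains at least one digit.
    if re.search(r"\d", first):
        return first, rest
    return None, address.strip()

# Illustrative examples:
print(split_street_number("221B Baker Street"))      # ('221B', 'Baker Street')
print(split_street_number("1600 Pennsylvania Ave"))  # ('1600', 'Pennsylvania Ave')
print(split_street_number("Main Street"))            # (None, 'Main Street')
```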
I don't think I'll ever stop finding this funny.
It was written by Opus 4 too.
I personally would prefer China to get to parity on node size and become competitive with Nvidia, as that is the only way I see the world not being taken over by the tech oligarchy.
https://open.spotify.com/episode/2ieRvuJxrpTh2V626siZYQ?si=2...
The only difference is fiber optic lines remained useful the whole time. Will these cards have the same longevity?
(I have no idea just sharing anecdata)
As the AI spending bubble gives out, Nvidia's profit growth will slow dramatically (to single digits), slamming into a wall as Cisco's did during the telecom bubble; leading up to the telecom crash, Cisco was producing rather insane quarter-over-quarter growth rates.
You're looking for advancement in carriages, unaware of the 'automobile' that made 5G and FTTH deployment at scale possible.
AC, UPS, generators, not to mention the servers.
That's the thing with fiber: it was still useful. The cards at either end are easy to add, and 15 years later they were way cheaper and higher performance (there were no cards on the ends of dark lines).
The article cites an anecdotal 1-2 years due to the significant stress.
This didn't last that much longer, and many places were trying to diversify into managed services (Datadog-style monitoring of companies' on-prem network and server equipment, etc.), which they call "unregulated" revenue.
As with anything in business, irrational exuberance can kill you.
Fiber networks were using less than 0.002% of available capacity, with potential for 60,000x speed increases. It was just too early.
I doubt we will see unused GPU capacity. As soon as we can prompt "Think about the codebase overnight. Try different ways to refactor it. Tomorrow, show me your best solution." we will want as much GPU time at the current rate as possible. If a minute of GPU usage is currently $0.10, a night of GPU usage is 8 * 60 * 0.1 = $48. Which might very well be worth it for an improved codebase. Or a better design of a car. Or a better book cover. Or a better business plan.
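As a rough illustration of that arithmetic (the $0.10/minute rate and 8-hour window are the commenter's assumptions, not real pricing):

```python
# Back-of-the-envelope cost of letting an agent "think overnight".
# The rate and duration are the assumptions stated in the comment above.
rate_per_minute = 0.10   # dollars per GPU-minute (assumed)
hours_overnight = 8

cost = hours_overnight * 60 * rate_per_minute
print(f"Overnight GPU cost: ${cost:.2f}")  # Overnight GPU cost: $48.00
```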
I'd argue we very certainly will. Companies are gobbling up GPUs like there's no tomorrow, assuming demand will remain stable and continue growing indefinitely. Meanwhile LLM fatigue has started to set in, models are getting smaller and smaller and consumer hardware is getting better and better. There's no way this won't end up with a lot of idle GPUs.
Has it?
I think there is this compulsion to think that LLMs are made for senior devs, and if devs are getting wary of LLMs, the experiment is over.
I'm not a programmer, my day job isn't tech, and the only people I know who express discontent with LLMs are a few programmer friends of mine. Which I get, but everyone else is using them gleefully for all manner of stuff. And now I am seeing the very first inklings of completely non-technical people making bespoke applets for themselves.
According to OpenAI, programming is ~4% of ChatGPT's usage. That's 96% being used for other stuff.
I don't see any realistic or grounded forecast that includes a diminishing demand for compute. We're still at the tip of adoption...
Those people (well, really it's teens and college kids) live on Reddit; they are so far from an accurate representation of reality it's insane.
But they are paid.
There's much worse that's "pure speculation", because a few thousand unemployed losers that nonetheless have six figure incomes aren't suspicious at all. See the kerfuffle when the mod of r/antiwork went on Fox News, where several of the other power mods alluded to their true incomes and professions (they know what they are - PR people) when trying to boast about how they'd have done better.
oh you
Reddit is a Skinner Box [1]. HN is too, though to a much lesser extent [2]. Every Skinner Box has one dominant opinion on every matter, which means that, simply by using the product, your beliefs on any matter will shift towards the dominant opinion of the platform.
I was a chronically online Reddit user once. I can spot any chronically online Reddit user in just a few minutes in any social event by their mannerisms and the way they talk. I’ll ask and without fail indeed they are a daily Reddit user. It’s even more obvious in writing where you can spot them in just a few always-grammatically-correct text messages flavored with reddit-funny remarks and snarks and jokes.
Same goes for chronic X users. Their signature behavior is talking about social/political issues unprompted. It’s even easier to spot them.
I think the main reason behind platforms shaping user behavior is this: The most upvoted content will always surface to the top, where it will be seen by most users, meaning, its belief-shaping impact is exponential instead of linear. In the same manner unpopular opinions will be pushed to the bottom, and will have exponentially small impact. Some opinions will even be banned or shadowbanned, which means they are beyond the Overton Window of the specific platform.
This way, the platform both nudges you towards the dominant opinion and limits the range of possible opinions you will be exposed to. Over time, this affects your personality and character.
1: https://en.m.wikipedia.org/wiki/Operant_conditioning_chamber
2: The HN moderators and the algorithm both actively resist the effect and try to increase diversification.
I think it's important to remember that a good bunch of this is going to be people using it as an artificial friend, which is not really productive. Really that's destructive, because in that time you could be creating a relationship with an actual person instead of a context soon to be deleted.
But on the other hand, some people are using it as an artificial smart friend, asking it questions that they would be embarrassed to ask to other people, and learning. That could be a very good thing, but it's only as good as the people who own and tune the LLMs are. Sadly, they seem to be a bunch of oligarchs who are half sociopaths and half holy warriors.
As for compute, people using it as an artificial friend are either going to have a low price ceiling, or in an even worse case scenario they are not and it's going to be like gambling addiction.
2% of it is dedicated to thinking.
My guess is that as a species, we will turn a similar percentage of our environment into thinking matter.
If there are a billion houses on planet Earth, 2% of that is 20 million datacenters we still have to build.
Nvidia is betting the farm on reinventing GPU compute every 2 years. The GPUs won't end up idle, because they will end up in landfills.
Do I believe that's likely? No, but it is what I believe Nvidia is aiming for.
This is the fundamental error I see people making. LLMs can’t operate independently today, not on substantive problems. A lot of people are assuming that they will some day be able to, but the fact is that, today, they cannot.
The AI bubble has been driven by people seeing the beginning of an S-curve and combining it with their science-fiction fantasies about what AI is capable of. Maybe they’re right, but I’m skeptical, and I think the capabilities we see today are close to as good as LLMs are going to get. And today, it’s not good enough.
A year ago they needed an extensive harness to get silver, and two years ago they could hardly multiply 1000 x 10000.
Terence Tao tweeted yesterday about using GPT5 to help quickly solve a problem he was working on.
Why did GPT5 help Terence Tao solve a math problem? Because he gave it a prompt and the context, etc.
None of these models are useful without a human prompting them and giving it tasks, goals, context etc, they don't operate independently, they don't get ideas of work to be done, they don't operate over long time horizons, they can't accept long term goals and sub-divide those goals into sub goals, and sub tasks etc.
They are useless without humans telling them what to do.
I've seen lots of claims about AI coding skill, but that one might be able to improve (and not merely passably extend) a codebase is a new one. I'd want to see it before I believe it.
Other things might need to be done in two stages. You might ask the agent to first identify where code violates CQRS, then for each instance, explain the problem, and spawn a sub-agent to address that problem.
Other things the agent might identify this way: multiple implementations, use of conflicting APIs, poor separation of concerns at a module or class level.
I don't typically let the agent do any of this end to end, but I would typically manually review findings before spawning subagents with those findings.
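A rough sketch of the two-stage flow described above; `identify_issues` and `fix_issue` are hypothetical placeholders for whatever coding-agent API is actually in use, and the CQRS prompt and review gate follow the comments rather than any real tool:

```python
def identify_issues(codebase_path: str, rule: str) -> list[str]:
    """Placeholder: ask a coding agent to list violations of `rule`
    in the codebase, explaining each one, without changing any code."""
    raise NotImplementedError

def fix_issue(codebase_path: str, finding: str) -> None:
    """Placeholder: spawn a sub-agent focused on one specific finding."""
    raise NotImplementedError

def refactor_in_two_stages(codebase_path: str) -> None:
    # Stage 1: identify problems, don't fix them yet.
    findings = identify_issues(codebase_path, "violations of CQRS")

    # Human review gate: inspect findings before spawning sub-agents.
    approved = [f for f in findings if input(f"Fix? {f!r} [y/N] ").lower() == "y"]

    # Stage 2: one focused sub-agent per approved finding.
    for finding in approved:
        fix_issue(codebase_path, finding)
```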
And you can't really hack / outsmart feedback loops.
Just because something is conceptually possible doesn't make it optimal; interaction with the rest of the real world separates a possible solution from an optimal one.
The low-hanging fruit / obvious incremental improvements might be quickly implemented by LLMs based on established patterns in their training data.
That doesn't get you from 0 to 1 dollar, though and that's what it's all about.
LLMs are a great tool. But, the real world is far too nuanced to be captured in text and tokens. So, LLMs will be a great productivity boosting tool like a calculator or a spreadsheet. Expecting it to do more is science fiction.
Because the main reason for the price premiums on AI-class GPUs is the gobs of insanely fast memory, and that is very much not underutilized. AI companies grab GPUs with as much memory (at the fastest memory bandwidth) as possible and underclock the GPU to save on power. Linus Tech Tips had a great video about the H200 that touched on this just this week: https://www.youtube.com/watch?v=lNumJwHpXIA
The cost/benefit analysis doesn't add up for two reasons:
First, a refactored codebase works almost the same as a non-refactored one; that is, the tangible benefit is small.
Second, how many times are you going to refactor the codebase? Once and... that's it. There's simply no need for that much compute for lack of sufficient beneficial work.
That is, the present investments are going to waste unless we automate and robotize everything, I'm OK with that but it's not where the industry is going.
That is nothing. Coding is done via text. Very soon people will use generative AI for high resolution movies. Maybe even HDR and high FPS (120 maybe?). Such videos will very likely cost in the range of $100-$1000 per minute. And will require lots and lots of GPUs. The US military (and I bet others as well) are already envisioning generative AI use for creating a picture of the battlespace. This type of generation will be even more intensive than high resolution videos.
AI is a lot more useful than hyper scaled up crud apps. Comparing this to the past is really overfitting imho.
The only argument against accumulating GPUs is that they get old and stop working. Not that it sucks, not that it’s not worth it. As in, the argument against it is actually in the spirit of “I wish we could keep the thing longer”. Does that sound like there’s no demand for this thing?
The AI thesis requires getting on board with what Jensen has been saying:
1) We have a new way to do things
2) The old ways have been utterly outclassed
3) If a device has any semblance of compute power, it will need to be enhanced, updated, or wholesale replaced with an AI variant.
There is no middle ground to this thesis. There is no “and we’ll use AI here and here, but not here, therefore we predictably know what is to come”.
Get used to the unreal. Your web apps could truly one day be generated frame by frame by a video model. Really. The amount of compute we’ll need will be staggering.
We've technically been able to play board games by entering our moves into our telephones, sending them to a CPU to be combined, then printing out a new board on paper to conform to the new board state. We do not do this because it would be stupid. We cannot depend on people starting to do this to save the paper, printer, and ink industries. Some things are not done because they are worthless.
If you’re a board game player then you are more than capable of imagining possibilities well beyond this.
The only thing to keep in mind is that all of this is about business and ROI.
Given the colossal investments, even if the companies' finances are healthy and not fraudulent, the economic returns have to be unprecedented or there will be a crash.
They are all chasing a golden goose.
Why does "Not needing labor at all" need to be in space?
Bezos is just saying shit to generate hype. All these executives are just saying shit. There is no plan. You must treat these people as morons who understand nothing.
Anyone who knows even the slightest details about datacenter design knows that moving heat is the biggest problem. This is the exact thing that being in space makes infinitely harder. "Datacenters in space" is an idea you come up with only if you are a moron who knows nothing about either datacenters or space.
If nothing else this is the singular reason you should treat AI as a bubble. All of the people at the helm of it have not a single fucking clue what they're talking about. They all keep running their mouth with utter nonsense like this.
Billionaires are often regarded as having extremely insightful ideas; in practice their fortunes were often built on a mix of luck, grit, and competence in a few narrow fields, and their insights outside their domains tend to be average or worse.
Being too rich means you end up surrounded by sycophants.
We are a long way from that. At least 10 years, probably never gonna happen.
I agree, but would like to maybe build out that theory. When we start talking about the mechanisms of the past we end up over-constricting the possibility space. There were a ton of different ways the dotcom bubble COULD have played out, and only one way it did. If we view the way it did as the only way it possibly could have, we'll almost certainly miss the way the next bubble will play out.
Simple as this - as to why it's just not possible for this to continue.
Get real.
Farming? Plow? Steam engine? Combustion engine? Electricity? Air conditioning? Computers? Internet?
Obviously it's a bubble, but that's meaningless for anyone but the richest to manage.
The rest of us are just ants.
SGI (Silicon Graphics) made the 3D hardware that many companies relied on for their own businesses, in the days before Windows NT and Nvidia came of age.
Alias|Wavefront and Discreet were two companies whose product cycles were very tied to the SGI product cycles, with SGI having some ownership, whether wholly owned or spun out (as SGI collapsed). I can't find the reporting from the time, but it seemed to me that the SGI share price was propped up by product launches from the likes of Alias|Wavefront or Discreet. Equally, the 3D software houses seemed to have share prices propped up by SGI product launches.
There was also the small matter of insider trading. If you knew the latest SGI boxes were lemons then you could place your bets on the 3D software houses accordingly.
Eventually Autodesk, Computer Associates and others owned all the software, or at least the user bases. Once upon a time these companies were on the stock market and worth billions, but then they became just another bullet point in the Autodesk footer.
My prediction is that a lot of AI is like that, a classic bubble, and, when the show moves on, all of these AI products will get shoehorned into the three companies that will survive, with competition law meaning that it will be three rather than two eventual winners.
Equally, much like what happened with SGI, Nvidia will eventually come a cropper when the valuations built on today's hype and hubris fail to deliver.
The Economist has a great discussion on depreciation assumptions having a huge impact on how the finances of the cloud vendors are perceived[1].
Revenue recognition and expectations around Oracle could also be what bursts the bubble. CoreWeave or Oracle could be the weak point, even if Nvidia is not.
[1] https://www.economist.com/business/2025/09/18/the-4trn-accou...
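To see why the depreciation assumption matters so much, here is a toy illustration (all figures hypothetical, not taken from the article): stretching the assumed useful life of the same GPU purchase from three years to five cuts the annual expense, and therefore lifts reported operating profit, substantially.

```python
# Toy illustration of how the assumed useful life of GPUs changes the
# reported annual depreciation expense. All figures are hypothetical.
capex = 10_000_000_000  # $10B of GPUs purchased up front (assumed)

for useful_life_years in (3, 5):
    annual_expense = capex / useful_life_years  # straight-line depreciation
    print(f"{useful_life_years}-year life: ${annual_expense / 1e9:.2f}B expense per year")

# 3-year life: $3.33B expense per year
# 5-year life: $2.00B expense per year
```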
53 more comments available on Hacker News