Circular Financing: Does Nvidia's $110B Bet Echo the Telecom Bubble?

Posted Oct 4, 2025 at 9:06 AM EDT · Last activity about 2 months ago

miltava
249 points
213 comments

Mood: calm

Sentiment: mixed

Category: other

Key topics: Nvidia, AI, Bubble, Vendor Financing, GPU Demand

Debate intensity: 80/100

The article compares Nvidia's $110B bet to the telecom bubble, sparking a discussion of the similarities and differences between the two. Some commenters express concern about the sustainability of Nvidia's growth, while others highlight how the current AI boom differs from the telecom bubble.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment: 35m after posting
Peak period: 154 comments (Day 1)
Avg / period: 53.3

Comment distribution (160 data points)

Based on 160 loaded comments

Key moments

  1. Story posted

     Oct 4, 2025 at 9:06 AM EDT

     about 2 months ago

  2. First comment

     Oct 4, 2025 at 9:41 AM EDT

     35m after posting

  3. Peak activity

     154 comments in Day 1

     Hottest window of the conversation

  4. Latest activity

     Oct 6, 2025 at 11:00 PM EDT

     about 2 months ago


Discussion (213 comments)
Showing 160 comments of 213
alephnerd
about 2 months ago
1 reply
Glad to see Tom's blog on HN - as usual a great write up. A number of us have been chatting about this for several months now, and the take is fairly sober.

Meta commentary, but I've grown weary of how commentary by actual domain experts in our industry is underrepresented and underdiscussed on HN in favor of emotionally charged takes.

dvt
about 2 months ago
2 replies
> actual domain experts

Calling a VC a "domain expert" is like calling an alcoholic a "libation engineer." VC blogs are, in the best case, mildly informative, and in the worst, borderline fraudulent (the Sequoia SBF piece being a recent example, but there are hundreds).

The incentives are, even in a true "domain expert" case (think: doctors, engineers, economists), often opaque. But when it comes to VCs, this gets ratcheted up by an order of magnitude.

alephnerd
about 2 months ago
Tom has had a fairly solid track record at Redpoint and now Theory in Data, Enterprise SaaS, and AI/ML. And it's not like we see many posts by engineers, doctors, or economists on HN either - most posts are listicles about the "culture" of technology, a growing number of political articles ever more tenuously related to the tech industry, and a portion of actually interesting technical content.
hodgesrm
about 2 months ago
Martin Casado is a counter-example. His writings on technology, starting with his PhD thesis, are very informative. [0] He's the real thing, as are many others.

[0] http://yuba.stanford.edu/~casado/mcthesis.pdf

dangus
about 2 months ago
3 replies
I think we are at the PS3/Xbox 360 phase of AI.

By that I mean, those were the last consoles where performance improvements delivered truly new experiences, where the hardware mattered.

Today, any game you make for a modern system is a game you could have made for the PS3/Xbox 360 or perhaps something slightly more powerful.

Certainly there have been experiences that use new capabilities that you can’t literally put on those consoles, but they aren’t really “more” in the same way that a PS2 offered “more” than the PlayStation.

I think in that sense, there will be some kind of bubble. All the companies that thought that AI would eventually get good enough to suit their use case will eventually be disappointed and quit their investment. The use cases where AI makes sense will stick around.

It’s kind of like how we used to have pipe dreams of certain kinds of gameplay experiences that never materialized. With our new hardware power we thought that maybe we could someday play games with endless universes of rich content. But now that we are there, we see games like Starfield prove that dream to be something of a farce.

BlueTemplar
about 2 months ago
1 reply
We did get more: the return of VR couldn't have been possible without drastically improved hardware.

But the way it stayed niche shows that it's not just about new gameplay experiences.

Compare with the success of Wii Sports and Wii Fit, which I would guess managed it better, though through a different kind of hardware than you are thinking about?

And I kind of expect the next Nintendo console to have a popular AR glasses option, which also would only have been made possible thanks to improving hardware (of both kinds).

dangus
about 2 months ago
That’s exactly what I mean, too. We obviously will get much better AI. It just seems like the value that most people are getting out of it is already captured, just like how technically impressive stuff like VR is very niche.

I could be very wrong, obviously.

jcranmer
about 2 months ago
> By that I mean, those were the last consoles where performance improvements delivered truly new experiences, where the hardware mattered.

The PS3 is the last console to have actual specialized hardware. After the PS3, everything is just regular ol' CPU and regular ol' GPU running in a custom form factor (and a stripped-down OS on top of it); before then, with the exception of the Xbox, everything had customized coprocessors that are different from regular consumer GPUs.

ben_w
about 2 months ago
> By that I mean, those were the last consoles where performance improvements delivered truly new experiences, where the hardware mattered.

I hope that's where we are, because that means my experience will still be valuable and vibe coding remains limited to "only" tickets that take a human about half a day, or a day if you're lucky.

Given the cost needed for improvements, it's certainly not implausible…

…but it's also not a sure thing.

I tried "Cursor" for the first time last week, and just like I've been experiencing every few months since InstructGPT was demonstrated, it blew my mind.

My game metaphor is 3D graphics in the 90s: every new release feels amazing*, such a huge improvement over the previous release, but behind the hype and awe there was enough missing for us to keep that cycle going for a dozen rounds.

* we used to call stuff like this "photorealistic": https://www.reddit.com/r/gaming/comments/ktyr1/unreal_yes_th...

davedx
about 2 months ago
1 reply
Some great insights, with some less interesting ones in there. I didn't know about the SPVs; that's sketchy, and now I wanna know how much of that is going on. The MIT study that gets pulled out for every critical discussion of AI was an eye roll for me. But very solid analysis of the quants.

How much of a threat custom silicon is to Nvidia remains an open question to me. I kinda think, by now, we can say they're similar but different enough to coexist in the competitive compute landscape?

alephnerd
about 2 months ago
> How much of a threat is custom silicon to Nvidia remains an open question to me

Nvidia has begun trying to enter the custom silicon sector as well, but it's still largely dominated by Broadcom, Marvell, and Renesas.

hackthemack
about 2 months ago
5 replies
I worked at a mom and pop ISP in the 90s. Lucent did seem at the forefront of internet equipment at the time. We used Portmaster 3s to handle dial up connections. We also looked into very early wireless technology from Lucent.

Something I wanted to mention, only somewhat tangential: the Telecommunications Act of 1996 forced telecommunication companies to lease out their infrastructure. It massively reduced the prices an ISP had to pay to get a T1 because, suddenly, there was competition. I think a T1 went from $1,800 a month in 1996 to around $600 a month in 1999. It was a long time ago, so my memory is hazy.

But, wouldn't you know it, the telecommunications companies sued the FCC and the Telecommunications Act was gutted in 2003.

https://en.wikipedia.org/wiki/Competitive_local_exchange_car...

awongh
about 2 months ago
1 reply
You're implying only 4 years of regulation was enough to shift the balance of power between telecoms and smaller ISPs.

If it's true that this regulation was what helped jumpstart the internet, it's an interesting counterpoint to the apocalyptic predictions people make when these regulations are undone (net neutrality comes to mind as well).

I've never heard anyone claim before that just having these laws on the books for a small period of time is "enough".

watwut
about 2 months ago
1 reply
> I've never heard anyone claim before that just having these laws on the books for a small period of time is "enough".

Why would it be enough? This legislation prevents monopolies from abusing their position, therefore we will repeal it the moment it turns out to be useful?

Yeah, it takes time to consolidate power again, that does not mean the legislation is not good.

awongh
about 2 months ago
1 reply
> Why would it be enough?

It worked out just fine? Are you saying that post-2003 internet access should have had more regulation to allow open access?

I've never heard anyone complain about that before- is there a specific issue that could have been fixed?

hackthemack
about 2 months ago
You bring up an interesting point. This is something I discovered years ago when I used to have "political discussion lunch" with three of my friends who all had very different political views, after years of going to lunch together and debating things like, "Was giving AT&T a monopoly in 1913 good or bad?"

https://en.wikipedia.org/wiki/Kingsbury_Commitment

The answer is... nobody will ever agree on anything. You can always cherry pick some detail to bolster your case, whatever it may be.

We can never visit the alternate reality where another choice was made, and so you cannot win the argument.

Now, you can go and find similar circumstances. You can find other countries who did not grant a monopoly (for instance). But then, your opponent will argue all the differences between that instance and what occurred.

Also, I think it is a shame your original reply is getting voted down. I am against people voting down comments just because they disagree. Voting down should be used for comments that are low quality.

bwfan123
about 2 months ago
1 reply
I worked at a startup during the telecom boom. Then, startups were getting acquired by the likes of Cisco before the startup had a deployed product. And, back then, IPOs were the only form of liquidity event and engineers were locked up for 6 months. The lucky ones had their startups go IPO or get acquired with enough time to spare to get out before the ensuing bust. After the bust, funding dried up and most startups folded including the one I worked at. There was wipeout and desolation for a few years. Subsequently, green shoots started appearing in the form of a new wave of tech companies.
echelon
about 2 months ago
2 replies
Capitalism only works if there is lots of competition.

Monopolies gum up the system, reward the institutional capital rather than innovation capital, and prevent new entrants from de-ossifying and being the renewing forest fire.

We've been so lax on antitrust. Google, Apple, Meta, Amazon - they all need to be broken up. Our economy and our profession would be better for it.

Innovation should be a treadmill.

YC and a16z want this.

mooreds
about 2 months ago
2 replies
I don't agree with everything in this book, Goliath,[0] but his main argument was that business monopoly and democracy are in direct opposition. And that since the 1970s we've chosen to enable monopoly again and again.

0: https://www.amazon.com/Goliath-100-Year-Between-Monopoly-Dem...

twoodfin
about 2 months ago
1 reply
How does Stoller explain why since the 1970’s there’s been massive turnover whether you look at the largest 10 companies, the largest 100, or the largest 1,000?
bigbadfeline
about 2 months ago
1 reply
Turnover might be due to mergers, takeovers, and periods of monopolization of different sectors of the economy; turnover doesn't measure the level of monopolization or duopolization. Market share and relative capitalization per sector are the valid measures.
twoodfin
about 2 months ago
1 reply
OK, which sectors were monopolies (or even duopolies) in the 1970’s that have survived to this day? Maybe oil? A few agricultural sectors?

If you actually look at the corporations that are consuming Americans’ income in 2025 vs. the equivalents in 1975, the idea that “monopolies” have controlled the economy for 50 years is self-evidently absurd.

johnnyprozac
about 2 months ago
If the number of providers that you yourself have direct access to is one, or limited to two or three? Yes, that's a monopolistic environment. Consider that people in more regions of the US than not will only be able to access the internet through ONE provider. They will only be able to get health care from one provider. They will only be able to get farm equipment from one provider, and then not be able to fix said farm equipment. They will only be able to get seeds for food from one provider. And where 1 or 2 platforms are the only way to distribute in the digital space ("cough" app stores), or your logistical choices come down to 1 or 2 with prices fixed... again, you may have competition on some level, but at the macro level monopolistic effects are occurring, because the distribution channel is what moves a market, not necessarily sub-vendors.

The fact is that the incentives of shareholder interest have led to consolidation, and then again and again to markets with fewer providers rather than more. The names may change, but since deregulation of much of the M&A activity we have ended up in markets with monopolistic effects.

So... semantically there may be "competition" in markets, but on the ground and in real life, where it matters, the effects and actual restrictions that monopoly brings are very much a fact. It's absurd to ignore in-depth analysis that takes the context of actual economics (from regional facets to the actual mechanics of trade) into account and to think that monopolization is not occurring and not having an overwhelming impact. It is very evident that you yourself have not looked too deeply into this.

Rury
about 2 months ago
He's right. The very reason free markets don't exist is the same exact reason why communism doesn't work in practice. All markets collapse until there are a few dominant businesses (i.e., monopolies), just as society naturally forms governing hierarchies. And it's easy to see how they're opposed. All you have to do is look at how there are only 4 types of governments based on how decision-making power is distributed, note that democracy is actually closer to communism on that spectrum than to dictatorship, and then see how monopolies behave quite like dictatorships.

That said, true dictatorships rarely work in practice but for different reasons as to why communism doesn't work. Which is why, when it comes to organizations, almost all are in fact oligarchies in practice, despite whatever they're called. This is known as the iron law of oligarchy. Notice the term: oligopoly? Go look at every industry and there's near always an oligopoly.

wslh
about 2 months ago
> YC and a16z want this.

I don't think so. I think they want their own monopolies. That is what Peter Thiel's book[1] recommends.

[1] https://en.wikipedia.org/wiki/Zero_to_One

nroets
about 2 months ago
Changes in the legislative landscape may have influenced the timing of the price war and the telecoms bubble.

But the price war was inevitable. And the telecoms bubble was highly likely in any case.

Telecoms investment was a response to crazy valuations of dot-com stocks.

marcusb
about 2 months ago
> I think a T1 went from 1800 a month in 1996, to around 600 a month in 1999. It was a long time ago, so my memory is hazy.

It varied a lot by region. At the mom and pop ISP I worked at, we went from paying around $1,500/month for a T1 to $500 to, eventually around $100/month for the T1 loop to the customer plus a few grand a month for an OC12 SONET ring that we used to backhaul the T1 (and other circuits) back to our datacenter.

But, all of it was driven by the Telecommunications Act requirement for ILECs to sell unbundled network facilities - all of the CLECs we purchased from were using the local ILEC for the physical part of the last mile for most (> 75%) of the circuits they sold us.

One interesting thing that happened was that for a while in the late 90's, when dialup was still a thing, we could buy a voice T1 PRI for substantially less than a data T1 ($250 for the PRI vs $500 for the T1). The CLEC's theory was that our dialup customers almost all had service from the local ILEC, and the CLEC would be paid "reciprocal compensation" fees by the ILEC for accepting calls from them.

In my market, when the Telecommunications Act reforms were gutted, the ILEC just kept on selling wholesale/unbundled services to us. I think they had figured out by that point that it was a very profitable line of business if they approached it the right way.

jauntywundrkind
about 2 months ago
The gutting of telecom competition, the allowance of total monopoly power, was a travesty of the court system. The law was quite plain & clear & the courts decided all on their own that, well, since fiber to the home is expensive to deploy, we are going to overrule the legislative body. The courts aren't supposed to be able to overturn laws they don't like as they please, but that's what happened here.

Regarding the price of connection, it's also worth mentioning that while T1 and other T-channel and OCx connections remained in high use, 1996-1999 is also the period when DSL became readily available & was a very fine choice for many needs. This certainly created significant cost pressure on other connectivity options.

narmiouh
about 2 months ago
4 replies
I think the fundamental issue is the uncertainty of achieving AGI with baked in fundamentals of reasoning.

Almost 90% of topline investments appear to be geared around achieving that in the next 2-5 years.

If that doesn’t come to pass soon enough, investors will loose interest.

Interest has been maintained by continuous growth in benchmark results. Perhaps this pattern can continue for another 6-12 months before fatigue sets in, there are no new math olympiads to claim a gold medal on…

Whats next is to show real results, in true software development, cancer research, robotics.

I am highly doubtful the current model architecture will get there.

cl42
about 2 months ago
3 replies
Not sure why you're getting downvoted.

If you speak with AI researchers, they all seem reasonable in their expectations.

... but I work with non-technical business people across industries and their expectations are NOT reasonable. They expect ChatGPT to do their entire job for $20/month and hire, plan, budget accordingly.

12 months later, when things don't work out, their response to AI goes to the other end of the spectrum -- anger, avoidance, suspicion of new products, etc.

Enough failures and you have slowing revenue growth. I think if companies see lower revenue growth (not even drops!), investors will get very very nervous and we can see a drop in valuations, share prices, etc.

Cheer2171
about 2 months ago
3 replies
> their expectations are NOT reasonable. They expect ChatGPT to do their entire job for $20/month and hire, plan, budget accordingly.

This is entirely on the AI companies and their boosters. Sam Altman literally says GPT-5 is "like having a team of PhD-level experts in your pocket." All the commercials sell this fantasy.

watwut
about 2 months ago
1 reply
I would blame the business people for being so gullible too.
bigstrat2003
about 2 months ago
1 reply
There's some blame there, sure. But generally people would agree that between a con man and his victims, the con man has the greater moral failing.
watwut
about 2 months ago
In general yes. But here we talk about businessmen who are paid quite a lot of money literally to make decisions like these.

It is kind of like when a cop allows his gun to be stolen. Yes, the criminal is the guilty one, but the cop was also the one person supposed to guard against it.

marginalia_nu
about 2 months ago
This is really the biggest red flag: non-technical people's (and by extension investors' and policymakers') general lack of understanding of the technology and its limitations.

Of course the valuation is going to be insanely inflated if investors think they are investing in literal magic.

Leynos
about 2 months ago
I mean, the AI companies have £200 a month plans for a reason. And if you look at Blitzy for example, their plans sit at the £1000 a month mark.
belter
about 2 months ago
1 reply
> If you speak with AI researchers, they all seem reasonable in their expectations.

An extraordinary claim for which I would like to see the extraordinary evidence. Because every single interview still available on YT from 3 years ago had these researchers putting AGI 3 to 5 years out... a complete fairy tale, as the track to AGI is not even in sight.

If you want to colonize the Solar System, the track is clear. If you want fusion, the track is clear. The AGI track?

cl42
about 2 months ago
Fair point, and I should be more clear. The AI researchers I speak with don't expect AGI and are more reasonable in trying to build good tech rather than promising the world. My point was that these AI researchers aren't the ones inflating the bubble.
narmiouh
about 2 months ago
I'm not sure either - for a second I thought perhaps LLM agents are prowling around to ensure the right messages are floating up, but who knows...
Zigurd
about 2 months ago
1 reply
AGI is not near. At best, the domains where we send people to years of grad school so that they can do unnatural reasoning tasks in unmanageable knowledge bases, like law and medicine, will become solid application areas for LLMs. Coding, most of all, will become significantly more productive. Thing is, once the backlog of shite code gets re-engineered, the computing demand for new code creation will not support bubble levels of demand for hardware.
chuckadams
about 2 months ago
5 replies
> AGI is not near

There's also plenty of argument to be made that it's already here. AI can hold forth on pretty much any topic, and it's occasionally even correct. Of course to many (not saying you), the only acceptable bar is perfect factual accuracy, a deep understanding of meanings, and probably even a soul. Which keeps breathing life into the old joke "AI is whatever computers still can't do".

bix6
about 2 months ago
1 reply
> AI can hold forth on pretty much any topic, and it's occasionally even correct.

You consider occasionally being correct AGI?

chuckadams
about 2 months ago
1 reply
Did I really need to use the /s tag? But a four-year-old is occasionally correct. Are they not intelligent? My cat can't answer math problems, is it a mere automaton? If we can't define what "true" intelligence is, then perhaps a simulation that fools people into calling it "close enough" is actually that, close enough.
bix6
about 2 months ago
1 reply
> There's also plenty of argument to be made that it's already here

Given you start with that I would say yes the /s is needed.

A 4 year old isn’t statistically predicting the next word to say; its intelligence is very different from an LLM. Calling an LLM “intelligent” seems more marketing than fact based.

chuckadams
about 2 months ago
I actually meant that first sentence too. One can employ sarcasm to downplay one's own arguments as well, which was my intent: it may be that AGI is not a binary definition like "True" AI, and that we're seeing something that's senile and not terribly bright, but still "generally intelligent" in some limited sense.

And now, after having to dissect my attempt at lightheartedness like a frog or a postmodern book club reading, all the fun has gone out. There's a reason I usually stay out of these debates, but I guess I wouldn't have been pointed to that delightful pdf if I hadn't piped up.

pessimizer
about 2 months ago
It's still forgetting what it's talking about from minute to minute. I'm honestly getting tired of bullying them into following the directions I've already given them three times.

I think the main problem with AGI as a goal (other than I don't think it's possible with current hardware, maybe it's possible with hypothetical optical transistors) is that I'm not sure AGI would be more useful. AGI would argue with you more. People are not tools for you, they are tools for themselves. LLMs are tools for you. They're just very imperfect because they are extremely stupid. They're a method of forcing a body of training material to conform to your description.

But to add to the general topic: I see a lot of user interfaces to creative tools being replaced not too long from now by realtime stream of consciousness babbling by creatives. Give those creatives a clicker with a green button for happy and a red button for sad, and you might be able to train LLMs to be an excellent assistant and crew on any mushy project.

How many people are creative, though, as compared to people who passively consume? It all goes back to the online ratio of forum posters to forum readers. People who post probably think 3/5 people post, when it's probably more like 1/25 or 1/100, and the vast majority of posts are bad, lazy and hated. Poasting is free.

Are there enough posters to soak up all that compute? How many people can really make a movie, even given a no-limit credit card? Have you noticed that there are a lot of Z-grade movies that are horrible, make no money, and have budgets higher than really magnificent films, budgets that in this day and age give them access to technology that stretches those dollars farther than they ever could e.g. 50 years ago? Is there a glut of unsung screenwriters?

narmiouh
about 2 months ago
I will give you an example from just two days ago, I asked chatgpt pro to take some rough address data and parse it into street number, street name, street type, city, state, zip fields.

The first iteration produced decent code, but there was an issue: some street numbers had alpha characters in them that it didn't treat as street numbers, so I asked it to adjust the logic so that the first word is considered a valid street number whether it is alphanumeric or purely numeric. It updated the code, and gave me both the sample code and sample output.

Sample output was correct, but the code wasn't producing correct output.

It spent more than 5 mins on each of the iterations (significantly less than what a normal developer would, but the normal developer would not come back with broken code).

I can't rely on this kind of behavior, and this was completely green field: straightforward input and straightforward output. This is not AGI in my book.
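
For concreteness, here is a minimal sketch of the kind of parser being described, assuming US-style single-line addresses; the regex, field names, and the rule that a street number may be numeric or alphanumeric are illustrative, not the code ChatGPT actually produced:

    import re

    # Hypothetical reconstruction of the task described above: split a rough
    # one-line US address into fields. The street number deliberately accepts
    # alphanumeric tokens ("12B") as well as purely numeric ones ("12").
    ADDRESS_RE = re.compile(
        r"^(?P<street_number>\w+)\s+"                       # "12" or "12B"
        r"(?P<street_name>.+?)\s+"                          # lazy, so street_type can match
        r"(?P<street_type>St|Ave|Blvd|Rd|Dr|Ln|Way)\.?,\s+"
        r"(?P<city>[A-Za-z .]+),\s+"
        r"(?P<state>[A-Z]{2})\s+"
        r"(?P<zip>\d{5}(?:-\d{4})?)$"
    )

    def parse_address(raw: str) -> dict | None:
        m = ADDRESS_RE.match(raw.strip())
        return m.groupdict() if m else None

    print(parse_address("12B Elm St, Springfield, IL 62704"))
    # {'street_number': '12B', 'street_name': 'Elm', 'street_type': 'St',
    #  'city': 'Springfield', 'state': 'IL', 'zip': '62704'}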

Leynos
about 2 months ago
When an agent can work independently over an 8 hour day, incorporating new information and balancing multiple conflicting goals—then apply everything it learned in context to start the next day with the benefit of that learning, repeat day after day—then I'll call it AGI.
Workaccount2
about 2 months ago
https://ai.vixra.org/pdf/2506.0065v1.pdf

I don't think I'll ever stop finding this funny.

It was written by Opus 4 too.

stevenhuang
about 2 months ago
This doubt of yours is as credible as all the claims 5 years ago that we will never have capable thinking machines, yet here we are with LLMs.
xadhominemx
about 2 months ago
Hyperscalers are spending less than half of their operating cash flows on AI capex. Full commitment to achieving AGI within a few years would look much different.
xbmcuser
about 2 months ago
1 reply
With all the major players like Amzn, Msft, and Alphabet going for their own custom chips, plus restrictions on selling to China, it will be interesting to see how Nvidia does.

I personally would prefer China to get to parity on node size and get competitive with Nvidia, as that is the only way I see the world not being taken over by the tech oligarchy.

JCM9
about 2 months ago
The custom chips don’t seem to be gaining traction at scale. On paper the specs look good but the ecosystem isn’t there. The bubble popping and flooding the market with CUDA GPUs means it will make even less sense to switch.
pgspaintbrush
about 2 months ago
1 reply
Are these companies developing InfiniBand-class interconnects to pair with their custom chips? Without equivalent fabric, they can’t replace NVIDIA GPUs for large-scale training.
whp_wessel
about 2 months ago
1 reply
A recent Huang podcast went into this, making the point that custom chips won't be competitive with Nvidia's, as Nvidia is now making specialized chips instead of just 'GPUs'.

https://open.spotify.com/episode/2ieRvuJxrpTh2V626siZYQ?si=2...

pgspaintbrush
about 2 months ago
Thank you for the pointer!
pragmatic
about 2 months ago
5 replies
At a "telecom of telecom" we (they) were still lighting up dark fiber 15 years later (2015) when mobile data for cell carriers finally created enough demand. Hard to fathom the amount of overbuild.

The only difference is fiber optic lines remained useful the whole time. Will these cards have the same longevity?

(I have no idea just sharing anecdata)

hyghjiyhu
about 2 months ago
1 reply
I think the chips themselves won't have longevity, but the R&D that went into them is useful. The question is whether the value of that can be captured.
adventured
about 2 months ago
Depends on which companies we're talking about. Nvidia's annualized operating income is so high right now that it'll be capturing more value (op income) in the next four quarters (~$120 billion) than its R&D expenditures have cost over its 32 year history combined. For Nvidia the return has long since been achieved.

As the AI spending bubble gives out, Nvidia's profit growth will slow dramatically (single digits) before slamming into a wall (as Cisco's did during the telecom bubble; leading up to the telecom crash, Cisco was producing rather insane quarter-over-quarter growth rates).

Zigurd
about 2 months ago
1 reply
New fiber isn't significantly more power efficient. The other side of the coin is that backhoes haven't become more efficient since the fiber was buried.
beerandt
about 2 months ago
1 reply
Directional drilling is a game changer and has become accessible in the last decade.

You're looking for advancement in carriages, unaware of the 'automobile' that made 5G and FTTH deployment at scale possible.

bcrl
about 2 months ago
Directional drilling is not a panacea. There are still places where the terrain is too rocky and aerial construction is the only way to control costs.
mjcl
about 2 months ago
1 reply
I think the high-density data centers that are being built to support the hyperscalers are more analogous to the dark fiber overbuild. When you lit that fiber in 2015, you (presumably) were not using a line card bought back in 1998.
pragmatic
about 2 months ago
Everything in that data center is depreciating as soon as they turn on the power.

AC, UPS, generators, not to mention the servers.

That's the thing with fiber: it was still useful. The cards at either end were easy to add, and 15 years later they were waaaayyy cheaper and higher perf (there were no cards on the ends of the dark lines).

heisenbit
about 2 months ago
> Will these cards have the same longevity?

The article cites anecdotal lifetimes of 1-2 years due to the significant stress.

pragmatic
about 2 months ago
In 2005 telecom was a cash cow because of long-distance charges, and if your mechanical phone switch was paid off you were printing money (regulations guaranteed revenue).

This didn't last that much longer, and many places were trying to diversify into managed services (Datadog-style monitoring of companies' on-prem network and server equipment, etc.), which they call "unregulated" revenue.

As with anything in business, irrational exuberance can kill you.

mg
about 2 months ago
9 replies

    Fiber networks were using less
    than 0.002% of available capacity,
    with potential for 60,000x speed
    increases. It was just too early.
I doubt we will see unused GPU capacity. As soon as we can prompt "Think about the codebase overnight. Try different ways to refactor it. Tomorrow, show me your best solution," we will want as much GPU time at the current rate as possible.

If a minute of GPU usage is currently $0.10, a night of GPU usage is 8 * 60 * 0.1 = $48. Which might very well be worth it for an improved codebase. Or a better design of a car. Or a better book cover. Or a better business plan.
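
As a quick sanity check on that arithmetic (the $0.10-per-GPU-minute rate is this comment's assumption, not a quoted price):

    # Back-of-the-envelope cost of an overnight "refactor my codebase" run,
    # at the assumed rate of $0.10 per GPU-minute.
    RATE_PER_MINUTE = 0.10  # dollars; assumption from the comment above
    HOURS_OVERNIGHT = 8

    cost_one_gpu = HOURS_OVERNIGHT * 60 * RATE_PER_MINUTE
    print(f"One GPU, one night: ${cost_one_gpu:.2f}")        # $48.00
    print(f"Ten GPUs, one night: ${10 * cost_one_gpu:.2f}")  # $480.00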

cantor_S_drug
about 2 months ago
2 replies
With improvements on the algorithm side and new techniques, even older hardware will become useful.
chatmasta
about 2 months ago
2 replies
This is the biggest threat to the GPU economy – software breakthroughs that enable inference on commodity CPU hardware or specialized ASIC boards that hyperscalers can fabricate themselves. Google has a stockpile of TPUs that seem fairly effective, although it’s hard to tell for certain because they don’t make it easy to rent them.
Zigurd
about 2 months ago
I don't think we will need to wait for anything as unpredictable as a breakthrough. Optimizing inference for the most clearly defined tasks, which are also the tasks where value is most readily quantified, like coding, is underway now.
xadhominemx
about 2 months ago
More efficient inference = more reasoning tokens. Hyperscaler ASICs are closing the gap at the hardware/system level, yes.
Zigurd
about 2 months ago
I get what you're saying and the reasoning behind it, but older hardware has never been useful where power consumption is part of determining usefulness.
Mo3
about 2 months ago
7 replies
> I doubt we will see unused GPU capacity

I'd argue we very certainly will. Companies are gobbling up GPUs like there's no tomorrow, assuming demand will remain stable and continue growing indefinitely. Meanwhile LLM fatigue has started to set in, models are getting smaller and smaller and consumer hardware is getting better and better. There's no way this won't end up with a lot of idle GPUs.

Workaccount2
about 2 months ago
2 replies
>Meanwhile LLM fatigue has started to set in

Has it?

I think there is this compulsion to think that LLMs are made for senior devs, and if devs are getting wary of LLMs, the experiment is over.

I'm not a programmer, my day job isn't tech, and the only people I know who express discontent with LLMs are a few programmer friends of mine. Which I get, but everyone else is using them gleefully for all manner of stuff. And now I am seeing the very first inklings of completely non-technical people making bespoke applets for themselves.

According to OpenAI, programming is ~4% of ChatGPT's usage. That's 96% being used for other stuff.

I don't see any realistic or grounded forecast that includes a diminishing demand for compute. We're still at the tip of adoption...

Mistletoe
about 2 months ago
5 replies
You should get on Reddit, people hate AI with a passion there. People I meet in real life hate it also. I think the public actually hates AI more than it should now.
Workaccount2
about 2 months ago
3 replies
I spent 13 years chronically on reddit before stumbling into an exit hatch of the bubble chamber.

Those people (well, really it's teens and college kids) live on reddit; they are so far from an accurate representation of reality it's insane.

transcriptase
about 2 months ago
2 replies
It's worse when you find out there are a couple dozen of the same moderators running nearly all of the top 500 subreddits.
hnhg
about 2 months ago
1 reply
That makes some sense. Are they paid for this?
GauntletWizard
about 2 months ago
1 reply
By reddit? No. By the users? No.

But they are paid.

KronisLV
about 2 months ago
2 replies
You’d like to substantiate that? I’d love to have someone pull the curtain back and learn by whom, if that’s the case.
GauntletWizard
about 2 months ago
Many of them are using their subreddits for submarine promotion. Here's a public example: https://www.reddit.com/r/SeattleWA/comments/547wxd/comment/d...

There's much worse that's "pure speculation", because a few thousand unemployed losers that nonetheless have six figure incomes aren't suspicious at all. See the kerfuffle when the mod of r/antiwork went on Fox News, where several of the other power mods alluded to their true incomes and professions (they know what they are - PR people) when trying to boast about how they'd have done better.

Workaccount2
about 2 months ago
I don't know about money, but they definitely get ego strokes.
jjoe
about 2 months ago
So they own the media...
1oooqooq
about 2 months ago
"i quit reddit but I'm 100% bullish in llms that just distil reddit posts to me"

oh you

Aerbil313
about 2 months ago
Everyone should learn the concept of a Skinner Box. [1]

Reddit is a Skinner Box. HN is too, though to a much lesser extent [2]. Every Skinner Box has one dominant opinion on every matter, which means, by simply using the product, your beliefs on any matter will shift towards the dominant opinion of the platform.

I was a chronically online Reddit user once. I can spot any chronically online Reddit user in just a few minutes in any social event by their mannerisms and the way they talk. I’ll ask and without fail indeed they are a daily Reddit user. It’s even more obvious in writing where you can spot them in just a few always-grammatically-correct text messages flavored with reddit-funny remarks and snarks and jokes.

Same goes for chronic X users. Their signature behavior is talking about social/political issues unprompted. It’s even easier to spot them.

I think the main reason behind platforms shaping user behavior is this: The most upvoted content will always surface to the top, where it will be seen by most users, meaning, its belief-shaping impact is exponential instead of linear. In the same manner unpopular opinions will be pushed to the bottom, and will have exponentially small impact. Some opinions will even be banned or shadowbanned, which means they are beyond the Overton Window of the specific platform.

This way, the platform both nudges you towards the dominant opinion and limits the range of possible opinions you will be exposed to. Over time, this affects your personality and character.

1: https://en.m.wikipedia.org/wiki/Operant_conditioning_chamber

2: The HN moderators and the algorithm both actively resist the effect and try to increase diversification.

Sammi
about 2 months ago
I had to quit Reddit after a decade of heavy use because of the doomerism. It's a place you go if you want to kill your spirit. It's just not healthy.
Fuzzwah
about 2 months ago
The view I have is that people hate having AI slop spewed at them, but will find value in asking an LLM about things they're interested in / getting help with things.
chii
about 2 months ago
It's a pretty biased sample. Not to mention that people who are neutral and just using AI won't be bothered to comment. So you only ever see one extreme or the other.
arkmm
about 2 months ago
The irony of this is that so many Reddit comments these days are AI generated.
pessimizer
about 2 months ago
1 reply
> From OpenAI, programming is ~4% of chatGPTs usage. That's 96% being used for other stuff.

I think it's important to remember that a good bunch of this is going to be people using it as an artificial friend, which is not really productive. Really that's destructive, because in that time you could be creating a relationship with an actual person instead of a context soon to be deleted.

But on the other hand, some people are using it as an artificial smart friend, asking it questions that they would be embarrassed to ask to other people, and learning. That could be a very good thing, but it's only as good as the people who own and tune the LLMs are. Sadly, they seem to be a bunch of oligarchs who are half sociopaths and half holy warriors.

As for compute, people using it as an artificial friend are either going to have a low price ceiling, or in an even worse case scenario they are not and it's going to be like gambling addiction.

baq
about 2 months ago
1 reply
Productive or destructive, demand is there, so it isn’t late bubble. It’s still early. (Which is scary, I’ll readily admit.)
pessimizer
about 2 months ago
2 replies
But demand isn't there (or rather, isn't proven to be there). Demand is measured in dollars, and right now VC is paying. This is peak bubble - the farthest distance between valuation and income.
yeasku
about 2 months ago
Even if it's there, will it be in 10 years?
baq
about 2 months ago
Microsoft and Meta are not VCs, and they're spending money on data centers like there's no tomorrow; that doesn't look like low demand.
idiotsecant
about 2 months ago
1 reply
Your bet is that people will simply use less compute, for the first time in the history of the human race?
Mo3
about 2 months ago
No, mostly less external compute
mg
about 2 months ago
1 reply
Look at the human body.

2% of it is dedicated to thinking.

My guess is that as a species, we will turn a similar percentage of our environment into thinking matter.

If there are a billion houses on planet Earth, 2% of that is 20 million datacenters we still have to build.

wussboy
about 2 months ago
An analogy is not proof. It is not even evidence.
goalieca
about 2 months ago
What's the lifetime of these things once they've been running hot for 2-3 years?
xadhominemx
about 2 months ago
Test-time compute has made consumption highly elastic. More compute = better results. The marginal cost of running these GPUs when they would otherwise be idle is relatively very low. It will be utilized.
brazukadev
about 2 months ago
This. I just found out that for my MCP needs, Qwen3 4B running locally is good enough! So I just stopped using the Gemini API.
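
For a sense of what that swap looks like in practice, here is a minimal sketch that queries a small local model instead of a hosted API; it assumes an Ollama install with the qwen3:4b model pulled, since the commenter's actual setup isn't specified:

    # Minimal sketch: ask a small local model a question instead of calling
    # a hosted API. Assumes Ollama is running and `ollama pull qwen3:4b`
    # has been done; the commenter's actual MCP setup is not described.
    import ollama  # pip install ollama

    response = ollama.chat(
        model="qwen3:4b",
        messages=[{"role": "user", "content": "In one line, what is MCP?"}],
    )
    print(response["message"]["content"])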
delusional
about 2 months ago
> There's no way this won't end up with a lot of idle GPUs.

Nvidia is betting the farm on reinventing GPU compute every 2 years. The GPUs won't end up idle, because they will end up in landfills.

Do I believe that's likely, no, but it is what I believe Nvidia is aiming for.

jdlshore
about 2 months ago
1 reply
> As soon as we can prompt…

This is the fundamental error I see people making. LLMs can’t operate independently today, not on substantive problems. A lot of people are assuming that they will some day be able to, but the fact is that, today, they cannot.

The AI bubble has been driven by people seeing the beginning of an S-curve and combining it with their science-fiction fantasies about what AI is capable of. Maybe they’re right, but I’m skeptical, and I think the capabilities we see today are close to as good as LLMs are going to get. And today, it’s not good enough.

Workaccount2
about 2 months ago
1 reply
Getting gold in the math Olympiad is a pretty strong indicator of operating independently on substantive problems.

A year ago they needed an extensive harness to get silver, and two years ago they could hardly multiply 1000x10000.

Terence Tao tweeted yesterday about using GPT5 to help quickly solve a problem he was working on.

saberience
about 2 months ago
2 replies
Yes but why did ChatGPT work on math Olympiad problems? Because it got a prompt giving it the instruction and context etc.

Why did GPT5 help Terence Tao solve a math problem, because he gave it a prompt and the context etc.

None of these models are useful without a human prompting them and giving them tasks, goals, context, etc. They don't operate independently, they don't get ideas of work to be done, they don't operate over long time horizons, and they can't accept long-term goals and subdivide those goals into sub-goals and sub-tasks.

They are useless without humans telling them what to do.

Workaccount2
about 2 months ago
1 reply
You should see what happens when you let them talk to each other
WA
about 2 months ago
1 reply
Errors compound? Context drift?
Workaccount2
about 2 months ago
Try it, and let them pick the topic. Though they will probably pick AI development, mysteriously it seems to be their favorite topic...
esafak
about 2 months ago
Why don't you stick them in a robot, give them agency, continuously train them, and see what happens? Be careful what you ask for.
skrebbel
about 2 months ago
2 replies
> improved codebase

I've seen lots of claims about AI coding skill, but that one might be able to improve (and not merely passably extend) a codebase is a new one. I'd want to see it before I believe it.

fragmede
about 2 months ago
Claude will refactor but more than that, it can add documentation. And it can be asked about a codebase too. "Where does FOO happen?" "How does BAR work?".
Leynos
about 2 months ago
It depends what you're fitting to. At the simplest, you can ask for a reduction in cyclomatic/cognitive complexity measured using a linter, extraction of methods (where a paragraph of code serves no purpose other than to populate a variable) or complex conditionals, move from an imperative to a declarative approach, etc. These are all things that can be caught through pattern matching and measured using a linter or code review tool (CodeRabbit, Sourcery or Codescene).

Other things might need to be done in two stages. You might ask the agent to first identify where code violates CQRS, then for each instance, explain the problem, and spawn a sub-agent to address that problem.

Other things the agent might identify this way: multiple implementations of the same thing, use of conflicting APIs, poor separation of concerns at a module or class level.

I don't typically let the agent do any of this end to end, but I would typically manually review findings before spawning subagents with those findings.
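
As a concrete example of the "measure with a linter, then target findings" step, here is a minimal sketch using the radon library to flag refactor candidates by cyclomatic complexity; the threshold of 10 and the target filename are illustrative choices, and radon is just one of several tools (alongside the review tools named above) that could fill this role:

    # Flag functions whose cyclomatic complexity exceeds a threshold, so an
    # agent (or a human) knows where to attempt an extract-method refactor.
    from radon.complexity import cc_visit  # pip install radon

    THRESHOLD = 10  # illustrative cutoff

    def refactor_candidates(source: str) -> list[tuple[str, int]]:
        """Return (name, complexity) for every block above THRESHOLD."""
        return [
            (block.name, block.complexity)
            for block in cc_visit(source)
            if block.complexity > THRESHOLD
        ]

    with open("some_module.py") as f:  # hypothetical target file
        for name, cc in refactor_candidates(f.read()):
            print(f"{name}: complexity {cc} -> candidate for extraction")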

thenaturalist
about 2 months ago
1 reply
This is such a short-sighted take, glaringly omitting a crucial ingredient in learning and improvement, for humans and machines alike: feedback loops.

And you can't really hack / outsmart feedback loops.

Just because something is conceptually possible doesn't make it optimal; interaction with the rest of the real world is what separates a possible solution from an optimal one.

The low-hanging fruit / obvious incremental improvements might be quickly implemented by LLMs based on established patterns in their training data.

That doesn't get you from 0 to 1 dollar, though, and that's what it's all about.

bwfan123
about 2 months ago
This was highlighted rather starkly by Sutton in a recent podcast.

LLMs are a great tool. But the real world is far too nuanced to be captured in text and tokens. So LLMs will be a great productivity-boosting tool, like a calculator or a spreadsheet. Expecting them to do more is science fiction.

yubblegum
about 2 months ago
2 replies
I just had to double check (I have not been paying attention for a couple of years), but indeed it seems GPU underutilization remains a fact, and the numbers are pretty significant. The main issue is being memory bound, so the compute sits idle.
davedx
about 2 months ago
Tasks being memory bound is not the same thing as GPUs being idle for economic reasons, though.
ninkendo
about 2 months ago
The actual computation speed isn't as important nowadays but it doesn't really change the conclusion with respect to whether they're underutilized.

Because the main reason for the price premiums on AI-class GPUs is the gobs of insanely fast memory, and that is very much not underutilized. AI companies grab GPUs with as much memory (at the fastest memory bandwidth) as possible and underclock the GPU to save on power. Linus Tech Tips had a great video about the H200 that touched on this this week: https://www.youtube.com/watch?v=lNumJwHpXIA

ccorcos
about 2 months ago
I’ve never understood why time is the metric people are using here. If LLMs get so much better we can “run them overnight”, what makes you think that they won’t also get faster and so they accomplish exactly what you’re talking about in 5 minutes?
bigbadfeline
about 2 months ago
> "Try different ways to refactor it. Tomorrow, show me your best solution."

The cost/benefit analysis doesn't add up for two reasons:

First, a refactored codebase works almost the same as a non-refactored one; that is, the tangible benefit is small.

Second, how many times are you going to refactor the codebase? Once and... that's it. There's simply no need for that much compute for lack of sufficient beneficial work.

That is, the present investments are going to waste unless we automate and robotize everything. I'm OK with that, but it's not where the industry is going.

credit_guy
about 2 months ago
> As soon as we can prompt "Think about the codebase overnight. Try different ways to refactor it. Tomorrow, show me your best solution," we will want as much GPU time at the current rate as possible.

That is nothing. Coding is done via text. Very soon people will use generative AI for high resolution movies. Maybe even HDR and high FPS (120 maybe?). Such videos will very likely cost in the range of $100-$1000 per minute. And will require lots and lots of GPUs. The US military (and I bet others as well) are already envisioning generative AI use for creating a picture of the battlespace. This type of generation will be even more intensive than high resolution videos.

ivape
about 2 months ago
1 reply
One of the things about the market before AI was that capital had limited growth opportunities. Tech, which was basically a universe of scaled-out CRUD apps, was what capital kept going back into.

AI is a lot more useful than hyper-scaled-up CRUD apps. Comparing this to the past is really overfitting, imho.

The only argument against accumulating GPUs is that they get old and stop working. Not that it sucks, not that it’s not worth it. As in, the argument against it is actually in the spirit of “I wish we could keep the thing longer”. Does that sound like there’s no demand for this thing?

The AI thesis requires getting on board with what Jensen has been saying:

1) We have a new way to do things

2) The old ways have been utterly outclassed

3) If a device has any semblance of compute power, it will need to be enhanced, updated, or wholesale replaced with an AI variant.

There is no middle ground to this thesis. There is no “and we’ll use AI here and here, but not here, therefore we predictably know what is to come”.

Get used to the unreal. Your web apps could truly one day be generated frame by frame by a video model. Really. The amount of compute we’ll need will be staggering.

pessimizer
about 2 months ago
1 reply
> Your web apps could truly one day be generated frame by frame by a video model. Really. The amount of compute we’ll need will be staggering.

We've technically been able to play board games by entering our moves into our telephones, sending them to a CPU to be combined, then printing out a new board on paper to conform to the new board state. We do not do this because it would be stupid. We cannot depend on people starting to do this to save the paper, printer, and ink industries. Some things are not done because they are worthless.

ivape
about 2 months ago
1 reply
You know that N people can now point a webcam at their boards and have a multi-modal LLM understand everyone's board state, right? Literally zero programming involved; you just have to point a camera at the damn thing and maybe write some glue code.

If you're a board game player then you are more than capable of imagining possibilities well beyond this.
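
The glue code really is small. A minimal sketch under stated assumptions (one webcam frame, the OpenAI Python SDK, and gpt-4o as an arbitrary vision-capable model; none of these choices come from the thread):

    # Grab one webcam frame and ask a multimodal model to read the board.
    import base64
    import cv2                 # pip install opencv-python
    from openai import OpenAI  # pip install openai

    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    assert ok, "could not read from webcam"

    _, jpg = cv2.imencode(".jpg", frame)
    b64 = base64.b64encode(jpg.tobytes()).decode()

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model; illustrative choice
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the current state of this board game."},
                {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    print(resp.choices[0].message.content)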

Peritract
about 2 months ago
The parent comment's point isn't that we can't do these things, or that these things are difficult; it's that we don't want to do them. They aren't beneficial.
stephc_int13
about 2 months ago
4 replies
Knowing the history of past bubbles is only mildly informative. The dotcom bubble was different from the railroad bubble, etc.

The only thing to keep in mind is that all of this is about business and ROI.

Given the colossal investments, even if the companies' finances are healthy and not fraudulent, the economic returns have to be unprecedented or there will be a crash.

They are all chasing a golden goose.

Printerisreal
about 2 months ago
1 reply
This time they have fiat money and the government on their side, so that is also different.
candiddevmike
about 2 months ago
1 reply
It just means we're all going to be hurt by the collapse, not just investors. In line with socialized losses, privatized profits.
Printerisreal
about 2 months ago
That is also true :)
Mistletoe
about 2 months ago
1 reply
I'm concerned that the accounting differences mentioned between Lucent and Nvidia, Microsoft, OpenAI, and Google just mean we have gotten much better at lying and misrepresenting things as true. Then the bubble pops, you get the real numbers, and we're all like "yep, it was the same thing all over again".
stephc_int13
about 2 months ago
Of course, CFOs are all very aware of what failed previously.
jauntywundrkind
about 2 months ago
4 replies
With Bezos openly stating the goal to build 10 GW+ data-centers in space, it almost feels in question whether this is about ROI, or simply building the Neuromancer future where the ultra-rich can finally be free of their need for the rest of us. Not needing labor at all would be the final return on investment. https://news.ycombinator.com/item?id=45465480 https://www.datacenterdynamics.com/en/news/jeff-bezos-claims...
gruez
about 2 months ago
1 reply
>With Bezos openly stating the goal to build 10 GW+ data-centers in space, it almost feels in question whether this is about ROI, or simply building the Neuromancer future where the ultra-rich can finally be free of their need for the rest of us. Not needing labor at all would be the final return on investment.

Why does "Not needing labor at all" need to be in space?

ooterness
about 2 months ago
2 replies
It's so the billions of disgruntled former workers can't storm the castle.
gruez
about 2 months ago
1 reply
Still doesn't make sense. The PLA (largest army in the world) can't even capture Taiwan. If they wanted an impenetrable fortress a random island is all they need.
estimator7292
about 2 months ago
That's one of the most wrong statements I've seen on the internet today
wmeredith
about 2 months ago
The movie Elysium shows this ruling-class-in-space, proletariat-on-Earth scenario in very high fidelity. The movie itself is just OK, but the glimpse of this future is very well executed in the production design and special effects. https://en.wikipedia.org/wiki/Elysium_(film)
N70Phone
about 2 months ago
1 reply
It is folly to take these statements at their words.

Bezos is just saying shit to generate hype. All these executives are just saying shit. There is no plan. You must treat these people as morons who understand nothing.

Anyone who knows even the slightest details of datacenter design knows that moving heat is the biggest problem. This is the exact thing that being in space makes infinitely harder. "Datacenters in space" is an idea you come up with only if you are a moron who knows nothing about either datacenters or space.

If nothing else this is the singular reason you should treat AI as a bubble. All of the people at the helm of it have not a single fucking clue what they're talking about. They all keep running their mouth with utter nonsense like this.

stephc_int13
about 2 months ago
Yes.

Billionaires are often regarded as having extremely insightful ideas; in practice their fortunes were often built on a mix of luck, grit, and competence in a few narrow fields, and their insights outside those domains tend to be average or worse.

Being too rich means you end up surrounded by sycophants.

chairmansteve
about 2 months ago
>10 GW+ data-centers in space.

We are a long way from that. At least 10 years, probably never gonna happen.

davedx
about 2 months ago
I'm not sure we should pay too much attention to what Bezos says now that he's out of the day-to-day running of Amazon. It feels like a lot of his life choices now are more about being a megawealthy playboy than anything economically motivated.
delusional
about 2 months ago
> only mildly informative

I agree, but would like to build out that theory a bit. When we start talking about the mechanisms of the past, we end up over-constraining the possibility space. There were a ton of different ways the dotcom bubble COULD have played out, and only one way it did. If we view the way it did as the only way it possibly could have, we'll almost certainly miss the way the next bubble will play out.

digitcatphd
about 2 months ago
2 replies
The biggest issue with Nvidia is that their revenue is not recurring, but the market is treating their stock as if it were. The stock is correlated with all semiconductor stocks, and the demand behind it is a one-time massive CAPEX investment lasting 1-2 years.

Simple as that: this is why it's just not possible for this to continue.

xadhominemx
about 2 months ago
3 replies
NVDA stock does not trade at a huge multiple: only 25x EPS, despite very rapid top-line growth and a dominant position on the eve of possibly the most important technology transition in the history of humankind. The market is (and has been) pricing in a slowdown.
SiempreViernes
about 2 months ago
1 reply
Since you aren't talking about the green transition, whatever technology transition you have in mind is obviously second most important at best.
xadhominemx
about 2 months ago
1 reply
If we get ASI it will figure out how to do the green transition for us!
popol12
about 2 months ago
1 reply
This is peak techno-solutionism
xadhominemx
about 2 months ago
If we are not headed to ASI, the spending will slow down and the problem will solve itself.
digitcatphd
about 2 months ago
1 reply
What do you think is going to happen to their earnings when CAPEX slows?
xadhominemx
about 2 months ago
Their earnings will certainly decline, or at least decelerate, if capex slows. I'm just saying that if the market weren't pricing in a slowdown, NVDA would be trading at 40-60x next year's EPS, not 25x.
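A toy sketch of the multiple argument, with entirely hypothetical numbers rather than NVDA's actual figures: the gap between what a growth multiple and the current multiple would imply for the share price is, in effect, the slowdown the market has priced in.

```python
# Hypothetical illustration: the same next-year EPS priced at different
# forward multiples. If the market believed rapid growth would persist,
# a 40-60x multiple might apply; at 25x the stock is worth roughly half.
def implied_price(eps_next_year: float, multiple: float) -> float:
    """Share price implied by a forward earnings multiple."""
    return eps_next_year * multiple

eps_next = 5.0  # assumed next-year EPS, purely illustrative

for multiple in (25, 40, 60):
    print(f"{multiple}x forward EPS -> implied price ${implied_price(eps_next, multiple):.0f}")
# 25x -> $125, 40x -> $200, 60x -> $300
```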
Printerisreal
about 2 months ago
2 replies
If this is the most important technology transition in the history of humankind, why isn't Nvidia itself leading the software part? Are they just selling shovels? Otherwise, why would they give up the chance to be at the head of it and develop the AGI and GOD themselves?
baobun
about 2 months ago
1 reply
> most important technology transition in the history of humankind

Get real.

Farming? Plow? Steam engine? Combustion engine? Electricity? Air conditioning? Computers? Internet?

xnx
about 2 months ago
1 reply
Creating a true digital brain would be humankind's greatest (last?) invention.
justsid
about 2 months ago
By what metric? In my opinion, actually solving all of the problems we currently have (and man, we have a lot of them beyond the obvious ones like climate change) would be our greatest achievement.
xadhominemx
about 2 months ago
They just committed to invest $100b (!) in OpenAI and said $100b is only the start.
cyanydeez
about 2 months ago
TSLA is the same. The market is basically a new rich person's bank, abstracted by loans and lines of credit.

Obviously it's a bubble, but that's meaningless for anyone but the richest to manage.

The rest of us are just ants.

spaceballbat
about 2 months ago
Looking at the last chapter of the essay, there was a lot of illegal activity by Lucent in the run-up to the collapse. Today, we won't know the list of shady practices until the bubble bursts. I doubt Tom could legally speculate; he'd likely be sued into oblivion if he even hinted at malfeasance by these trillion-dollar companies.
Theodores
about 2 months ago
This reminds me of SGI at the peak of the dot-com bubble.

SGI (Silicon Graphics) made the 3D hardware that many companies relied on for their own businesses, in the days before Windows NT and Nvidia came of age.

Alias|Wavefront and Discreet were two companies whose product cycles were very tied to the SGI product cycles, with SGI having some ownership, whether wholly owned or spun out (as SGI collapsed). I can't find the reporting from the time, but it seemed to me that the SGI share price was propped up by product launches from the likes of Alias|Wavefront or Discreet. Equally, the 3D software houses seemed to have share prices propped up by SGI product launches.

There was also the small matter of insider trading. If you knew the latest SGI boxes were lemons, then you could place your bets on the 3D software houses accordingly.

Eventually Autodesk, Computer Associates, and others owned all the software, or at least the user bases. Once upon a time these companies were on the stock market and worth billions, but then they became just another bullet point in the Autodesk footer.

My prediction is that a lot of AI is like that: a classic bubble. When the show moves on, all of these AI products will get shoehorned into the three companies that survive, with competition law meaning that it will be three rather than two eventual winners.

Equally, much like what happened with SGI, Nvidia will eventually come a cropper when the valuations built on today's hype and hubris fail to deliver.

redwood
about 2 months ago
TLDR: Lucent was committing various forms of accounting fraud, had an unhealthy cash flow position, and had its primary customers on economically dangerous ground. Nvidia meanwhile appears to be above board, has strong cash flow, and has extremely strong, dominant customers (i.e., customers that could reduce spending but can survive a downturn). Therefore there's no clear takeaway: similarities, but also differences. Risk and a lot of debt, with hyperscalers insulating themselves from some of that risk, but at the same time a lot more cash to burn.
cl42
about 2 months ago
Great points. I am bullish on AI but also wary of accounting practices. Tom says Nvidia's financials are different from Lucent's, but that doesn't mean we should drop our guard.

The Economist has a great discussion on depreciation assumptions having a huge impact on how the finances of the cloud vendors are perceived[1].

Revenue recognition and expectations around Oracle could also be what bursts the bubble. CoreWeave or Oracle could be the weak point, even if Nvidia is not.

[1] https://www.economist.com/business/2025/09/18/the-4trn-accou...
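To make the depreciation point concrete, here is a minimal sketch with hypothetical numbers (not any vendor's actual figures): lengthening the assumed useful life of GPUs directly inflates reported profit without changing cash flow at all.

```python
# Sketch: the same GPU capex produces very different reported earnings
# depending on the assumed useful life (straight-line depreciation).
CAPEX = 10e9  # hypothetical $10B spent on GPUs in year one

def annual_depreciation(capex: float, useful_life_years: int) -> float:
    """Straight-line depreciation expense per year."""
    return capex / useful_life_years

for life in (3, 5, 6):
    dep = annual_depreciation(CAPEX, life)
    print(f"{life}-year life: ${dep / 1e9:.2f}B/yr expense")
# 3-year life: $3.33B/yr vs 6-year life: $1.67B/yr. Stretching the
# assumed life from 3 to 6 years adds ~$1.67B/yr to pre-tax profit,
# even though not a single dollar of cash flow has changed.
```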

monkeydust
about 2 months ago
Where can you track GPU utilization rates? I assume the data is private, but curious if not.

53 more comments available on Hacker News
