Last activity: 10 days ago · Posted: Nov 8, 2025 at 10:05 AM EST

$1T in Tech Stocks Sold Off as Market Grows Skeptical of AI

pabs3
156 points
208 comments

Mood: skeptical
Sentiment: mixed
Category: other
Key topics: AI, Stock Market, Tech Industry
Debate intensity: 80/100

The article reports a $1 trillion decline in tech stock market capitalization, attributed to growing skepticism about AI's profitability, sparking debate among commenters about the cause and implications of this trend.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment: 17m after posting
Peak period: 152 comments (Day 1)
Avg / period: 40

Comment distribution: 160 data points

Based on 160 loaded comments

Key moments

  1. Story posted: Nov 8, 2025 at 10:05 AM EST (19 days ago)
  2. First comment: Nov 8, 2025 at 10:22 AM EST (17m after posting)
  3. Peak activity: 152 comments in Day 1 (hottest window of the conversation)
  4. Latest activity: Nov 17, 2025 at 6:40 AM EST (10 days ago)

Discussion (208 comments)
Showing 160 comments of 208
nba456_
19 days ago
2 replies
NASDAQ hasn't been this low since 2 weeks ago!
JKCalhoun
19 days ago
4 replies
When was the last $1T tech sell-off?

That's what people are grousing about.

epolanski
19 days ago
2 replies
Who cares? Volatility is uninteresting.
NaomiLehman
18 days ago
1 reply
Volatility is where big money is made, i.e. retail gets scalped.
fullshark
18 days ago
1 reply
Is this narrative that retail is constantly panicking and selling every time the market drops actually based on anything? I'm under the impression that the massive volume of buying and selling every day is institutional, and that institutional investors and hedge funds are the ones constantly adjusting based on how the data moves.
epolanski
18 days ago
Second this.

All we know is that over time retail investors tend to underperform the markets, but that's true of sophisticated institutional investors too.

Plus: in 2022, when we had a bear year, retail was the one buying the dips, according to the news.

cess11
19 days ago
Eh, I like keeping an eye on S&P 500 VIX to get a sense of the current mood among oligarchs and their institutions.

I wouldn't use it for investment decisions, however.

chmod775
19 days ago
2 replies
Never. This wasn't such a sell-off either.

What actually happened is that market cap declined by that amount, where market cap of course is just latest share price multiplied by shares outstanding.

Nobody should be surprised or care that this number fluctuates, which is why certain people try really hard to make it seem more interesting than it really is. Otherwise they'd be out of a job.

There is really nothing dumber than finance news.

expedition32
18 days ago
1 reply
The casino is rigged anyway. While people are standing outside food banks under god emperor Trump the billionaires are making more money than ever.

We will never see another 1929 crash in which rich people had to sell off their cars.

mensetmanusman
18 days ago
1 reply
https://link.springer.com/article/10.1007/s11266-018-0039-2

Do you have this data out to 2025?

ben_w
18 days ago
Why is food bank use in Vancouver, Canada relevant to a complaint about Trump?

Sure, Trump wants to add Canada to his kingdom, but unless something wild happened while I was out shopping, still a different country.

spwa4
18 days ago
Also known as mark-to-market. Especially with the cyclical deals done in stock at fictional (i.e. never sold at that price) valuations that are now all the rage.

Reminds me of Enron, really.

pllbnk
18 days ago
With the pace of inflation we have been witnessing over the past few years, $1T has become unimpressive. Let's talk percentages. And if somebody wants to talk about absolute numbers, they should talk not only about the negatives but about the positives too, as in how much the stock market gained before losing that $1T.
Jabbles
19 days ago
Well according to the FT article that this article is based on:

a) it's $800B

b) this is the largest such selloff since April

https://archive.ph/bzr5G

Ologn
19 days ago
2 replies
Yes...NVDA closed at $188.15 yesterday, a price it was never at until October. It did hit $212.19 last week, but retreated.

After spring 2023, Nvidia stock seems to follow a pattern: it has a run-up prior to earnings, it beats the forecast, the future forecast is replaced with an even more amazing forecast, and then the stock goes down for a bit. It also has runs: it went up in the first half of 2024, as well as from April to now.

Who knows how much longer it can go on, but I remember 1999 and things were crazier then. In some ways things were crazier three years ago, with FAANG salaries etc. There is a lot of capital spending; the question is whether these LLMs, with some tweaking, are worth that capital spending, and it's too early to tell that fully. Of course a big theoretical breakthrough like the utility of deep learning, or transformers, or the like would help, but those only come along every few years (if at all).

nextworddev
19 days ago
1 reply
Don’t think faang salaries came down meaningfully
conorcleary
18 days ago
1 reply
buying power has, and usd
nextworddev
18 days ago
Nah, stocks up more than inflation
conorcleary
18 days ago
1 reply
blow-off top
conorcleary
10 days ago
with cat ears
JKCalhoun
19 days ago
6 replies
"At the heart of the stock stumbles, there appears to be a growing anxiety about the AI business, which is massively expensive to operate and doesn’t appear to be paying off in any concrete way."

I wonder if this is a thing the U.S. should be worrying about with regard to China taking the lead. As long as the U.S. is … idling … it seems China could catch up—if in fact there is any there there with AI.

But I've been told by Eric Schmidt and others that AGI is just around the corner—by year's end even. Or, it is already being demonstrated in the lab, but we just don't know about it yet.

gishh
19 days ago
2 replies
Catch up to what? LLMs have clearly stalled in terms of advancement, they just kind of sort of get a little bit different at this point. Not even better, just different.
bubblelicious
19 days ago
3 replies
Where does this view come from? I’m not aware of any real evidence for it. Also consider that our data center buildouts in ’26 and ’27 will be absolutely extraordinary, and scaling is only at the beginning. You have a growing flywheel and plenty of synthetic data to break the data wall.
candiddevmike
19 days ago
2 replies
We need a fundamental paradigm shift beyond transformers. Throwing more compute or data at it isn't moving the needle.
bubblelicious
19 days ago
1 reply
And you don’t think that’s already happening? Also where is your evidence for this?
bigyabai
19 days ago
1 reply
> Also where is your evidence for this?

The fact that "scaling laws" didn't scale? Go open your favorite LLM in a hex editor, oftentimes half the larger tensors are just null bytes.

bubblelicious
19 days ago
Show me a paper; this makes no sense. Of course scaling laws are scaling.
marcosdumay
19 days ago
3 replies
Just to point out: there's no more data.

LLMs were always going to bottleneck on one of those two, as computing demand grows crazily with the amount of data, and data is necessarily limited. Turns out people threw crazy amounts of compute at it, so we got the other limit.

bigyabai
19 days ago
1 reply
Synthetic data works.
marcyb5st
19 days ago
1 reply
There's a limit to that (see https://www.nature.com/articles/s41586-024-07566-y ). Basically, if you use an LLM to augment a training dataset it will become "dumber" with every subsequent generation, and I am not sure how you can generate synthetic data for a language model without using a language model.
yorwba
18 days ago
Synthetic data doesn't have to come from an LLM. And that paper only showed that if you train on a random sample from an LLM, the resulting second LLM is a worse model of the distribution that the first LLM was trained on. When people construct synthetic data with LLMs, they typically do not just sample at random, but carefully shape the generation process to match the target task better than the original training distribution.
bubblelicious
18 days ago
Epoch has a pretty good analysis of bottlenecks here:

https://epoch.ai/blog/can-ai-scaling-continue-through-2030

There is plenty of data left, we don’t just train with crawled text data. Power constraints may turn out to be the real bottleneck but we’re like 4 orders of magnitude away

Mistletoe
19 days ago
Yeah, I’m constantly reminded of a quote about this: you can’t make another internet. LLMs already digested the one we have.
ModernMech
19 days ago
3 replies
Let me put it this way: when ChatGPT tells me I've hit the "Free plan limit for GPT-5", I don't even notice a difference when it goes away or when it comes back. There's no incentive for me to pay them for access to 5 if the downgraded models are just as good. That's a huge problem for them.
_aavaa_
19 days ago
1 reply
It is a problem easily solved with advertising.
ModernMech
19 days ago
1 reply
No, because as the history of hardware scaling shows us, things that run on supercomputers today will run on smartphones tomorrow. Current models already run fairly well on beefy desktop systems. Eventually models the quality of ChatGPT 4 will be open sourced and running on commodity systems. Then what? There's no moat.
treis
19 days ago
1 reply
10-20 years of your data in the form of chat history.

Billions of users allowing them to continually refine their models.

Hell, by then your phone might be the OpenAI 1, the world's first AI-powered phone (tm).

overfeed
18 days ago
1 reply
> The world's first AI powered phone

Do you remember the Facebook phone? Not many people do, because it was a failed project, and that was back when Android was way more open. Every couple of years, a tech company with billions has the brilliant idea: "Why don't we have a mobile platform that we control?", followed by failure. Amazon is the only qualified success in this area.

treis
18 days ago
1 reply
I agree that a slight twist on Android doesn't make sense. A phone with an integrated LLM, with apps that are essentially prompts to the LLM, might be different enough to gain market share.
overfeed
18 days ago
HP, Microsoft, and Samsung all had a go with non-Android OSes.
bubblelicious
19 days ago
1 reply
Is this based on any non-anecdotal evidence, by chance?
ModernMech
19 days ago
2 replies
Of course not, but explain how I am ever going to pay OpenAI, a for-profit company, any dollars. Sam Altman gets explosively angry when he's asked about how he's going to collect revenue, and that is why. He knows that when push comes to shove, his product isn't worth to people what it costs him to operate it. It's Homejoy at trillion-dollar scale; the man has learned nothing. He can't make money off this thing, which is why he's trying to get the government to back it. First through some crazy "Universal Basic Compute" scheme, now I guess through cosigning loans? I dunno, I just don't buy that this thing has any legs as a viable business.
bubblelicious
18 days ago
1 reply
I think you’re welcome to that opinion and are far from alone, but (1) I am very happy to pay for Claude; even $200/mo is worth it, and (2) idk if people just lose track of how far things have come in the span of literally a single year, with training infra growing insanely and people solving one fundamental problem after another.
ModernMech
18 days ago
We live in a time when you can't even work for an hour and afford to eat a hamburger. You having the liquid cash to spend $200 a month on a digital assistant is the height of privilege, and that's the whole problem the AI industry has.

The pool of people willing to pay for these premium services for their own sake is not big. You've got your power users and your institutional users like universities, but that's it. No one else is willing to shell out that kind of cash for what it is. You keep pointing to how far it's come but that's not really the problem, and in fact that makes everything worse for OpenAI et al. Because, as they don't have a moat, they don't have customer lock-in, and they also soon will not have technological barriers either. The models are not getting good enough to be what they promise, but they are getting good enough to put themselves out of business. Once this version of ChatGPT gets small enough to fit on commodity hardware, OpenAI et al will have a very hard time offering a value proposition.

Basically, if OpenAI can't achieve AGI before ChatGPT4-type LLM can fit on desktop hardware, they are toast. I don't like those odds for them.

noir_lord
19 days ago
Sell at a loss and make it up in volume.

It's been tried before, it generally ends in a crater.

riffraff
19 days ago
Ditto for Gemini Pro and Flash, which I have on my phone.

I've been traveling in a country where I don't speak the language or know the customs, and I found LLMs useful.

But I see almost zero difference between paid and unpaid plans, and I doubt I'd pay much or often for this privilege.

skywhopper
19 days ago
1 reply
There is zero evidence that synthetic data will provide any real benefit. All common sense says it can only reinforce and amplify the existing problems with LLMs and other generative “AI”.
bubblelicious
19 days ago
Sounds like someone has no knowledge of the literature; synthetic data isn’t like asking ChatGPT to give you a bunch of fake internet data.
epolanski
19 days ago
2 replies
I don't see that stall.

All of the tools I use get increasingly better every quarter at the very least (coding tools, research, image generation, etc).

afavour
19 days ago
1 reply
As always it’s hype vs reality. Today we’re told AGI is just around the corner. It isn’t. You’re right that a lot of tools are improving iteratively and that’s great, but the hyped-up valuations we’re seeing aren’t valuing coding assistants; they’re valuing a fictional reality where AGI solves everything and the first one there gets all the rewards.
epolanski
19 days ago
I haven't said a word about hype or AGI; I merely said that LLM evolution in the tools available right now (not in the future) has neither plateaued nor stalled.

I'm not expressing any judgement on the economics of it.

Bender
18 days ago
1 reply
It probably depends on the topics people are interacting with. I've spent the last few days teaching Grok how to manage iptables and nftables, when all I wanted was to translate my u32 module rules into nftables and was being lazy; that is something the provided translate scripts cannot do. Grok would confidently give me an answer, I would say that is wrong, it would admit straight away that it was wrong and then confidently give me another wrong answer, and I would teach it the right answer after a bit of bumbling on my part. It feels worse than being an editor on serverfault, but that's just the silly topics I play with. It could be that it does fine on most topics, but that has not been my experience thus far. At the end of the day I would have been better off just sticking with the man pages and tcpdump.
Bender
18 days ago
Adding today: I tested some u32-to-nftables equivalent rules and ran into similar issues with ChatGPT. Provided I tell ChatGPT what OS+version and nftables version I am using, it will get some of the IP header locations correct, but it still gets the syntax wrong for the older version of nftables that Alpine ships with. Sometimes it does get the older syntax right, sometimes not. If I correct it, ChatGPT will agree that is a safer method, so I should probably start by saying to use the older methods.

Both Grok and ChatGPT appear to have learned from the same sub-optimal sources.

shortrounddev2
19 days ago
4 replies
Why do we even want AGI so badly? It seems like a cataclysmic technology. Like after we invent it, market cap and stocks and money wont mean anything anymore
bubblelicious
19 days ago
4 replies
Why do people think this is any different than other major economic revolutions like electricity or the Industrial Revolution? Society is not going to collapse, things will just get weirder in both unbelievably positive ways and then also unbelievably negative ways, like the internet.
wartywhoa23
19 days ago
1 reply
The question is why humankind must strive for unbelievably positive things at the expense of being forever plagued by the unbelievably negative.

I'd much rather live in a world of tolerable good and bad opposing each other in moderate ways.

bubblelicious
19 days ago
2 replies
Right let’s not have done the Industrial Revolution or the Internet or electricity
wartywhoa23
19 days ago
If that undoes the suffering of the tens of millions of human beings killed and maimed in WWI and WWII, enabled by the Industrial Revolution, then let us have not!
shortrounddev2
18 days ago
I think the value of the internet has proven to be pretty dubious. It seems to have only made things worse
skywhopper
19 days ago
1 reply
The promise of AGI is that no human would have a job anymore. That is societal collapse.
cess11
19 days ago
Famously expressed as 'socialism or barbarism' by Rosa Luxemburg, who traced it back to Engels.
ToValueFunfetti
19 days ago
1 reply
Electricity doesn't remove the need for human labor, it just increases productivity. If we produced AGI that could match top humans across all fields, it would mean no more jobs (knowledge jobs at least; physical labor elimination depends on robotics). That would make the university model obsolete- training researchers would be a waste of money, and the well-paid positions that require a degree and thus justify tuition would vanish. The economy would have to change fundamentally or else people would have to starve en masse.

If we produced ASI, things would become truly unpredictable. There are some obvious things that are on the table- fusion, synthetic meat, actual VR, immortality, ending hunger, global warming, or war, etc. We probably get these if they can be gotten. And then it's into unknown unknowns.

Perfectly reasonable to believe ASI is impossible or that LLMs don't lead to AGI, but there is not much room to question how impactful these would be.

bubblelicious
18 days ago
1 reply
I disagree, you have to take yourself back to when electricity was not widely available. How much labor did electricity eliminate? A LOT I imagine.

AI will make a lot of things obsolete but I think that is just the inherent nature of such a disruptive technology.

It makes labor cost way lower for many things. But how the economy reorganizes itself around it seems unclear but I don’t really share this fear of the world imploding. How could cheap labor be bad?

Robotics for physical labor lag way behind e.g. coding but only because we haven’t mastered how to figure out the data flywheel and/or transfer knowledge sufficiently and efficiently (though people are trying).

ToValueFunfetti
17 days ago
>How much labor did electricity eliminate? A LOT I imagine.

90% or even 99.9% are in an entirely separate category from 100%. If a person can do 1000x labor per time and you have a use for the extra 999x labor, they and you can both benefit from the massive productivity gains. If that person can be replaced by as many robots and AIs as you like, you no longer have any use for them.

Our economy runs on the fact that we all have value to contribute and needs to fill; we exchange that value for money and then exchange that money for survival necessities plus extra comforts. If we no longer have any value versus a machine, we no longer have a method to attain food and shelter other than already having capital. Capitalism cannot exist under these conditions. And you can't get the AGI manager or AGI repairman job to account for it- the AGI is a better fit for these jobs too.

The only jobs that can exist under those conditions are government mandated. So we either run a jobs program for everybody or we provide a UBI and nobody works. Electricity didn't change anything so fundamental.

shortrounddev2
18 days ago
Because if you replace all of humans with machines, what jobs will be left?
marcyb5st
19 days ago
1 reply
But imagine how much money it will create for shareholders for a little bit /s

Seriously though, there's a part of me that hopes the technology can help with technological advancement: fusion, room-temperature superconductors, working solid-state batteries, ... which would all help in leaping ahead and making sure everyone on the planet has a good life. Is the risk worth it? I don't know, but that's my reason for wanting AGI.

skywhopper
19 days ago
Why do you think AGI would help develop things that are mainly limited not by ideas, but by the time and resources it takes to do the experiments and engineering in the real world?
loeg
19 days ago
1 reply
AGI level isn't necessarily superhuman or "singularity." It's just human-level. That alone wouldn't make money meaningless.
shortrounddev2
18 days ago
1 reply
If it can displace all of knowledge work the way that machines displaced manual labor, then the economy is pretty much fucked, right?
loeg
18 days ago
1 reply
Is there no manual labor anymore? Was the economy fucked by the industrial revolution? It's hard to say how transformative it will be; we're all just kind of speculating.
shortrounddev2
18 days ago
1 reply
Automation of manual labor has annihilated some communities in the United States.
loeg
17 days ago
1 reply
There are still tons of manual labor jobs, though. Machines did not displace all manual labor, as your earlier comment implied. And the economy was not fucked by automation.
izacus
13 days ago
You seem to be fundamentally failing to grasp how many communities were destroyed by the changes you're downplaying, and how much that is a root cause of the political instability you're living through in the US.
BeFlatXIII
18 days ago
> Like after we invent it, market cap and stocks and money wont mean anything anymore

For those of us who survive the transition, good.

techblueberry
19 days ago
3 replies
What if we already have AGI? What if it is ChatGPT 5? What then?

https://aimagazine.com/articles/openai-ceo-chatgpt-would-hav...

Edit: this was serious. If I read the Wikipedia definition of AGI, ChatGPT meets the historical definition, at least. Why have we moved the goalposts?

junon
19 days ago
1 reply
Sam is wrong here. AGI has never had a clear definition, thus there's no "we have" or "we haven't". I'd say most agree we're still a ways off.
techblueberry
19 days ago
2 replies
I mean, 20 years ago it was intelligence that could be used in multiple domains, the ability to reason in natural language. Which is what we have? Really, beyond the fantastical "AGI is when we have luxury automated space communism," I sort of legitimately don’t understand why ChatGPT 5 isn’t AGI. (Other than the fact that it would be super disappointing, which is maybe my point.) Maybe it’s AGI v1? Maybe it’s AGI with an IQ of 47? But it’s super bipolar, since it can also talk like someone with an IQ of 150.
acdha
19 days ago
1 reply
Lack of reasoning or true understanding. It’s not just that it’s like IQ 47 but that it’s unreliable and inconsistent so you can’t safely deploy it in adversarial contexts.
techblueberry
19 days ago
1 reply
Humans with true intelligence are unreliable and inconsistent, why would AGI be different?
acdha
19 days ago
AGI wouldn’t be, LLMs are because they’re still far from that level.
loeg
19 days ago
20 years ago, even sub-human levels would have been an extremely optimistic prediction. The context has changed.
skywhopper
19 days ago
1 reply
This is Wikipedia’s definition “[AGI] is a type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks.”

GPT-5 is nowhere close to this. What are you talking about?

techblueberry
19 days ago
1 reply
ChatGPT wrote this, but it is basically the argument I’m making:

1. Functional Definition of AGI

If AGI is defined functionally — as a system that can perform most cognitive tasks a human can, across diverse domains, without retraining — then GPT-4/5 arguably qualifies:

It can write code, poetry, academic papers, and legal briefs.

It can reason through complex problems, explain them, and even teach new skills.

It can adapt to new domains using only language (without retraining), which is analogous to human learning via reading or conversation.

In this view, GPT-5 isn’t just a language model — it’s a general cognitive engine expressed through text.

Again, I think the common argument is more a religious argument than a practical one. Yes, I acknowledge this doesn’t meet the frontier definition of AGI, but that’s because it would be sad if it were the case, not because there’s any actual practical sense that we’ll get to the sci-fi definition. The view that ChatGPT is already performing most tasks reasonably, at the edge of or beyond human ability, is true.

tim333
18 days ago
1 reply
People have different takes but the economically important point is when you can have AI do the jobs rather than having to hire humans. We are not there yet. GPT-5 is good at some things but not others.
techblueberry
18 days ago
That’s a good goal, but why is that “AGI”? Why is AGI a socio-political-economic metric and not a technical one, and if it is a socio-political-economic metric, then isn’t it just fantasy? Why are we spending trillions of dollars on something we can’t define in technical terms?
JKCalhoun
18 days ago
I agree, we've been moving the goal posts. I think that the Turing Test was the first casualty of LLM ascendency.

But I also think it's natural to move the goal posts.

We try to peer at the future and what would convince us of machine intelligence. Academia finally delivers and we have to revise what we mean by intelligence.

If one, settling a pillow by her head,

Should say: "That is not what I meant at all;

That is not it, at all."

delaminator
19 days ago
2 replies
> "At the heart of the stock stumbles, there appears to be a growing anxiety about the AI business, which is massively expensive to operate and doesn’t appear to be paying off in any concrete way."

And stock holders realized this last week, all at the same time?

riffraff
19 days ago
1 reply
Well, a WSJ article came out last week, showing OpenAI lost 12B dollars last quarter.

https://www.wsj.com/livecoverage/stock-market-today-dow-sp-5...

I'm not saying this triggered a sell off, but it is indicative of perception changes.

delaminator
17 days ago
Which is quite a chunk. But that's one player and it's not even listed, so it can't be sold off.

AMZN is +10% in the last month, -1% last week.

The same AMZN that powers Anthropic.

This Amazon PR video was doing the rounds this same week.

https://www.youtube.com/watch?v=0TnHSRNqDqM

6,435 views Oct 29, 2025

Project Rainier is one of the world’s largest AI compute clusters. The collaborative infrastructure innovation delivers nearly half a million Trainium2 chips, with Anthropic scaling to more than one million chips by the end of 2025.

I'm still bullish

coliveira
19 days ago
This is standard media lingo to try to give a reason for a move that has been decided by the big players. Yes, because only when these large funds decide to make a move together can you get this magnitude of movement (it's not done by mom-and-pop investors).
skywhopper
19 days ago
I don’t think anyone should be worried about AGI except for all the money, energy, and focus that’s being wasted chasing it. It’s not anywhere close, and the sooner we realize that and start focusing on actual problems, the better off we’ll be.
OtherShrezzing
19 days ago
> But I've been told by Eric Schmidt and others that AGI is just around the corner—by year's end even

It was this time last year we were told “2025 will be the year of the agent”, with suggestions that the general population would be booking their vacations and managing their tax returns via Agents.

We’re 7 weeks from the end of the year, and although there are a few notable use cases in coding and math research, agents haven’t proven to be meaningfully disruptive to most people’s economic activity.

Something most people agree is AGI might arrive in the near future, but there’s still a huge effort required to diffuse that technology & its benefits throughout the economy.

HPsquared
19 days ago
3 replies
I think everyone saw this coming, only a matter of when. As great as the technology is, it's hard to predict who will profit from it.
dinobones
19 days ago
2 replies
Here’s another idea:

We’ve had GPT2 since 2019, almost 6 years now. Even then, OpenAI was claiming it was too dangerous to release or whatever.

It’s been 6 years since the path started. We’ve gone from hundreds of thousands -> millions -> billions -> tens of billions -> now possibly trillions in infrastructure cost.

But the value created from it has not been proportional along the way. It’s lagging behind by a few orders of magnitude.

The biggest value add of AI is that it can now help software engineers write some greenfield code +40% faster, and help people save 30 seconds on a Google search -> reading a website.

This is valuable, but it’s not transformational.

The value returned has to be a lot higher than that to justify these astronomical infrastructure costs, and I think people are realizing that they’re not materializing and don’t see a path to them materializing.

coliveira
19 days ago
2 replies
US AI companies are fixated on low-value activities; that's the problem. Creating more garbage for the internet or summarizing text is useful, but not that fantastic or transformative. I have a new version of MS Word where, at the start screen, it will suggest a bunch of BS topics that it can generate for me. What is the benefit of this other than the appearance that I'm doing real work? Most companies will be inundated by this nonsense created by people who are now 5 to 10 times more "productive" because they use AI, and corporations will pretty much stop doing any real work.
stackskipton
19 days ago
1 reply
This is already becoming a problem at my company with Gemini. Certain paper pushers are using LLMs to fluff up emails, so more people have to use LLMs to read them, with predictable hallucinations on either side resulting in missed deadlines and customer service problems.
noir_lord
19 days ago
An inattention arms race - sure it'll end amazingly with no societal harms.
BeFlatXIII
18 days ago
> What is the benefit of this other than the appearance that I'm doing real work? Most companies will be inundated by this nonsense created by people who are now 5 to 10 times more "productive" because they use AI, and corporations will pretty much stop doing any real work.

In the pre-AI days, how much of that 1x work was real in the first place?

tonyedgecombe
18 days ago
It seems exponential growth in spending only results in linear growth in capability.
mtoner23
19 days ago
a 4% drop, back to levels not seen since checks watch 2 weeks ago! not much of a correction imo
PessimalDecimal
19 days ago
Value capture seems to be happening in the hardware companies (really company) right now. Like with CPUs in the late 90s when Intel was dominant and AMD was struggling.
j45
19 days ago
1 reply
Isn't there usually a stock sell off roughly every fall?
skylurk
19 days ago
We gotta pay tuition ;)
tropicalfruit
19 days ago
1 reply
AI was a convenient vessel for shadow QE these last few years

Now, with rates falling, they can pivot the story - call it an AI bubble, let it crash

then use the crash as justification for renewed, open money printing

walterbell
19 days ago
Thanks for the pointer (and first HN comment!) on shadow QE.

July 2024, https://x.com/stealthqe4/status/1818782094316712148

> We’ve all been wondering where all of this liquidity is coming from in the markets. Stealth QE was being done somehow. Now we have the answer! It’s all in the Treasury increased t-bill issuance. QE has now been replaced by ATI.

https://hn.algolia.com/?query=%22shadow%20qe%22&type=all

rorylawless
19 days ago
1 reply
It seems like AI companies have grown skeptical too. The recent spate of browser releases and OpenAI launching a social network suggests to me that they’ve hit a dead end for now and are falling back on tried and true methods of monetizing.
natebc
19 days ago
1 reply
It's ads right?
OptionOfT
18 days ago
1 reply
It has always been. The longer Google can keep you on Google.com the more ads they can serve you.
natebc
18 days ago
Well, I'm sure Google will be just fine, as they've been in the ad business for quite some time, but all these hundreds (thousands?) of also-AI companies won't fare very well when they pivot to the selling-eyeballs business.
donohoe
19 days ago
4 replies
I've had a few people ask how they should shift their 401K mix to avoid the AI bubble. I honestly don't know what to tell them (aside from the fact I am not in any way a financial advisor). Everything seems exposed.
ashleyn
19 days ago
1 reply
The AI bubble is also the same seven companies that have been making all the money for the past decade. The answer is don't worry about it if you're buying S&P 500. Just keep buying, leave it go, and don't touch it. Preferably ever. But realistically don't sell for as long as you can. That applies to any market.
izacus
13 days ago
Cool, you seem to be forgetting that the purpose of that money is to be spent and available to the investor tho.
tim333
18 days ago
Utility stocks and the like? All the stuff that's not in a bubble.

Berkshire Hathaway last time was an anti bubble stock - it hit a low on the day the NASDAQ peaked in the dot com bubble.

trashface
18 days ago
Residential real estate trusts? Har har. Could do it now, but wasn't a great idea during the last big (housing) bubble.
riffraff
19 days ago
Put options? Inverse NASDAQ ETFs? Value-oriented funds? Equal weighted funds rather than market cap weighted?

I personally just keep investing in cheap total world market funds and let the market do its thing.

lapcat
19 days ago
1 reply
This headline is wildly inaccurate. $1 trillion in tech stocks were not "sold off." Rather, the collective market caps of some tech stocks dropped by $1 trillion.

Market cap is mostly a useless number. It's the current stock price multiplied by the number of outstanding shares. But only a small % of shares are bought and sold in a given day, so the current stock price is mostly irrelevant to the shares that aren't moving.

If you hold some stock, and the current stock price goes down, but you don't sell your stock, then you haven't lost any actual money. The so-called "value" of your stock may have dropped, but that's just a theoretical value. Unless you're desperate to sell now, you can wait out the downturn in price.
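
For illustration, here is a minimal sketch of the arithmetic lapcat (and chmod775 above) describes, using made-up share counts and prices rather than real market data: the headline "loss" is the price change multiplied by every share outstanding, even though only a small fraction of those shares actually traded.

# Illustrative Python sketch; all numbers below are hypothetical assumptions.
shares_outstanding = 2_500_000_000    # assumed total shares
price_yesterday = 300.00              # assumed prior close
price_today = 280.00                  # assumed close after a down day
shares_traded_today = 50_000_000      # assumed daily volume, a small fraction of the float

market_cap_change = (price_today - price_yesterday) * shares_outstanding
value_actually_traded = price_today * shares_traded_today

print(f"Market cap 'lost': ${market_cap_change / 1e9:.1f}B")                      # $-50.0B
print(f"Value that actually changed hands: ${value_actually_traded / 1e9:.1f}B")  # $14.0B

On these assumed numbers, the reported market-cap decline is roughly 3.5 times the dollar value that actually changed hands, which is the gap both commenters are pointing at.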

PessimalDecimal
19 days ago
1 reply
> so the current stock price is mostly irrelevant to the shares that aren't moving.

If it moves enough, shares that aren't moving might become shares that are though. Unless a company's stock is all held by Die Hard True Believers who will HODL through the apocalypse and beyond, the market price can matter.

We'd also have to run the same argument on the upside too. Does the current stock price matter to those who aren't selling when it goes 2x in a year?

lapcat
19 days ago
What apocalypse, exactly? The stock market has eventually recovered from every "crash." Occasionally a big company crashes and never recovers, e.g., Lehman Brothers, but I wouldn't expect that to happen to Amazon, Meta, Microsoft, or Oracle.

I didn't say that stock price is totally irrelevant, but if you're investing for the long term, short-term fluctuations mostly shouldn't change your strategy.

In any case, the headline is inaccurate. Unsold stock losing market value is not the same as stock sold off.

NoahZuniga
19 days ago
1 reply
You can only have a $1T tech stock sell off if $1T of tech stocks are bought.
dragonwriter
19 days ago
1 reply
Based on the body of the article, the headline is using “$X tech selloff” as unintuitive shorthand for “loss of $X of market value in aggregate market capitalization of tech stocks”, not as a reference to trading volume.
Ekaros
19 days ago
So correspondingly there has been trillions in tech buying in past years?
cs702
19 days ago
1 reply
The OP's headline is not even wrong.

Tech stock market capitalization declined by $1T.

Every share of stock sold by one party was purchased by another party, as always.

thaumasiotes
18 days ago
For the market capitalization to fall, the price of shares has to fall.

For the price of shares to fall, selling pressure in the market has to outweigh buying pressure. The fact that the price dropped is how we know this is a selloff and not a buyoff.

giorgioz
19 days ago
1 reply
"$1 trillion in stock value has been wiped from several of Silicon Valley’s heaviest hitters, all of which are heavily enmeshed in generative AI. Oracle, Meta, Palantir, and Nvidia"

I just checked Oracle, Palantir, and Nvidia, and they don't seem particularly down. Only Meta seems down, from $750 to $620, a 21% drop back to the value it had in April 2025 (which would be a drop of about $277 billion).

Is there any data supporting the article's claim of a $1T drop in stock value?

YuukiRey
19 days ago
Are we looking at different charts? What I see right now on a 5 day view is:

- Nvidia -11%

- Palantir -16%

- Oracle -11%

- Meta -5%

With some very quick and extremely cursory napkin maths I get a figure in the 800 billion range, which the original article mentioned. I guess the linked article rounded it up to make it more sensational.
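
To make that napkin math concrete, here is a small sketch that reproduces it. The five-day percentage moves are the ones quoted above; the starting market caps are rough assumed ballpark figures for early November 2025, not sourced numbers, so treat the output as an order-of-magnitude check only.

# Assumed pre-drop market caps in trillions of USD (rough ballpark, not sourced),
# paired with the five-day moves quoted in the comment above.
positions = {
    "Nvidia":   (4.5,  -0.11),
    "Palantir": (0.45, -0.16),
    "Oracle":   (0.65, -0.11),
    "Meta":     (1.55, -0.05),
}

total_decline = sum(cap * 1e12 * -pct for cap, pct in positions.values())
print(f"Approximate combined decline: ${total_decline / 1e9:.0f}B")
# With these assumed caps the total comes out around $716B, i.e. in the same
# high-hundreds-of-billions ballpark as the ~$800B figure from the FT article.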

qoez
19 days ago
3 replies
Must be nice writing stock narrative stories. Always new content every day to make up stories about the cause of why stocks go this way or that
afavour
19 days ago
2 replies
If you don’t think we’re due a massive correction after the hype of AI then I don’t know what to tell you. Every sign is right there.
gregoryhinkle
19 days ago
2 replies
I am going to show you a chart: https://i.imgur.com/q7l3lJt.png

This is a weekly chart of Nvidia from 2023 to 2024. During that period, the stock dropped from $95 to $75 in just two weeks. How would you defend the idea that a major correction wouldn’t have happened back in 2023–2024? Would you have expected a correction at that time? After all, given a long enough timescale, corrections are inevitable.

afavour
18 days ago
1 reply
I don’t know how to start a reply to you. Because Nvidia stock dipped for two weeks in the past, there’s no chance we’re due a massive correction? Makes no sense whatsoever.

Nvidia’s stock price is not the start and end of AI investments. OpenAI is losing over $11bn a quarter. More than they were losing in 2023, and debt accumulates over time. Reality will snap in eventually when investors realize their promised future isn’t coming any time soon. Nvidia’s valuation is in large part due to the money OpenAI and others are giving it right now. What do you think will happen when that money goes away?

cloverich
18 days ago
For context, $11bn is about 3% of Google's annual revenue. ChatGPT has something like 800 million users. It's completely plausible that they'll fizzle. It's also completely plausible they eat Google or Facebook and $11bn becomes nothing to them.
estimator7292
18 days ago
Friend, you're seeing signs in tea leaves here.
raffael_de
18 days ago
How about all the signs are _so_ right there that they have been priced in by now?
epistasis
19 days ago
1 reply
Just wait until you hear about sports reporting! Or the weather.
marcosdumay
19 days ago
We can predict the weather, with extreme reliability, hours in advance!
shevy-java
19 days ago
It will be written with AI, of course. :)

I am also getting annoyed at AI. In the last few days, more and more total garbage AI videos have been swarming YouTube. This is a waste of my time, because what I see is no longer real but a fabrication. It never happened.

shevy-java
19 days ago
3 replies
I swear, we need better ways to control the superrich. They are milking us dry here. There is a reason why a certain president is constantly associated with the naughty terms "insider trading".
jcfrei
19 days ago
4 replies
Not gonna happen. They'll just threaten to move to another country. It's a prisoner's dilemma for all countries: you can either give in to demands for lower taxation and hope that re-domiciling will overcompensate for the tax reduction, or increase taxation and lose taxpayers to other countries.
RugnirViking
18 days ago
1 reply
What's the correct solution to an iterated prisoner's dilemma?
ben_w
18 days ago
Even without iteration, enforcers change the payoff matrix.
JKCalhoun
18 days ago
1 reply
"They'll just threaten to move to another country."

Okay?

phyzix5761
18 days ago
3 replies
Yeah, but they'll take all the jobs and then you get another Trump Tariff Tantrum trying to bring jobs back.
JKCalhoun
18 days ago
1 reply
You're assuming they'll follow through with their threat.
phyzix5761
18 days ago
I'm not. Just responding to the person saying it would be good.
swat535
18 days ago
1 reply
Are they really creating local jobs?

Last I heard they are bent on mass firings, outsourcing for cheap labor, cutting costs and enriching themselves.

Unless there is strong regulation that forces them to actually contribute or be punished, they will do whatever they can to profit.

phyzix5761
18 days ago
1 reply
The one thing about rich people is that they’re very greedy. But that greed often fuels economic activity. They don’t take the money their companies earn and hide it under a mattress, because inflation would make it lose value. Instead, they reinvest it, which either creates new jobs or expands the company’s assets.

Expanding assets can mean building new factories, ordering more raw materials, or entering new markets. Each of these steps involves third-party vendors: construction firms to build facilities, delivery companies to transport materials, mining companies to extract resources, suppliers, logistics providers, marketers, and contractors.

All of this spending creates jobs. Maybe not directly within their own company, but across the many other businesses that support their growth.

_DeadFred_
18 days ago
1 reply
That was in pre-2010 America. Their wealth doesn't appear to be doing this in a way that benefits the US today.
phyzix5761
18 days ago
Where is all the cash going then? Dividend payouts have stayed roughly the same since 2000 and S&P 500 trade volumes have sky-rocketed since then. Firms and individuals are reinvesting cash at the same historical rate.
BeFlatXIII
18 days ago
That's what they always threaten.
estebank
18 days ago
1 reply
The US taxes all citizens on their global income, even those living abroad. The only way to avoid that is to give up your citizenship, but then you have to pay an exit tax as if you had liquidated your assets. Even if you go to a country with a taxation treaty, you'll still be paying the minimum of the two countries to some country. If the other country taxes you less, you pay the difference to the US.
penguin_booze
18 days ago
That's the same as saying, you eventually pay the maximum tax rate of the involved countries. I think that's the standard practice of double-taxation avoidance agreements (DTAA).
ajdsfasdk
18 days ago
It's really not a dilemma. Those are US dollars - property of the united states government - and we can give them to whoever will use them best. Forcibly taking billionaires money seems scary, but when you realize most of them hate the median American (what else could you call the current wealth gap), it's the only sensible option. We need a new class of elites that love and have connections (race, history, religion) to their people. That being said, there's a reason "blood and soil" gets such a bad rep - our current elites would quickly be replaced, so the dominant narrative is "you're hitler if you want to take care of your countrymen".

Meanwhile, our tax dollars fund a genocide and we pour trillions into AI while the rest of the country suffers.

port11
18 days ago
At this point, the inequality is so far off the rails that I don't see any way to harmonise these 2 ways of existence — ‘normal’ people are completely alien to the ultra-rich.

If wealth accumulation is your way of life, why bother with the well-being of the plebes? I'm genuinely curious as to what solutions there could be. These people are, quite frankly, not within our realm of reality anymore.

bigyabai
19 days ago
Yet, if anyone ever says "stop buying tech tchotchkes" then they become the villain. The superrich have us in their pockets.
softwaredoug
19 days ago
1 reply
This says it all.

> There are also companies like Sweetgreen, the salad company that has tried to position itself as an automation company that serves salads on the side. Indeed, Sweetgreen has tried to dabble in a variety of tech, including AI and robots

Please just make me a good salad.

afavour
19 days ago
1 reply
But just making good salad doesn’t make the stock market valuation go brrrr!
jsheard
19 days ago
1 reply
Flashback to when Long Island Iced Tea rebranded as Long Blockchain Corp and their stock price tripled overnight for no sensible reason.
cess11
19 days ago
Thanks for mentioning it, I had missed that one. Apparently it was an obvious insider hustle:

https://en.wikipedia.org/wiki/Long_Blockchain_Corp.

GreenWatermelon
19 days ago
The prospect of this bubble finally popping fills me with great excitement!
groundzeros2015
19 days ago
"As" is a journalism term used to imply to uncareful readers that two events are connected, without actually saying it. But it just means they observed them at a similar time.
web3-is-a-scam
19 days ago
More.

48 more comments available on Hacker News

View full discussion on Hacker News
ID: 45857099 · Type: story · Last synced: 11/20/2025, 8:47:02 PM
