$1T in Tech Stocks Sold Off as Market Grows Skeptical of AI
Mood: skeptical
Sentiment: mixed
Category: other
Key topics: The article reports a $1 trillion decline in tech stock market capitalization, attributed to growing skepticism about AI's profitability, sparking debate among commenters about the cause and implications of this trend.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 17m after posting
Peak period: 152 comments (Day 1)
Avg per period: 40
Based on 160 loaded comments
Key moments
1. Story posted: Nov 8, 2025 at 10:05 AM EST (19 days ago)
2. First comment: Nov 8, 2025 at 10:22 AM EST (17m after posting)
3. Peak activity: 152 comments in Day 1 (hottest window of the conversation)
4. Latest activity: Nov 17, 2025 at 6:40 AM EST (10 days ago)
That's what people are grousing about.
All we know is that over time retail investors tend to underperform the markets, but that's true of sophisticated institutional investors too.
Plus: in 2022 when we had a bear year retail was the one buying the dips according to the news.
I wouldn't use it for investment decisions, however.
What actually happened is that market cap declined by that amount, where market cap of course is just latest share price multiplied by shares outstanding.
Nobody should be surprised or care that this number fluctuates, which is why certain people try really hard to make it seem more interesting than it really is. Otherwise they'd be out of a job.
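To make that concrete, here's a minimal sketch (all numbers invented for illustration) of why a headline market-cap "decline" is just the marginal trade price repriced across every outstanding share:

```python
# Market cap = latest trade price x shares outstanding.
# All figures below are made up for illustration.
shares_outstanding = 2_500_000_000      # 2.5B shares

price_yesterday = 100.00
price_today = 96.00                     # a 4% dip in the last traded price

# Even if only a sliver of the shares actually traded at $96, every
# outstanding share gets revalued at that marginal price:
cap_change = (price_today - price_yesterday) * shares_outstanding
print(f"Market cap change: ${cap_change / 1e9:.0f}B")   # -> $-10B
```

A 4% move in the last trade thus "erases" $10B on paper, even though only a tiny fraction of holders transacted at the new price.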
There is really nothing dumber than finance news.
We will never see another 1929 crash in which rich people had to sell off their cars.
Do you have this data out to 2025?
Sure, Trump wants to add Canada to his kingdom, but unless something wild happened while I was out shopping, still a different country.
Reminds me of Enron, really.
a) it's $800B
b) this is the largest such selloff since April
After spring 2023, Nvidia stock seems to follow a pattern: it runs up ahead of earnings, beats the forecast, replaces the old guidance with an even more ambitious forecast, and then dips for a bit. It also has longer runs: it climbed through the first half of 2024, and again from April until now.
Who knows how much longer it can go on, but I remember 1999, and things were crazier then. In some ways things were crazier three years ago, with FAANG salaries and so on. There is a lot of capital spending; the question is whether these LLMs, with some tweaking, are worth that spending, and it's too early to tell fully. A big theoretical breakthrough, like the discovery of deep learning's utility or the transformer, would of course help, but those only come along every few years (if at all).
I wonder if this is a thing the U.S. should be worrying about with regard to China taking the lead. As long as the U.S. is … idling … it seems it could catch up—if in fact there is any there there with AI.
But I've been told by Eric Schmidt and others that AGI is just around the corner—by year's end even. Or, it is already being demonstrated in the lab, but we just don't know about it yet.
The fact that "scaling laws" didn't scale? Go open your favorite LLM in a hex editor, oftentimes half the larger tensors are just null bytes.
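If you want to eyeball that claim yourself, here is a rough sketch that counts zero bytes in a local weights file. The filename is a placeholder, and note that raw zero bytes are only a crude proxy: in fp16/bf16 tensors a zero byte is often just the low byte of a small nonzero value, not a zeroed weight.

```python
def zero_byte_fraction(path: str, chunk_size: int = 1 << 20) -> float:
    """Fraction of bytes in a file that are 0x00, streamed in 1 MiB chunks."""
    zeros = total = 0
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            zeros += chunk.count(0)
            total += len(chunk)
    return zeros / total if total else 0.0

# Placeholder path -- point this at a real local checkpoint:
# print(zero_byte_fraction("model.safetensors"))
```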
LLMs would always bottleneck on one of those two, since computing demand grows extremely quickly with the amount of data, and data is necessarily limited. It turns out people threw crazy amounts of compute at it, so we hit the other limit.
https://epoch.ai/blog/can-ai-scaling-continue-through-2030
There is plenty of data left; we don't just train on crawled text data. Power constraints may turn out to be the real bottleneck, but we're roughly four orders of magnitude away.
Billions of users allowing them to continually fund their models.
Hell, by then your phone might be the OpenAI 1, the world's first AI-powered phone (tm).
Do you remember the Facebook phone? Not many people do, because it was a failed project, and that was back when Android was way more open. Every couple of years, a tech company with billions has the brilliant idea: "Why don't we have a mobile platform that we control?", followed by failure. Amazon is the only qualified success in this area.
The pool of people willing to pay for these premium services for their own sake is not big. You've got your power users and your institutional users like universities, but that's it. No one else is willing to shell out that kind of cash for what it is. You keep pointing to how far it's come but that's not really the problem, and in fact that makes everything worse for OpenAI et al. Because, as they don't have a moat, they don't have customer lock-in, and they also soon will not have technological barriers either. The models are not getting good enough to be what they promise, but they are getting good enough to put themselves out of business. Once this version of ChatGPT gets small enough to fit on commodity hardware, OpenAI et al will have a very hard time offering a value proposition.
Basically, if OpenAI can't achieve AGI before a GPT-4-class LLM can fit on desktop hardware, they are toast. I don't like those odds for them.
It's been tried before, it generally ends in a crater.
I've been traveling in a country where I don't speak the language or know the customs, and I found LLMs useful.
But I see almost zero difference between paid and unpaid plans, and I doubt I'd pay much or often for this privilege.
All of the tools I use get increasingly better every quarter at the very least (coding tools, research, image generation, etc).
I'm not expressing any judgement on the economics of it.
Both Grok and ChatGPT appear to have learned from the same sub-optimal locations.
I'd much rather live in a world of tolerable good and bad opposing each other in moderate ways.
If we produced ASI, things would become truly unpredictable. There are some obvious things that are on the table- fusion, synthetic meat, actual VR, immortality, ending hunger, global warming, or war, etc. We probably get these if they can be gotten. And then it's into unknown unknowns.
Perfectly reasonable to believe ASI is impossible or that LLMs don't lead to AGI, but there is not much room to question how impactful these would be.
AI will make a lot of things obsolete but I think that is just the inherent nature of such a disruptive technology.
It makes labor costs way lower for many things. How the economy reorganizes itself around it is unclear, but I don't really share this fear of the world imploding. How could cheap labor be bad?
Robotics for physical labor lag way behind e.g. coding but only because we haven’t mastered how to figure out the data flywheel and/or transfer knowledge sufficiently and efficiently (though people are trying).
90% or even 99.9% are in an entirely separate category from 100%. If a person can do 1000x labor per time and you have a use for the extra 999x labor, they and you can both benefit from the massive productivity gains. If that person can be replaced by as many robots and AIs as you like, you no longer have any use for them.
Our economy runs on the fact that we all have value to contribute and needs to fill; we exchange that value for money and then exchange that money for survival necessities plus extra comforts. If we no longer have any value versus a machine, we no longer have a method to attain food and shelter other than already having capital. Capitalism cannot exist under these conditions. And you can't get the AGI manager or AGI repairman job to account for it- the AGI is a better fit for these jobs too.
The only jobs that can exist under those conditions are government mandated. So we either run a jobs program for everybody or we provide a UBI and nobody works. Electricity didn't change anything so fundamental.
Seriously though, there's a part of me that hopes the technology can help with technological advancement. Fusion, room-temperature superconductors, working solid-state batteries, ... which would all help in leaping ahead and making sure everyone on the planet has a good life. Is the risk worth it? I don't know, but that's my reason for wanting AGI.
For those of us who survive the transition, good.
https://aimagazine.com/articles/openai-ceo-chatgpt-would-hav...
Edit: this was serious, if I read the Wikipedia definition of AGI, ChatGPT meets the historical definition at least. Why have we moved the goal posts?
GPT-5 is nowhere close to this. What are you talking about?
1. Functional Definition of AGI
If AGI is defined functionally — as a system that can perform most cognitive tasks a human can, across diverse domains, without retraining — then GPT-4/5 arguably qualifies:
It can write code, poetry, academic papers, and legal briefs.
It can reason through complex problems, explain them, and even teach new skills.
It can adapt to new domains using only language (without retraining), which is analogous to human learning via reading or conversation.
In this view, GPT-5 isn’t just a language model — it’s a general cognitive engine expressed through text.
Again, I think the common argument is more a religious argument than a practical one. Yes, I acknowledge this doesn't meet the frontier definition of AGI, but that's because it would be sad if it did, not because there's any actual practical sense that we'll get to the sci-fi definition. The view that ChatGPT is already performing most tasks at or beyond the edge of human ability holds up.
But I also think it's natural to move the goal posts.
We try to peer at the future and what would convince us of machine intelligence. Academia finally delivers and we have to revise what we mean by intelligence.
If one, settling a pillow by her head,
Should say: "That is not what I meant at all;
That is not it, at all."
And stock holders realized this last week, all at the same time?
https://www.wsj.com/livecoverage/stock-market-today-dow-sp-5...
I'm not saying this triggered a sell off, but it is indicative of perception changes.
AMZN is +10% in the last month, -1% last week.
The same AMZN that powers Anthropic.
This Amazon PR video was doing the rounds this same week.
https://www.youtube.com/watch?v=0TnHSRNqDqM
6,435 views Oct 29, 2025
Project Rainier is one of the world’s largest AI compute clusters. The collaborative infrastructure innovation delivers nearly half a million Trainium2 chips, with Anthropic scaling to more than one million chips by the end of 2025.
I'm still bullish
It was this time last year we were told “2025 will be the year of the agent”, with suggestions that the general population would be booking their vacations and managing their tax returns via Agents.
We’re 7 weeks from the end of the year, and although there are a few notable use cases in coding and math research, agents haven’t proven meaningfully disruptive to most people’s economic activity.
Something most people agree is AGI might arrive in the near future, but there’s still a huge effort required to diffuse that technology & its benefits throughout the economy.
We’ve had GPT-2 since 2019, almost 6 years now. Even then, OpenAI was claiming it was too dangerous to release or whatever.
It’s been 6 years since the path started. We’ve gone from hundreds of thousands -> millions -> billions -> tens of billions -> now possibly trillions in infrastructure cost.
But the value created from it has not been proportional along the way. It’s lagging behind by a few orders of magnitude.
The biggest value add of AI is that it can now help software engineers write some greenfield code +40% faster, and help people save 30 seconds on a Google search -> reading a website.
This is valuable, but it’s not transformational.
The value returned has to be a lot higher than that to justify these astronomical infrastructure costs, and I think people are realizing that they’re not materializing and don’t see a path to them materializing.
In the pre-AI days, how much of that 1x work was real in the first place?
Now, with rates falling, they can pivot the story - call it an AI bubble, let it crash
then use the crash as justification for renewed, open money printing
July 2024, https://x.com/stealthqe4/status/1818782094316712148
> We’ve all been wondering where all of this liquidity is coming from in the markets. Stealth QE was being done somehow. Now we have the answer! It’s all in the Treasury increased t-bill issuance. QE has now been replaced by ATI.
Berkshire Hathaway was an anti-bubble stock last time around: it hit its low on the very day the NASDAQ peaked in the dot-com bubble.
I personally just keep investing in cheap total world market funds and let the market do its thing.
Market cap is mostly a useless number. It's the current stock price multiplied by the number of outstanding shares. But only a small % of shares are bought and sold in a given day, so the current stock price is mostly irrelevant to the shares that aren't moving.
If you hold some stock, and the current stock price goes down, but you don't sell your stock, then you haven't lost any actual money. The so-called "value" of your stock may have dropped, but that's just a theoretical value. Unless you're desperate to sell now, you can wait out the downturn in price.
If it moves enough, shares that aren't moving might become shares that are, though. Unless a company's stock is all held by Die Hard True Believers who will HODL through the apocalypse and beyond, the market price can matter.
We'd also have to run the same argument on the upside too. Does the current stock price matter to those who aren't selling when it goes 2x in a year?
I didn't say that stock price is totally irrelevant, but if you're investing for the long term, short-term fluctuations mostly shouldn't change your strategy.
In any case, the headline is inaccurate. Unsold stock losing market value is not the same as stock sold off.
Tech stock market capitalization declined by $1T.
Every share of stock sold by one party was purchased by another party, as always.
For the price of shares to fall, selling pressure in the market has to outweigh buying pressure. The fact that the price dropped is how we know this is a selloff and not a buyoff.
I just checked the stocks of Oracle, Palantir, and Nvidia, and they don't seem particularly down. Only Meta seems down, from $750 to $620, which is roughly a 17% drop (back to the value it had in April 2025), or a decline of about $277B in market cap.
Is there any data supporting the article's claim of a $1T drop in stock value?
- Nvidia -11%
- Palantir -16%
- Oracle -11%
- Meta -5%
With some very quick and extremely cursory napkin maths I do get in the 800 billion range, which the original article mentioned. I guess the linked article rounded it up to make it more sensational.
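For what it's worth, the napkin math holds up. Using rough early-November-2025 market caps (my own ballpark assumptions, not figures from the article) against the percentage drops listed above:

```python
# name: (approx. pre-drop market cap in $T -- illustrative guesses -- % decline)
drops = {
    "Nvidia":   (5.00, 0.11),
    "Palantir": (0.43, 0.16),
    "Oracle":   (0.65, 0.11),
    "Meta":     (1.60, 0.05),
}

# Dollar decline per name is simply cap * percentage drop; sum them up.
total = sum(cap * pct for cap, pct in drops.values())
print(f"Estimated combined decline: ~${total * 1000:.0f}B")   # -> ~$770B
```

Low-to-mid hundreds of billions per name add up to something in the neighborhood of the $800B the original article cites; the exact figure depends on which caps and which trading window you assume.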
This is a weekly chart of Nvidia from 2023 to 2024. During that period, the stock dropped from $95 to $75 in just two weeks. How would you defend the idea that a major correction wouldn’t have happened back in 2023–2024? Would you have expected a correction at that time? After all, given a long enough timescale, corrections are inevitable.
Nvidia’s stock price is not the start and end of AI investments. OpenAI is losing over $11bn a quarter. More than they were losing in 2023, and debt accumulates over time. Reality will snap in eventually when investors realize their promised future isn’t coming any time soon. Nvidia’s valuation is in large part due to the money OpenAI and others are giving it right now. What do you think will happen when that money goes away?
I am also getting annoyed at AI. In the last few days, more and more total-garbage AI videos have been swarming YouTube. This is a waste of my time, because what I see is no longer real but a fabrication. It never happened.
Okay?
Last I heard they are bent on mass firings, outsourcing for cheap labor, cutting costs and enriching themselves.
Unless there is strong regulation that forces them to actually contribute or be punished, they will do whatever they can to profit.
Expanding assets can mean building new factories, ordering more raw materials, or entering new markets. Each of these steps involves third-party vendors: construction firms to build facilities, delivery companies to transport materials, mining companies to extract resources, suppliers, logistics providers, marketers, and contractors.
All of this spending creates jobs. Maybe not directly within their own company, but across the many other businesses that support their growth.
Meanwhile, our tax dollars fund a genocide and we pour trillions into AI while the rest of the country suffers.
If wealth accumulation is your way of life, why bother with the well-being of the plebes? I'm genuinely curious as to what solutions could there be. These people are, quite frankly, not within our realm of reality anymore.
> There are also companies like Sweetgreen, the salad company that has tried to position itself as an automation company that serves salads on the side. Indeed, Sweetgreen has tried to dabble in a variety of tech, including AI and robots
Please just make me a good salad.