Jeff Bezos Says AI Is in a Bubble but Society Will Get 'gigantic' Benefits
Posted 3 months ago · Active 2 months ago
cnbc.com · Tech story · High profile
Sentiment: heated / mixed
Debate: 85/100
Key topics: AI · Bubble · Tech Investment
Jeff Bezos comments that AI is in a bubble but will bring 'gigantic' benefits to society, sparking debate among HN users about the validity of his statement and the potential impact of AI on society.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 14m after posting
Peak period: 156 comments (Days 1-2)
Avg / period: 32
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- 01. Story posted: Oct 3, 2025 at 12:00 PM EDT (3 months ago)
- 02. First comment: Oct 3, 2025 at 12:13 PM EDT (14m after posting)
- 03. Peak activity: 156 comments in Days 1-2 (hottest window of the conversation)
- 04. Latest activity: Oct 26, 2025 at 11:53 AM EDT (2 months ago)
ID: 45464429 · Type: story · Last synced: 11/20/2025, 8:09:59 PM
Also, a nit: there's a typo right in the digest, I assume; if "suring" is meant to be "during", does CNBC proofread their content?
One smug English faculty said, "well it's not that hard. You just look for dashes in their writing."
I responded with, "you know you can just tell it not to use those, right?"
Blank stares.
I hate it.
Banning advanced graph calculators on undergrad math exams is not because "they don't want to use new tools", either.
Appealing to honor is a partial solution at best. Cheating is a problem at West Point, let alone the majority of places with a less disciplined culture. It's sad, but true. The fact that you and I would never cheat on exams simply does not generalize.
https://www.armytimes.com/news/your-army/2021/04/18/51-west-...
edit: good on West Point for actually following up on the cheating. I've witnessed another institution sweeping it under the rug even when properly documented and passed up two or three levels of reporting. As an academic director and thus manager of professors this was infuriating and demoralizing for all concerned.
Use it as a peer review. Use it during brainstorming. Use it to clarify ambiguous thoughts. Use it to challenge your arguments and poke holes in your assumptions.
BTW if you haven't, I encourage reading this.
https://zenodo.org/records/17065099
I'm not saying throw the doors open and let loose. I'm saying that we need to find places where using these tools makes sense, follows a sense of professional ethics, and encourages (rather than replaces) critical thinking.
And the problem with your cited paper is that people who kick and scream the loudest about this at my institution (again, this is just at mine and is in no way indicative of any other institution) are the ones who have not updated their courses since I was in college. I mean that quite literally. I attended the institution I currently work at. Decades later and I could turn in papers that I wrote for their classes my freshman year and pass their classes.
Three of them sent me that same linked article. But instead of seeing the message "we need to think about how to use these things responsibly" they just read "you can do what you've done for years and nothing needs to change."
That "research" article isn't as impactful as the faculty at my institution thought.
I'm all for the thoughtful integration or rejection of these technologies based on sound pedagogical practices rooted in student learning theory. At my institution, and I want to stress n=1, they literally do not want to take time updating lessons, regardless of the reason. Llm's are just a convenient scapegoat right now.
I would argue that it's more unethical to not update your classroom lessons in over 2 decades than it is to use llm's to supplement learning.
But of course, every company needs to slap AI on their product now just to be seen as a viable product.
Personally, I look forward to seeing the bubble burst and being left with a more rational view of AI and what it can (and can not) do.
(Like those people that casually assume that everyone has a therapist or a lawyer.)
It was making money off those ideas at the expected valuations that was the problem.
The Internet really did revolutionize things, in substantial ways, but not to the tune of millions of dollars for pets.com.
The difference now is that this is all (or mostly) idle cash being invested. The massive warchests built up by FAANG over the last decade are finally being deployed meaningfully rather than sitting in bonds or buying back stock. Much different scenario than companies with non-viable business models going IPO on a wish and a dream.
Will the contagion be limited to a few companies, or will the S&P crumble under the bubble's collapse?
And everybody used them.
Nowadays everybody sees them as useless.
Not web chat rooms.
Discord is not.
Chats did not evolve into social media.
Facebook is a descendant of hotornot, not ICQ or chats.
WhatsApp was seen as free SMS.
AI is more useful than social media. This is not financial advice, but I lean more toward not a bubble.
Communication is a fundamental human need.
Generating slop isn't.
The challenge is the rest of the industry funding dead companies with billions of dollars on the off chance they replicate OpenAI’s success.
Some other company, that doesn't have a giant pile of debt will then pick up the pieces and make some money though. Once we dig out of the resulting market crash.
Some marginal investors know this, but they are okay because the music is still playing - and when they think it's time to leave, the bubble will pop.
People seem to forget that it's not about whether or not it's actually a bubble; it's really about when certain people, the ones who set these stock prices and valuations, decide it's time to exit and take their profit.
Uber and Amazon are really bad examples. Who was Amazon's competition? Nobody. By the time anyone woke up and took them seriously it was too late.
Uber only had to contend with Lyft and a few other less funded firms. Less funded being a really important thing to consider. Not to mention the easy access to immense amounts of funding Uber had.
Every company seems to be putting all their eggs in the AI basket. And that is causing basic usability and feature work to be neglected. Nobody cares because they are betting that AI agents will replace all that. But it won't and meanwhile everything else about these products will stagnate.
It's a disastrous strategy, and when it comes crashing down and the layoffs start, every CEO will get a pass on leading this failure because they were just doing what everyone else was doing.
Sure, many of these "thin prompt wrapper around the OpenAI API" product "businesses" will all be gone within a few years. But AI? That is going to be here indefinitely.
The "it'll make all your devs 6x as productive by the end of the year" types of promises. But those probably explain the valuations
They already had many "explanations" and models for why the planets were seemingly moving back and forth in the sky during the year. Their models were more complicated than necessary simply because they didn't want to consider the different premise.
The technology - what it is being used for vs. what is invested - does not match up at all. This is what happened in the dot-com bubble. There was a whole bunch of innovation needed to deliver a delightful UX and bring swathes of people onto the internet.
So far this is true of LLMs. Could this change? Sure. Will it change meaningfully? Personally I don't believe so.
The internet at its core was all about hooking up computers so that they could transform from mere computational beasts into communication devices. There was a tremendous amount of potential that was very, very real. It just so happens that if computers can communicate, we can do a whole bunch of stuff - as is going on today.
What are LLMs? Can someone please explain in a succinct way? I'm yet to see something super crystal clear.
The dotcom bubble was not about "the internet" itself. The Internet was fine and pretty much already proven as a very useful communication tool. It was about businesses that made absolutely no sense getting extremely high valuations just because they operated - however vaguely - over the internet.
Generative AI has never reached the level of usability of the Internet itself, and likely never will.
Steve Jobs called it back in 1995-97 - he referred to it as shopping for information and shopping for goods and services.
Nobody has this crystal clear, tangible vision re. LLMs. Nobody at all. That is a big problem.
I found the interview: https://www.youtube.com/watch?v=MqSfFcaluHc&t=1700s
It was more of a Web 2.0 company.
> The term "Web 2.0" was coined by Darcy DiNucci, an information architecture consultant, in her January 1999 article "Fragmented Future" [...] her "2.0" designation refers to the next version of the Web that does not directly relate to the term's current use.
> The term Web 2.0 did not resurface until 2002.
Google's first big Web 2.0 products were GMail (beta launched in 2004, just before Google's IPO) and Google Maps (2005).
Ultimately it doesn't matter who survives the AI bubble, because they are all more or less equivalent, proposing the same technical solution.
A lot of us clocked the crypto bullshit waaaay before the crash.
A Money Market Fund gives you interest if you are able to access it.
This is kind of a pattern:
1. There is some regulation that is inefficient ( e.g. taxi medallions, KYC, copyright protection ...)
2. New technology comes about which allows startups to claim that they have invented a new area that should be regulated differently
3. Turns out (2) is not true and new technology can easily be mapped to existing regulation but it would look bad for the regulator to take away the punchbowl
4. There is some down-turn (bubble pops) and the regulator takes away the punchbowl OR investors have accumulated so much money/power that they corrupt the government to have new rules for their businesses
I'm sorry what crash are you talking about?
(title fixed now)
Title needs to be changed to something like
"Bezos says AI is in industrial bubble yet promises huge benefits"
There's still depreciation, but it's not the same. Also look at other forms of hardware, like RAM, and the bonus electrical capacity being built.
I have not seen the prices of GPUs, CPUs or RAM going down; on the contrary, they get more expensive every day.
As tempting as it is, it leads to false outcomes because you are not thinking about how this particular situation is going to impact society and the economy.
It's much harder to reason this way, but isn't that the point? Personally, I don't want to hear or read analogies based on the past - I want to see and read stuff that comes from original thinking.
This guy gets it - https://www.youtube.com/watch?v=kxLCTA5wQow
Instead of plainly jumping on the bubble bandwagon he actually goes through a thorough analysis.
The current AI bubble is leading to trained models that won't be feasible to retrain for a decade or longer after the bubble bursts.
The trenches for the cables even longer than that.
In 2002 I was working making websites and setting up Linux servers, and I did not have internet at home.
https://www.statista.com/statistics/189349/us-households-hom...
Low global penetration: only 361 million people had internet access worldwide in 2000, a small fraction of the global population. The United States accounted for 31.1% of all global internet users in 2000, with a penetration rate of 43.1%.
It had the Microsoft network or whatever it was called.
I guess another example of the same thing is power generation capacity, although this comes online so much more slowly I'm not sure the dynamics would work in the same way.
It's hugely expensive, which is why the big cloud infrastructure companies have spent so much on optimizing every detail they can.
To give a different example, right now, some of the most prized sites for renewable energy are former coal plant sites, because they already have big fat transmission lines ready to go. Yesterday's industrial parks are now today's gentrifying urban districts, and so on.
Even more so for carrier lines, of course. NIMBYism is a strong block on right-of-way needs (except the undersea ones, obviously).
Of course this does make some moderate assumptions that it was a solid build in the first place, not a flimsy laptop, not artificially made obsolete/slow, etc. Even then, "install an SSD" and "install more RAM" is most of everything.
Of course, if you are a developer you should avoid doing these things so you won't get encouraged to write crappy programs.
Markets for electronics have momentum, and estimating that momentum is how chip producers plan for investment in manufacturing capacity, and how chip consumers plan for deprecation.
- better weather forecasts
- modeling intermittent generation on the grid to get more solar online
- drug discovery
- economic modeling
- low cost streaming games
- simulation of all types
Not to mention that we're still nowhere near solving the broadband coverage problem, especially in less developed countries like the US and most of the third world. If anything, it seems like we're moving towards satellite internet and cellular for areas outside of the urban centers, and those are terrible for latency-sensitive applications like game streaming.
This is not particularly true.
Even top of the line AAA games make sure they can be played on the current generation consoles which have been around for the last N years. Right now N=5.
Sure you’ll get much better graphics with a high end PC, but those looking for cloud gaming would likely be satisfied with PS5 level graphics which can be pretty good.
The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people that have gained control of everything.
Even for the average person in America, the ability to do so many activities online that would have taken hours otherwise (eg. shopping, research, DMV/government activities, etc). The fact that we see negative consequences of this like social network polarization or brainrot doesn't negate the positives that have been brought about.
In fact, the technology was introduced out here assuming corporate / elite users. The market reality became such that telcos were forced kicking and screaming to open up networks to everybody. The Telecom Regulatory Authority of India (back then) mandated rural <> urban parity of sorts. This eventually forced telcos to share infrastructure costs (share towers etc.) The total call and data volumes are eye-watering, but low-yield (low ARPU). I could go on and on but it's just batshit crazy.
Now UPI has layered on top of that---once again, benefiting from Reserve Bank of India's mandate for zero-fee transactions, and participating via a formal data interchange protocol and format.
Speaking from India, having lived here all my life, and occasionally travelled abroad (USAmerica, S.E. Asia).
We, as a society and democracy, are also feeling the harsh, harsh hand of "Code is Law", and increasingly centralised control of communication utilities (which the telecoms are). The left hand of darkness comes with a lot of darkness, sadly.
Which brings me to the moniker of "third world".
This place is insane, my friend --- first, second, third, and fourth worlds all smashing into each others' faces all the time. In so many ways, we are more first world here than many western countries. I first visited USAmerica in 2015, and I could almost smell an empire in decline. Walking across twitter headquarters in downtown SF of all the places, avoiding needles and syringes strewn on the sidewalk, and avoiding the completely smashed guy just barely standing there, right there in the middle of it all.
That was insane.
However, it wasn't just that, and the feeling has only solidified in three further visits. It isn't rational, very much a nose thing, coming from an ordinary software programmer (definitely not an economist, sociologist, think tank).
Now we need X*0.75 people to meet Y demand.
However, those savings are partially piped to consumers, and partially piped to owners.
There is only so much marginal propensity to spend that rich people have, so that additional wealth is not resulting in an increase in demand, at least commensurate enough to absorb the 25% who are unemployed or underemployed.
Ideally that money would be getting ploughed back into making new firms, or creating new work, but the work being created requires people with PhDs and a few specific skills, which means that entire fields of people are not in the workforce.
However, all that money has to go somewhere, and so asset classes are rising in value, because there is nowhere else for it to go.
Or, the returns on capital exceed the rate of economic growth (r > g), if you like Piketty's Capital in the Twenty First Century.
One of the central points is about how productivity and growth gains increasingly accrue to capital rather than labor, leading to capital accumulation and asset inflation.
https://en.m.wikipedia.org/wiki/Special:BookSources/97806742...
This is how GDP/person has increased 30x the last 250 years.
What always happens is that the no longer needed X*0.25 people find new useful things to do and we end up 33% richer.
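As a back-of-the-envelope check on the 30x figure above: spread over 250 years, that multiple implies a surprisingly modest compound annual growth rate. A minimal sketch (the 30x and 250-year numbers come from the comment; the math is plain compound growth):

```python
def implied_annual_growth(multiple: float, years: int) -> float:
    """Return the constant annual growth rate that compounds to `multiple` over `years`."""
    return multiple ** (1 / years) - 1

# 30x GDP per person over 250 years
rate = implied_annual_growth(30, 250)
print(f"{rate:.2%}")  # roughly 1.4% per year
```

Which is the point often made about long-run growth: small, sustained per-year gains compound into enormous differences across centuries.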
It's actually, "they end up" and the 33% gains you're talking about aren't realized en masse until all the coal miners have black lung. It's really quite the, "dealy" as Homer Simpson would say. See, "Charles Dickens" or, "William Blake" for more. #grease
For sure, we can shop faster, and (attempt) research and admin faster. But…
Shopping: used to be fun. You'd go with friends or family, discuss the goods together, gossip, bump into people you knew, stop for a sandwich, maybe mix shopping with a cinema or dinner trip. All the while, you'd be aware of other people's personal space, see their family dynamics. Queuing for event tickets brought you shoulder to shoulder with the crowd before the event began. Today, we do all this at home; strangers (and communities) are separated from us by glass, cables and satellites, rather than by air and shouting distance. I argue that this time saving is reducing our ability to socialise.
Research: this is definitely accelerated, and probably mostly for the better. But… some kinds of research were mingled with the “shopping” socialisation described above.
Admin: the happy path is now faster and functioning bureaucracy is smoother in the digital realm. But, it’s the edge cases which are now more painful. Elderly people struggle with digital tech and prefer face to face. Everyone is more open to more subtle and challenging threats (identity theft, fraud); we all have to learn complex and layered mitigation strategies. Also: digital systems are very fragile: they leak private data, they’re open to wider attack surfaces, they need more training and are harder to intuit without that training; they’re ripe for capture by monopolists (Google, Palantir).
The time and cost savings of all these are not felt by the users, or even the admins of these systems. The savings are felt only by the owners of the systems.
Technology has saved billions of person-hours in individual costs, in travel, in physical work. Yet we're working longer, using fewer ranges of motion, are less fit, less able to tolerate others' differences, and the wealth gap is widening.
Be careful about making narratives that don’t line up with industry data.
There’s a lot of brick and mortar retail going on. It just doesn’t look like the overbuilt mall infrastructure of the 1970s-1980s.
"Quality of life" is a hugely privileged topic to be zooming in on. For the vast majority of people both inside and outside the US, Time and Money are by far the most important factors in their lives.
If you had a huge pile of money but still lived in a shack in a slum, you’d still have a terrible quality of life.
If your argument were true, and people are saving time (or money) due to these new systems, why is the wealth gap widening?
You are describing platform capture. Be it Google Search, YouTube, TikTok, Meta, X, App Store, Play Store, Amazon, Uber - they have all made themselves intermediaries between public and services, extracting a huge fee. I see it like rent going up in a region until it reaches maximum bearable level, making it almost not worth it to live and work there. They extract value both directions, up and down, like ISPs without net-neutrality.
But AI has a different dynamic: it is not easy to centrally control ranking, filtering and UI with AI agents. You can download an LLM; you can't download a Google or a Meta. Now it is AI agents that have the "ear" of the user base.
It's not like it was good before - we had a generation of people writing slop to grab attention on the web and social networks, from the lowest porn site to CNN. We all got prompted by the Algorithm. Now that Algorithm is replaced by many AI agents that serve users more directly than before.
You can download a model. That doesn't necessarily mean you can download the best model and all the ancillary systems attached to it by whatever service. Just like you can download a web index but you probably cannot download google's index and certainly can't download their system of crawlers for keeping it up to date.
In poor countries, they may not have access to clean running water but it's almost guaranteed they have cell phones. We saw that in a documentary recently. What's good about that? They use cell phones not only to stay in touch but to carry out small business and personal sales. Something that wouldn't have been possible before the Internet age.
Just as I'm getting to the point where I can see retirement coming from off in the distance. Ugh.
As an owner of a web host that probably sees advantage to increased bot traffic, this statement is just more “just wait AI will be gigantic any minute now, keep investing in it for me so my investments stay valuable”.
482 more comments available on Hacker News