AI's Dial-Up Era
Posted 2 months ago · Active about 2 months ago
wreflection.com · Tech · Story · High profile
Sentiment: heated, mixed · Debate score: 80/100
Key topics: AI, Bubble, Tech Investment
The article 'AI's Dial-Up Era' compares the current AI boom to the early days of the internet, sparking debate among commenters about the validity of the analogy and the future of AI.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 1h after posting · Peak period: 89 comments in 0-6h · Avg per period: 16
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Nov 3, 2025 at 4:01 PM EST (2 months ago)
2. First comment: Nov 3, 2025 at 5:05 PM EST (1h after posting)
3. Peak activity: 89 comments in 0-6h, the hottest window of the conversation
4. Latest activity: Nov 7, 2025 at 1:01 PM EST (about 2 months ago)
ID: 45804377 · Type: story · Last synced: 11/20/2025, 8:14:16 PM
When the railroad bubble popped we had railroads. Metal and sticks, and probably more importantly, rights-of-way.
If this is a bubble, and it pops, basically all the money will have been spent on Nvidia GPUs that depreciate to 0 over 4 years. All this GPU spending will need to be done again, every 4 years.
Hopefully we at least get some nuclear power plants out of this.
I think we may not upgrade every 4 years, but instead upgrade when the AI models are not meeting our needs AND we have the funding & political will to do the upgrade.
Perhaps the singularity is just a sigmoid with the top of the curve being the level of capex the economy can withstand.
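As a rough sketch of that idea, a logistic curve whose ceiling is the maximum capex the economy can bear would look like the snippet below; the ceiling, growth rate, and midpoint are made-up illustrative values, not figures from the article or the thread.

```python
# A minimal sketch of the "singularity as a sigmoid" quip: logistic growth whose
# ceiling k is the maximum capex the economy can sustain. k, r, and t0 below are
# made-up illustrative numbers.
import math

def capex(t: float, k: float = 500.0, r: float = 0.8, t0: float = 2.0) -> float:
    """Logistic curve in $B/year: looks exponential early on, then flattens at k."""
    return k / (1.0 + math.exp(-r * (t - t0)))

for year in range(-4, 9):
    print(f"year {year:+d}: ~${capex(year):.0f}B of AI capex")
```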
Trains are closer to $50,000-$100,000 per mile per year.
If there's no money for the work it's a prioritization decision.
Heck if nothing else all the new capacity being created today may translate to ~zero cost storage, CPU/GPU compute and networking available to startups in the future if the bubble bursts, and that itself may lead to a new software revolution. Just think of how many good ideas are held back today because deploying them at scale is too expensive.
Note that these are just power purchase agreements. It's not nothing, but it's a long ways away from building nuclear.
I agree the depreciation schedule always seems like a real risk to the financial assumptions these companies/investors make, but a question I've wondered: will there be an unexpected opportunity when all these "useless" GPUs are put out to pasture? It seems like saying a factory will be useless because nobody wants to buy an IBM mainframe anymore, when an innovative company can repurpose a non-zero part of that infrastructure for another use case.
I'm still a fan of the railroad comparisons though for a few additional reasons:
1. The environmental impact of the railroad buildout was almost incomprehensibly large (though back in the 1800s people weren't really thinking about that at all.)
2. A lot of people lost their shirts investing in railroads! There were several bubbly crashes. A huge amount of money was thrown away.
3. There was plenty of wasted effort too. It was common for competing railroads to build out rails that served the same route within miles of each other. One of them might go bust and that infrastructure would be wasted.
It takes China 5 years now, but they've been ramping up for more than 20 years.
Although this article presents both sides of the debate, I believe only one of them is real (the one hyping up the technology). While there are people like me who are pessimistic about the technology, we are not in any position of power, and our opinion on the matter is basically side noise. I think a much more common view (among people with any say in the future of this technology) is the belief that this technology is not yet at a point which warrants all this investment. There were people who said that about the internet in 1999, and they were proven 100% correct in the months that followed.
1. The opening premise comparing AI to dial-up internet: basically everyone knew the internet would be revolutionary long before 1995. Being able to talk to people halfway across the world on a BBS? Sending a message to your family on the other side of the country and having them receive it instantly? Yeah, it was pretty obvious this was transformative. The Krugman quote is an extreme, notable outlier, and it gets trotted out around literally every new technology, from blockchain to VR headsets to 3D TVs, so just, like, don't use it, please.
2. The closing thesis of:
> Consider the restaurant owner from earlier who uses AI to create custom inventory software that is useful only for them. They won’t call themselves a software engineer.
The idea that restaurant owners will be writing inventory software might make sense if the only challenge of creating custom inventory software, or any custom software, was writing the code... but it isn't. Software projects don't fail because people didn't write enough code.
That sounds pretty similar to long-distance phone calls? (which I'm sure was transformative in its own way, but not on nearly the same scale as the internet)
Do we actually know how transformative the general population of 1995 thought the internet would or wouldn't be?
As soon as the internet arrived, a bit late for us (I'd say 1999 maybe) due to Minitel's "good enough" nature, it just became instantly obvious; everyone wanted it. The general population was raving mad to get an email address, and I never heard anyone criticize the internet like I criticize the fake "AI" stuff now.
I was only able to do this because I had some prior programming experience but I would imagine that if AI coding tools get a bit better they would enable a larger cohort of people to build a personal tool like I did.
I have a suspicion this is LLM text; it sounds corny. There are dozens of open source solutions, just look one up.
Just because some notable people dismissed things that wound up having a profound effect on the world does not mean that everything dismissed will have a profound effect.
We could just as easily be "peak Laserdisc" as "dial-up internet".
There's another presumably unintended aspect of the comparison that seems worth considering. The Internet in 2025 is certainly vastly more successful and impactful than the Internet in the mid-90s. But dial-up itself as a technology for accessing the Internet was as much of a dead-end as Laserdisc was for watching movies at home.
Whether or not AI has a similar trajectory to the Internet is separate from the question of whether the current implementation has an actual future. It seems reasonable to me that in the future we're enjoying the benefits of AI while laughing as we think back to the 2025 approach of just throwing more GPUs at the problem, in the same way we look back now and get a chuckle out of the idea of "shotgun modems" as the future.
> 1. Economic strain (investment as a share of GDP)
> 2. Industry strain (capex to revenue ratios)
> 3. Revenue growth trajectories (doubling time)
> 4. Valuation heat (price-to-earnings multiples)
> 5. Funding quality (the resilience of capital sources)
> His analysis shows that AI remains in a demand-led boom rather than a bubble, but if two of the five gauges head into red, we will be in bubble territory.
This seems like a more quantitative approach than most of "the sky is falling", "bubble time!", "circular money!" etc analyses commonly found on HN and in the news. Are there other worthwhile macro-economic indicators to look at?
It's fascinating how challenging it is to meaningfully compare recent events to prior economic cycles such as the Y2K tech bubble. It seems like it should be easy, but AFAICT it barely even rhymes.
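For illustration, the "two of five gauges in the red" rule quoted above could be sketched as below; the gauge names come from the quoted list, while every reading and threshold is a placeholder chosen for the example rather than the author's data.

```python
# A minimal sketch of the "two of five gauges in the red" rule. All numbers
# are placeholders for illustration, not the author's thresholds.
GAUGES = {
    # name: (current reading, red-line threshold, red when reading is higher?)
    "investment share of GDP":     (0.02, 0.04, True),
    "capex-to-revenue ratio":      (1.5,  3.0,  True),
    "revenue doubling time (yrs)": (2.0,  1.0,  False),  # red when doubling gets faster
    "price-to-earnings multiple":  (35.0, 60.0, True),
    "share of fragile funding":    (0.2,  0.5,  True),
}

reds = sum(
    (reading >= line) if higher_is_red else (reading <= line)
    for reading, line, higher_is_red in GAUGES.values()
)
verdict = "bubble territory" if reds >= 2 else "demand-led boom"
print(f"{reds}/5 gauges in the red -> {verdict}")
```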
Stock market capitalisation as a percentage of GDP, AKA the Buffett indicator.
https://www.longtermtrends.net/market-cap-to-gdp-the-buffett...
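The indicator behind that link is simply total stock-market capitalization divided by GDP; a quick sketch with placeholder, order-of-magnitude US figures (not readings from the site):

```python
# Buffett indicator: total market capitalization / GDP.
def buffett_indicator(total_market_cap: float, gdp: float) -> float:
    return total_market_cap / gdp

market_cap_usd = 60e12   # assumed: roughly $60T of US market cap
gdp_usd = 29e12          # assumed: roughly $29T of US GDP
print(f"Buffett indicator: {buffett_indicator(market_cap_usd, gdp_usd):.0%}")
# Readings far above ~100% are the ones usually described as stretched.
```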
Good luck, folks.
I'm sure there are other factors that make this metric not great for comparisons with other time periods, e.g.:
- rates
- accounting differences
If that bothers you, just multiply valuations by .75
Doesn't make much difference even without doing the same adjustment for previous eras.
Buffett indicator survives this argument. He’s a smart guy.
That is the real dial-up thinking.
Couldn't AI like be their custom inventory software?
Codex and Claude Code should not even exist.
Absolutely not. It's inherently software with a non-zero amount of randomness in every operation. You'd have a similar experience asking an intern to remember your inventory.
Like, I enjoy Copilot as a research tool, right, but at the same time, ANYTHING that involves delving into our chat history is often wrong. I own three vehicles, for example, and it cannot for its very life remember the year, make, and model of them. Like, they're there, but they're constantly getting switched around in the buffer. And once I started posing questions about friends' vehicles, that only got worse.
Really. Tool use is a big deal for humans, and it's just as big a deal for machines.
This your first paradigm shift? :-P
I'm not saying it is useless tech, but no it's not my first paradigm shift, and that's why I can see the difference.
I look forward to the "personal computing" period, with small models distributed everywhere...
Like 50% of internet users are already interacting with one of these daily.
You usually only change your habit when something is substantially better.
I don't know how free versions are going to be smaller, run on commodity hardware, take up trivial space and RAM, etc., AND be substantially better.
No, you usually only change your habit when the tools you are already using are changed without consulting you, and the statistics are then used to lie.
If you are using an Apple product chances are you are already using self-hosted models for things like writing tools and don't even know it.
Like the web, which worked out great?
Our Internet is largely centralized platforms. Built on technology controlled by trillion dollar titans.
Google somehow got the lion's share of browser usage and is now dictating the direction of web tech, including the removal of adblock. The URL bar defaults to Google search, where the top results are paid ads.
Your typical everyday person uses their default, locked down iPhone or Android to consume Google or Apple platform products. They then communicate with their friends over Meta platforms, Reddit, or Discord.
The decentralized web could never outrun money. It's difficult to out-engineer hundreds of thousands of the most talented, most highly paid engineers that are working to create these silos.
Fr tho, no ads - I'm not making money off them, I've got no invite code for you, I'm a human - I just don't get it. I've probably told 500 people about Brave, I don't know any that ever tried it.
I don't ever know what to say. You're not wrong, as long as you never try to do something else.
As someone who has been using Brave since it was first announced and very tightly coupled to the BAT crypto token, I must say it is much less effective nowadays.
I often still see a load of ads and also regularly have to turn off the shields for some sites.
I never have to turn off shields; I can count on one hand the number of times I've had to do that.
Maybe I have something additional installed I don't know.
Or rather, they'd block Brave.
Data General and Unisys did not create PCs - small disrupters did that. These startups were happy to sell eggs.
Because someone else can sell the goose and take your market.
Apple is best aligned to be the disruptor. But I wouldn’t underestimate the Chinese government dumping top-tier open-source models on the internet to take our tech companies down a notch or ten.
It's a very risky play, and if it doesn't work it leaves China in a much worse place than before, so ideally you don't make the play unless you're already facing some big downside, sort of as a "hail Mary" move. At this point I'm sure they're assuming Trump is glad-handing them while preparing for military action; they might even view an invasion of Taiwan as defensive if they think military action could be imminent anyhow.
And you know we'd be potting their transport ships, et cetera, from a distance the whole time, all to terrific fanfare. The Taiwan Strait would become the new training ground for naval drones, with the targets being almost exclusively Chinese.
Taiwan fields strong air defenses backed up by American long-range fortifications.
The threat is covert decapitation. A series of terrorist attacks carried out to sow confusion while the attack launches.
Nevertheless, unless China pulls off a Kabul, they’d still be subject to constant cross-Strait harassment.
If Apple does finally come up with a fully on-device AI model that is actually useful, what makes you think they won't gate it behind a $20/mo subscription like they do for everything else?
Non sequitur.
If a market is being ripped off by subscription, there is opportunity in selling the asset. Vice versa: if the asset sellers are ripping off the market, there is opportunity to turn it into a subscription. Business models tend to oscillate between these two for a variety of reasons. Nothing there suggests one mode is infinitely yielding.
> If Apple does finally come up with a fully on-device AI model that is actually useful, what makes you think they won't gate it behind a $20/mo subscription like they do for everything else?
If they can, someone else can, too. They can make plenty of money selling it straight.
Only in theory. Nothing beats getting paid forever.
> Business models tend to oscillate between these two for a variety of reasons
They do? AFAICT everything devolves into subscriptions/rent since it maximizes profit. It's the only logical outcome.
> If they can, someone else can, too.
And that's why companies love those monopolies. So, no... others can't straight up compete against a monopoly.
Is this disruptor Apple in the room with us now?
Apple's second biggest money source is services. You know, subscriptions. And that source keeps growing: https://sixcolors.com/post/2025/10/charts-apple-caps-off-bes...
It's also that same Apple that fights tooth and nail every single attempt to let people have the goose or even the promise of a goose. E.g. by saying that it's entitled to a cut even if a transaction didn't happen through Apple.
Assuming consumers even bother to set up a coop in their living room...
Selling eggs is better how?
One could argue that this period was just a brief fluke. Personal computers really took off only in the 1990s, web 2.0 happened in the mid-2000s. Now, for the average person, 95%+ of screen time boils down to using the computer as a dumb terminal to access centralized services "in the cloud".
And AI just further normalizes the need for connectivity; cloud models are likely to improve faster than local models, for both technical and business reasons. They've got the premium-subscriptions model down. I shudder to think what happens when OpenAI begins hiring/subsuming-the-knowledge-of "revenue optimization analysts" from the AAA gaming world as a way to boost revenue.
But hey, at least you still need humans, at some level, if your paperclip optimizer is told to find ways to get humans to spend money on "a sense of pride and accomplishment." [0]
We do not live in a utopia.
[0] https://www.guinnessworldrecords.com/world-records/503152-mo... - https://www.reddit.com/r/StarWarsBattlefront/comments/7cff0b...
The thing we do need to be careful about is regulatory capture. We could very well end up with nothing but monolithic centralized systems simply because it's made illegal to distribute, use, and share open models. They hinted quite strongly that they wanted to do this with DeepSeek.
There may even be a case to be made that at some point in the future, small local models will outperform monoliths: if distributed training becomes cheap enough, or if we find an alternative to backprop that allows models to learn as they infer (like a more developed forward-forward, or something like it), we may see models that do better simply because they aren't a large centralized organism behind a walled garden. I'll grant that this is a fairly Pollyanna take and represents the best possible outcome, but it's not outlandishly fantastic, and there is good reason to believe that any system based on a robust decentralized architecture would be more resilient to problems like platform enshittification and overdeveloped censorship.
At the end of the day, it's not important what the 'average' user is doing, so long as there are enough non-average users pushing the ball forward on the important stuff.
It only has to be good enough to do what we want. In the extreme, maybe inference becomes cheap enough that we ask “why do I have to wake up the laptop’s antenna?”
You could say the same about all self-hosted software, teams with billions of dollars to produce and host SaaS will always have an advantage over smaller, local operations.
There might also be local/global bias strategies. A tiny local model trained on your specific code/document base may be better aligned to your specific needs than a galaxy-scale model. If it only knows about one "User" class, the one in your codebase, it might be less prone to borrowing irrelevant ideas from fifty other systems.
We're already very, very close to "smart enough for most stuff". We just need that to also be "tuned for our specific wants and needs".
Most open source development happens on GitHub.
You'd think non-average developers would have noticed their code is now hosted by Microsoft, not the FSF. But perhaps not.
The AI end game is likely some kind of post-Cambrian, post-capitalist soup of evolving distributed compute.
But at the moment there's no conceivable way for local and/or distributed systems to have better performance and more intelligence.
Local computing has latency, bandwidth, and speed/memory limits, and general distributed computing isn't even a thing.
Also, is percentage of screentime the relevant metric? We moved TV consumption to the PC, does that take away from PCs?
Many apps moved to the web but that's basically just streamed code to be run in a local VM. Is that a dumb terminal? It's not exactly local compute independent...
Nearly the entirety of computer use cases today doesn't involve running things on a 'personal computer' in any way.
In fact, these days everyone kind of agrees that even hosting a spreadsheet on your own computer is a bad idea. Cloud, where everything is backed up, is the way to go.
PC was never 'no web'. No one actually 'counted every screw in their garage' as the PC killer app. It was always the web.
This whole idea that you can connect lots of cheap low capacity boxes and drive down compute costs is already going away.
In time people will go back to thinking of compute as a variable of the time taken to finish processing. That's the paradigm in the cloud compute world: you are billed for the TIME you use the box. Eventually people will just want to use something bigger that gets things done faster, hence you don't have to rent it for long.
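A toy example of that time-versus-rate trade-off, with made-up prices and runtimes:

```python
# A pricier, faster box can still be the cheaper rental when billing is by time.
def job_cost(hourly_rate: float, hours_to_finish: float) -> float:
    return hourly_rate * hours_to_finish

small_box = job_cost(hourly_rate=1.00, hours_to_finish=10.0)  # $10.00 total
big_box   = job_cost(hourly_rate=4.00, hours_to_finish=2.0)   # $8.00 total
print(f"small box: ${small_box:.2f}  vs  big box: ${big_box:.2f}")
```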
The personal computer arguably begins with VisiCalc in 1979.
> Through the 1970s, personal computers had proven popular with electronics enthusiasts and hobbyists, however it was unclear why the general public might want to own one. This perception changed in 1979 with the release of VisiCalc from VisiCorp (originally Personal Software), which was the first spreadsheet application.
https://en.wikipedia.org/wiki/History_of_personal_computers#...
Mainstream use of the web really took off in the second half of the 1990s. Arbitrarily, let's say with the release of Windows 95. That's a quarter of a century you'd be blinking for.
The web really pushed adoption, much more than the personal computation machine. It was the main use case for most folks.
Some of the online only games I am thinking of are CoD, Fortnite, LoL and Minecraft. The online-first DRM I am thinking of is Steam.
That exists, too, with GeForce Now etc, which is why I said mostly.
The killer apps in the 80s were spreadsheets and desktop publishing.
Would you classify eg gmail as 'content streaming'?
As far as how Gmail's existing offline mode works, I don't know.
(I have to use some weaselwording here, because GMail had decent spam detection since basically forever, and whether you call that AI or not depends on where we have shifted the goalposts at the moment.)
But yes, I am looking forward to having my own LLM on my PC which only I have access to.
Web browsers aren't quite that useless with no internet connection; some sites do offer offline capabilities (for example, Gmail). But even then, the vast majority of offline experiences exist to tide the user over until the network can be re-established, instead of truly offering something useful to do locally. Probably the only mainstream counter-examples would be games.
The mail server is the mail server even for Outlook.
Outlook gives you a way to look through email offline. Gmail apps and even Gmail in Chrome have an offline mode that lets you look through email.
It's not easy to call it fully offline, nor a dumb terminal.
I was just probing the 'content _streaming_' term. As you demonstrate, you'd have to squint really hard to describe GMail as content streaming.
'Offline' vs 'content streaming' is a false dichotomy. There's more different types of products and services.
(Which reminds me a bit of crypto-folks calling everything software that's not in crypto "web2", as if working on stodgy backends in a bank or making Nintendo Switch games has anything to do with the web at all.)
Our personal devices are far from thin clients.
The text content of a weather app is trivial compared to the UI.
Same with many web pages.
Desktop apps use local compute, but that's more a limitation of latency and network bandwidth than any fundamental need to keep things local.
Security and privacy also matter to some people. But not to most.
Turn off the internet on their iPad and see how many of the apps people use still work.
The iPad is a high-performance computer, not just because Apple thinks that's fun, but out of necessity given its ambition: the applications people use on it require local storage and rather heavy local computation. The web browser standards, if nothing else, have pretty much guaranteed that the age of thin clients is over: a client needs to supply a significant amount of computational resources and storage to use the web generally. Not even Chromebooks will practically be anything less than rich clients.
Going back to the original topic (and source of the analogy), iOS hosts an on-device large language model.
Maybe a PC without a hard drive (PXE-booting the OS), but if it has storage and can install software, it's not dumb.
I think it depends on if you see the browser for content or as a runtime environment.
Maybe it depends on the application architecture...? I.e., a compute-heavy WASM SPA at one end vs a server-rendered website.
Or is it an objective measure?
Web2.0 discarded the protocol approach and turned your computer into a thin client that does little more than render webapps that require you to be permanently online.
There was also FidoNet with offline message readers.
People must have been pretty smart back then. They had to know to hang up the phone to check for new messages.
Just a bunch of billionaires jockeying for not being poor.
A single image generally took nothing like a minute. Most people had moved to 28.8K modems that would deliver an acceptable large image in 10-20 seconds. Mind you, the full-screen resolution was typically 800x600 and color was an 8-bit palette… so much less data to move.
Moreover, thanks to “progressive jpeg”, you got to see the full picture in blocky form within a second or two.
And of course, with pages less busy and tracking cookies still a thing of the future, you could get enough of a news site up to start reading in less time than it takes today.
One final irk is that it's a little overdone to claim that "For the first time in history, you can exchange letters with someone across the world in seconds". Telex had been around for decades, and faxes, taking 10-20 seconds per page, were already commonplace.
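A back-of-the-envelope check of those modem-era numbers, using assumed compression ratios and line overhead rather than period measurements:

```python
# Rough check: how long a full-screen image takes on a 28.8K modem.
modem_bps = 28_800                       # 28.8K modem
effective_bytes_per_s = modem_bps / 10   # ~2.9 KB/s after start/stop bits and overhead

width, height, bytes_per_pixel = 800, 600, 1     # full screen, 8-bit palette
raw_bytes = width * height * bytes_per_pixel     # 480,000 bytes uncompressed

for name, ratio in [("GIF ~6:1", 6), ("JPEG ~12:1", 12)]:
    seconds = (raw_bytes / ratio) / effective_bytes_per_s
    print(f"{name}: ~{seconds:.0f} s for a full-screen image")
# Lands in roughly the 10-30 second range the commenter describes.
```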
273 more comments available on Hacker News