AI Might Yet Follow the Path of Previous Technological Revolutions
Posted 4 months ago · Active 4 months ago
economist.com · Tech · story · High profile
calm, mixed
Debate: 60/100
Key topics: AI, Technological Revolutions, Innovation
The article discusses the idea that AI might be considered 'normal technology' rather than a revolutionary force, sparking a thoughtful discussion among commenters about the potential impact and limitations of AI.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 3h after posting
Peak period: 43 comments in 3-6h
Avg / period: 13.3
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Sep 8, 2025 at 8:49 AM EDT (4 months ago)
2. First comment: Sep 8, 2025 at 11:43 AM EDT (3h after posting)
3. Peak activity: 43 comments in 3-6h, the hottest window of the conversation
4. Latest activity: Sep 9, 2025 at 8:46 PM EDT (4 months ago)
ID: 45167625 · Type: story · Last synced: 11/20/2025, 8:23:06 PM
Four decades ago was 1985. The thing is, there was a huge jump in progress from then until now. If we took something which had a nice ramped progress, like computer graphics, and instead of ramping up we went from '1985' to '2025' in progress over the course of a few months, do you think there wouldn't be a lot of hype?
Don't remind me.
Whether the particular current AI tech is it or not, I have yet to be convinced that the singularity is practically impossible, and as long as things develop in the opposite direction, I get increasingly unnerved.
It's quite telling how much faith you put in humanity though, you sound fully bought in.
It’s wrong to commit to either end of this argument; we don’t know how it’ll play out, but the potential for humans drastically reducing our own numbers is very much still real.
¹https://gemini.google.com/share/d9b505fef250
But as I alluded to earlier, we’re working towards plenty of other collapse scenarios, so who knows which we’ll realize first…
Humans have always believed that we are headed for imminent total disaster. In my youth it was WW3 and the impending nuclear armageddon that was inevitable. Or not, as it turned out. I hear the same language being used now about a whole bunch of other things. Including, of course, the evangelical Rapture that is going to happen any day now, but never does.
You can see the same thing at work in discussions about AI - there's passion in the voices of people predicting that AI will destroy humanity. Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop.
This is human psychology at work.
The observation is that humans tend to think annihilation is inevitable; it hasn't happened yet, so the conclusion drawn is that it never will be.
In fact, _anything_ could happen. Past performance does not guarantee future results.
If you need cognitive behavioral therapy, fine.
But to casually cite nuclear holocaust as something people irrationally believed in as a possibility is dishonest. That was (and still is) a real possible outcome.
What's somewhat funny here is that if you're wrong, it doesn't matter. But that isn't the same as being right.
> Something in our makeup revels in the thought that we'll be the last generation of humans, that the future is gone and everything will come to a crashing stop
And yet there _will_ (eventually) be one generation that is right.
Most likely outcome would be that humans evolve into something altogether different rather than go extinct.
Particularly considering the law of large numbers in play, where incalculably large chances have so far shown only one sign of technologically capable life (ours), and zero signs of any other example of a tech species evolving into something else or even passing the Great Filter.
Even where life may have developed, it's incredibly unlikely that sentient intelligence developed. There was never any guarantee that sentience would develop on Earth and about a million unlikely events had to converge in order for that to occur. It's not a natural consequence of evolution, it's an accident of Earth's unique history and several near-extinction level events and drastic climate changes had to occur to make it possible.
The "law of large numbers" is nothing when the odds of sentient intelligence developing are so close to zero. If such a thing occurred or occurs in the future at some location other than Earth, it's reasonably likely that it's outside of our own galaxy or so far from us that we will never meet them. The speed of light is a hell of a thing.
We are living in a historically exceptional time of geological, environmental, and ecological stability. I think saying that nothing ever happens is like standing downrange of a stream of projectiles and counting all the near misses as evidence for your future safety. It's a bold call to inaction.
It's not that it can't happen. It obviously can. I'm more talking about the human belief that it will happen, and in our lifetime. It probably won't.
I also wonder if we can even power an AI singularity. I guess it depends on what the technology is. But it is taking us more energy than really reasonable (in my opinion) just to produce and run frontier LLMs. LLMs are this really weird blend of stunningly powerful, yet with a very clear inadequacy in terms of sentient behaviour.
I think the easiest way to demonstrate that is that it did not take us consuming the entirety of human textual knowledge to form a much stronger world model.
There was a lot of "LLMs are fundamentally incapable of X" going around - where "X" is something that LLMs are promptly demonstrated to be at least somewhat capable of, after a few tweaks or some specialized training.
This pattern has repeated enough times to make me highly skeptical of any such claims.
It's true that LLMs have this jagged capability profile - less so than any AI before them, but much more so than humans. But that just sets up a capability overhang. Because if AI gets to "as good as humans" at its low points, the advantage at its high points is going to be crushing.
A serious paper would start by acknowledging that every previous general-purpose technology required human oversight precisely because it couldn't perceive context, make decisions, or correct errors - capabilities that are AI's core value proposition. It would wrestle with the fundamental tension: if AI remains error-prone enough to need human supervisors, it's not transformative; if it becomes reliable enough to be transformative, those supervisory roles evaporate.
These two Princeton computer scientists, however, just spent 50 pages arguing that AI is like electricity while somehow missing that electricity never learned to fix itself, manage itself, or improve itself - which is literally the entire damn point. They're treating "humans will supervise the machines" as an iron law of economics rather than a temporary bug in the automation process that every profit-maximizing firm is racing to patch. Sometimes I feel like I'm losing my mind when it's obvious that GPT-5 could do better than Narayanan and Kapoor did in their paper at understanding historical analogies.
Delusional.
I could ask the same thing then. When will you take "AI" seriously and stop attributing the above capabilities to it?
Through this lens it's way more normal
We only have two states of causality, so calling something "just" deterministic doesn't mean much, especially when "just random" would be even worse.
For the record, LLMs in the normal state use both.
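For readers who want to see what "both" means concretely, here is a minimal sketch (in Python, with made-up logits for illustration; not any particular model's actual implementation) of how a deterministic forward pass combines with stochastic temperature sampling:

```python
import math, random

def sample_next_token(logits, temperature=0.8, seed=None):
    """Deterministic part: the logits come from a fixed forward pass (matmuls).
    Stochastic part: the next token is drawn from the softmax distribution,
    sharpened or flattened by the temperature."""
    rng = random.Random(seed)
    scaled = [x / temperature for x in logits.values()]
    m = max(scaled)                                   # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = {tok: e / total for tok, e in zip(logits, exps)}
    r, acc = rng.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if acc >= r:
            return tok
    return tok  # fallback for floating-point rounding

# Hypothetical logits for the token after "The capital of France is"
logits = {"Paris": 5.1, "Lyon": 1.3, "pizza": 0.2}
print(sample_next_token(logits, temperature=0.8))   # usually "Paris"
print(sample_next_token(logits, temperature=0.01))  # effectively greedy: always "Paris"
```

Pushing the temperature toward zero collapses the sampling step into an effectively deterministic argmax, which is why the same model can feel either repeatable or creative depending on how it is run.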
LLMs are machine learning models used to encode and decode text or other similar data such that it is possible to efficiently do statistical estimation of long sequences of tokens in response to queries or other input. It is obvious that the behavior of LLMs is neither consistent nor standardized (and it's unclear whether this is even desirable; in the case of floating-point arithmetic, it certainly is). Because of the statistical nature of machine learning in general, it's also unclear to what extent any sort of guarantee could be made on the likelihoods of certain responses. So I am not sure it is possible to standardize and specify them along the lines of IEEE754.
The fact that a forward pass on a neural network is "just deterministic matmul" is not really relevant.
In practice, the outcome of a floating-point computation depends on compiler optimizations, the order of operations, and the rounding mode used.
1. Compiler optimizations can be disabled. If a compiler optimization violates IEEE754 and there is no way to disable it, this is a compiler bug and is understood as such.
2. This is as advertised and follows from IEEE754. Floating point operations aren't associative. You must be aware of the way they work in order to use them productively: this means understanding their limitations.
3. Again, as advertised. The rounding mode is part of the spec and can be controlled. Understand it, use it.
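Point 2 is easy to see from the Python prompt, assuming the usual IEEE754 binary64 doubles (an illustration only, not a statement about any particular compiler):

```python
# Non-associativity: same operands, different grouping, different result.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a == b, a, b)   # False 0.6000000000000001 0.6

# Both results follow IEEE754 exactly; the surprise comes from expecting
# real-number algebra rather than correctly rounded binary arithmetic.
```

The rounding mode in point 3 is normally controlled below the language level (for example via the C fenv interface), so it isn't shown here.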
They are deterministic, and they follow clear rules, but they can't represent every number with full precision. I think that's a pretty good analogy for LLMs - they can't always represent or manipulate ideas with the same precision that a human can.
They're a fixed precision format. That doesn't mean they're ambiguous. They can be used ambiguously, but it isn't inevitable. Tools like interval arithmetic can mitigate this to a considerable extent.
Representing a number like pi to arbitrary precision isn't the purpose of a fixed precision format like IEEE754. It can be used to represent, say, 16 digits of pi, which is used to great effect in something like a discrete Fourier transform or many other scientific computations.
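As a toy illustration of the interval-arithmetic point (a sketch only, needing Python 3.9+ for math.nextafter; not a production library):

```python
import math

class Interval:
    """Toy interval: a pair of floats guaranteed to bracket a real value.
    Each operation rounds the bounds outward so the guarantee is preserved."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __repr__(self):
        return f"[{self.lo!r}, {self.hi!r}]"

def around(x):
    # Widen a literal by one ulp each way so the interval certainly contains
    # the real number the decimal literal was meant to denote.
    return Interval(math.nextafter(x, -math.inf), math.nextafter(x, math.inf))

print(around(0.1) + around(0.2))  # an interval that provably contains the real number 0.3
```

Because the bounds are rounded outward after every operation, whatever real number the inputs were meant to denote is guaranteed to stay inside the result, even though no single float can represent it exactly.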
Let's not forget there have been times when if-else statements were considered AI. NLP used to be AI too.
It doesn't think, it doesn't reason, and it doesn't listen to instructions, but it does generate pretty good text!
People constantly assert that LLMs don't think in some magic way that humans do think, when we don't even have any idea how that works.
The proof burden is on AI proponents.
There's this very clichéd comment under any AI HN headline:
"LLM's don't REALLY have <vague human behavior we don't really understand>. I know this for sure because I know both how humans work and how gigabytes of LLM weights work."
or its cousin:
"LLMs CAN'T possibly do <vague human behavior we don't really understand> BECAUSE they generate text one character at a time UNLIKE humans who generate text one character a time by typing with their fleshy fingers"
Intelligent living beings have natural, evolutionary inputs as motivation underlying every rational thought. A biological reward system in the brain, a desire to avoid pain, hunger, boredom and sadness, seek to satisfy physiological needs, socialize, self-actualize, etc. These are the fundamental forces that drive us, even if the rational processes are capable of suppressing or delaying them to some degree.
In contrast, machine learning models have a loss function or reward system purely constructed by humans to achieve a specific goal. They have no intrinsic motivations, feelings or goals. They are statistical models that approximate some mathematical function provided by humans.
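To make that contrast concrete, the entire "reward system" of a language model during pretraining boils down to a human-chosen formula like the next-token cross-entropy below (a minimal sketch with a made-up three-word vocabulary):

```python
import math

def next_token_loss(predicted_probs, actual_next_token):
    # The whole "reward system" is this formula: penalize the model in
    # proportion to how little probability it assigned to the token that
    # actually followed in the training text (cross-entropy loss).
    return -math.log(predicted_probs[actual_next_token])

# Hypothetical model output for the prompt "the cat sat on the"
predicted = {"mat": 0.7, "floor": 0.2, "moon": 0.1}
print(round(next_token_loss(predicted, "mat"), 3))   # 0.357  (true token got high probability: small loss)
print(round(next_token_loss(predicted, "moon"), 3))  # 2.303  (true token got little probability: large loss)
```

Everything the trained model "wants" is an artifact of minimizing a number like this; there is no hunger or boredom behind it.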
We don't just study it in humans. We look at it in trees [0], for example. And whilst trees have distributed systems that ingest data from their surroundings, and use that to make choices, it isn't usually considered to be intelligence.
Organizational complexity is one of the requirements for intelligence, and an LLM does not reach that threshold. They have vast amounts of data, but organizationally, they are still simple - thus "ai slop".
[0] https://www.cell.com/trends/plant-science/abstract/S1360-138...
In my opinion AI slop is slop not because AIs are basic but because the prompt is minimal. A human went and put minimal effort into making something with an AI and put it online, producing slop, because the actual informational content is very low.
And you'd be disagreeing with the vast amount of research into AI. [0]
> Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget.
[0] https://machinelearning.apple.com/research/illusion-of-think...
That doesn't mean such claims don't need to be made as specific as possible. Just saying something like "humans love but machines don't" isn't terribly compelling. I think mathematics is an area where it seems possible to draw a reasonably intuitively clear line. Personally, I've always considered the ability to independently contribute genuinely novel pure mathematical ideas (i.e. to perform significant independent research in pure maths) to be a likely hallmark of true human-like thinking. This is a high bar and one AI has not yet reached, despite the recent successes on the International Mathematical Olympiad [3] and various other recent claims. It isn't a moved goalpost, either - I've been saying the same thing for more than 20 years. I don't have to, and can't, define what "genuinely novel pure mathematical ideas" means, but we have a human system that recognises, verifies and rewards them, so I expect us to know them when they are produced.
By the way, your use of "magical" in your earlier comment is typical of the way that argument is often presented, and I think it's telling. It's very easy to fall into the fallacy of deducing things from one's own lack of imagination. I've certainly fallen into that trap many times before. It's worth honestly considering whether your reasoning is of the form "I can't imagine there being something other than X, therefore there is nothing other than X".
Personally, I think it's likely that to truly "do maths" requires something qualitatively different to a computer. Those who struggle to imagine anything other than a computer being possible often claim that that view is self-evidently wrong and mock such an imagined device as "magical", but that is not a convincing line of argument. The truth is that the physical Church-Turing thesis is a thesis, not a theorem, and a much shakier one than the original Church-Turing thesis. We have no particularly convincing reason to think such a device is impossible, and certainly no hard proof of it.
[1] Individual behaviours of LLMs are "not understood" in the sense that there is typically not some neat story we can tell about how a particular behaviour arises that contains only the truly relevant information. However, on a more fundamental level LLMs are completely understood and always have been, as they are human inventions that we are able to build from scratch.
[2] Anybody who thinks we understand how brains work isn't worth having this debate with until they read a bit about neuroscience and correct their misunderstanding.
[3] The IMO involves problems in extremely well-trodden areas of mathematics. While the problems are carefully chosen to be novel they are problems to be solved in exam conditions, not mathematical research programs. The performance of the Google and OpenAI models on them, while impressive, is not evidence that they are capable of genuinely novel mathematical thought. What I'm looking for is the crank-the-handle-and-important-new-theorems-come-out machine that people have been trying to build since computers were invented. That isn't here yet, and if and when it arrives it really will turn maths on its head.
And here's some more goalpost-shifting. Most humans aren't capable of novel mathematical thought either, but that doesn't mean they can't think.
As for most humans not being mathematicians, it's entirely irrelevant. I gave an example of something that so far LLMs have not shown an ability to do. It's chosen to be something that can be clearly pointed to and for which any change in the status quo should be obvious if/when it happens. Naturally I think that the mechanism humans use to do this is fundamental to other aspects of their behaviour. The fact that only a tiny subset of humans are able to apply it in this particular specialised way changes nothing. I have no idea what you mean by "goalpost-shifting" in this context.
If we knew that, we wouldn't need LLMs; we could just hardcode the same logic that is encoded in those neural nets directly and far more efficiently.
But we don't actually know what the weights do beyond very broad strokes.
We understand it at this low level, but through training LLMs converge to something larger than the weights: a structure of those weights emerges that allows them to perform functions, and this part we do not understand. We just observe it as a black box and experiment at the level of: we put this kind of input into the black box and receive this kind of output.
Why? Team "Stochastic Parrot" will just move the goalposts again, as they've done many times before.
It doesn't matter anyway. The marquee sign reads "Artificial Intelligence" not "Artificial Human Being". As long as AI displays intelligent behavior, it's "intelligent" in the relevant context. There's no basis for demanding that the mechanism be the same as what humans do.
And of course it should go without saying that Artificial Intelligence exists on a continuum (just like human intelligence as far as that goes) and that we're not "there yet" as far as reaching the extreme high end of the continuum.
Aircraft and submarines belong to a different category than, and yet the same category as, AI.
Humans are not all that original, we take what exists in nature and mangle it in some way to produce a thing.
The same thing will eventually happen with AI - not in our lifetime though.
That doesn't mean much.
No it doesn't, this is an overgeneralization.
When you type the next word, you also pick a word that fits some requirement. That doesn't mean you're not thinking.
- We have a sense of time (i.e., ask an LLM to follow up in 2 minutes)
- We can follow negative instructions ("don't hallucinate; if you don't know the answer, say so")
Until we see an LLM that is capable of this, they aren't capable of it, period.
That's the difference. AI cannot be held responsible for hallucinations that cause harm, therefore it cannot be incentivized to avoid that behavior, therefore it cannot be trusted
Simple as that
The general notion of passage of time (i.e. time arrow) is the only thing that appears to be intrinsic, but it is also intrinsic for LLMs in a sense that there are "earlier" and "later" tokens in its input.
Imagine a process called A, and, as you say, we've no idea how it works.
Imagine, then, a new process, B, comes along. Some people know a lot about how B works, most people don't. But the people selling B, they continuously tell me it works like process A, and even resort to using various cutesy linguistic tricks to make that feel like it's the case.
The people selling B even go so far as to suggest that if we don't accept a future where B takes over, we won't have a job, no matter what our poor A does.
What's the rational thing to do, for a sceptical, scientific mind? Agree with the company, that process B is of course like process A, when we - as you say yourself - don't understand process A in any comprehensive way at all? Or would that be utterly nonsensical?
It's like we're pretending cognition is a solved problem so we can make grand claims about what LLMs aren't really doing.
It turns out we didn't need a specialist technique for each domain; there was a reliable method to architect a model that can learn on its own, and we could already use the datasets we had, they didn't need to be generated in surveys or experiments. This might seem like magic to an AI researcher working in the 1990s.
A lot of this is marketing bullshit. AFAIK, even "machine learning" was a term made up by AI researchers when the AI winter hit who wanted to keep getting a piece of that sweet grant money.
And "neural network" is just a straight up rubbish name. All it does is obscure what's actually happening and leads the proles to think it has something to do with neurons.
LLMs are one of the first technologies that makes me think the term "AI effect" needs to be updated to "AGI effect". The effect is still there, but it's undeniable that LLMs are capable of things that seem impossible with classical CS methods, so they get to retain the designation of AI.
They still are, as far as the marketing department is concerned.
You're thinking of things that were debatably AI; today we have things that most people consider AI (again, not due to any concrete definition, simply due to accepted usage of the term).
Artificial Intelligence is a whole subfield of Computer Science.
Code built of nothing but if/else statements controlling the behavior of game NPCs is AI (see the sketch after this comment).
A* search is AI.
NLP is AI.
ML is AI.
Computer vision models are AI.
LLMs are AI.
None of these are AGI, which is what does not yet exist.
One of the big problems underlying the current hype cycle is the overloading of this term, and the hype-men's refusal to clarify that what we have now is not the same type of thing as what Neo fights in the Matrix. (In some cases, because they have genuinely bought into the idea that it is the same thing, and in all cases because they believe they will benefit from other people believing it.)
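For anyone who has only encountered "AI" as a synonym for LLMs, the if/else point above is meant literally. A sketch like the following hypothetical guard NPC, just hand-written rules in Python, is exactly the kind of code that has shipped under the label "game AI" for decades:

```python
def npc_action(distance_to_player, own_health, player_visible):
    """Hypothetical guard NPC: nothing but hand-written if/else rules."""
    if not player_visible:
        return "patrol"
    if own_health < 20:
        return "flee"
    if distance_to_player < 2:
        return "melee_attack"
    if distance_to_player < 15:
        return "chase"
    return "alert_others"

print(npc_action(distance_to_player=10, own_health=80, player_visible=True))  # chase
```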
Will it change everything? IDK, moving everything self-hosted to the cloud was supposed to make operations a thing of the past, but in a way it just made ops an even bigger industry than it was.
Spreadsheets don’t really have the ability to promote propaganda and manipulate people the way LLM-powered bots already have. Generative AI is also starting to change the way people think, or perhaps not think, as people begin to offload critical thinking and writing tasks to agentic AI.
May I introduce you to the magic of "KPI" and "Bonus tied to performance"?
You'd be surprised how much good and bad in the world has come out of some spreadsheet showing a number to a group of promotion-chasing, type-A, otherwise completely normal people.
If you need an interface for something (e.g. viewing data, some manual process that needs your input), the agent will essentially "vibe code" whatever interface you need for what you want to do in the moment.
e.g. Alexa for voice, REST for talking to APIs, Zapier for inter-app connectedness.
(not trying to be cynical, just pointing out that the technology to make it happen doesn't seem to be the blocker)
REST is actually a huge enabler for agents, for sure. I think agents are going to drive everyone to have at least an API, if not an MCP, because if I can't use your app via my agent and I have to manually screw around in your UI, while your competitor lets my agent do the work so I can just delegate via voice commands, who do you think is getting my business?
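Concretely, the "agent uses your API" loop usually amounts to something as plain as the sketch below: a tool description handed to the model, and a runtime that turns the model's structured arguments into an ordinary REST call. The endpoint, field names, and schema shape here are all hypothetical, not any specific framework's API:

```python
import json
import urllib.request

# Hypothetical tool description an agent runtime could expose to the model.
TOOL = {
    "name": "create_invoice",
    "description": "Create an invoice in the billing app",
    "parameters": {"customer_id": "string", "amount_cents": "integer"},
}

def call_tool(arguments: dict) -> dict:
    """What the runtime does once the model asks to use the tool:
    translate the structured arguments into an ordinary REST request."""
    req = urllib.request.Request(
        "https://billing.example.com/api/invoices",  # hypothetical endpoint
        data=json.dumps(arguments).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The point is that nothing exotic is required on the app side: an ordinary, documented REST API is already agent-usable.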
Ironically the outro of a YouTube video I just watched. I'm just a few hundred ms of latency away from being a cyborg.
I guess I've always been more of a "work to live" type.
Seems to be the referenced paper?
If so previously discussed here: https://news.ycombinator.com/item?id=43697717
135 more comments available on Hacker News