Abundant Intelligence
Mood: heated
Sentiment: mixed
Category: other
Key topics: Sam Altman's blog post on 'Abundant Intelligence' sparks debate on the future of AI, its potential impact, and the massive energy requirements for its development, with commenters expressing both excitement and skepticism.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 37m after posting
Peak period: 39 comments in Hour 2
Average per period: 7.5 comments
Based on 128 loaded comments
Key moments
- 01 Story posted: Sep 23, 2025 at 9:45 AM EDT (2 months ago)
- 02 First comment: Sep 23, 2025 at 10:22 AM EDT (37m after posting)
- 03 Peak activity: 39 comments in Hour 2 (hottest window of the conversation)
- 04 Latest activity: Sep 24, 2025 at 5:58 AM EDT (2 months ago)
Did Donald call him?
Interestingly, it doesn't seem to be linked from the "news" section of their website.
I don't buy it at all.
This sounds like complete and total bullshit to me.
it's a fucking dud.
In the real world, it's immensely useful to millions of people. It's possible for a thing to both be incredibly useful and overhyped at the same time.
What evidence are you aware of that counters it?
Try asking "what evidence supports your conclusions?".
At least the statement starts with a conditional, even if it is a silly one.
If you know your growth curve is ultimately going to be a sigmoid, fitting a model with only data points before the inflection point is underdetermined.
> If AI stays on the trajectory that we think it will
is a statement that no amount of prior evidence can support.
AI boosters are going to spam the replies to your comment in attempts to muddy the waters.
That being said, the current models are transformative on their own; once the systems catch up to the models, that will be glaringly obvious to everyone.
Also, you can most certainly fit a sigmoid function using only past data points. Any projection will obviously have error, but your error at any given point should be smaller than for an exponential fit with the same sampling.
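A minimal sketch of that underdetermination point (my own illustration, not from the thread; the logistic form, parameter values, and noise level are all assumed): fitting a logistic curve to points sampled well before the inflection leaves the fitted ceiling essentially unconstrained.

```python
# Hypothetical sketch: fit a logistic (sigmoid) curve using only points
# sampled well before the true inflection; the ceiling L is barely constrained.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, L, k, t0):
    # Standard logistic: approaches L as t -> infinity, inflects at t0.
    return L / (1 + np.exp(-k * (t - t0)))

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 40)  # true inflection is at t0 = 10, far beyond our samples
y = logistic(t, L=100.0, k=1.0, t0=10.0) + rng.normal(0, 0.02, t.size)

popt, pcov = curve_fit(logistic, t, y, p0=[50.0, 1.0, 5.0], maxfev=20000)
perr = np.sqrt(np.diag(pcov))
print(f"fitted ceiling L = {popt[0]:.1f} +/- {perr[0]:.1f}")  # the error bar dwarfs the estimate
```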
In order to help reduce global poverty (much of which was caused by colonialism), it is the moral and ethical duty of the Global North to adopt LLMs on a mass scale and use them in every field imaginable, and then give jobs to the global poor to fix the resulting mess.
I am only 10% joking.
I found the 10%
This is also a pretty good joke
You can get your drinking water from a utility, or you can get bottled water. Guess which one he's gonna be selling?
And if you think for a second that the "utility" models will be trained on data as pristine as the data that the "bottled" models will be trained on, I've got a bridge in Brooklyn to sell you. (The "utility" models will not even have any access to all of the "secret sauce" currently being hoarded inside these labs.)
Essentially, we can all expect to go back to the Lexis-Google type dichotomy. You can go into court on Google searches; nothing's stopping you. But nearly everyone will pay for LexisNexis, because they're not idiots and they actually want to compete.
My product is going to be the fundamental driver of the economy. Even a human right!
> Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer.
How?
> We are particularly excited to build a lot of this in the US; right now, other countries are building things like chip fabs and new energy production much faster than we are, and we want to help turn that tide.
There's the appeal to the current administration.
> Over the next couple of months, we’ll be talking about some of our plans and the partners we are working with to make this a reality. Later this year, we’ll talk about how we are financing it
Beyond parody.
But for real, the leap from GPT-4 to GPT-5 was nowhere near as impressive as the one from GPT-3 to GPT-4. They'll have to do a lot more to give any weight to their usual marketing ultra-hype.
All that being said, it does seem like OpenAI and Anthropic are on a quest for more dollars by promoting fantasy futures where there is not a clear path from A to B, at least to those of us on the outside.
GPT-4 launched with 8k context. It hallucinated regularly. It was slow. One-shotting code was unheard of; you had to iterate and iterate. It fell over even on basic math problems.
GPT-5 thinking, on the other hand, is so capable that the average person wouldn't really be able to test its abilities. It's really only experts operating in their own domain who can find its stumbling blocks.
I think because we have seen these constant incremental updates, it creates a staircase with small steps, but if you really reflect and look back, you'll see the capability gap from 3.5 to 4 is actually way, way smaller than the gap from 4 to 5. This is echoed in benchmarks too: GPT-5 is solving problems wildly beyond what GPT-4 was capable of.
Every engineer I see in coffee shops uses AI. All my coworkers use AI. I use AI. AI nearly solved protein folding. It is beginning to unlock personalized medicine. AI absolutely will be a fundamental driver of the economy in the future.
Being skeptical is still reasonable, but flippant dismissal of legitimately amazing accomplishments is peak HN top comment.
Definitely don't look into the financial details of that deal with Nvidia!
> Every engineer I see in coffee shops uses AI. All my coworkers use AI. I use AI.
Okay
> AI nearly solved protein folding.
FAH (Folding@home) predates OpenAI by fifteen years and GPT-3 by twenty. Do not fall for Altman's conflation of LLMs with every other form of machine learning he and OpenAI had nothing to do with!
Could you elaborate on what you mean?
That's huge. The hope is that this can drive renewables or nuclear rollout, but I'm not sure that hope is realistic.
https://en.m.wikipedia.org/wiki/List_of_countries_by_electri...
Every country below the top 40 has a consumption lower than 87.6 TWh per year; that includes developed countries like Finland and Belgium. So yes, 10 gigawatts is a lot of power.
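That 87.6 TWh figure is just 10 GW of continuous draw converted to annual energy; a quick back-of-the-envelope check:

```python
# 10 GW of continuous draw, expressed as annual energy consumption.
power_gw = 10
hours_per_year = 24 * 365  # 8760
energy_twh = power_gw * hours_per_year / 1000  # 1 TWh = 1000 GWh
print(f"{power_gw} GW running year-round = {energy_twh} TWh/year")  # 87.6 TWh/year
```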
When people have access to food and shelter as human rights then we can entertain nonsense.
If a tenth of this happens, and we don't build a new power plant every ten weeks... then what?
Because I do agree with him on that front. The question is whether the AI industry will end up like airplanes: massively useful technology that somehow isn't a great business to be in. If indeed that is the case, framing OpenAI as a nation-bound "human right" is certainly one way to ensure its organizational existence if the market becomes too competitive.
Think about the legal field. The masses tend to use Google, whereas the wealthy and powerful all use LexisNexis. Who do you think has been winning in court?
Just look at how people are using Grok on Twitter, or how they're pasting ChatGPT output to win online arguments, or how they're trusting Google AI snippets. This is only gonna escalate.
That said, this is probably not the future Sam Altman is talking about. His vision for the future must justify the sky-high valuations of OpenAI, and cheap ubiquity of this non-proprietary tech runs counter to that. So his "ubiquity" is some sort of special, qualified ubiquity that is 100% dependent on his company.
Will they though?
>Just look at how people are using Grok on Twitter, or how they're pasting ChatGPT output to win online arguments, or how they're trusting Google AI snippets. This is only gonna escalate.
But will they though?
Of course, you can't train LLMs on LLM-generated content.
LLMs aren't AI. They are language-processing tools which are highly effective, and it turns out language is a large component of intelligence, but they aren't AI on their own.
Intelligence isn’t the solution or bottleneck to solving the world’s most pressing problems. Famines are political. We know how to deploy clean energy.
Now, that doesn't quite answer your question, but I think it says two things. First, that the time horizon to real AI is still way longer than sama is currently considering. Second, that AI won't be as useful as many believe.
I agree that all of the predictions regarding AI are probably overblown if they're just LLMs. But that might not matter if we're just talking about geopolitics.
So I cannot think of a good argument for why this isn't going to change the world, even if that looks more like the "AI as a normal technology"[0] argument or simply a slopocalypse.
0: https://knightcolumbia.org/content/ai-as-normal-technology
God-like technology that you can avoid by touching grass?
Unless you're talking about a Skynet type rogue ASI, which is probably not gonna happen anytime soon.
Apple: Privacy is a fundamental Human right. That is why we must control everything. And stop our users from sharing any form of data with anyone other than Apple.
OpenAI: AI is a fundamental Human right.....
There is something philosophically very odd about Silicon Valley over the past 15 to 20 years.
Who decides what is good and evil, and what our human rights are? Is it any of their business? Through their actions they're shaping society.
Something I've never understood: why do AGI perverts think that a superintelligence is any more likely to "cure cancer" than to "create unstoppable super-cancer"?
I can see AI being helpful in generating hypotheses, or potential compounds to synthesize, or helping with literature search, but science is a physical process. You don't generally do science just by sitting there and pondering, despite what the movies suggest.
But yes, in some narrow domains this will potentially be possible, though it still only automates a part of the whole process when it comes to drugs. How a drug behaves on a molecular test chip is often very different from how it works in the body.
What are the chances of advancing AI regulation before some monumental fuck-up changes public opinion to "yeah, this thing is really dangerous"? Like a Hiroshima or Chernobyl, but for AI.
You don't see the ultra-wealthy say "Oh no! Not my ability to still do 'architecture, database, coding, testing' on my own!" They just move further and further up the stack.
And I think this is a useful frame for everyone else: just move further up the stack.
Again, I hear you. As a fellow nerd, I love all of these activities too. Computers can be really fun, endlessly fulfilling, truly. But I have the awareness to say to myself, "Ya, but seems like that may have been a temporary phenomenon ... of getting to control and master these machines, just like I don't really crave to hack away at stone tools anymore, because that's not the time period I was born in."
If by 'marginal AI user' you mean a user who leverages AI tools to enhance the marginal utility of their labor or tasks (by making them more productive or efficient, broadly defined), then I do think that user archetype definitely exists.
https://archive.is/20250109011408/https://blog.samaltman.com...
"We are already in the phase of co-evolution — the AIs affect, effect, and infect us, and then we improve the AI. We build more computing power and run the AI on it, and it figures out how to build even better chips."
The Colossus data center has 230K GPUs (150,000 H100s, 50,000 H200s, and 30,000 GB200s) [source: https://x.ai/colossus].
Power draw: up to 150 megawatts [source: https://en.wikipedia.org/wiki/Colossus_(supercomputer)].
So, when SamA talks about 10 gigawatts of compute, does he mean continuous power draw, or energy in GWh (gigawatt-hours)?
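For what it's worth, gigawatts measure power (a rate), not energy, so the natural reading is continuous draw; a quick sketch of what 10 GW implies over different time windows (simple unit arithmetic, not from the thread):

```python
# Watts measure power (a rate of energy use), not energy itself, so
# "10 GW of compute" reads most naturally as continuous draw.
power_gw = 10
print(f"over one hour: {power_gw * 1} GWh")             # 10 GWh
print(f"over one day:  {power_gw * 24} GWh")            # 240 GWh
print(f"over one year: {power_gw * 8760 / 1000} TWh")   # 87.6 TWh
```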
We don't have the compute to do video on demand right now like we do images or text or audio.
Combining all the modalities together, smoothly, at speed, and for cheap, is going to take a hell of a lot of thinking sand powered by magic rocks.
The moat will be how efficiently you convert electricity into useful behavior. Whoever industrializes evaluation and feedback loops wins the next decade.
What's in between this line and the next:
"or it might not. Now give me moar money!!!!!"
A word-generating machine will not "figure out how to cure cancer", but it could help, obviously. AI is an extremely valuable tool, but it does not work on my behalf, except in the same sense a coin sorter does. It's a tool. I still see this thing confuse left and right (no exaggeration), which would be fine - tools aren't perfect - except for all the bullshit from VCs. That is where the danger lies: not with the tool, but with the idolatry.
I am concerned the system encourages suicide, delusional thinking, etc. They need to work that out immediately. It must be held to at least the standard of a lawn mower or a couch with regard to safety. They should probably make it safe before hooking it up to a 10 GW power plant. It does not help perception that the author of this blog post about how awesome the future will be is also building a hideaway bunker for himself.
This is all on you at OpenAI, Anthropic, Microsoft, Twitter etc. Whatever happens.
The growth in energy use is because of the increase in output tokens due to increased demand for them.
Models do not get smarter the more they are used.
So why does he expect them to solve cancer if they haven't already?
And why do we need to solve cancer more than once?
It could start by figuring out how to keep kids from using AI to write all their essays.
Lots of assumptions about the path to get there, though.
And interesting that he's measuring intelligence in energy terms.
* Nvidia invests $5 billion in Intel
* Nvidia and OpenAI announce a partnership to deploy 10 gigawatts of NVIDIA systems (investment of up to $100 billion)
* This indirectly benefits TSMC (which implies they'll be investing more in the US)
Looks like the US is cooking something...
* AI is a “fucking dud” (you have to be either highly ignorant or trolling to say this)
* Altman is a “charlatan” (definitely no but it does look like he has some unsavory personal traits, quite common BTW for people at that level)
* the ridiculousness of touting a cancer cure (I guess the post is targeted to the technical hoi polloi, with whom such terminology resonates, but also see protein 3D structure discovery advances)
I found the following to be interesting in this post:
1. Altman clearly signaling affinity for the Abundance bandwagon with a clear reference right in the title. The post is shorter but has the flavor of Marc Andreessen's "It's Time to Build" post from 2020: https://a16z.com/its-time-to-build/
2. He advances the vision of "creat[ing] a factory that can produce a gigawatt of new AI infrastructure every week". At a minimum, this may be called frighteningly ambitious: U.S. annual additions have been ~10-20 GW/year for solar builds (https://www.climatecentral.org/report/solar-and-wind-power-2...)
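Annualizing that one-gigawatt-per-week target makes the comparison concrete (a back-of-the-envelope sketch using the ~10-20 GW/year solar figure cited above):

```python
# A gigawatt of new AI infrastructure per week, annualized, versus
# recent U.S. solar additions (~10-20 GW/year per the linked report).
gw_per_week = 1
gw_per_year = gw_per_week * 52
print(f"{gw_per_year} GW/year")  # 52 GW/year
print(f"{gw_per_year / 20:.1f}x to {gw_per_year / 10:.1f}x the recent US solar build rate")
```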
What are ways in which we can incentivize investment and place societal guardrails so that something similar doesn't happen with AI data centers?
Do governments need to invest in nuclear power?
Scale up energy generation in other ways through renewables?
Insulate or subsidize the average non-corporate electricity consumer through something like rent control?