Why Sam Altman Won't Be on the Hook for OpenAI's Spending Spree
Mood: heated
Sentiment: negative
Category: business
Key topics: AI, OpenAI, corporate governance
The article discusses why Sam Altman, CEO of OpenAI, won't be personally liable for the company's massive spending, sparking debate among commenters about accountability, the true value of AI investments, and the potential risks of unchecked corporate power.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 1h after posting
- Peak period: 85 comments (Day 1)
- Avg / period: 29.7
- Based on 89 loaded comments
Key moments
- Story posted: 11/8/2025, 2:33:00 PM (10d ago)
- First comment: 11/8/2025, 3:35:37 PM (1h after posting)
- Peak activity: 85 comments in Day 1, the hottest window of the conversation
- Latest activity: 11/12/2025, 7:50:09 AM (7d ago)
She said the quiet part out loud? This was always the play; it's obvious. Too big to fail. National security concerns. China/Russia very scary. Blah blah blah.
Altman’s libertarian pontification is so obviously insincere it’s laughable.
https://www.cnbc.com/2025/11/06/trump-ai-sacks-federal-bailo...
I just have money in USDC, and I might like Monero for privacy sometimes, but I guess I'm just using USDC as a bank account right now.
I would personally like it if middlemen like Visa/Stripe could be cut out; it's honestly insane that we still can't solve the problem of middlemen taking cuts.
Maybe the issue isn't technological but regulatory
But overall I agree that crypto, and especially web3, is mostly a scam.
Sacks was one of the most prominent whiners on X asking for a bailout of Silicon Valley Bank. He is lying, just as the All-In podcast was lying before Trump's election and then dropped the mask.
Just look at Elon’s insane pay package, approved in a landslide. The skulls of the average shareholder must echo like a cave.
And the rich accuse the poor of their poverty being their own fault, because they are just being irresponsible, making bad decisions, and spending unwisely. They should look in the mirror.
https://www.bbc.com/news/articles/cn7ek63e5xyo
One of many reasons I say the CEO class is no longer accountable for anything. Laws do not exist for them.
The Optimus narrative is so obviously a fraud. The things can "dance" and play chess, but they cannot operate in dirt, scrub the kitchen, etc. Even if they succeed, BYD will build a $7000 Optimus. Intimidating crowds and barking orders at humans (for example, ordering a human to clean the kitchen floor) seem to be the only somewhat realistic goals.
You should view your contributions as a donation. What donation has an ROI?
It’s why Musk is also safe from similar problems.
There are also a finite number of opportunities to invest in, so companies that have “buzz” can create a bidding war among potential investors that drives up valuations.
So that's one possible reason, but in the end we can't know why another investor invests the way they do. We assume the investor is making a rational decision based on their portfolio. It's fun to speculate about, though, which is why there's so much press attention.
[1] https://en.wikipedia.org/wiki/List_of_largest_pension_scheme...
[2] https://en.wikipedia.org/wiki/List_of_sovereign_wealth_funds...
What happens to the ones that built for projects that end up failing? Seems to me the only way the story ends is with taxpayers on the hook once again.
Power generation and power grids are more generally useful today and less speculative than trying to win the AI race, so the risk for those types of things is somewhat lower, but there IS risk even in those.
Entergy is not just going to sit around and take the L if the project doesn’t ultimately turn out to be a good long-term investment. They’re simply going to pass the cost on to their customers in the region (more so than they already plan to in the event of success). Meanwhile Louisiana taxpayers are footing the bill for all the subsidies going through these projects.
So yeah, I agree it's not quite as high-risk, because at least there's some infrastructural investment, but that's not the kind of investment the region really needs right now, and having that extra capacity is unfortunately not a good thing.
To be clear I’m not really disagreeing with you. I’m just kind of bickering over the nuances lol
That is, it doesn't matter so much if OpenAI and individual investors get fleeced, if there's a 20-50% labour cost reduction generally for capitalism as a whole (especially cost reduction in our own tech professions, which have been very well paid for a generation) -- Institutional investors and political actors will benefit regardless, by increasing the productivity or rate of exploitation of intellectual / information workers.
What Would an AI Crash Look Like? - https://www.bloomberg.com/news/newsletters/2025-10-12/what-h...
> Experts note chief dealmaker Altman doesn’t have anything to lose. He has repeatedly claimed he does not have a stake in the company, and won’t have a stake even after OpenAI has restructured to become a public benefit corporation. “He has the upside, in a sense, in terms of influence, if it all succeeds,” said Ofer Eldar, a corporate governance professor at the UC Berkeley School of Law. “He's taking all this commitment knowing that he's not going to actually face any consequences because he doesn't have a financial stake.”
> That’s not good corporate governance, according to Jo-Ellen Pozner, a professor of management and entrepreneurship at Santa Clara University’s Leavey School of Business. “We allow leaders that we see as being super pioneering to behave idiosyncratically, and when things move in the opposite direction and somebody has to pay, it's unclear that they're the ones that are going to have to pay,” she said.
> Luria adds: “He can commit to as much as he wants. He can commit to a trillion dollars, ten trillion, a hundred trillion dollars. It doesn't matter. Either he uses it, he renegotiates it, or he walks away.” There are of course more indirect stakes for Altman, experts said, like the reputational blow he’d take if the deals fall apart. But on paper, he’d seemingly be off the hook, they said.
Is the intended take-home message of the article that Altman should invest his own money (with potential loss, or profit), or that he should be given a big compensation deal (maybe like the one Musk just got)?
Now you have a company where the leader at least notionally doesn't have that kind of a financial stake... and we think that's bad? I disagree with Altman on almost everything, but it feels like grasping at straws.
The description of a "stupid" person by Carlo Cipolla (recent HN thread The Basic Laws of Human Stupidity - https://news.ycombinator.com/item?id=45829210) seems to rather fit Sam Altman.
A stupid person is a person who causes losses to another person or to a group of persons while himself deriving no gain and even possibly incurring losses.
Taking that at face value, it means we would have to invest exponential resources just to get linear improvements. That’s not exactly an optimistic outlook.
Also, the LLM space is a red queen environment. Stop investing and you are done.
All that said, IMHO short- to medium-term breakthroughs will come from hybrid AI systems, with the LLM as the universal putty between all users and systems.
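To make the "exponential resources for linear improvements" point above concrete, here is a back-of-the-envelope sketch assuming a toy power-law scaling curve L(C) = a * C^(-b); the constants a and b are illustrative assumptions, not measured values:

```python
# Toy illustration (not a measured scaling law): if loss follows
# L(C) = a * C**(-b) with a small exponent b, then each equal step
# down in loss requires a growing multiplicative jump in compute.

a, b = 10.0, 0.05  # hypothetical constants, for illustration only

def compute_for_loss(target_loss: float) -> float:
    """Invert L(C) = a * C**(-b): compute needed to hit a target loss."""
    return (a / target_loss) ** (1.0 / b)

prev = None
for loss in [2.0, 1.9, 1.8, 1.7]:
    c = compute_for_loss(loss)
    ratio = f" ({c / prev:.1f}x more compute)" if prev else ""
    print(f"loss {loss:.1f} -> compute {c:.2e}{ratio}")
    prev = c
# Under these toy constants, each equal 0.1 step in loss costs roughly
# 3x more compute than the last: linear gains, exponential spend.
```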
That assertion is unsupported and unproven.
Also, if a commercial use for LLMs is ever found, it will be in the local, personal computing market, not the "cloud".
Apple can take a future OSS model and produce the winning product. That truth would be very bitter for many to swallow. Cook maintaining good relations with China could be the thing that makes Apple topple everyone in the long-run.
By selling a dollar for ninety cents? This metric is meaningless.
Though I do agree that many of the breathless claims that you can stop hiring or even lay off developers because of LLMs seem unsubstantiated.
Maybe not massive commercial potential, but it was pretty amazing and reminded me a bit of the Babel fish, which used to seem like impossible sci-fi.
And now OpenAI has Google as their competitor. Besides, Google established a search monopoly through side deals and the Android and browser push, but they still lost the Asian market. Now OpenAI has to overcome not only Google, Amazon, and Microsoft but also Baidu, DeepSeek, and the other Asian and European competitors, because nobody wants to lose the "AI race"; it's too risky. Without a monopoly, there's no high profit.
Even if you want to give OpenAI the benefit of the doubt by comparing it to other software giants, they're doing terribly. Google, Facebook, Apple, Amazon, etc. were profitable almost immediately after their founding. In the cases where they accumulated losses, it was a deliberate effort to capture as much of the market as possible; they could simply hit the brakes and become profitable at will.
In OpenAI's case, every week yet another little-known lab in China releases a 99% competitive LLM at a fraction of their costs.
It's not looking good at all now or in the long-term.
Not necessarily. Approaches such as mixture of experts help lower training costs by covering domains with specialized models.
I understand it's very easy to post ignorant messages in internet forums, but the answer to your question is yes, "they have done it" and it does result in cheaper training costs. See models such as DeepSeek-MoE or Mixtral.
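For readers unfamiliar with the mechanism, here is a minimal sketch of top-k expert routing in the spirit of Mixtral / DeepSeek-MoE; the dimensions, random weights, and single-matrix "experts" are simplified assumptions, not the papers' actual architectures:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2

# Each "expert" stands in for a small feed-forward block.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router                # score every expert for this token
    top = np.argsort(logits)[-top_k:]  # keep only the top-k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()           # softmax over the chosen experts
    # Only top_k of n_experts run per token: parameter count scales with
    # n_experts, but per-token FLOPs scale with top_k. That gap is the
    # training-cost saving the comment refers to.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

print(moe_forward(rng.standard_normal(d_model)))
```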
You can see a similar effect in computer chess Elo scores over time, with the odd blip up; see https://www.reddit.com/r/dataisbeautiful/comments/1iovlb0/oc... (1985 - 2023) and https://wiki.aiimpacts.org/speed_of_ai_transition/range_of_h... (1960 - 2000).
OpenAI as an entity is only in trouble if they've bound themselves completely without any way out.
These "commitments" may just function as memorandums of understanding.
I remember the days when Facebook and Google and other familiar giants were painted with the same brush-- all negative speculation as to them being overvalued and on the cliff's edge of doom, because no one could imagine how they'd survive their obscene overvaluations. How and when will they monetize? How will they ever bring in enough revenue to justify the insane capital spending?
1. OpenAI goes public. 2. Pay off the USA's debts with OpenAI stock. 3. Crash the bubble with no survivors. 4. Finally blockchain is useful.
Rule no. 1 of scam-flavored hype: make up impressive-sounding units that are opaque and meaningless.
Useful? Yep - it’s like the best autocomplete you could ever imagine. Paradigm-changing even, as we now have a big chunk of human knowledge in a much more easily searchable format. It’s just not intelligent.
I have to imagine that just like a magic trick, eventually someone will come up with a way to clearly communicate to the layperson how the trick is done. At that point, the illusion collapses.
All the AI industry is doing is scaling computation and data in the hope that the result may encompass more of the existing real-world data and thus give the illusion of thinking. You don't know whether a correct answer comes from reasoning or from parroting previously seen answer data. I always tell laypeople to think of LLMs as very, very large dictionaries: with the words from a pocket Oxford dictionary you can construct only so many sentences, whereas from a multi-volume set of large Oxford dictionaries you can construct orders of magnitude more, so the probability of finding your specific answer sentence is much, much higher. Framed that way, they can grasp the scaling issue, see its limits, and see why this approach can never lead to AGI.
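A toy calculation of the dictionary analogy; the vocabulary sizes and sentence length below are made-up numbers, purely for illustration:

```python
# The count of possible length-n word strings grows as vocab_size ** n,
# so a larger "dictionary" covers vastly more candidate sentences
# without implying any reasoning about them.

pocket_vocab = 30_000    # hypothetical pocket-dictionary vocabulary
full_vocab = 600_000     # hypothetical multi-volume vocabulary
sentence_len = 10

pocket = pocket_vocab ** sentence_len
full = full_vocab ** sentence_len
print(f"pocket dictionary: {pocket:.2e} possible 10-word strings")
print(f"full dictionary:   {full:.2e} possible 10-word strings")
print(f"ratio: {full / pocket:.2e}")  # 20**10, about 1e13 times more
```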
But that's the thing - LLMs are just a probabilistic playback of what people have written in the past. It's not actually useful for communication of new information or thought, should we ever find a way to synthesize those things with real AI. They're literally just a search engine of existing human knowledge.
I agree, it's not intelligent. You have to accept that as a fundamental premise and build up from there to figure out where it makes sense to utilise the technology. Oh, and if you have to refer to the technology and make sure the user knows about it, you've already failed. The user frankly does not care, nor do they need to know of its existence.
He doesn't even have a financial interest in the company, apparently. Obviously, the people who will lose their investments if everything goes south are… the investors. As it's supposed to be.
The implied premise of this headline, that somehow there's something wrong with the fact that a CEO won't be personally financially responsible for potential future losses, is truly bizarre.