OpenAI's Cash Burn Will Be One of the Big Bubble Questions of 2026
Key topics
The financial fate of OpenAI is sparking heated debate, with many questioning whether the company's massive cash burn is a recipe for disaster or a savvy investment in the future of AI. Some commenters are pushing back against the negative language used in a recent article, pointing out that OpenAI's spending is fueling a booming industry and creating real value, even if the company itself isn't yet profitable. Historical comparisons to Uber and Amazon have been drawn, but others are cautioning that OpenAI's scale and debt financing set it apart from its predecessors. As one commenter wryly noted, even if the debt is written off, the AI advancements that have been made will remain, potentially justifying the massive investment.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 32m after posting
- Peak period: 132 comments (0-6h)
- Avg / period: 22.9 comments
Based on 160 loaded comments
Key moments
- 01 Story posted: Dec 30, 2025 at 4:44 PM EST (10 days ago)
- 02 First comment: Dec 30, 2025 at 5:15 PM EST (32m after posting)
- 03 Peak activity: 132 comments in the 0-6h window, the hottest stretch of the conversation
- 04 Latest activity: Jan 2, 2026 at 11:31 AM EST (7 days ago)
I am not saying OpenAI is Amazon, but I am saying I have seen this before, where the masses are going “oh, business is bad, losses are huge, where is the path to profitability…”
I do know that in the late aughts, people were writing stories about how Amazon was a charity run on behalf of the American consumer by the finance industry.
There's an element of an arms race between the players, and the genie is out of the bottle now, so they have to move with it. Game theory is driving this more than economics in the short term.
Marginal gains on top of these probably have an ROI now.
That being said, if I were Sam Altman I'd also be stocking up on yachts, mansions and gold-plated toilets while the books are still private. If there's $10bn a year in outgoings, no one's going to notice a million here and there.
That's what the words mean in this context.
OpenAI asks for 1m GPUs for a month, Anthropic asks for 2m, the government data center only has 500,000, and a new startup wants 750,000 as well.
Do you hand them out to the most convincing pitch? Hopefully not to the biggest donor to your campaign.
Now the most successful AI lab is the one that's best at pitching the government for additional resources.
It would still likely devolve into most-money-wins, but it is not an insurmountable political obstacle to arrange some sort of sharing.
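As a concrete illustration of what "some sort of sharing" could look like, here is a minimal sketch of proportional fair-share allocation, reusing the request numbers from the hypothetical above. The allocation rule (scale every request by capacity over total demand) is an assumption for illustration, not any announced policy.

```python
# Minimal sketch: proportional fair-share allocation of a fixed GPU pool.
# The requests mirror the hypothetical above; the rule itself is an
# assumption, not a real government policy.

def allocate(capacity: int, requests: dict[str, int]) -> dict[str, int]:
    total = sum(requests.values())
    if total <= capacity:
        return dict(requests)  # no contention: everyone gets what they asked for
    scale = capacity / total
    return {name: int(amount * scale) for name, amount in requests.items()}

requests = {"OpenAI": 1_000_000, "Anthropic": 2_000_000, "startup": 750_000}
print(allocate(500_000, requests))
# roughly {'OpenAI': 133333, 'Anthropic': 266666, 'startup': 100000}
```

A real scheme would presumably weight requests by peer review or public benefit rather than raw size, but even this naive rule shows that a shared pool does not have to devolve into winner-take-all.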
https://www.ornl.gov/news/doe-incite-program-seeks-2026-prop...
> The Innovative and Novel Computational Impact on Theory and Experiment, or INCITE, program has announced the 2026 Call for Proposals, inviting researchers to apply for access to some of the world’s most powerful high-performance computing systems.
> The proposal submission window runs from April 11 to June 16, 2025, offering an opportunity for scientific teams to secure substantial computational resources for large-scale research projects in fields such as scientific modeling, simulation, data analytics and artificial intelligence. [...]
> Individual awards typically range from 500,000 to 1,000,000 node-hours on Aurora and Frontier and 100,000 to 250,000 node-hours on Polaris, with the possibility of larger allocations for exceptional proposals. [...]
> The selection process involves a rigorous peer review, assessing both scientific merit and computational readiness. Awards will be announced in November 2025, with access to resources beginning in 2026.
Not sure OpenAI/Anthropic etc. would be OK with a six-month gap between application and getting access to the resources, but this does indeed demonstrate that government-issued supercomputing resources are a previously solved problem.
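For a sense of scale, here is a back-of-the-envelope conversion of those award sizes into full-machine time. Frontier's node count of roughly 9,400 comes from public DOE specifications; treat it, and the whole calculation, as approximate.

```python
# Back-of-the-envelope: what a typical INCITE award buys in full-machine time.
# Frontier's node count (~9,400) is a public figure but approximate here.
frontier_nodes = 9_400
award_node_hours = 1_000_000  # top of the typical range quoted above

full_machine_hours = award_node_hours / frontier_nodes
print(f"{full_machine_hours:.0f} machine-hours, ~{full_machine_hours / 24:.1f} days of exclusive use")
# ~106 machine-hours, i.e. about 4.4 days of the whole machine
```

A few days of whole-machine time per award helps explain why frontier labs, which want millions of GPUs for months, would find this allocation model a tight fit.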
In theory it makes the process more transparent and fair, although slower. That calculus has been changing as of late, perhaps for both good and bad. See for example the Pentagon's latest support of drone startups run by twenty-year-olds.
The question of public and private distinctions in these various schemes is very interesting and, imo, underexplored. Especially when you consider how these private LLMs are trained on public data.
Why would you want my money to be used to build a datacenter that won't benefit me? I might use an LLM once a month; many people never use one.
Let the ones who use it pay for it.
This is not at all true of generative AI.
What is the justification for considering data centers capable of running LLMs to be a public good?
There are many counter examples of things many people use but are still private. Clothing stores, restaurants and grocery stores, farms, home appliance factories, cell phone factories, laundromats and more.
Why not an LLM datacenter if it also offers information? You could say it's the public library of the future maybe.
Data centers clearly can exist without being owned by the public.
No chance they're going to take risks to share that hardware with anyone given what it does.
The scaled down version of El Capitan that was also used as a prototype is used for non-classified workloads, some of which are proprietary, like drug simulation. It is called Tuolumne.
Like OP, I also don't see why a government supercomputer does it better than hyperscalers, CoreWeave, neoclouds, et al., who have put in a ton of capital even compared to the government. For loads where institutional continuity is extremely important, like weather, and maybe one day a public LLM model or three, maybe. But we're not there yet, and there's so much competition in LLM infrastructure that it's quite likely some of these entrants will be bag holders. Not a world of juicy margins at all; rather, playing chicken with negative gross margins.
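To make the "negative gross margins" phrase concrete, here is a one-formula sketch. Both numbers are invented for illustration; they are not anyone's actual prices or costs.

```python
# Sketch of the "negative gross margin" worry: if serving cost per token
# exceeds the price charged per token, scale makes losses bigger, not smaller.
# Both figures below are made up for illustration.
price_per_mtok = 10.00  # hypothetical revenue per million tokens served
cost_per_mtok = 12.50   # hypothetical GPU + power + depreciation cost

gross_margin = (price_per_mtok - cost_per_mtok) / price_per_mtok
print(f"gross margin: {gross_margin:.0%}")  # -25%
```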
These things constitute public goods that benefit the individual regardless of participation.
Uncanny really.
People have no idea how big the military and defense budgets worldwide are next to any other example of a public budget.
Throw as many pie charts out as you want; people just can't see the astronomical difference in budgets.
I think it's based on how the thing works; a good defense works until it doesn't -- the other systems/budgets in place have a bit more of a graceful failure.
I see no argument why the government would jump into a hype cycle and start building infra that speculative startups are interested in. Why would they take on that risk compared to private investors, and how would they decide to back that over mammoth cloning infra or whatever other startups are doing?
Everything is happening exactly as it should. If the "bubble" "pops", that's just the economic laws doing what they naturally do.
Hmm, what about member-owned coöperatives? Like what we have for stock exchanges.
For all we know, they could be accumulating capital to weather an AI winter.
It's also worth noting that OpenAI has not trained a new model since GPT-4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.
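For readers unfamiliar with the jargon, here is a minimal sketch of what "a routing system on top of a base model" means: a dispatcher that sends each query to a differently prompted or differently sized backend. The backends and the routing rule are hypothetical, and the underlying claim is disputed in the replies below.

```python
# Minimal sketch of a "router on top of a base model": inspect the query,
# then dispatch it to a differently prompted or sized backend. Backend names
# and the routing rule are hypothetical; this only illustrates the claim,
# which replies below dispute.

def route(query: str) -> str:
    q = query.lower()
    if any(cue in q for cue in ("prove", "derive", "step by step")):
        return "base model + long reasoning prompt chain"
    if len(query) < 80:
        return "small fast variant"
    return "base model + default system prompt"

print(route("Prove that sqrt(2) is irrational"))  # -> long reasoning chain
```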
I'd love a blog or coffee table book of "where are they now" for the director level folks who do dumb shit like this.
I know sama says they aren’t trying to train new models, but he’s also a known liar and would definitely try to spin systemic failure.
Their investors surely do (absent outrageous fraud).
> For all we know, they could be accumulating capital to weather an AI winter.
If they were, their investors would be freaking out (or complicit in the resulting fraud). This seems unlikely. In point of fact it seems like they're playing commodities market-cornering games[1] with their excess cash, which implies strongly that they know how to spend it even if they don't have anything useful to spend it on.
[1] Again c.f. fraud
Right, this is nonsense. Even if investors wanted to be complicit in fraud, it's an insane investment. "Give us money so we can survive the AI winter" is a pitch you might try with the government, but a profit-motivated investor will... probably not actually laugh in your face, but tell you they'll call you and laugh about you later.
No one knows whether the base model has changed, but 4o was not a base model, and neither is 5.x. Although I would be kind of surprised if the base model hadn't also changed, FWIW: they've significantly advanced their synthetic data generation pipeline (as made obvious via their gpt-oss-120b release, which allegedly was entirely generated from their synthetic data pipelines), which is a little silly if they're not using it to augment pretraining/midtraining for the models they actually make money from. But either way, 5.x isn't just a prompt chain and routing on top of 4o.
I'm sure all these AI labs have extensive data gathering, cleanup, and validation processes for the new data they train the model on.
Or at least I hope they don't just download the current state of the web on the day they need to start training the new model and cross their fingers.
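A toy sketch of the kind of cleanup pass being hoped for here: exact-duplicate removal plus a couple of crude quality filters. The heuristics and thresholds are invented for illustration and say nothing about what any lab actually does.

```python
import hashlib

# Toy cleanup pass over scraped documents: exact dedup by content hash plus
# two crude quality filters. Thresholds are invented for illustration; real
# lab pipelines are far more elaborate than this.

def clean(docs: list[str]) -> list[str]:
    seen: set[str] = set()
    kept: list[str] = []
    for doc in docs:
        digest = hashlib.sha256(doc.encode()).hexdigest()
        if digest in seen:
            continue  # exact duplicate of something already kept
        seen.add(digest)
        words = doc.split()
        if len(words) < 20:
            continue  # too short to be useful training text
        if len(set(words)) / len(words) < 0.3:
            continue  # highly repetitive, likely boilerplate or spam
        kept.append(doc)
    return kept
```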
At the very least they made GPT-4.5, which was pretty clearly trained from scratch. It was possibly what they wanted GPT-5 to be, but they made a wrong scaling prediction; people simply weren't ready to pay that much money.
This isn't really accurate.
Firstly, GPT-4.5 was a new training run, and it is unclear how many other failed training runs they did.
Secondly, "all subsequent models are routing systems and prompt chains built on top of 4" is completely wrong. The models after GPT-4o were all post-trained differently using reinforcement learning. That is a substantial expense.
Finally, it seems like GPT-5.2 is a new training run, or at least the training cutoff date is different. Even if they didn't do a full run, it must have been a very large one.
https://www.theinformation.com/articles/openai-says-business...
https://epoch.ai/blog/training-compute-of-frontier-ai-models...
Doubtful. This would be the very antithesis of the Silicon Valley way.
But on the contrary, Nano Banana is very good, so I don't know. And in the end, I'm pretty confident Google will be the AI race winner, because they've got the engineers, the tech background and the money. Unless Google AdSense dies, they can continue the race forever.
Is it relative adoption or absolute? I mean, are the people using Gemini new, or coming from another provider like OpenAI? (Said differently: is Google eating OpenAI's lunch, or just reaching new customers?)
If they can achieve that, they will cut off a key source of blood supply to MSFT+OAI. There is not much money in the consumer market segment from subscribers, and entering the ad business is going to be a lot tougher than people think.
Gemini is built into Android and Google search. People may not be going to gemini.google.com, but that does not mean adoption is low.
https://searchengineland.com/nearly-all-chatgpt-users-visit-...
But even more importantly, it obviously isn’t losing money from advertisers to ChatGPT. You can look at their quarterly results.
But you cannot use it with an API key.
If you're on a Workspace account, you can't have a normal individual plan.
You have to take the Team plan at $100/month or nothing.
Google's product management tier is beyond me.
Absolutely no one besides ChromeOS users are forced to use Chrome.
Google has spent over a decade advertising Chrome on all their properties and has an unlimited budget and active desire to keep Chrome competitive. Mozilla famously needs Google’s sponsorship to stay solvent. Apple maintains Safari to have no holes in their ecosystem.
Stop being silly defending trillion dollar companies that are actively making the internet worse, it’s not productive or funny.
It is far, far behind, and GPT hasn't exactly stalled in growth either. Weekly active users, monthly visits... Gemini is nowhere near. It's like Google vs. Bing. They're comfortably second, but second is well below first.
> AI overviews in search are super popular and staggeringly more used than any other AI-based product out there

Is it? How would you even know? It's a forced feature you cannot opt out of or choose not to use. I ignore AI Overviews, but would still count as a 'user' to you.
Search Traffic: https://x.com/Similarweb/status/2003078223135990246
https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f1...
Gemini - 1.4b visits - +14.4% MoM
Yeah, ChatGPT is still more popular, but this does not show Gemini struggling exactly.
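To put the two growth rates side by side, here is a hedged compounding sketch of how long Gemini would take to catch a larger rival if (a big if) both monthly growth rates held. The rival's visit count and growth rate are placeholders, not figures from the linked reports.

```python
# Compounding sketch: months for Gemini to match a larger rival's visits,
# assuming (unrealistically) that both monthly growth rates hold forever.
# The rival's figures are placeholders, not numbers from the linked reports.
gemini, gemini_growth = 1.4e9, 0.144  # from the stat quoted above
rival, rival_growth = 4.0e9, 0.03     # hypothetical

months = 0
while gemini < rival:
    gemini *= 1 + gemini_growth
    rival *= 1 + rival_growth
    months += 1
print(months)  # ~10 months under these made-up assumptions
```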
I use it several times a day just to change text in image form into text form so you can search it, and the like.
It's built into Chrome, but they move the hidden icon around regularly to confuse you. This month you click the URL and it appears underneath, helpfully labeled "Ask Google about this page" so as to give you little idea that it's Google Lens.
This really is the critical bit. A year ago, the spin was "ChatGPT AI results are better than search, why would you use Google?", now it's "Search result AI is just as good as ChatGPT, why bother?".
When they were disruptive, it was enough to be different to believe that they'd win. Now they need to actually be better. And... they kinda aren't, really? I mean, lots of people like them! But for Regular Janes at the keyboard, who cares? Just type your search and see what it says.
1. Google Books, which they legally scanned. No dubious training sets for them. They also regularly scrape the entire internet. And they have YouTube. Easy access to the best training data, all of it legal.
2. Direct access to the biggest search index. When you ask ChatGPT to search for something it is basically just doing what we do but a bit faster. Google can be much smarter, and because it has direct access it's also faster. Search is a huge use case of these services.
3. They have existing services like Android, Gmail, Google Maps, Photos, Assistant/Home etc. that they can integrate into their AI.
The difference in model capability seems to be marginal at best, or even in Google's favour.
OpenAI has "it's not Google" going for it, and also AI brand recognition (everyone knows what ChatGPT is). Tbh I doubt that will be enough.
In my view Google is uniquely well positioned because, contrary to the others, it controls most of the raw materials for AI.
>whereas OpenAI has a clear opportunity with advertising.
Personally, having "a clear opportunity with advertising" feels like a last-ditch effort for a company that promised the moon in solving all the hard problems in the world.
Some players have to play, like Google; some players want to play, like the USA vs. China.
Besides that, chatting with an LLM is very, very convincing. Normal non-technical people can see what 'this thing' can already do, and as long as progress continues as fast as it currently is, it's still a very easy future to sell.
I don't think you have the faintest clue of what you're talking about right now. Google authored the transformer architecture, the basis of every GPT model OpenAI has shipped. They aren't obligated to play any more than OpenAI is, they do it because they get results. The same cannot be said of OpenAI.
Citation needed.
594 more comments available on Hacker News