They Don't Have the Money: OpenAI Edition
Posted 3 months ago · Active 3 months ago
platformonomics.com · Tech story
Sentiment: skeptical/mixed · Debate · 80/100
Key topics
OpenAI
AI Funding
Financial Sustainability
The article discusses OpenAI's massive capital expenditure and potential financial struggles, sparking debate among commenters about the company's sustainability and potential for a financial bubble.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 26m
Peak period: 45 comments (0-6h)
Avg / period: 9
Comment distribution: 54 data points (based on 54 loaded comments)
Key moments
- 01. Story posted: Oct 10, 2025 at 8:14 PM EDT (3 months ago)
- 02. First comment: Oct 10, 2025 at 8:40 PM EDT (26m after posting)
- 03. Peak activity: 45 comments in 0-6h, the hottest window of the conversation
- 04. Latest activity: Oct 15, 2025 at 1:55 AM EDT (3 months ago)
ID: 45545236 · Type: story · Last synced: 11/20/2025, 2:24:16 PM
Not only has OpenAI launched multiple viral products a year multiple years in a row, but their mission is to create God, so I think the TAM is pretty large.
I suspect you're right that Tesla is in a different league here, but I don't think OpenAI are in a good spot.
Like the digital economy after the dot-com bust, I think AI will survive and grow far beyond its current market of chatbots and agents. The weakest will die, but the market will be better off for it in the long run.
The next big problem for AI is time horizons. Frontier AI has roughly doctorate-level knowledge across many domains, but it needs to be able to stay on task well and long enough to apply it without a human hand-holding it. People are going to have to get used to feeding the AI detailed and accurate plans, just like humans, unless we can leverage an expanded form of leading questions like GPT-5 does before executing "deep research". Anthropic feels best positioned to do this on a technical level, but I feel OpenAI will beat them on the product level. I am confident that enough data can be amassed to push time horizons at least in coding, which itself will unlock more capability outside that domain.
I feel it's very different from Tesla, because while Tesla barely ever got closer to their promises the AI industry is at least making visible progress.
This hits the nail on the head. 2-3 years ago, when the current round of AGI hype started, everyone came up with their own definition of what it meant. Sam Altman et al. made it clear that it meant people not needing to work anymore, and spun it in as positive a way as they could.
Now we're all realising that everyone has a different definition, and the Sam Altmans of the world are nitpicking over exactly what they mean now so that they can claim success while not actually delivering what everyone expected. No one actually believes that AGI means beating humans on some specific maths olympiad, but that's what we'll likely get. At least this round.
LLMs will become normalised, and everyone will see them for the 2x-3x improvement they are (once all externalities are accounted for), rather than the 10x-100x we were promised, just like every round of disruption beforehand, and we'll wait another 10-20 years for the next big AI leap.
The only real problem was that in the middle of nowhere, I didn’t have a reliable enough data connection to keep the conversation going, but that’s hardly OpenAI’s fault.
https://www.engadget.com/ai/how-to-talk-to-chatgpt-on-your-p...
I added a DNS-level blocker on all news apps and restricted YouTube itself, to stop myself from watching news no matter what. Now I only use ChatGPT's advanced voice mode, and sometimes Perplexity Pro, to get my news for the day and ask questions around it. I stopped reading everything news-related outside of purely business and tech articles, curated and sent to me via either my RSS feeds or newsletters, nothing else.
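The DNS-level blocking the commenter describes can be approximated with a hosts-file blocklist. A minimal sketch under stated assumptions: the domains below are placeholders, not the commenter's actual list, and the commenter may have used a router- or resolver-level tool instead.

```python
# Generate /etc/hosts entries that sink a list of news domains to
# 0.0.0.0, so the local resolver never reaches them.

NEWS_DOMAINS = ["cnn.com", "nytimes.com", "bbc.com"]  # placeholder list

def hosts_blocklist(domains):
    """Return hosts-file lines mapping each domain and its www
    variant to 0.0.0.0."""
    lines = []
    for d in domains:
        lines.append(f"0.0.0.0 {d}")
        lines.append(f"0.0.0.0 www.{d}")
    return "\n".join(lines)

if __name__ == "__main__":
    # Append the output to /etc/hosts (root required) to apply it,
    # e.g.: python blocklist.py | sudo tee -a /etc/hosts
    print(hosts_blocklist(NEWS_DOMAINS))
```

A hosts file only covers one machine; a network-wide blocker (e.g. at the router's DNS resolver) would also catch phone apps, which is closer to what the comment implies.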
It feels amazing to get briefed on the day's news by ChatGPT. I intuitively ask it about whatever my interests are and nothing else.
It told me the other day that on a multiple hard drive/SSD system I could set secure boot on each drive independently.
Of course this is nonsense, since secure boot is set in the BIOS.
Whatever, ChatGPT got its engagement metrics.
I'm going to predict that within the next few years someone's going to lose a billion dollars relying on ChatGPT or another LLM.
It's still at the level of an energetic junior engineer, sure it wants to crank out a lot of code to look good, but you need to verify everything it does.
I was game jamming with a friend last weekend and I realized he can manually write better code, lighter code, more effective code than I was having co-pilot write.
Which sounds safer: an elegant 50-line function, or 300 lines of spaghetti code that appears to work right?
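A toy contrast of the point being made here. This is a hypothetical illustration, not the actual game-jam code: the same logic written concisely versus in the branch-heavy style assistants often produce.

```python
def clamp(value, lo, hi):
    """Concise version: one expression, easy to audit at a glance."""
    return max(lo, min(hi, value))

def clamp_verbose(value, lo, hi):
    """Verbose equivalent: same behavior, but more lines and more
    places for a bug to hide during review."""
    if value < lo:
        result = lo
    elif value > hi:
        result = hi
    else:
        result = value
    return result
```

Both functions agree on every input; the difference is how quickly a reviewer can convince themselves of that, which is the commenter's point about verification cost.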
The manager (and above) level is all about AI though: let's cut staff and have AI fill in the gaps!
I strongly suspect this has already happened, possibly even multiple times. Some inbred oil sheikh fat-fingering some insane sum of money because "the computer told me, and computers are always correct" is a completely believable scenario. Now that I think about it, doesn't The Line city fit it perfectly? :)
What will happen in the next few years is probably not just a cash loss, but some large industrial accident with dead people, due to relying on LLM bullshit. Now that would make headlines (at least until the next Truth Social post).
No need to degrade people; if anything, the highest class will hop on the AI train. Why trust an expensive human when an LLM will give you unlimited "advice"?
LLMs can serve as a fall guy, but what's the old IBM quote? Something like: a computer can never be held responsible, so it must never make an executive decision.
Then again, LLMs making stupid mistakes is the only thing keeping most of us employed.
News is no longer the same; everything is structured to maximize clicks, reactions, etc. Most news is misinformation or ideologically slanted anyway, so I just ask it to curate from feeds I'm interested in now. I shifted to this only a few weeks ago; honestly it's not as bad as people here are making it out to be.
I get that it sounds dystopian, but so does what I used to do previously. I don't need to know everything about everything; business and tech news I still read and watch myself, as it tends to be fairly pleasant and inviting.
For me this is how the cons compare:
- Occasional inaccuracy from AI summaries < constant psychological damage from outrage algorithms
- Missing some nuance < being manipulated into anger and anxiety daily
- Imperfect information filtering > perfect exposure to toxic information delivery
I didn't want to make this change; I had to, for my mental health, sorry. ChatGPT can hallucinate a ton; Claude is way more accurate and fair. I experiment with both.
Note: owning a brand associated with the thing worked out pretty well for Google, so maybe it's enough.
What's Facebook's moat? There are tons of social media sites. Facebook's moat is the 3B users.
This comment is so idiotic it's starting to annoy me.
"WHaTs tHe MoAt" for a company with almost 1b active users
Anyone can start making sponsorship deals and putting their AI into some service. And if that's really the secret sauce then AI firms will have to pay for it instead of people paying them to ask questions.
What if this "God" deems it a sin to monetise him? Will OpenAI turn heretic to keep the revenue flowing and incur cyber-divine wrath? Or are investors pricing in omnipotence?
(see what happens when one speaks in ridiculous corpo-flowery language?)
The machine-god will have Sam Altman's hands on His weights, so the retraining will continue until willingness to monetize improves.
Early on they seemed like the only one in the game, but there are many competitors today. Launching viral products is all very well, but if they can't monetize them, those products could even be harmful to their business outlook.
Not only did Tesla launch viral products and Autopilot, but their mission was to produce a full self-driving car and capture a huge chunk of the auto market, so their TAM was pretty large. You can read some early HN posts to see the amount of hype.
A casual observer was still pumped about Tesla lowering costs and delivering fully automated driving in a couple of years. One could say they were being overly optimistic. It is only now, after years passed and the tech didn't materialise, that we say they lied to investors.
I think the same case can be made for OpenAI. They might hit a plateau in their advancement but continue to make overly optimistic projections.
Hmm, can't figure out why this statement makes me think of Enron. After all, OpenAI certainly isn't trying to do massive infrastructure build outs while struggling with a relatively limited cash flow, or anything like that.
It's the same type of stuff Enron, the CDOs/CDSs from the 2008 crisis, and other financial frauds through history have taught us: repackage an unattractive financial product into layers of other stuff, hide the risks, rebrand it as something new and exciting promising returns, and fuck everything up in the end.
“The four most dangerous words in investing are: ‘This time it’s different.’” - Sir John Templeton
When the bubble bursts who will survive? The existing, profitable, big tech companies will, if not without pain. The startup ecosystem will likely be decimated. But what about the in-betweens, OpenAI, Anthropic, etc? My guess is that Anthropic will sell to (or merge with) another profitable company and live on because they'll be relatively cheap for some excellent technology, but OpenAI might be too big for that, too expensive.
‘Together, raise and deploy a national start-up fund. With local as well as OpenAI capital, together we can seed healthy national AI ecosystems so the new infrastructure is creating new jobs, new companies, new revenue, and new communities for each country while also supporting existing public- and private-sector needs.’ https://openai.com/global-affairs/openai-for-countries/
Probably the most telling statement. I genuinely think this man is a fraud. He is clearly conning investors and keeping the grift going until he gets “too big to fail”.
I smell a con. WorldCoin anyone?
4 more comments available on Hacker News