OpenAI's H1 2025: $4.3B in Revenue, $13.5B in Loss
Source: techinasia.com
Key topics: Artificial Intelligence, OpenAI, Financials, Venture Capital
OpenAI reported $4.3B in revenue and a $13.5B loss in H1 2025, sparking concerns about the company's financial sustainability and the viability of its business model.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 10m after posting. Peak period: 114 comments in 0-6h. Average per period: 22.9. Based on 160 loaded comments.
Key moments
- Story posted: Oct 2, 2025 at 2:37 PM EDT
- First comment: Oct 2, 2025 at 2:47 PM EDT (10m after posting)
- Peak activity: 114 comments in 0-6h (hottest window of the conversation)
- Latest activity: Oct 6, 2025 at 3:19 AM EDT
ID: 45453586 · Type: story · Last synced: 11/22/2025, 11:00:32 PM
um...
Like I get 50,000 shares deposited into my Fidelity account, worth $2 each, but I can't sell them or do anything with them?
The shares are valued by an accounting-firm auditor of some type. This determines the basis value if you're paying taxes up front. After that the tax situation should be the same as getting publicly traded options/shares; there are some choices in how you want to handle the taxes, but generally you file a special tax form in the year of grant.
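A toy version of that basis math, using the hypothetical numbers from the comment above (50,000 shares at a $2 valuation); the future sale price is made up purely for illustration:

```python
# Hypothetical illustration: basis is set at grant by the audited valuation,
# and any later spread over basis is a capital gain.
shares = 50_000
fmv_at_grant = 2.00            # per-share audited valuation at grant
basis = shares * fmv_at_grant  # taxed as ordinary income up front

sale_price = 10.00             # assumed future liquidity price (illustrative)
gain = shares * sale_price - basis

print(basis)  # 100000.0
print(gain)   # 400000.0
```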
For all practical purposes it’s worth nothing until there is a liquid market. Given current financials, and preferred cap table terms for those investing cash, shares the average employee has likely aren’t worth much or maybe even anything at the moment.
best to treat it like an expense from the perspective of shareholders
I don’t work there but know several early folks and I’m absolutely thrilled for them.
employees are very liquid if they want to be, or wait a year for the next 10x in valuation
it’s just selling a few shares for any higher share price
Billions concentrated in one person probably get spent in a way that fucks everyone over.
Why would employees stay after getting trained if they have a better offer?
You may lose a few employees to poaching, sure - but the math on the relative cost of hiring someone for $100m vs. training a bunch of employees and losing a portion of them is pretty strongly in your favor.
I'm glad to see US and Chinese investors bleed trillions on AI, just to find out a few of your seniors can leave, found their own company, and be at your level minus some months of progress.
The United States has tens of millions of skilled, competent, qualified people who can play basketball. 1,000 of them get paid to play professionally.
10 of them are paid 9 figures and are incredible enough to be household names to non-basketball fans.
I have known several people who have gone to OAI, and I would firmly say they are 10x engineers, but they are just doing general infra stuff that all large tech companies have to do, so I wouldn't say they are solving problems that only they can solve.
Nobody wants to hear that one dev can be 50x better, but it's obvious that everyone has their own strengths and weaknesses and not every mind is replaceable.
In any case the talent is very scarce in AI/ML, the one able to push through good ideas so prices are going to be high for years.
There's always individuals, developers or not, whose impact is 50 times greater than the average.
And the impact is measured financially, meaning, how much money you make.
If I find a way to solve an issue in a warehouse, sparing the company from having to hire 70 people (that's not a made-up number but a real example I've seen), my impact is in the multiple millions; the guy tasked with delivering tables from some back office in the same company is obviously returning a fraction of that productivity.
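Back-of-the-envelope, the warehouse claim checks out; the per-worker cost below is an assumed figure, not one from the comment:

```python
# Rough annualized impact of avoiding 70 hires.
# cost_per_worker is an assumption (fully loaded annual cost in dollars).
workers_avoided = 70
cost_per_worker = 40_000
annual_impact = workers_avoided * cost_per_worker
print(annual_impact)  # 2800000, i.e. millions per year
```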
Salvatore Sanfilippo, the author of Redis, alone, built a database that killed companies with hundreds of (brilliant) engineers.
Approaching the problems differently allowed him to scale to levels that huge teams could not, and the impact on $ was enormous.
Not only that but you can have negative x engineers. Those that create plenty of work, gaslighting and creating issues and slowing entire teams and organizations.
If you don't believe in NX developers or individuals that's a you problem, they exist in sports or any other field where single individuals can have impact hundreds of thousands or millions of times more positive than the average one.
Of course different scientists with different backgrounds, professionalism, communication, and leadership skills are going to have outputs and impacts in AI companies that differ by orders of magnitude.
If you put me and Carmack in a game development team you can rest assured that he's going to have a 50/100x impact over me, not sure why would I even question it.
Not only will his output be vastly superior to mine, but his design choices, leadership, and experience will save and compound enormous amounts of money and time. That's beyond obvious.
As for your various anecdotes later, I offer the counter-observation that nobody goes around talking about 50x lottery winners, despite lifetime earnings on lotteries also showing a very wide spread. Clearly, observing a big spread in outcomes is insufficient evidence for concluding the spread is due to factors inherent to the participants.
Adding headcount to a fast growing company *to lower wages* is a sure way to kill your culture, lower the overall quality bar and increase communication overheads significantly.
Yes, they are paying a lot of their employees, and the pool will grow, but adding bodies to a team that is running well in the hope that it will automatically lead to a bump in productivity is the insane part. It never works.
What will happen is a completely new team (Team B) will be formed and given ownership of a component previously owned by Team A, under the guise of "we will just agree on interfaces". Team B will start doing their thing and meeting with a Team A representative regularly, but integration issues will still arise, except that instead of a tight core of 10-20 developers, you now have 40. They will add a ticketing system to track changes better; now issues in Team B's service, which could have been addressed in an hour by the right engineer on Team A, take three days to resolve as tickets get triaged and prioritized. Lo and behold, Team C has now appeared and owns a sub-component of Team B. Now when Team A has an issue with Team B's service, they cut a ticket, but the on-call on Team B investigates and finds it's actually an issue with Team C's service, so they cut their own ticket.
Suddenly every little issue takes days or weeks to resolve because the original core of 10-20 developers is no longer empowered to just move fast. They eventually leave because they feel their impact and influence have diminished (Team C's manager is very good at politics), Team A is hollowed out, and you now have wall-to-wall mediocrity with 120 heads and nothing is ever anyone's fault.
I had a director who always repeated that communication between N people is inherently N², and thus hiring should always weigh the fact that a candidate being "good" is not enough; they have to pull their weight and make up for the communication overhead they add to the team.
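That rule of thumb is easy to sanity-check; strictly, the number of pairwise channels is n(n-1)/2, which still grows quadratically:

```python
# Pairwise communication channels in a team of n people: n*(n-1)/2.
# Doubling headcount roughly quadruples the channels.
def channels(n: int) -> int:
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(n, channels(n))
# 5 -> 10, 10 -> 45, 20 -> 190, 40 -> 780
```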
Doesn't it depend upon how you measure the 50x? If hiring five name-brand AI researchers gets you a billion dollars in funding, they're probably each worth 1,000x what I'm worth to the business.
Besides, people are actively being trained up. Some labs are just extending offers to people who score very highly on their conscription IQ tests.
"The people spreading obvious lies must have a reasonable basis in their lying"?
You’re ignoring my point about the legitimate reason people might be getting offers in this stratosphere. No one has debunked or refuted the general reporting, at least not that I’ve seen. If you have a source, show it please.
A better way to look at it is they had about $12.1B in expenses. Stock was $2.5B, or roughly 21% of total costs.
If all goes well, someday it will dilute earnings.
While there is some flexibility in how options are issued and accounted for (see FASB - FAS 123), the industry typically uses something like 4-year vesting with a 1-year cliff.
Every accounting firm and company is different; most would normally account for it up front over the entire vesting period, and the value could change when it vests and is exercised.
So even if you want to compare it to revenue, it should at a bare minimum be compared with the revenue generated during the entire period (say, four years) plus the valuation of the IP created during the tenure of the options.
---
[1] Unless the company starts buying back options/stock from employees from its cash reserves, then it is different.
Even the secondary sales that OpenAI is reported to be facilitating for staff, worth $6.6 billion, have no direct bearing on its own financials: one third party (the new investor) is buying from another third party (the employee); the company is only facilitating the sales for morale, retention, and other HR reasons.
There is a secondary impact: in theory those could be shares the company sells directly to the new investor, keeping the cash itself. But it is not spending any existing cash it has or is generating, just forgoing some of the new funds.
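A rough sketch of the accounting point made above, using the thread's figures ($12.1B in expenses, $2.5B of it stock compensation) and an assumed typical 4-year vest:

```python
# Illustrative only: figures come from the comments above;
# the 4-year vest is the typical schedule mentioned, not a reported fact.
expenses_b = 12.1
stock_comp_b = 2.5
share_of_costs = stock_comp_b / expenses_b  # ~0.21, i.e. roughly 21% of costs

# Spread over a 4-year vesting period, the annualized charge is much smaller
# than the headline number suggests when compared against a single half-year.
per_year_b = stock_comp_b / 4
print(round(share_of_costs, 2), per_year_b)  # 0.21 0.625
```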
My life insurance broker got £1k in commission, I think my mortgage broker got roughly the same. I’d gladly let OpenAI take the commission if ChatGPT could get me better deals.
t. perplexity ai
In fact it's an unavoidable solution. There is no future for OpenAI that doesn't involve a gigantic, highly lucrative ad network attached to ChatGPT.
One of the dumbest things in tech at present is OpenAI not having already deployed this. It's an attitude they can't actually afford to maintain much longer.
Ads are an extremely high-margin product that is very well understood at this juncture, with numerous very large ad platforms. Meta has a soon-to-be $200-billion-per-year ad system. There's no reason ChatGPT can't be a $20+ billion-per-year ad system (and likely far beyond that).
Their path to profitability is very straightforward. It's practically turnkey. They would have to be the biggest fools in tech history not to flip that switch while assuming they can just fund-raise their way indefinitely. The AI spending bubble will burst in 2026-2027, sharply curtailing the party; it'd be better for OpenAI to get ahead of that quickly (their valuation will not hold up in a negative environment).
Fascist corporatism will throw them in for whatever Intel rescue plan Nvidia is forced to participate in. If the midterms flip congress or if we have another presidential election, maybe something will change.
I'd say it's a bit of a Hail Mary and could go either way, but that's as an outsider looking in. Who really knows?
But there will still be thousands of screens everywhere running nonstop ads for things that will never sell because nobody has a job or any money.
As much as I don't want ads infiltrating this, it's inevitable and I agree. OpenAI could seriously put a dent into Google's ad monopoly here, Altman would be an absolute idiot to not take advantage of their position and do it.
If they don't, Google certainly will, as will Meta, and Microsoft.
I wonder if their plan for the weird Sora 2 social network thing is ads.
Investors are going to want to see some returns... eventually. They can't rely on daddy Microsoft forever either; now, with MS exploring Claude for Copilot, they seem to have soured a bit on OpenAI.
In other words, yes, GPT-X might work well enough for most people, but the newer demo for ShinyNewModelZ is going to pull GPT-X's customers in regardless of whether both fulfill the customer's needs. There is a persistent need for advancement (or at least marketing that indicates as much) in order to have positive numbers at the end of the churn cycle.
I have major doubts that can be done without pushing features or SOTA models, or without straight-up lying or deception.
https://arstechnica.com/information-technology/2025/08/opena...
Sure, those models are cheaper, but we also don’t really know how an ecosystem with a stale LLM and up to date RAG would behave once context drifts sufficiently, because no one is solving that problem at the moment.
I didn't understand how bad it was until this weekend, when I sat down and tried GPT-5, first without the thinking mode and then with it. It misunderstood sentences, generated crazy things, and lost track of everything; it was completely beyond how bad I thought it could possibly be.
I've fiddled with stories because I saw that LLMs had trouble, but I did not understand that this was where we were in NLP. At first I couldn't even fully believe it because the things don't fail to follow instructions when you talk about programming.
This extends to analyzing discussions. It simply misunderstands what people say. If you try to do this kind of thing you will realise the degree to which these things are just sequence models, with no ability to think, with really short attention spans and no ability to operate in a context. I experimented with stories set in established contexts, and the model repeatedly generated things that were impossible in those contexts.
When you do this kind of thing their character as sequence models that do not really integrate things from different sequences becomes apparent.
It’s so easy for people to shout bubble on the internet without actually putting their own money on the line. Talk is cheap - it doesn’t matter how many times you say it, I think you don’t have conviction if you’re not willing to put your own skin in the game. (Which is fine, you don’t have to put your money on the line. But it just annoys me when everyone cries “bubble” from the sidelines without actually getting in the ring.)
After all, “a bubble is just a bull market you don’t have a position in.”
In the same way that my elderly grandmother binge watches CNN to have something to worry about.
But the commenter I responded to DID care about the stock market, despite your attempt to grandstand.
And my point was, and still is, if you really believe it’s a bubble and you don’t actually have a short position, then you don’t actually believe it’s a bubble deep down.
Talk is cheap - let’s see your positions.
It would be like saying “I’ve got this great idea for a company, I’m sure it would do really well, but I don’t believe it enough to actually start a company.”
Ok, then what does that actually say about your belief in your idea?
The statistically correct play is therefore not to do this (and just keep buying).
You’ve just said, “I think something will go down at some point.” Which… like… sure, but in a pointlessly trivial way? Even a broken clock is right eventually?
That’s not “identifying a bubble” that’s boring dinner small talk. “Wow, this Bitcoin thing is such a bubble huh!” “Yeah, sure is crazy!”
And even more so: if you're long on something you call a bubble, that by definition says either you don't think it's that much of a bubble, or you're a goon for betting on something you believe is all hot air.
$4.3B in revenue is tremendous.
What are you comparing them to?
The best play for all portfolio managers is to froth up the stock price and take their returns later.
Everyone knows this is a bubble, but the returns for those who time the end are juicy; portfolio managers have no choice but to be in this game, because those who supply the money they invest on their behalf demand it.
It's that simple.
Not saying that will happen, but it's always good to rewatch just as a reminder how bad things can get.
Here's information about checkout inside ChatGPT: https://openai.com/index/buy-it-in-chatgpt/
...but rather that they're doing that while Chinese competitors are releasing models in a vaguely similar ballpark under an Apache license.
That VC loss-leader playbook only works if you can corner the market and squeeze later to make up for the losses. And you can't corner something that has freakin' Apache-licensed competition.
I suspect that's why the Sora release has social-media-style vibes: seeking network effects to fix this strategic dilemma.
To be clear, I still think they're #1 technically... but the gap feels too small strategically. And they know it. That recent pivot to a LinkedIn competitor? Sora with socials? They're scrambling on market fit even though they lead on tech.
Distribution isn't a moat if the thing being distributed is easily substitutable. Everything under the sun is OAI API compatible these days.
700 WAU are fickle AF when a competitor offers a comparable product for half the price.
A moat needs to be something more durable: cheaper, better, or some other value-added tie-in (hardware / better UI / memory). There needs to be some edge here. And their obvious edge, raw tech superiority, is looking slim.
The LLM isn't 100% of the product... the open source is just part. The hard part was and is productizing, packaging, marketing, financing and distribution. A model by itself is just one part of the puzzle, free or otherwise. In other words, my uncle Bill and my mother can and do use ChatGPT. Fill in the blank open-source model? Maybe as a feature in another product.
They have the name brand for sure. And that is worth a lot.
Notice how DeepSeek went from a nobody to making mainstream news, though. The only thing people like more than a trusted thing is being able to tell their friends about an amazing cheap alternative they "discovered".
It's good to be #1 mind-share-wise, but without network effects that still leaves you vulnerable.
So what? DAUs don't mean anything if there isn't an ad product attached to it. Regular people aren't paying for ChatGPT, and even if they did, the price would need to be several multiples of what Netflix charges to break even.
- OpenAI, etc. will go bankrupt (unless one manages to capture search from a struggling Google)
- We will have a new AI winter, with a corresponding research slowdown like the 1980s, when funding dries up
- Open-source LLM instances will be deployed to properly manage privacy concerns.
You think we have these crazy valuations because the market thinks that OpenAI will make joe-schmoe buy enough of their services? (Them introducing "shopping" into the service honestly feels like a bit of a panicky move to target Google).
We're prototyping some LLM-assisted products, but right now the cost model isn't entirely there, since we need to use more expensive models to get good results, which leaves a small margin. Spinning up a moderately sized VM would probably be a more cost-effective option, and more people will probably run into this and start creating easy-to-set-up models/service VMs (maybe not just yet, but it'll come).
Sure they could start hosting things themselves, but what's stopping anyone from finding a cheaper but "good enough" alternative?
If revenue keeps going up and losses keep going down, it may reach that inflection point in a few years. For that to happen, the cost of AI datacenters has to go down massively.
Amazon had huge capital investments that got less painful as it scaled. Amazon also focuses on cash flow vs profit. Even early on it generated a lot of cash, it just reinvested that back into the business which meant it made a “loss” on paper.
OpenAI is very different. Their "capital" expense (model development) has a really ugly depreciation curve. It's not like building a fulfillment network that you can use for decades. That's not sustainable for much longer. They're simply burning cash like there's no tomorrow, kept afloat only by AI bubble hype, which looks very close to bursting. Absent a quick change, this will get really ugly.
Unless one of these companies really produces a leapfrog product or model that can't be replicated within a short timeframe I don't see how this changes.
Most of OpenAI's users are freeloaders and if they turn off the free plan they're just going to divert those users to Google.
That's very different from the world where everyone immediately realized what a threat Chat-GPT was and instantly began pouring billions into competitor products; if that had happened with search+adtech in 1998, I think Google would have had no moat and search would've been a commoditized "function (query: String): String" service.
The exception is datacenter spend, since that has a more severe and more real depreciation risk, but again, if the CoreWeaves of the world run into hardship, it's the leading consolidators like OpenAI that usually clean up (monetizing their comparatively rich equity to buy the distressed players at fire-sale prices).
A lot of the financials for non-public companies are funny numbers. They're based on figures the company can point to, but the amount of asterisks attached to those figures is mind-blowing.
https://s2.q4cdn.com/299287126/files/doc_financials/annual/0...
"Ouch. It’s been a brutal year for many in the capital markets and certainly for Amazon.com shareholders. As of this writing, our shares are down more than 80% from when I wrote you last year. Nevertheless, by almost any measure, Amazon.com the company is in a stronger position now than at any time in its past.
"We served 20 million customers in 2000, up from 14 million in 1999.
"• Sales grew to $2.76 billion in 2000 from $1.64 billion in 1999.
"• Pro forma operating loss shrank to 6% of sales in Q4 2000, from 26% of sales in Q4 1999.
"• Pro forma operating loss in the U.S. shrank to 2% of sales in Q4 2000, from 24% of sales in Q4 1999."
Amazon's worst year was 2000, when it lost around $1 billion on revenue of around $2.8 billion. I would not say this is anywhere near "similar" in scale to what we're seeing with OpenAI: Amazon was losing roughly 0.36x revenue, OpenAI over 3x.
Not to mention that most of OpenAI's infrastructure spend has a very short lifespan. So it's not like Amazon, where they were figuring out how to build a nationwide logistics chain with large long-term upside at a steep immediate cost.
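The ratio comparison is easy to verify from the figures quoted above (Amazon's 2000 results vs. OpenAI's reported H1 2025 numbers):

```python
# Loss-to-revenue ratios, in $B, from the figures cited in the thread.
amazon_loss, amazon_rev = 1.0, 2.8     # Amazon, 2000
openai_loss, openai_rev = 13.5, 4.3    # OpenAI, H1 2025

print(round(amazon_loss / amazon_rev, 2))  # 0.36
print(round(openai_loss / openai_rev, 2))  # 3.14
```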
> If the revenue keeps going up and losses keep going down
That would require better than "dogshit" unit economics [0]
0. https://pluralistic.net/2025/09/27/econopocalypse/#subprime-...
Other than Nvidia and the cloud providers (AWS, Azure, GCP, Oracle, etc.), no one is earning a profit with AI, so far.
Nvidia and the cloud providers will do well only if capital spending on AI, per year, remains at current rates.
2 generations of cards that amount to “just more of a fire hazard” and “idk bro just tell them to use more DLSS slop” to paper over actual card performance deficiencies.
We have 3 generations of cards where 99% of games fall approximately into one of 2 categories:
- indie game that runs on a potato
- awfully optimised AAA-shitshow, which isn’t GPU bottlenecked most of the time anyway.
There is the rare exception (Cyberpunk 2077), but they’re few and far between.
My point is that it could be far worse if they get into trouble and get bought out by some actor like Qualcomm that might see PC GPUs as a sideshow.
If people have to choose between paying OpenAI $15/month and using something from Google or Microsoft for free, the quality difference is not enough to overcome that.
537 more comments available on Hacker News