How OpenAI Uses Complex and Circular Deals to Fuel Its Multibillion-Dollar Rise
Source: nytimes.com · Posted 2 months ago
Key topics: AI Industry, Financial Manipulation, Market Bubble
The New York Times article exposes OpenAI's complex and circular deals that have fueled its multibillion-dollar rise, sparking concerns about financial manipulation and potential market crash.
Snapshot generated from the HN discussion
Discussion activity: very active. First comment 17m after posting; peak of 137 comments in the 0-6h window; average 22.9 comments per period. Based on 160 loaded comments.
Key moments
- Story posted: Oct 31, 2025 at 9:03 AM EDT (2 months ago)
- First comment: Oct 31, 2025 at 9:20 AM EDT (17m after posting)
- Peak activity: 137 comments in the 0-6h window
- Latest activity: Nov 4, 2025 at 4:17 AM EST
ID: 45771538 · Type: story · Last synced: 11/20/2025, 8:14:16 PM
Depending on your POV, OpenAI and the surrounding AI hype machine are, at the extremes, either the dawn of a new era or a metastasized financial cancer that’s going to implode the economy. Reality lies somewhere in the middle, and nobody really knows how the story will end.
In my personal opinion, “financial innovation” (see: the weird opaque deals funding the frantic data center construction) and bullshit like these circular deals driving speculation is a story we’ve seen time and time again, and it generally ends the same way.
An organization that I’m familiar with is betting on the latter - putting off a $200M data center replacement, figuring they’ll acquire one or two in 2-3 years for $0.20 on the dollar when the PE/private debt market implodes.
The argument to moderation (the middle-ground fallacy) is a fallacy.
https://en.wikipedia.org/wiki/Argument_to_moderation
The fallacy is assuming the truth lies _at_ the middle, not somewhere in the middle.
This is totally fallacious.
"AI is a bubble" and "AI is going to replace all human jobs" are, essentially, the two extremes I'm seeing. AI replacing some jobs (even if only partially) and the bubble-ness of the boom are both things that exist on a line between two points. Both can be partially true and sit anywhere on the line between true and false.
No jobs replaced<-------------------------------------->All jobs replaced
Bubble crashes the economy and we all end up dead in a ditch from famine<---------------------------------------->We all end up super rich in the post scarcity economy
For one, in higher dimensions, most of the volume of a hypersphere is concentrated near the border.
Secondly, and it is somewhat related, you are implicitly assuming some sort of convexity argument (X is maybe true, Y is maybe true, therefore 0.5X + 0.5Y is maybe true). Why?
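The hypersphere point can be made concrete: the volume of a d-ball scales as r^d, so the fraction of volume within eps of the boundary is 1 - (1 - eps)^d, which approaches 1 as the dimension grows. A minimal sketch (illustrative only):

```python
def boundary_shell_fraction(d, eps=0.05):
    """Fraction of a unit d-ball's volume lying within eps of the boundary."""
    return 1 - (1 - eps) ** d

# In low dimensions most volume is interior; in high dimensions
# almost all of it sits near the border.
for d in (2, 10, 100, 1000):
    print(d, round(boundary_shell_fraction(d), 4))
```

So "most points are near the extremes" is literally true for high-dimensional volume, which is the intuition behind distrusting the midpoint.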
Round-earthers: The earth is round.
"Reality lies in the middle" argument: The earth is oblong, not a perfect sphere, so both sides were right.
The AI situation doesn't have two mutually exclusive claims; it has two claims on opposite sides of economic and cultural impact that differ in magnitude and direction.
AI can both be a bubble and revolutionary, just like the internet.
Ultimately, if both sides have a true argument, the real issue is which will happen first. Will AI change the world before the whole circular investment vehicle implodes? Or after, as happened with the dotcom boom?
Eh, in a way they're not mutually exclusive. Look back at the dot-com crash: it was all about things like online shopping, which we absolutely take for granted and use every day in 2025. Same for the video game crash in the 80s. They are both an overhyped bubble and the dawn of a new era.
AI is a powerful and compelling technology, full stop. The sausage making process where the entire financial economy is pivoting around it is a different matter, and can only end in disaster.
He also has a podcast called Better Offline, which is slightly too ad-heavy for my taste. Nevertheless, with my meagre understanding of large corporate finances, I was not able to find any errors in his core argument, despite his somewhat sensationalist style of writing.
https://bsky.app/profile/notalawyer.bsky.social/post/3ltkami...
This comment is pretty depressing, but it seems to be the path we're headed down:
> It's bad enough that people think fake videos are real, but they also now think real videos are fake. My channel is all wildlife that I filmed myself in my own yard, and I've had people leaving comments that it's AI, because the lighting is too pretty or the bird is too cute. The real world is pretty and cute all the time, guys! That's why I'm filming it!
Combine this with selecting only what you want to believe and you can say any video/image that goes against your "facts" is "fake AI". We already have some people in pretty powerful positions doing this to manipulate their bases.
I have no idea how such a thing would work.
And annoyed and suspicious techies can use it to check other people's content and report them as fake.
Yeah, there are a lot of dumb people who want to be deceived. But would be good for the rest of us to have some tools.
And this is the crux of the issue - we are beelining to a world where EVERYTHING gets an AI filter in front of it. In a few years there will be no authentic content at all.
You don't have to be vague. Let's be specific. The President of the United States implied a very real voiceover of President Reagan was AI. Reagan was talking about the fallacy of tariffs as engines of economic growth, and it was used in an ad by the government of Ontario to sow divide within Republicans. It worked, and the President was nakedly mad at being told by daddy Reagan.
This is an example of how people viscerally hate anyone passing off AI generated images and video as real.
https://youtu.be/Q0TpWitfxPk
Central banks don't print money[1], but investment banks do. Think about it like this: someone deposits $100. The bank pays interest; to make the money to pay that interest, ~$90 is loaned out to someone.
Now, I still have a bank slip that says $100 in the account, and the bank has given $90 of that to someone else. We now have $190 in the economy! The catch is, that money needs to be paid back, so when people need to call in that cash, suddenly the economy only has $10, because the loan needed to be paid back, causing a cash vacuum.
But that paying back is also where the profit is, because you can sell off the loan book and get all your money back, including future interest. So you have lent out $90 and sold the right to collect the repayments to someone else as a bond, so you now have $120, a profit of $30.
That $30 comes pretty much from nowhere. (there are caveats....)
Now my bank account, after say a year, has $104 in it; the bank has $26 pure profit, AND someone has a bond "worth" $90 which pays $8 a year. But guess what: that bond is also a store of value. So even though it's debt, it acts as money/value/whatever.
Now, the numbers and the percentages are made up, but the broad thrust is there.
[1] they do
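The deposit-and-relend loop described above compounds: if every loan is re-deposited and the bank keeps a 10% reserve each round, total deposits converge toward the initial deposit divided by the reserve ratio. A toy sketch of that geometric series (illustrative numbers, not a model of real banking):

```python
def total_money(initial_deposit, reserve_ratio, rounds=100):
    """Sum the deposits created as each loan is re-deposited and re-lent."""
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # fraction loaned out, then re-deposited
    return total

# $100 deposited with a 10% reserve approaches $1000 of total deposits.
print(round(total_money(100, 0.10), 2))
```

This is the classic money-multiplier picture: the $190 in the comment is just the first iteration of the loop.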
The practice was known as “zaitech”
> zaitech - financial engineering
> In 1984, Japan’s Ministry of Finance permitted companies to operate special accounts for their shareholdings, known as tokkin accounts. These accounts allowed companies to trade securities without paying capital gains tax on their profits.
> At the same time, Japanese companies were allowed to access the Eurobond market in London. Companies issued warrant bonds, a combination of traditional corporate bonds with an option (the “warrant”) to purchase shares in the company at a specified price before expiry. Since Japanese shares were rising, the warrants became more valuable, allowing companies to issue bonds with low-interest payments.
> The companies, in turn, placed the money they raised into their tokkin accounts that invested in the stock market. Note the circularity: companies raised money by selling warrants that relied on increasing stock prices, which was used to buy more shares, thus increasing their gains from investing in the stock market.
https://www.capitalmind.in/insights/lost-decades-japan-1980s...
OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.
But we know that growth in the models is not exponential; it's much closer to logarithmic. So they spend ever more equity for ever smaller gains.
The ad spend was a merry-go-round; this is a flywheel whose turning grinds its gears until it's a smooth burr. The math of the rising stock prices only begins to make sense if there is a possible breakthrough that changes the flywheel into a rocket, but as it stands it's running a lemonade stand where you reinvest profits into lemons that give out less juice.
In that sense it makes sense to keep spending billions even if model development is nearing diminishing returns: it forces the competition to do the same, and in that game victory belongs to the guy with deeper pockets.
Investors know that, too. A lot of the startup business is a popularity contest: number one is more attractive for the sheer fact of being number one. If you’re a very rational investor and don’t believe in the product, you still have to play this game because others are playing it, making it true. The vortex will not stop unless limited partners start pushing back.
This can go either way. For databases, open-source tools prevailed, and the commercial activity moved to hosting those tools. But enterprise software integration might end up mostly proprietary.
Citation needed. This is completely untrue AFAIK. They've claimed that inference is profitable, but not that they are making a profit when training costs are included.
The new OpenAI browser integration would be an example. Mostly the same model, but with a whole new channel of potential customers and lock in.
What _could_ prevent this from happening is the lack of available data today - everybody and their dog is trying to keep crawlers off, or make sure their data is no longer "safe"/"easy" to be used to train with.
Even if the model training part becomes less worthwhile, you can still use the data centers for serving API calls from customers.
The models are already useful for many applications, and they are being integrated into more business and consumer products every day.
Adoption is what will turn the flywheel into a rocket.
Power companies are even constructing or recommissioning power plants specifically to meet the needs of these data centers.
All of these investments have significant benefits over a long period of time. You can keep on upgrading GPUs as needed once you have the data center built.
They are clearly quite profitable as well, even if the chips inside are quickly depreciating assets. AWS and Azure make massive profits for Amazon and Microsoft.
The other difference (besides Sam's deal making ability) is, willing investors: Nvidia's stock rally leaves it with a LOT of room to fund big bets right now. While in Oracle's case, they probably see GenAI as a way to go big in the Enterprise Cloud business.
And then what happens if the stock collapses?
If they don't then they're spending a ton of money to level up models and tech now, but others will eventually catch up and their margins will vanish.
This will be true if (as I believe) AI will plateau as we run out of training data. As this happens, CPU process improvements and increased competition in the AI chip / GPU space will make it progressively cheaper to train and run large models. Eventually the cost of making models equivalent in power to OpenAI's models drops geometrically to the point that many organizations can do it... maybe even eventually groups of individuals with crowdfunding.
OpenAI's current big spending is helping bootstrap this by creating huge demand for silicon, and that is deflationary in terms of the cost of compute. The more money gets dumped into making faster cheaper AI chips the cheaper it gets for someone else to train GPT-5+ competitors.
The question is whether there is a network effect moat similar to the strong network effect moats around OSes, social media, and platforms. I'm not convinced this will be the case with AI because AI is good at dealing with imprecision. Switching out OpenAI for Anthropic or Mistral or Google or an open model hosted on commodity cloud is potentially quite easy because you can just prompt the other model to behave the same way... assuming it's similar in power.
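The low switching cost claimed above can be sketched: many providers expose OpenAI-compatible chat endpoints, so swapping vendors is often just a base URL and model name change. The URLs and model IDs below are placeholders, and this only builds the request payload rather than sending it:

```python
def build_chat_request(base_url, model, prompt):
    """Assemble an OpenAI-style chat request; provider is just configuration."""
    return {
        "url": f"{base_url}/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# Same shape, different vendor: only the config values change.
req_a = build_chat_request("https://api.openai.com/v1", "gpt-4o", "Summarize X")
req_b = build_chat_request("https://example-host.invalid/v1", "open-model", "Summarize X")
```

If the request schema is identical across providers, the moat has to come from model quality or data, not integration lock-in.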
Why would they run out of training data? They needed external data to bootstrap, now it's going directly to them through chatgpt or codex.
I’m thinking they eventually figure out who is the source of good data for a given domain, maybe.
Even if that is solved, models are terrible at long tail.
Or not - there's still knowledge in people's heads that is not bleeding into AI chat.
One implication here is that chats will morph to elicit more conversation to keep mining that mine. Which may lead to the need to enrage users to keep engagement.
This is a pricey machine though. But 5-10 years from now I can imagine a mid-range machine running 200-400B models at a usable speed.
Even if that weren't true having your software be cheaper to run is not a bad thing. It makes the software more valuable in the long run.
There are physical products involved, but the situation otherwise feels very similar to ads prior to dotcom.
That's capital markets working as intended. It's not necessarily doomed to end in a fiery crash, although corrections along the way are a natural part of the process.
It seems very bubbly to me, but not dotcom level bubbly. Not yet anyway. Maybe we're in 1998 right now.
Things are worth what people are willing to pay for them. And that can change over time.
Sentiment matters more than fundamental value in the short term.
Long term, on a timescale of a decade or more, it’s different.
That ultimately wouldn't be a big deal if the paper valuation from the trade didn't matter. As it stands, though, both parties can log it as both revenue and expenses, and being public companies, their valuation, and the debt they can borrow against it, is based in part on revenue numbers. If the numbers were meaningless, who cares; but they aren't meaningless, and at such a scale they can impact the entire economy.
The thing is: you've paid nothing - all you did was trade pets and played an accounting trick to make them seem more valuable than they are.
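The accounting trick in the pets analogy can be shown as a toy ledger: each side books the swap as both revenue and expense at an agreed value, while net cash movement is zero. A hypothetical sketch, not how any real company's books are kept:

```python
def book_round_trip(agreed_value):
    """Both parties record the swap as revenue and expense; no cash moves."""
    ledger_a = {"revenue": agreed_value, "expense": agreed_value, "cash_flow": 0}
    ledger_b = {"revenue": agreed_value, "expense": agreed_value, "cash_flow": 0}
    return ledger_a, ledger_b

a, b = book_round_trip(1_000_000_000)
print(a["revenue"], a["cash_flow"])  # 1000000000 0
```

Headline revenue grows on both sides even though the combined economic position is unchanged, which is exactly why revenue-driven valuations can be distorted by such deals.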
I don't tend to benefit from my predictions as things always take longer to unfold than I think they will, but I'm beyond bearish at present. I'd rather play blackjack.
I’ve made that mistake already.
I’m nervous about the economic data and the sky high valuations, but I’ll invest with the trend until the trend changes.
Not? Money is thrown at people without really looking at the details, just trying to get in on the hype train? That's exactly what the dotcom bubble felt like.
Nowhere near that level. There’s real demand and real revenue this time.
It won’t grow as fast as investors expect, which makes it a bubble if I’m right about that. But not comparable to the dotcom bubble. Not yet anyway.
P/E ratios of 50 make no sense; there is no justification for such a ratio. At best we can ignore the ratio and say P/E ratios are only useful in certain situations, and this isn't one of them.
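One way to read a P/E of 50: at flat earnings, the price equals 50 years of current earnings, and only strong earnings growth shortens that payback. A rough sketch of that arithmetic (the growth rates are hypothetical):

```python
def payback_years(pe, g):
    """Years of (growing) earnings needed to earn back the purchase price."""
    annual = 1.0 / pe   # first-year earnings per dollar of price
    earned, years = 0.0, 0
    while earned + 1e-9 < 1.0:
        earned += annual
        annual *= 1 + g  # earnings grow at rate g each year
        years += 1
    return years

print(payback_years(50, 0.00))  # 50 years at flat earnings
print(payback_years(50, 0.15))  # far fewer with 15% annual growth
```

So a P/E of 50 is implicitly a bet on sustained double-digit growth, which is the situation where the bare ratio stops being informative on its own.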
Imagine if we applied similar logic to other potential concerns. Is a genocide of 500,000 people okay because others have done drastically more?
If you have a better measure, share it. I trust data more than your or my feelings on the matter.
Capital markets weren't intended for round trip schemes. If a company on paper hands 100B to another company who gives it back to the first company, that money never existed and that is capital markets being defrauded rather than working as expected.
Ugh I hate it so much, but you're right, it's coming.
I've started to wonder why we see so few companies do this. It's always "evil company lobbying to harm its customers and the nation." Companies are made up of people, and for myself, if I were at a company I would be pushing to lobby on behalf of consumers, to keep a moral center and sleep at night. I am strongly for making money, but there are certain things I am not willing to do for it.
Targeted advertising is one of these things that I believe deserves to fully die. I have nothing against general analytics, nor gathering data about trends etc, but stalking every single person on the internet 24/7 is something people are put in jail for if they do it in person.
1) Google Search is now 99% crap that nobody wants, and even the AI answers are largely crap,
2) I believe somebody is going to eventually realize that search engines are stupid and improve on them. The whole idea of a single text box where you type some words and the search engine reads your mind to figure out the one thing you wanted, and then gives you one generic answer, is crap. We've just been blind to this because we don't see any other answer to realize we've been getting crap.
If I type in "when did MMS come out", Google will tell me when the candy product M&M's came out. But I wanted to know when the Multimedia Messaging Service was released. At some point somebody is going to realize that you can't actually tell what the hell the person wants from these simple queries alone. The computer needs to ask you questions to narrow down the field. That's sometimes what happens in ChatGPT, but it can be greatly improved with simple buttons/drop-downs/filters/etc. I think it'll also be improved by more dynamic and continuous voice input for context. (I notice Google Search now has audio input; I wonder if that came in after ChatGPT? Wayback Machine shows it starting in mid-2024) When they eventually implement all this, and people realize it's a million times better than what Google has, then Google will be playing catch-up.
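The "ask before answering" idea above can be sketched as a tiny disambiguation step: an ambiguous query maps to candidate interpretations the UI could offer as buttons or drop-downs. The data and function names here are made up for illustration:

```python
# Toy lookup table of known-ambiguous queries (hypothetical data).
INTERPRETATIONS = {
    "mms": [
        "M&M's (candy brand)",
        "Multimedia Messaging Service (phone standard)",
    ],
}

def disambiguate(query):
    """Return candidate readings of a query instead of guessing one."""
    return INTERPRETATIONS.get(query.strip().lower(), [query])

print(disambiguate("when did MMS come out".split()[-3]))  # falls through
print(disambiguate("MMS"))  # two candidates the UI could present
```

A real system would derive candidates from an index or a model rather than a table, but the interaction pattern is the point: narrow the intent before committing to one answer.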
I'm commenting here in case a large crash occurs, to have a nice relic of the zeitgeist of the time.
https://time.com/archive/6931645/how-the-once-luminous-lucen...
The customers bought real equipment that was claimed to be required for the "exponential growth" of the Internet. It is very much like building data centers.
I wonder how they felt during the .com era.
That's only like 1/8th of the flywheel, though.
It is at the very least highly debatable how much their core technology is improving from generation to generation despite the ballooning costs.
2020: https://www.youtube.com/watch?v=rpiZ0DkHeGE
2019: https://www.cadtm.org/spip.php?page=imprimer&id_article=1732...
This boom is a data center boom with AI as the software layer/driver. It potentially has a lot longer to run, even though everyone is freaking out now. If you believe AI is rebuilding compute, then this changes our compute paradigm in the future, as long as we don't get an over-leveraged build-out without revenue coming in the door. I think we are seeing a lot of revenue come in for certain applications.
The companies that are all smoke and mirrors, built on ChatGPT with little defensibility, are probably the same as the ones you are referring to in the current era. Or the AI tooling companies.
To be clear circular deal flow is not a good look.
I can see both sides, bull and bear, at this moment.
While it was sorta legal (at the time) it was not ethical and led to a massive collapse of the #1 company at the time.
Makes you wonder if AI is in such a bubble. (It is).
Or maybe not; nobody knows the future any more than the next guy in line.
what could possibly go wrong
This is bad. We should not shrug our shoulders and go "Oh ho, this is how the game is played" as though we can substitute cynicism for wisdom. We should say "this is bad, this is a moral hazard, and we should imprison and impoverish those who keep trying it".
Or we'll get more.
* stock prices increasing more than the non-existent money being burnt
* they are now too big to fail - turn on the real money printers and feed it directly into their bank accounts so the Chinese/Russians/Iranians/Boogeymen don't kill us all
Keep in mind also that the models are going to continue improving, if only on cost. Just a significant cost reduction allows for more "thinking" mode use.
Most of the reports about how useless LLMs are came from older models being used by people who don't know how to use LLMs. I'm not someone who thinks they're perfect or even great yet, but they're not dirt.
And, well, nobody knows if it is providing real value. We know it's doing something and has some value WE attached to it. We don't know what the real value is, we're just speculating.
Now we’re creating jobs!
I can spin up a strong ML team through hiring in probably 6-12 months with the right funding. Building a chip fab and getting it to a sensible yield would take 3-5 years, significantly more funding, strong supply lines, etc.
Not sure what to call this except "HN hubris" or something.
There are hundreds of companies who thought (and still think) the exact same thing, and even after 24 months or more of "the right funding" they still haven't delivered the results.
I think you're misunderstanding how difficult all of this is, if you think it's merely a money problem. Otherwise we'd see SOTA models from new groups every month, which we obviously aren't, we have a few big labs iteratively progressing SOTA, with some upstarts appearing sometimes (DeepSeek, Kimi et al) but it isn't as easy as you're trying to make it out to be.
As you mentioned, multiple no-name Chinese companies have done it and published many of their results. There is a commodity recipe for dense transformer training. The difference between the Chinese and US labs is that they have fewer data restrictions.
I think people overindex on the Meta example. It’s hard to fully understand why Meta/llama have failed as hard as they have - but they are an outlier case. Microsoft AI only just started their efforts in earnest and are already beating Meta shockingly.
Build a chip fab? I’ve got no idea where to start, where to even find people to hire, and i know the equipment we’d need to acquire would be also quite difficult to get at any price.
Mark Zuckerberg would like a word with you
247 more comments available on Hacker News