OpenAI Signs $38B Cloud Computing Deal with Amazon
Posted 2 months ago · Active about 2 months ago
nytimes.com · Tech story · High profile
skeptical · negative
Debate · 85/100
Key topics
Artificial Intelligence
Cloud Computing
Market Bubble
OpenAI has signed a $38B cloud computing deal with Amazon, sparking concerns about the financial sustainability of such large deals and the potential for a market bubble.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 14m
Peak period: 107 comments in 0-6h
Avg / period: 26.7
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
1. Story posted: Nov 3, 2025 at 9:20 AM EST (2 months ago)
2. First comment: Nov 3, 2025 at 9:33 AM EST (14m after posting)
3. Peak activity: 107 comments in 0-6h (hottest window of the conversation)
4. Latest activity: Nov 5, 2025 at 10:51 PM EST (about 2 months ago)
ID: 45799211 · Type: story · Last synced: 11/20/2025, 8:32:40 PM
But that feels weird combined with this. You can buy OpenAI API access which is served off of AWS infrastructure, but you can't bill for it through AWS? (I mean, lots of companies work like that. but Microsoft is betting that a lot of people move regular workloads to Azure so they can have centralized billing for inference and their other stuff?)
> Non-API products may be served on any cloud provider.
I am not sure if Bedrock counts. There are 2 OpenAI models already there: https://aws.amazon.com/blogs/aws/openai-open-weight-models-n...
Recent analysis shows AWS is burning through Amazon's free cash on AI buildouts, which is concerning: if the bubble pops, Amazon is left holding the bag of invested capital that isn't making returns.
Amazon is a bit late to the party on these headlines, and lots of unanswered questions about what’s really going on here.
Lots of questions about whether this makes sense, and it's highly likely Amazon never gets the full $38B in cash from OpenAI out of this.
[0] https://www.tomshardware.com/tech-industry/artificial-intell...
I remember when everyone was racing to produce "datacenter in a shipping container" solutions. I just laughed, because apparently nobody bothered to check whether you could actually plug one in anywhere.
In what context? This isn't fashion, being the 2nd mover has benefits which often outweigh the costs.
https://www.tomshardware.com/tech-industry/artificial-intell...
OpenAI is doing the same with compute. They're going to have more compute than everyone else combined. It will give them the scale and war chest to drive everyone else out. Every AI company is going to end up being a wrapper around them. And OpenAI will slowly take that value too, either via acquisition or by cloning successful products.
OpenAI and Anthropic are signing large deals with Google and Amazon for compute resources, but ultimately it means that Google and Amazon will own a ton of compute. Is OpenAI paying Amazon's cap ex just so Amazon can invest and end up owning what OpenAI needs over the long term?
For those paying Google: are they giving Google the money it needs to further invest in its TPUs, handing it a huge advantage?
Google is a viable competitor here.
Everyone else is missing part of the puzzle. They theoretically could compete but they're behind with no obvious way of catching up.
Amazon specifically is in a position similar to where they were with mobile. They put out a competing phone but with no clear advantage it flopped. They could put out their own LLM but they're late. They'd have to put out a product that is better enough to overcome consumer inertia. They have no real edge or advantage over OpenAI/Google to make that happen.
Theoretically they could back a competitor like Anthropic, but what's the point? They look like an also-ran these days, and ultimately who wins doesn't affect Amazon's core businesses.
While I can see Anthropic or another player leading on API usage, it is unlikely that Anthropic leads in raw consumer usage, as Microsoft has the Office AI integration market locked down.
Every image/video/text post on a meta app is essentially subsidized by oai/gemini/anthropic as they are all losing money on inference. Meta is getting more engagement and ad sales through these subsidized genai image content posts.
Long term, they need to catch up, and training/inference costs need to drop enough that each genai post costs less than the net profit on the ads, but they're in a great position to bridge the gap.
The end of all of this is ad sales. Google and Meta are still the leaders of this. OpenAI needs a social engagement platform or it is only going to take a slice of Google.
Do you have any sources backing this? As in "more engagement and ad sales" relative to what they would get with no genai content
It's still all about the (yet to be collected) data and advancements in architecture, and OAI doesn't have anything substantial there.
A relatively localized, limited lived experience apparently conveys a lot that LLM input does not - there's an architecture problem (or a compute constraint).
No amount of reddit posts and H200s will result in a model that can cure cancer or drive high-throughput waste filtering or precision agriculture.
It's slow as balls as of late, though. So I use a lot of Sonnet 4.5 just because it doesn't involve all this waiting, even though I find Sonnet to be kinda lazy.
No, it’s Amazon that’s doing this. OpenAI is paying Amazon for the compute services, but it’s Amazon that’s building the capacity.
the race is for sure on: https://menlovc.com/perspective/2025-mid-year-llm-market-upd...
I started working in 1997 at the height of the dot com bubble. I thought it would go on forever but the second half of 2000 and 2001 was rough.
I know a lot of people designing AI accelerator chips. Everyone over 45 thinks we are in an AI bubble. It's the younger people that think growth is infinite.
I told them to diversify from their company stock but we'll see if they have listened after the bubble pops
I do worry what the other side of this looks like when the circular feedback loop driving hype up eventually reverses and drives things down with amplifying effect.
corporate would like you to find the difference between these two photos
Here, the clouds have pulled a trick to inflate their revenues with their own cashflows, and have not been punished yet for it by shareholders - except meta which is getting asked some difficult questions.
Not financial advice, obviously, but that's my personal outlook. I've said it before: Alphabet is probably the safest play long term as they haven't been infected by any NVIDIA or OpenAI deals (yet)
The other side to that coin is monetization. Google is dominant there as well. OpenAI can't yield that space to Google because it's how the value is extracted from the consumer.
They just didn’t like the chips is the most logical answer. Particularly given AWS has been doing everything they can to pump up interest, and this huge PR release doesn’t even mention it at all. That omission speaks volumes.
Anthropic is moving to Trainium[1], that will free Nvidia GPUs and allow AWS to rent those GPUs to OpenAI.
[1] https://finance.yahoo.com/news/amazon-says-anthropic-will-us...
I was at WeWork around the time of its downfall. I have a lot of opinions about how that place was run, but I can assure you that pre-pandemic they were buying up every office space because they were filling them with tenants. Not paying for offices was a result of tenants not paying during the pandemic.
That’s the same as what happened when WeWork was buying up office space pre-pandemic and then using handwavy nonsense like “Community Adjusted EBITDA” as part of the smoke and mirrors to pretend like there was an actual business there.
The pandemic expedited the pain, but the business model was broken and folks called BS long before Covid hit.
They're going to sell ads at the moment people are looking to buy stuff. It's the single most viable business model we've ever seen.
Besides, how are ads on ChatGPT supposed to work? If some student is asking it to write their paper for them, is ChatGPT going to stop in the middle of it and go "Hey, you know what sounds good right now? A nice bowl of soup..." Although admittedly that would make for some hilarious proof of people using AI for things they shouldn't...
ChatGPT will also probably sell ad infrastructure to inject ads, just like Google injects ads into search. They will probably pay out a little to websites that include the "ChatGPT" widget to integrate ChatGPT with their site, which also has ads.
Right now the barriers are technical for injecting ads into AI responses.
As an advanced research engine, knowing it will reliably recommend only sponsored products makes it worthless; worse, it will be primed to advocate for sponsored products.
Then the whole thing becomes a scam engine, because check out what Facebook ads look like today.
Regardless of whether that's true, it's clearly still a huge business opportunity. And you point out Facebook ads are a scam, yet they bring in $164B/year and growing. Regardless of value judgement, there's clearly a lot of money to chase.
Google/facebook do that today, because the content they're showing is created pre-ad, and the ads have to be injected after the fact.
With AI- the content is being generated in the same place that the ads are being injected, which allows us to be much more subtle about it.
How much do you think a car company would pay to put special training weight on their marketing materials? I would guess big money.
"While we're on the topic of self-harm, did you know the ABC Co Truck has the highest safety rating?"
https://openai.com/index/introducing-chatgpt-atlas/
> Besides, how are ads on ChatGPT supposed to work?
"How do I do XYZ?" "Product ABC can do XYZ for you."
Is it going to inject ads for indeed while a recruiter is using ChatGPT to summarize a stack of resumes?
If it only ever injects ads for specific requests how profitable would that even be? I understand clients would want their product to be recommended but if I only get the ad answer when prompting a certain way, can I the user avoid ads by asking questions a specific way?
This would create a ton of hesitation to use this for product recommendations if I knew ChatGPT wasn't using its extensive input for products and reviews and coming back with an objective answer for me.
I guess at this point would we even know the difference? Is it possible this is already happening?
Plus, like Google search, they have a ton of organic traffic. ChatGPT has replaced Google search as my starting point to investigate anything, and lots of that is related to things where I will eventually spend money.
I think the queries will fall into profitable (product recommendations) and non-profitable (writing an essay or code) buckets, just the way they do for Google. Probably the former will have a generous free tier and the latter will be largely paywalled. I don't know how they'll do that, but I imagine they'll find some way.
It's a mass consumer (software) product, and they need new revenue avenues; ads have a history of working well. Even Spotify, Netflix, Amazon Prime, and other companies that historically lack the ad infrastructure of Google or Facebook have increasingly profitable ad tiers.
OpenAI may be in the same situation: committed to spending $1.4T while enjoying a good revenue year this year, but then one bad thing and poof.
Or, well, they stated that the TCO of the compute they have commitments for is $1.4T, which is a somewhat strange phrasing. I assume it's due to it being a mix of self-owned vs. rental compute, and what they mean is the TCO to OpenAI rather than the TCO to the owner of the compute.
[0] https://x.com/sama/status/1983584366547829073
I get that folks are now just engaged in "keeping up with the Joneses" FOMO behavior, but none of this is making any sense.
The financial impact if the whole AI space loses even 50% of its current "valuation" will be huge. The financial impact of the whole AI space continuing at its current velocity is... More of whatever is going on now?
There's been some buzz around the official opening of the Grand Egyptian Museum, which I visited last month. That project cost 1.1 to 1.2B USD, double its original budget estimate, but the museum still looks fantastic and tangibly feels like it's worth a billion.
In contrast, all the money spent on AI just feels like Monopoly money. Where's the monument to its success? We could've built flying cars or gone back to the moon with this much money.
It's much less likely that I'd drive a flying car and there is 0 chance that I would be the one going to the moon if we spent the equivalent money on those things instead.
I currently pay 200 USD a month for AI, and my company pays about 1,200 USD for all employees to use it essentially without limit. I get AT LEAST 5x the return on value on that; I would happily multiply all those numbers by 5 and still pay it.
Domain knowledge, bug fixing, writing tests, fixing tests, spotting what’s incomplete, help visualising results, security review generation for human interpretation, writing boilerplate, and simpler refactors
It can’t do all of these things end to end itself, but with the right prompting and guidance holy smokes does it multiply my positive traits as a developer
> and I get AT LEAST 5x the return on value on that
You make $800 by paying OpenAI $200? Can you please explain how the value you put in is 5x, and how I can start making $800 more a month?
> holy smokes does it multiply my positive traits as a developer
But it’s not you doing the work. And by your own admission, anyone can eventually figure it out. So if anything you’ve lost traits and handed them to the Llm. As an employee you’re less entrenched and more replaceable.
I estimate that the additional work I can do is worth that much. It doesn't matter whether "I do it" or "the LLM does it" - it's both of us, but I'm responsible for the code (I always execute it, test it, and take responsibility for it). That's just my estimate. Also, what a ridiculous phrasing; the intent of what I'm saying is "I would pay a lot more for this because I personally see the value in it." That's a subjective judgement I'm making. I have no idea who you are - why would you assume that's a transferable objective measure that could simply be handed to you? AI is a multiplier on the human that uses it, and the quality of the output is hugely dependent on the communication skill of the human. You using AI and me using AI will produce different results with 100% certainty; one will be better, and it doesn't matter whose - I'm saying they will not be equal.
>But it’s not you doing the work. And by your own admission, anyone can eventually figure it out. So if anything you’ve lost traits and handed them to the Llm. As an employee you’re less entrenched and more replaceable.
So what? I'm results driven - the important thing is that the task gets done - it's not "ME" doing it OR the "LLM" doing it, it's Me AND the LLM. I'm still responsible if there's bugs in it, and I check it and make sure I understand it.
>As an employee you’re less entrenched and more replaceable.
I hate this attitude; it's the attitude of a very poor employee. It leads to gatekeeping, knowledge hoarding, and lots of other petty and defensive behaviour, and it's what people think when they view the world from a point of scarcity. I argue the other way: the additional productivity and tasks I get done with the assistance of LLMs make me a more valuable employee, so the business is incentivised to keep me. There's always more to do; it's just that we are now using chainsaws and not axes.
I disagree. I brought all this up because it seems you are confusing perceived, marketed/advertised value with actual value. Again, you did not become 5 times more valuable in reality to your employer, nor did you obtain 5x more money (literal value). You're comparing $200 of "value" - which is 200 dollars - to... time savings? Unmeasurable skill gains? This is the unclear part.
> I hate this attitute, this is an attitude of a very poor employee - It leads to gatekeeping and knowledge hoarding, and lots of other petty and defensive behaviour,
You may hate that attitude, but those people will still be employed long after the boss has sacked you for not taking enough responsibility for your LLM's mistakes. Entrenching yourself is really the way it's always worked, and the people who entrenched themselves didn't do it by relying on a tool to do their job. This is the world, and sadly LLMs don't do anything to unentrench the people making money.
All I am saying is enjoy your honeymoon period with your LLM. If that means creating apple and oranges definitions of "value" then comparing them directly as benefits, then more power to you.
I tagged the address with this conversation. No cheating by generating your results.
Lot of feeling going on in this comment, but that's not really how money works.
But I agree that the numbers are increasingly beyond reasonable comprehension
I'd be happy if the industry/stock market proves me wrong, but I can't see this ending any other way than with a major crash that makes the dot-com bust seem like a minor blip.
It doesn't come off as schadenfreude to me as much as it does the emotional clarity of accepting the oncoming train and knowing there's nothing you could have done to stop it. This brand of "just along for the ride" nihilism seems pretty damn common now.
We used to have lunch at the bar across the street and just about once or twice a week for several months, we'd walk in and there would be a table with about 15-20 people sitting around drinking and reminiscing about how they were going to change the world.
A lot of developers I know just completely left the industry and never came back.
If this crash exceeds that one? We're in for some seriously tough times.
ChatGPT has 800 million weekly users but only 10 million are paying.
Someone has to come up with $1.4 trillion in actual cash, fast, or this whole thing comes crashing down. Why? At the end of all this circular financing and deals are folks that actually want real cash (eg electricity utilities that aren’t going to accept OpenAI shares for payment).
If the above doesn't freak you out a bit at how bonkers this whole thing has become, then you need a reality check. "Selling ads" on ChatGPT ain't gonna close that hole.
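A rough scale check using the figures in the comments above (800M weekly users, 10M paying, $1.4T in commitments). The 8-year amortization window is my own assumption, purely for illustration:

```python
# Hypothetical sketch: how much annual revenue the commitments imply.
# The 8-year window is an assumption, not from any disclosed deal terms.
commitments = 1.4e12     # USD, total reported commitments
years = 8                # assumed amortization window
weekly_users = 800e6     # reported weekly users
paying_users = 10e6      # reported paying users

needed_per_year = commitments / years        # $175B/yr
per_user = needed_per_year / weekly_users    # ~$219 per weekly user/yr
per_payer = needed_per_year / paying_users   # ~$17,500 per paying user/yr

print(f"Needed per year: ${needed_per_year/1e9:.0f}B")
print(f"Per weekly user: ${per_user:.0f}/yr")
print(f"Per paying user: ${per_payer:,.0f}/yr")
```

Under those assumptions, the implied revenue per paying user dwarfs any existing subscription price, which is the commenter's point about ads being unlikely to close the gap.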
If you were actually the guys from The Big Short and you have strong conviction, you should short the market (literally, like the guys from The Big Short) and get really rich.
Money is the language they understand, so hit them where it hurts.
When you go long, you can still make money by being “sort of right” or “obliquely right” or “somewhat wrong but lucky”or by just collecting dividends if the market stays irrational long enough. If you short something you have to be exactly right (both about what will happen and precisely when) or your money will end up in the hands of the people you’re betting against. It’s not a symmetrical thing you can just switch back and forth on.
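The asymmetry described above can be sketched numerically; all prices here are hypothetical:

```python
# Why shorting is asymmetric: a long position's loss is capped at -100%
# of the stake, while a short position's loss grows without bound as the
# price rises before the bet pays off.

def long_return(entry: float, exit: float) -> float:
    """Fractional P&L of buying at `entry` and selling at `exit`."""
    return (exit - entry) / entry

def short_return(entry: float, exit: float) -> float:
    """Fractional P&L of shorting at `entry` and covering at `exit`."""
    return (entry - exit) / entry

# Long: the worst case is losing the entire stake.
assert long_return(100, 0) == -1.0     # -100%, capped
# Short: if the bubble inflates 3x before popping, you are wiped out
# twice over even though you were "right" about the eventual crash.
assert short_return(100, 300) == -2.0  # -200%, unbounded
```

This is the mechanical reason timing matters so much more for shorts than longs, as the Burry example below the comment illustrates.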
Also, IIUC the guys in The Big Short would've lost everything if the government stepped in sooner since the banks controlled the price of the CDSs and could've maintained the incorrect price if they had a bunch of extra cash.
Yeah. "Markets can remain irrational longer than you can remain solvent."
https://en.wikipedia.org/wiki/Michael_Burry had an investor panic and nearly lost everything. He was right, but he nearly got the timing wrong.
If no, and you thought it was a bubble, does the price of NVIDIA from 2 years ago (not from today) make sense to you now?
What if AI invents fusion power?
(Thanks for the downvotes I wanted to keep my karma at 69)
2. Outside of software, inventions have to be turned into physical things like power plants. That doesn’t happen overnight and is expensive.
3. The industry is already going through a power revolution in the form of battery + solar and it’s going to take a while for a new technology to climb the learning curve enough to be competitive.
4. What if AI gives us all a pony?
“Please don't comment about the voting on comments. It never does any good, and it makes boring reading.”
https://news.ycombinator.com/newsguidelines.html
It's certainly possible to imagine OpenAI eventually generating far more revenue than Google, even without anything close to AGI. For example, if they were to improve productivity of 10% of the economy by 10% and capture a third of that value for themselves, that would be more than enough. Alternatively, displacing Google as the go-to place for search and selling ads against that would likely generate at least Google levels of revenue. Or some combination of both.
Is this guaranteed to happen? Of course not. But it's not in "bonkers" territory either.
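The back-of-the-envelope math in the comment above, made explicit. The world-GDP figure is my assumption; the 10% / 10% / one-third split comes from the comment itself:

```python
# Sketch of the "productivity capture" scenario. World GDP of ~$100T is
# an assumed round number for illustration.
world_gdp = 100e12        # USD, assumed
affected_share = 0.10     # 10% of the economy uses the product
productivity_gain = 0.10  # that slice becomes 10% more productive
capture = 1 / 3           # OpenAI captures a third of the value created

value_created = world_gdp * affected_share * productivity_gain
openai_revenue = value_created * capture

print(f"Value created: ${value_created/1e9:.0f}B/yr")   # $1,000B/yr
print(f"Captured:      ${openai_revenue/1e9:.0f}B/yr")  # ~$333B/yr
```

Roughly $333B/yr of captured value under these assumptions, which is in the same ballpark as Google's current annual revenue, supporting the "not bonkers" framing.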
The Amazon deal is actually spread over 7 years. Other deals have different terms, but also spread over multiple years.
Deals like these have cancellation terms. OpenAI could presumably pay a fee and cancel in the future if their projections are too high and they don't need some of the compute from these deals.
The deals also include OpenAI shares. The deals are being made with companies that have sufficient revenue or even cash on hand to buy the compute and electricity.
The claim above that someone needs to come up with $1.4 trillion right now or everything will collapse isn't grounded in any real understanding of these deals. It's just adding up numbers and comparing them to a single annual revenue snapshot.
Even under the most bullish cases for AI, the real dollars required here look iffy at best.
I think we all know that a big part of the angle here is to keep the hype going until there's a liquidity event; folks will cash out, and after that they won't care what happens.
Search engines were never a user-friendly app to begin with. You had to know how to search well to get comprehensive answers, and the average person is not that meticulous. Google's product is inferior, believe it or not. Pretty soon there will be nothing normal about seeing a list of search results, so as far as facts are concerned, Google literally has a legacy app out in the wild.
So imagine that: Google would have to remove Search as they know it (remove their core business) and stand up an app that looks the same as all the new apps.
People might like one AI persona more than others, which means people will seek out all types of new apps. LLMs are the worst thing that could have happened to Google, quite frankly.
Google's biggest advancement in the last ~15 years is producing worse search results so that you spend more time engaging with Google and doing more searches, so that Google can show more ads. Facebook is similar in that they feed you tons of rage-bait, engagement spam, and things you don't like, infused with nuggets of what you actually want to see about your friends and interests. Just like a slot machine, the point is that you don't always get what you want, so there's a compulsion to use it because MAYBE you will get lucky.
OpenAI's potential for mooning hinges on creating a fusion of information and engagement where they can sell some sort of advertisement or influence. The problem of course is that the information and engagement is pretty much coming in the most expensive form possible.
The idea that the LLM is going to erode actual products people find useful enough to pay for is unlikely to come true. In particular, people are specifically paying for software because of its deterministic behavior. The LLM is by its nature extremely nondeterministic. That's fully in the realm of social media, search engines, etc. If you want a repeatable and predictable result, the LLM isn't really the go-to product.
I don’t disagree with you entirely, but I’d argue the second level apps are harder to chase because they get so specialized.
Death of Google (as everyone knows Google today) is a tricky one. It seems impossible to believe at this exact moment. It can sit next to IBM in the long run, no shame at all, amazing run.
OpenAI is worth at least half as much as Google. I foresee Google becoming like IBM, and these new LLM companies being the new generation of tech companies.
I'd be more worried about OpenAI surviving. Aside from the iffy finances, much of their top talent seems to leave after falling out with Altman.
This is “if we get 1% of the market” logic.
Of course, you must also make a convincing case for getting to that 1%.
Inherently, no. In practice, it's riddled with biases deep enough [1] to make it an informal fallacy.
"The competition in a large market, such as CRM software, is very tough," and "there are power laws which mean that you have to rank surprisingly high to get 1% of a market" [2]. Strategically, it ignores the necessity of establishing a beachhead in a small market, where "a small software company" has "a much better chance of getting a decent sized chunk."
[1] https://www.nature.com/articles/s41599-024-03403-9
[2] https://news.ycombinator.com/item?id=45804756
The fun part is to go back now and listen to Blake Lemoine interviews from summer 2022. That for me was the start of all this.
OpenAI has nothing resembling this ecosystem, and will never be nearly as valuable a place to buy ads. Replacing Google is probably the least realistic business plan for OpenAI - if that's what they're betting on, they're cooked.
These deals aren't for 100% payment up front. The deals also include stock, not just cash. So, no, they do not need to come up with $1.4 trillion in cash quickly.
This AWS deal is spread over 7 years. That's $5.4 billion per year, though I assume it's ramping up over time.
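The annual run rate implied by the headline number, with a ramp-up schedule that is purely my assumption (the comment only notes the deal likely ramps over time):

```python
# $38B spread over 7 years: flat rate vs. an assumed linear ramp.
total_commitment = 38e9   # USD, from the deal
years = 7

flat_rate = total_commitment / years
print(f"Flat annual rate: ${flat_rate/1e9:.2f}B/yr")  # ≈ $5.43B/yr

# Hypothetical linear ramp summing to the same total: year N gets a
# weight proportional to N, so early years commit far less cash.
total_weight = sum(range(1, years + 1))               # 1+2+...+7 = 28
ramp = [total_commitment * n / total_weight for n in range(1, years + 1)]
assert abs(sum(ramp) - total_commitment) < 1e-3
print(f"Year 1 under linear ramp: ${ramp[0]/1e9:.2f}B")
print(f"Year 7 under linear ramp: ${ramp[-1]/1e9:.2f}B")
```

The point is simply that the near-term cash requirement is a small fraction of the headline figure, which is the argument being made against the "$1.4T now" framing.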
> At the end of all this circular financing and deals are folks that actually want real cash (eg electricity utilities that aren’t going to accept OpenAI shares for payment).
Amazon's cash on hand is on the order of $100 billion. They also have constant revenue coming in. They will not have any problem accepting OpenAI shares and then paying electricity bills with cash.
These deals are also being done in the open with publicly traded companies. Investors can see the balance sheets and react accordingly in the stock price.
The one I found best documented [1] is Meta's SPV to fund its Hyperion DC in Louisiana, a deal that is 80% financed by the private credit firm Blue Owl. There is a lot of financial trickery in getting the SPV's debt counted by the ratings agencies as belonging to a different entity, so it doesn't land on Meta's books, while being treated by the market as something Meta will effectively back. But xAI's Memphis DC is also an SPV, and Microsoft is doing this as well. I'm not sure about AMZN, but the fact that we're starting to see this from their competitors suggests they will go this way too.
[1] By the invaluable Matt Levine, here: https://www.bloomberg.com/opinion/newsletters/2025-10-29/put... but the other major companies have their own SPVs.
Then Meta would do this in a wholly controlled, off-balance-sheet vehicle à la Enron. The fact that they're involving sidecars signals some respect for their rating.
Does that make any sense? No.
If the market collapses, I think Meta can technically just walk away: they lose access to those data centers (which they no longer want anyway), the SPV is stuck holding $X of assets against more than $X of liabilities, and the holders of the credit are on the hook, not Meta.
And investors are fine being on the hook because they get a higher return from the SPV bonds than from Meta bonds (risk-adjusted, it's probably the same return).
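The risk transfer described in the SPV comments above can be sketched as follows. All figures are invented for illustration, and this deliberately ignores any equity stake or lease obligations the sponsor might retain:

```python
# Hypothetical sketch of non-recourse SPV financing: if the sponsor
# walks away, the SPV's creditors, not the sponsor, absorb the gap
# between what the assets are worth and what the SPV owes.

def spv_outcome(asset_value: float, spv_debt: float):
    """Return (sponsor_loss, creditor_loss) if the sponsor walks away.

    Simplification: the sponsor is assumed to have no equity at risk
    and no recourse obligations, which real deals may not match.
    """
    shortfall = max(spv_debt - asset_value, 0.0)
    sponsor_loss = 0.0           # sponsor only loses access to the asset
    creditor_loss = shortfall    # creditors eat the difference
    return sponsor_loss, creditor_loss

# A data center financed with $24B of SPV debt, now worth only $15B:
sponsor, creditors = spv_outcome(asset_value=15e9, spv_debt=24e9)
assert sponsor == 0.0
assert creditors == 9e9          # $9B shortfall lands on the creditors
```

The higher coupon on the SPV bonds is the price creditors charge for sitting in exactly this position.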
70 more comments available on Hacker News