The Next Chapter of the Microsoft–OpenAI Partnership
Posted 2 months ago · Active 2 months ago
openai.com · Tech · Story · High profile
Heated · Mixed · Debate: 80/100
Key topics
Microsoft
OpenAI
AGI
AI Partnership
Microsoft and OpenAI announced a new chapter in their partnership, with OpenAI gaining more independence and Microsoft maintaining its IP rights, sparking debate about the implications of their deal and the future of AGI.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 5m after posting
Peak period: 114 comments (0-6h)
Avg / period: 16
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
1. Story posted: Oct 28, 2025 at 9:05 AM EDT (2 months ago)
2. First comment: Oct 28, 2025 at 9:10 AM EDT (5m after posting)
3. Peak activity: 114 comments in the 0-6h window, the hottest stretch of the conversation
4. Latest activity: Oct 31, 2025 at 7:46 AM EDT (2 months ago)
ID: 45732350 · Type: story · Last synced: 11/20/2025, 8:14:16 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
https://blogs.microsoft.com/blog/2025/10/28/the-next-chapter...
Also: Built to Benefit Everyone — by Bret Taylor, Chair of the OpenAI Board of Directors
https://openai.com/index/built-to-benefit-everyone
What's my share then?
I have no idea what @sama is doing but he's doing it quite well.
I wonder what criteria that panel will use to define/resolve this.
It only just became obvious to me that, to them, it's a question of when, not if, in large part because of the MS deal.
Their next big move in the chess game will be to "declare" AGI.
Nevertheless, I've been wondering of late. How will we know when AGI is accomplished? In the books or movies, it's always been handwaved or described in a way that made it seem like it was obvious to all. For example, in The Matrix there's the line "We marveled at our own magnificence as we gave birth to AI." It was a very obvious event that nobody could question in that story. In reality though? I'm starting to think it's just going to be more of a gradual thing, like increasing the resolution of our TVs until you can't tell it's not a window any longer.
It's certainly not a specific thing that can be accomplished. AGI is a useful name for a badly defined concept, but any objective application of it (like in a contract) is just a stupid thing done by people who could barely be described as having the natural variety of GI.
'as we have traditionally understood it' is doing a lot of heavy lifting there
https://blog.samaltman.com/reflections#:~:text=We%20believe%...
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
Just redefine the terms into something that's easy to accomplish but far from the definition of the terms/words/promises.
>This is an important detail because Microsoft loses access to OpenAI’s technology when the startup reaches AGI, a nebulous term that means different things to everyone.
Not sure how OpenAI feels about that.
Dot-com bubble all over again
[0] - https://openai.com/charter/
We're talking about things that would make AGI recognizable as AGI, in the "I know it when I see it" sense.
So things we think about when the word AGI comes up: an AI-driven commercial entity selling AI-designed services or products, an AI-driven portfolio manager trading AI-selected stocks, an AI-made movie doing well at the box office, an AI-made videogame selling loads, AI-won tournament prizes at computationally difficult games that the AI somehow autonomously chose to take part in, etc.
Most probably a combination of these and more.
I kind of meant this as a joke as I typed it, but by the end I almost wanted to quit the tech industry altogether.
>"When a measure becomes a target, it ceases to be a good measure"
What appalls me is that companies are doing this stuff in plain sight. In the 1920s before the crash, were companies this brazen or did they try to hide it better?
Aren't we humans supposed to have GI? Maybe you're conflating AGI and ASI.
Show me where GI is and how to measure it in a way that isn't just "it's however humans think"
Supposed by humans, who might not be aware of their own limitations.
If AGI were just around the corner, presumably you wouldn't bother signing this deal today?
Maybe I’m reading too much into it.
OpenAI’s Jakob Pachocki said on a call today that he expects that AI is “less than a decade away from superintelligence”
Microsoft’s IP rights for both models and products are extended through 2032 and now includes models post-AGI...
To me, this suggests a further dilution of the term "AGI."
If you believe in a hard takeoff, then ownership of assets post-AGI is pretty much meaningless; however, it protects Microsoft from an early declaration of AGI by OpenAI.
"I just wanted you to know that you can't just say the word "AGI" and expect anything to happen.
- Michael Scott: I didn't say it. I declared it
A group of ex frontier lab employees? You could declare AGI today. A more diverse group across academia and industry might actually have some backbone and be able to stand up to OpenAI.
Spare me. Sam has been talking about ChatGPT already being AGI for ages, meanwhile still peddling this duplicitous talk about how AGI is coming despite it apparently already being here. Can we act like grownups and treat this like a normal tool? No, no we cannot, for Sam is a hype merchant.
The promise of AGI is that you could prompt the LLM "Prove that the Riemann Hypothesis is either true or false" and the LLM would generate a valid mathematical proof. However, if you throw it into ChatGPT what you actually get is "Nobody else has solved this proof yet and I can't either."
And that's the issue. These LLMs aren't capable of reason, only regurgitation. And they aren't moving towards reason.
Until LLMs got popular, we would have called that reasoning skills. Not surpassing humans but better than many humans within a small context.
I don't mean that I have a higher opinion of LLM intelligence than you do, but perhaps I have a lower opinion of what human intelligence is. How many of us do much more than regurgitate and tweak? Science has taken hundreds of years to develop.
The real question is: when do knowledge workers lose their jobs? That is close enough to "AGI" in its consequences for society, Riemann Hypothesis or not.
> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.
So, can you (and everyone you know) be replaced at work by a subscription yet? If not, it's not AGI I guess.
Perhaps their big bet is that their partnership with Jony Ive will create the first post-phone hardware device that consumers attach themselves to, and then build an ecosystem around that?
What's the value in investing in a smaller company and then giving up things produced off that investment when the company grows?
Having a customer locked in to buying $250bn of Azure services is a fairly big benefit.
"Microsoft will no longer have a right of first refusal to be OpenAI’s compute provider."
Seems like a loss to me!
An investor can be stubborn about retaining all rights previously negotiated and never give them up... but that absolutist position doesn't mean anything if the investment fails.
OpenAI needs many more billions to cover many more years of expected losses. Microsoft itself doesn't want to invest any more money. Additional outside investors don't want to add more billions in funding unless Microsoft was willing to give up a few rights so that OpenAI has a better competitive position against Google Gemini, Anthropic, Grok etc.
When a startup is losing money and desperately needs more capital, a new round of investors can chip away at rights the previous investor(s) had. Why would previous original investors voluntarily agree to give up any rights?!? Because their investment is at risk if the startup doesn't get a lot more money. If the original investor doesn't want to re-invest again and would rather others foot the bill, they sometimes have to be a little flexible on their rights for that to happen.
This looks more like Microsoft ensuring that they'll win regardless of how OpenAI fares in the next four to six years.
It seems like Microsoft stock is then the most straightforward way to invest in OpenAI pre-IPO.
This also confirms the $500 billion valuation making OpenAI the most valuable private startup in the world.
Now many of the main AI companies have decent ownership by public companies or are already public.
- OpenAI -> Microsoft (27%)
- Anthropic -> Amazon (15-19% est), Alphabet/Google (14%)
Then the chip layer is largely already public: Nvidia. Plus AMD and Broadcom.
Clouds too: Oracle, Alphabet/GCP, Microsoft/Azure, CoreWeave.
Also, you have to consider the size of Microsoft relative to its ownership of OpenAI, future dilution, and how Microsoft itself will fare in the future. If, say, Microsoft is on a path towards decreasing relevance/marketshare/profitability, any gains from its stake in OpenAI may be offset by its diminishing fortunes.
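For a rough sense of scale on the point above, here is a minimal back-of-the-envelope sketch in Python. The $500B valuation and 27% stake are the figures from the thread; the Microsoft market cap and dilution factor are illustrative assumptions, not reported numbers.

```python
# Back-of-the-envelope sketch (not from the thread): what a 27% stake in a
# $500B OpenAI implies relative to Microsoft's own size. The valuation and
# stake come from the comments above; the market cap and dilution factor
# are assumed placeholders -- adjust to taste.

OPENAI_VALUATION = 500e9    # $500B, per the thread
MSFT_STAKE = 0.27           # 27% ownership, per the thread
MSFT_MARKET_CAP = 3.5e12    # assumed ~$3.5T market cap (placeholder)
FUTURE_DILUTION = 0.20      # assume the stake is diluted 20% by later rounds

stake_value = OPENAI_VALUATION * MSFT_STAKE * (1 - FUTURE_DILUTION)
share_of_msft = stake_value / MSFT_MARKET_CAP

print(f"Implied stake value: ${stake_value / 1e9:.0f}B")
print(f"As a share of Microsoft's market cap: {share_of_msft:.1%}")
# => roughly $108B, i.e. about 3% of Microsoft under these assumptions --
#    which is why the comment above notes that Microsoft's own trajectory
#    can easily swamp any upside from the OpenAI stake.
```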
That’s a big if. I see a lot of people in big enterprises who would never even consider anything other than Microsoft and Azure.
One thing I will say is that the Azure documentation is some of the most cumbersome to navigate I've ever experienced; there's no dearth of information in there, you just have to know how to find it.
Couldn’t they just throw money at the problem? Or fire the criminals who designed it?
b) You design for the audience. The complexity that person-who-would-ever-use-GCP will deal with is far beyond what the average internet user would ever endure.
Windows workstations and servers are now "joined" to Azure instead, where they used to be joined to domain controller servers. Microsoft will soon enough stop supporting that older domain controller design (soon as in a decade).
Both of which seem to be true.
Hell, I'm still amazed they got away with the Office-licenses-only-usable-on-Azure bullshit, but here we are.
Because things are going to change soon. What nobody knows is exactly what things, and in what direction.
The biggest real threat to MS's position is the Trump administration pushing foreign customers away with stuff like shutting down the ICC's Microsoft accounts, but that'll hurt AWS and Google just as much. (The winners of that will be Alibaba and other foreign providers that can't compete on full enterprise stacks today.)
We were cloud shopping, and they came by as well with a REALLY good discount. Luckily our CTO was massively afraid of what would happen after that discount ran out.
Some people's convictions and lack of reading comprehension skills are certainly wild.
https://www.cbsnews.com/news/wall-street-says-yahoos-worth-l...
Because if you buy the tokens you presumably do not own the company. And if you buy the company you hopefully don’t own the tokens - nor the assets that back the tokens.
I have no interest in crypto, just wanted to mention this which was surprising to me when I heard it.
So somehow this crypto firm and its investor think it can get a better return than Blackstone with a fraction of the assets. Now, sure, developing market and all that. But really? If it scaled to Blackstone assets level of $1 trillion then you’d expect the platform valuation to scale, perhaps not in lockstep but at least somewhat. So with $1 trillion in collateralised crypto does that make Tether worth $1.5 trillion? I’d love someone to explain that.
Now the main question is how sustainable these earnings are, whether they will continue to be a dominant player in stablecoins, and whether there will continue to be demand for them.
Another difference to Blackstone is Tether takes 100% of the returns on the treasuries backing the coins, whereas Blackstone gets a small fee from AUM, and their goal is to make money for their investor clients.
If crypto really wanted to be decentralized, they'd find a way to have stablecoins backed by whatever assets, where the returns on those assets still went to the stablecoin holders rather than to some big centralized company.
https://www.reuters.com/business/crypto-firm-tether-eyes-500...
I struggle to see how those numbers stack up.
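To make the comparison in the comments above concrete, here is a rough sketch contrasting a keep-the-full-yield model (Tether-style) with a fee-on-AUM model (Blackstone-style). The yield, fee rate, and reserve figure are illustrative assumptions; only the ~$1T AUM figure echoes the thread, so treat the outputs as order-of-magnitude at best.

```python
# Rough illustration (assumptions mine, not from the thread) of why the two
# business models differ: a stablecoin issuer keeps the full yield on the
# reserves backing its coins, while a traditional asset manager earns only
# a small fee on assets under management.

TREASURY_YIELD = 0.045   # assumed ~4.5% yield on short-dated treasuries
MGMT_FEE = 0.01          # assumed ~1% blended fee for a traditional manager

def annual_revenue(assets: float, take_rate: float) -> float:
    """Revenue modeled as a simple take-rate on the asset base."""
    return assets * take_rate

tether_reserves = 100e9  # assumed order of magnitude for reserves today
blackstone_aum = 1e12    # ~$1T AUM, per the comment above

print(f"Full-yield revenue on ${tether_reserves / 1e9:.0f}B reserves: "
      f"${annual_revenue(tether_reserves, TREASURY_YIELD) / 1e9:.1f}B/yr")
print(f"Fee-based revenue on ${blackstone_aum / 1e12:.0f}T AUM: "
      f"${annual_revenue(blackstone_aum, MGMT_FEE) / 1e9:.1f}B/yr")
# Whether revenue like that justifies a ~$500B platform valuation then comes
# down to the earnings multiple you apply and how durable stablecoin demand
# turns out to be -- exactly the question the comments above are raising.
```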
SpaceX?
No hard proof it's a bubble. Bubbles can only be proved to have existed after they pop.
All of those investing in the stock market and thinking they are getting rich may yet realize those were only paper gains, when they still can't afford anything despite the big numbers in their investment accounts.
Relevant and under-appreciated.
OpenAI wearable eyeglasses incoming... (audio+cellular first, AR/camera second?)
AI is not making enough money to cover its costs, and it will take a decade or so before it does.
More likely Americans’ tax dollars will be shoveled into the hole.
While not unexpected, this is exciting and intriguing.
And of course, looking forward to Microsoft's Zune AI.
By the time we get 30% global unemployment and another financial crash somewhere along the way in the next decade, OpenAI will long since have declared "AGI".
Likely within the 2030-2035 timeframe.
The question is whether this reflects an increase or a decrease in OpenAI's confidence about achieving AGI.
320 more comments available on Hacker News