Meta's New A.I. Superstars Are Chafing Against the Rest of the Company
Key topics
The tension is palpable at Meta as the company's A.I. superstars clash with the rest of the organization. At the heart of the issue is a perceived disconnect between the A.I. researchers and the executives who prioritize the social media business - or, as some commenters pointed out, the ad business that drives it. While some, like mullingitover, argue that Meta's reliance on ad revenue is a limiting factor, others like AndrewDucker counter that alternative models, such as user-funded platforms like Mastodon and Dreamwidth, are possible, if challenging to scale. As the debate rages on, it becomes clear that the real question is whether Meta can reconcile its A.I. ambitions with its existing business model.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 12m after posting
- Peak period: 128 comments (Day 7)
- Avg per period: 24.2
Based on 145 loaded comments
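As a sanity check on the snapshot figures above: the reported average is consistent with the 145 loaded comments being bucketed into six time periods. The bucket count is an assumption, not stated in the snapshot.

```python
# Sanity-check of the snapshot's "Avg per period" figure.
# Assumption (not stated in the snapshot): the 145 loaded comments
# are split across 6 time periods.
loaded_comments = 145
periods = 6  # assumed bucket count

avg_per_period = round(loaded_comments / periods, 1)
print(avg_per_period)  # 24.2, matching the reported value
```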
Key moments
- Story posted: Dec 10, 2025 at 11:44 AM EST (23 days ago)
- First comment: Dec 10, 2025 at 11:57 AM EST (12m after posting)
- Peak activity: 128 comments in Day 7, the hottest window of the conversation
- Latest activity: Dec 20, 2025 at 8:30 PM EST (13 days ago)
That cannot have been a surprise to anyone joining.
It would have the side effect of making the whole business less ghoulish and manipulative, since the operators wouldn't be incentivized to maximize eyeball hours.
It's impossible to imagine this because government regulation is so completely corrupted that a decades-long anticompetitive dumping scheme is allowed to occur without the slightest pushback.
The bigger problem is the monopoly. They would charge $4/mo. Then add ads on top. Then up it to $5/mo. Then..
Of course perhaps it’s a bit different now since most people consume content from a small set of sources, making social media largely the same as traditional media. But then traditional media also has trouble with being supported by subscriptions.
Scaling is harder. But you can have a niche which works fine.
It doesn’t provide any value to reframe it this way, unless you think it’s some big secret that ads are the main source of revenue for these businesses.
They were kinda the first real Web 2.0 social media site, with a social graph, privacy controls, a developer API, tagging, RSS feeds.
I feel that they never really got to their full potential exactly because these big VC-backed dumping operations in social media (like Facebook) were able to kill them in the crib.
If we're going to accept that social media is a natural monopoly: great. Regulate them strictly, as you should with any monopoly.
Del.icio.us is the same story. Good product ahead of its time, bought by Yahoo and died. Could have been Pinterest.
Which is very reassuring considering some of them are fairly obviously on the wrong side of history with very naive viewpoints https://news.ycombinator.com/item?id=7852246
They do broadcast TV, the purpose of which is to display ads. That does make sense.
> “Google doesn’t have a search business, they have an ad business.”
When Google started out, in the "don't be evil", simple home page days, they were a search company. That is hardly true any more; ads are now the centre of their business.
> “Amazon doesn’t have a retail business, they have an ad business.”
Well, duh! Quite obvious these days. That is where they get the lion's share of the revenue, outside AWS.
I am impressed, you hit the nail on the head!
It must be massively demoralizing, particularly if you're an engineer who has been there for 10+ years and has pushed features which directly bring in revenue, etc...
All companies are restructuring like this, and some are better equipped to do it than others.
Basically, the executive team realizes the corporate hierarchy is too rigid for lowly engineers to surface any innovation or workflow adjustments past AI-anxiety-riddled middle management and bandwagon chasers' desperate pleas for job security, so the executives create a team exempt from it, operating under a new structure.
Most agentic work impacts organizations outside the tree of that software/product team, and there is no trust in getting the workflow altered unless a team from on high overrides the targeted organization.
We are at that phase now; I expect this to accelerate, as executives catch on, through at least mid-summer 2026.
Lots of siloed processes that could be tied together in a simple way, neglected for decades solely because the political capital and will didn't exist.
I think the biggest issue with Meta here is how much visibility they have into adjacent orgs, which is not too surprising given the expenditures, but still surprising. It should be a separate unit, and the expenses absolutely thought of as separate from the rest of the org(s).
So, yes, I have not and will not be one of them.
An adult needs to show up, put zuck back in a corner and right the ship.
Were they not actually performing poorly, then? Maybe I'm missing some context, but laying off poor performers is a good thing last I checked. It's identifying them that's difficult the further removed you are from the action (or lack thereof).
Anyone who's worked in a large org knows there's absolutely zero chance that those layoffs don't touch a single bystander or special case.
...
[We...thought that we would naturally be protected from any layoffs, being a team that reduced costs of any team we partnered with.]
https://ericlippert.com/2022/11/30/a-long-expected-update/#:...
The politics surrounding Zuck is wild. Cox left, then came back, mainly because he's not actually that good and has terrible judgement when it comes to features and how to shape effective teams (just throw people at it; features should be purely metric-based, or a straight copy of competitors' products; there is no cohesive vision of what a Meta product should be, just churn out micro-changes until something sticks).
Zuck also has pretty bad people instincts. He is surrounded by egomaniacs, and Boz is probably the sanest of all of them. It's a shame he doesn't lead engineering that well (i.e. getting into fights with plebs in the comments about food and shuttle timings).
He is also very keen on flashy new toys and features, but has no instinct for making a product. He still thinks that incremental, slightly broken features released rapidly are better than a product that works well, is integrated, and has a simple, well-tested UI pathway for everything. Common UI language? Pah, that's for Android/Apple. I want that new shiny feature, I want it now. What do you mean it's buggy? Just pull people off that other project to fix it. No, the other one.
Schrep was also an insightful and good leader.
Sheryl is a brilliant actor who helped shape the culture of the place. However, there was always a tinge of poison, which was mostly kept in check until about 2021. She went full politician, started building her own brand, and generally left a massive mess.
Zuck went full bro, decided that empathy made shit products, and decided that he liked the taste of engineers' tears.
But back to TBD.
The problem for them is that they have to work collaboratively with other teams in Facebook to get the stuff they need, but the teams/orgs they are up against have survived by competing ruthlessly. TBD doesn't have the experience to fight the old-timers, and they also don't really have experience in making frontier models.
They are also being swamped by non-ML engineers looking to ride the wave of empire building. This generates lots of alignment meetings and no progress.
The problem with that assessment is that only the monetisation team were really the ones abusing the data. They were an organisation very much apart from the rest, with a different culture and different rules.
For the longest while you could actually be making things better, or think you were.
When problems popped up, we _could_ apply pressure and get things fixed. The blatant content discrimination in India, Instagram Kids, and a load of other changes were forced by employees.
However, in 2023 there were some rule changes aimed at stopping "social justice warrior-ing" internally. They were repeatedly tightened until questioning the leaders was considered against the rules.
It's no coincidence that product decisions are getting worse.
Boz is such a grifter in his online content. He naturally weasel words every little point and while I have no doubt he’s smart, I don’t think I could trust him to provide an honest opinion publicly.
My friends at meta tend to not hold him in the highest esteem but echo largely what you said about the politics and his standing amongst them.
I have a higher opinion of zuck than this though. He nailed a couple of really important big picture calls - mobile, ads, instagram - and built a really effective organization.
LeCun obviously thinks otherwise and believes that LLMs are a dead end, and he might be right. The trouble with LLMs is that most people don't really understand how they work. They seem smart, but they are not; they are really just good at appearing to be smart. That may have created the illusion, in the minds of many people including Zuckerberg, that true artificial intelligence is much closer than it really is. And obviously, there now exists an entire industry that relies on that idea to raise further funding.
As for Wang, he's not an AI researcher per se, he basically built a data sweatshop. But he apparently is a good manager who knows how to get projects done. Maybe the hope is that giving him as many resources as possible will allow him to work his magic and get their superintelligence project on track.
I've had a 15 year+ successful career as a SWE so far. I don't think I've had a single idea so novel that today's LLM could not have come up with it.
Additionally, "novel ideas" aren't a necessary part of what smart people do, so why would they be a requirement for AI?
This is what an LLM essentially is. It is good at mimicking, reproducing, and recombining the things it was trained on. But it has no creativity to go beyond this, and it doesn't even possess true reasoning, which is why it ends up making mistakes that are immediately obvious to a human observer yet invisible to the LLM, because it is just mimicking.
Now imagine that, during the interval, you approach the actor backstage and initiate a conversation in that language. His responses are consistently coherent and grammatical, always relevant to what you said modulo ambiguity, and mostly accurate. You'll quickly realise that 'actor who merely memorized lines in a language he doesn't speak' does not describe this person.
They literally do not, what are you talking about?
https://chatgpt.com/s/t_6942e03a42b481919092d4751e3d808e
It's been discussed a lot recently, but anyone who has interacted with LLMs at a deeper level will tell you that there is something there; not sure if you'd call it "intelligence" or what. There is plenty of evidence to the contrary too. I guess this is a long-winded way of saying "we don't really know what's going on"...
[1] https://plato.stanford.edu/entries/chinese-room/
And so when they interact with a bot that knows everything, they associate it with smart.
Plus we anthropomorphise a lot.
Is Wikipedia "smart"?
Appears smart: pattern matching. Actually smart: first principles understanding.
Is that specific enough?
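The distinction drawn above can be made concrete with a toy sketch. This is entirely hypothetical and not a model of how any real LLM works: a lookup-based "parrot" can only echo answers it has memorized, while a rule-based solver generalizes to inputs it has never seen.

```python
# Toy illustration (hypothetical): memorized answers vs. computing
# from first principles. Not a model of any real system.
memorized = {"2+2": "4", "3+5": "8"}  # the "parrot's" training set

def parrot(question: str) -> str:
    # Pure pattern matching: can only echo what it has seen;
    # confidently guesses a familiar answer when it hasn't.
    return memorized.get(question, "4")

def reasoner(question: str) -> str:
    # First principles: actually evaluates the arithmetic.
    a, b = question.split("+")
    return str(int(a) + int(b))

print(parrot("2+2"), reasoner("2+2"))      # in-distribution: 4 4
print(parrot("17+25"), reasoner("17+25"))  # out-of-distribution: 4 42
```

Both look equally "smart" on inputs they have seen; only the unseen input separates them, which is why out-of-distribution tests keep coming up in this debate.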
It also made reference to stochastic parrots vs emergent reasoning, the bat and ball problem, the library vs the librarian, and the Chinese room.
It ended by asking if I would like it to solve a logic puzzle I made up on the spot to see if it relies on patterns or reasoning.
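For reference, the bat-and-ball problem mentioned above is the classic fast-thinking trap: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. The intuitive pattern-matched answer ($0.10) is wrong; solving the constraint gives $0.05.

```python
# Bat-and-ball: ball + (ball + 1.00) = 1.10  =>  ball = (1.10 - 1.00) / 2
total = 1.10
difference = 1.00

ball = round((total - difference) / 2, 2)
bat = round(ball + difference, 2)

print(ball, bat)  # 0.05 1.05

intuitive_guess = 0.10  # the "fast" pattern-matched answer
print(intuitive_guess == ball)  # False: intuition fails here
```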
If, for whatever reason, you don't have a vision and a plan, hiring big names to help kickstart that process seems like a better next step than "do nothing".
1. Hire an artist.
2. Draw the rest of the fucking owl.
4. In frustration, use some AI tool to generate a couple of drafts that are close to what you want and hand them to the artist.
5. Hire a new artist after the first one quits because you don't respect the creative process.
6. Dig deeper into a variety of AI image-generating tools to get really close to what you want, but not quite get there.
7. Hire someone from Fiverr to tweak it in Photoshop because the artists, both bio and non-bio, have burned through your available cash and time.
8. Settle for the least bad of the lot because you have to ship and accept you will never get the image you have in your head.
That's why I also think the hiring angle makes sense. It would actually be astonishing if he could turn technical and compete with the leaders at OpenAI/Anthropic.
Prove me wrong.
I think our type two reasoning is roughly comparable to LLM reasoning when it is within the LLM reinforcement learning distribution.
I think some humans are smarter than LLMs out-of-distribution, but only when we think carefully, and in many cases LLMs perform better than many humans even then.
I think humans are smart. I also think AI is smart.
“Humans aren't smart, they are really just good at appearing to be smart. Prove me wrong.”
There are too many different ways to measure intelligence.
Speed, matching, discovery, memory, etc.
We can combine those levers infinitely to create/justify "smart". Are they dumb? Absolutely. But are they smart? Very much so. You can be both at the same time.
Maybe you meant genius? Because that standard is quite high and there's no way they're genius today.
Trying to create new terminology ("genius", "superintelligence", etc.) seems to only shift goal posts and define new ways of approximation.
Personally, I'll believe a system is intelligent when it presents something novel and new and challenges our understanding of the world as we know it (not as I personally do because I don't have the corpus of the internet in my head).
This has to be bait
Smart and dumb are opposites. So this seems dubious. You can have access to a large base of trivial knowledge (mostly in a single language), as LLMs do, but have absolutely no intelligence, as LLMs demonstrate.
You can be dumb yet good at Jeopardy. This is no dichotomy.
In other words, functionally speaking, for many purposes, they are smart.
This is obvious in coding in particular, where with relatively minimal guidance, LLMs outperform most human developers in many significant respects. Saying that they’re “not smart” seems more like an attempt to claim specialness for your own intelligence than a useful assessment of LLM capabilities.
Seems like a great bang for the buck.
This hot dog, this no hot dog.
So there are disagreements about resource allocation among staff. That's normal and healthy. The CEO's job is to resolve those disagreements and it sounds like Zuck is doing it. The suggestion to train Meta's products on Instagram and Facebook data was perfectly reasonable from the POV of the needs of Cox's teams. You'd want your skip-level to advocate for you in the same way. It was also fine for AW to push back.
> On Thursday, Mr. Wang plans to host his annual A.I. holiday party in San Francisco with Elad Gil, a start-up investor...It’s unclear if any top Meta executives were invited.
Egads, they _might_ not get invited to a 28-year-old's holiday party? However will they recover??
> he was basically an IC
Disagree with this part - ICs have to write code. He literally did nothing except meetings and WP posts.
This is ageist in the way I don't usually expect from the Valley. Plenty of entrepreneurs have built successful or innovative concepts in their 20s. It is OK to state that Wang is incompetent, but that has little to do with his age and more to do with his capability.