Salesforce Regrets Firing 4000 Experienced Staff and Replacing Them with AI
Key topics
As Salesforce grapples with the aftermath of replacing 4,000 experienced staff with AI, commenters are weighing in on the decision, with some blasting the company's lack of testing and phased deployment, while others see it as a necessary, if painful, experiment. The discussion reveals a consensus that Salesforce's leadership, particularly CEO Benioff, drove the AI push from the top down, with some commenters pointing out that the company's rebranding around AI offerings was a key factor. While some defended the decision as a bold move, others questioned the societal value of Salesforce's products, with one commenter noting that even useful products like Slack, which Salesforce acquired, could be impacted by the company's missteps. The thread is abuzz with debate over the consequences of prioritizing AI over human employees.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 18m after posting
- Peak period: 92 comments (0-3h)
- Avg per period: 10.3
Based on 123 loaded comments
Key moments
- Story posted: Dec 25, 2025 at 9:58 AM EST (13 days ago)
- First comment: Dec 25, 2025 at 10:16 AM EST (18m after posting)
- Peak activity: 92 comments in the 0-3h window, the hottest stretch of the conversation
- Latest activity: Dec 27, 2025 at 10:44 AM EST (11 days ago)
Want the full context? Read the primary article or dive into the live Hacker News thread: https://news.ycombinator.com/item?id=42639532
No, someone just wanted their bonus for being forward-thinking, paradigm-shifting, opex cutters. I'm sure they got it.
Also probably a part of their go-to-market strategy. If they can prove it internally they can sell it externally.
The more examples of this going badly we can get together the better.
This just seems a poor decision made by C-suite folk who were neither AI-savvy enough to understand the limits of the tech, nor smart enough to run a meaningful trial to evaluate it. A failure of wishful thinking over rational evaluation.
For that reason alone, humans will always need to be in the loop. Of course you can debate how many people you need for the above activity, but given that AI isn't omniscient nor omnipotent, I expect that number to be quite high for the foreseeable future.
no no no you don't get it, you would have ANOTHER AI for that
Also, it does appear that there are companies willing to YOLO themselves off a cliff with AI
Until AI gets ego and will of its own (probably the end of humanity) it will simply be a tool, regardless of how intelligent and capable it is.
One would hope that one ability of an 'omniscient and omnipotent' AI would be greater understanding.
When speaking of the divine (the only typical example of the omniscient and omnipotent that comes to mind) we never consider what happens when God (or whoever) misunderstands our intent -- we just rely on the fact that an All-Being type thing would just know.
I think the understanding of minute intent is one such trait an omniscient and omnipotent system must have.
P.S. What a bar raise -- we used to just be happy with AGI!
In reality, even an ASI will not know your intent unless you communicate it clearly. Just as this would also be the case with a highly intelligent human.
Figuring out what to build is 80% of the work, building it is maybe 20%. The 20% has never been the bottleneck. We make a lot of software, and most of it is not optimal and requires years if not decades of tweaking to meet the true requirements.
I recently came to this realization as well, and it now seems so obvious. I feel dumb for not realizing it sooner. Is there any good writing or podcast on this topic?
If anything, I've noticed the bar being lowered by the pro-AI set, except for humans: the prevailing belief is that LLMs must already be AGI, so any limitations are dismissed as also being human limitations, and therefore as evidence that LLMs are already human-equivalent in any way that matters.
And instead of the singularity we have Roko's Basilisk.
This sort of assumes that most humans actually know what they want to do.
It is very untrue in my experience.
It's like most complaints I hear about AI art: yes, it is generic and bland, just like 90% of what human artists produce.
If your pay is 400 times average employee salary because of your unique strategic vision, surely firing 4000 people based on faulty assumptions should come with proportional consequences?
Or does the high risk, high reward, philosophy only apply to the reward part?
If we take out the AI part of this and treat it like any other project, if what they admit is true, it represents a massive failure of judgement and implementation.
I can't see anyone admitting that in public, as it would probably end their career, or should do at least. Especially if the company is a "meritocracy".
https://m.economictimes.com/news/new-updates/ai-bubble-burst...
https://opentools.ai/news/salesforce-steps-back-from-ai-exec...
It does seem like Salesforce relies on Agentforce and therefore doesn't need as many support staff. But the pressure was also to "reduce heads", which is a bit of a tone-deaf way to describe firing thousands of people.
1. Literally document everything in the product and keep documentation up to date (could be partially automated?)
2. Build good enough search to find those things
3. Be able to troubleshoot / reason / abstract beyond those facts
4. Handle customer information that goes against the assumptions in the core set of facts (i.e. customers find bugs or don't understand fundamental concepts about computers)
5. Be prepared to restart the entire conversation when the customer gets frustrated with 1-4 (this is very annoying; a rough sketch of how these fit together follows)
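Read as an architecture, points 1-5 map onto a retrieval-plus-escalation loop. Here is a deliberately minimal sketch of that loop; the doc store, search, and escalation are hypothetical stubs, not anything Salesforce has described:

```python
# Minimal sketch of the 5-step support loop. Everything here is a stub:
# the doc store is a dict and the "LLM" reply is a formatted string.

DOCS = {  # step 1: documented product facts (kept up to date elsewhere)
    "reset password": "Use Settings > Security > Reset.",
    "export data": "Admins can export from Settings > Data.",
}

def search_docs(question: str) -> str | None:
    # step 2: good-enough search over the documented facts
    for topic, answer in DOCS.items():
        if topic in question.lower():
            return answer
    return None  # step 4: the question falls outside the documented assumptions

def handle_ticket(question: str) -> str:
    doc = search_docs(question)
    if doc is None:
        # step 5: hand the conversation to a human rather than guessing
        return "Escalating to a human agent."
    return f"Per our docs: {doc}"  # stand-in for an LLM grounding its reply (step 3)

print(handle_ticket("How do I reset password?"))  # answered from the docs
print(handle_ticket("The app ate my data"))       # escalated to a person
```

The point of the sketch is the `None` branch: the hard part of support is steps 3-5, where the documented facts run out and something still has to happen.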
LLMs are a great technology for making up plausible-looking text. When correctness matters, and you don't have a second system that can reliably check it, the output turns out to be unreliable.
When you're dealing with customer support, everyone involved has already been failed by the regular system. So they're an exception, and they're unhappy. So you really don't want to inflict a second mistake on them.
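One hedged sketch of what such a second system might look like: before a generated reply goes out, check that every sentence is traceable to a retrieved source, and route to a human otherwise. The word-overlap heuristic below is a stand-in for a real entailment or fact-checking model:

```python
# Hypothetical "second system" gate: only send a generated reply if each
# sentence has support in a source document. Crude word overlap stands in
# for a proper entailment check.

def supported(sentence: str, sources: list[str], min_overlap: int = 3) -> bool:
    words = set(sentence.lower().split())
    return any(len(words & set(src.lower().split())) >= min_overlap
               for src in sources)

def gate_reply(reply: str, sources: list[str]) -> str | None:
    sentences = [s.strip() for s in reply.split(".") if s.strip()]
    if all(supported(s, sources) for s in sentences):
        return reply
    return None  # unreliable output: route the ticket to a human instead
```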
The counter: the existing system of checks with (presumably) humans was not good enough. For the last 15 months or so, I have been dealing with E.ON promising one thing and doing another, and had to escalate it to the Ombudsman. I don't think E.ON were using an AI to make these mistakes, I think they just couldn't get customer support people to cope with the idea "the address you have been posting letters to, that address isn't simply wrong, it does not exist". An LLM would have done better, except for what I'm going to say in the counter-counter.
The counter-counter, is that LLMs are only an extra layer of Swiss-cheese: the mistakes they make may be different to human mistakes or may overlap, but they're still definitely present. Specifically, I expect that an LLM would have made two mistakes in my case, one of which is the same mistake the actual humans made (saying they'd fixed everything repeatedly when they had not done so, see meme about LLMs playing the role of HAL in 2001 failing to open the pod bay door) and the other would have been a mistake in my favour (the Ombudsman decided less than I asked for, an LLM would likely have agreed with me more than it should have).
Unfortunately what we see from you is a pattern of low-effort comments, some of which don't even bother with basic sentence formation like capitalization at the start and a period at the end. Looking down your comment feed we see many single-line comments that are low on substance and high in snark.
The guidelines make it clear we're trying for something better here. They ask us to be kind, and to avoid snark and swipes. They ask us to converse curiously. They ask us not to fulminate, and not to sneer, including at the rest of the community.
It's fine to want HN to be better. As moderators we certainly do. That's why we do this job. But it requires us to actually make the effort to be better in our own conduct, and when we see comments from other users that aren't up to standard, to use the tools that have always been here, like downvoting, flagging and emailing us (hn@ycombinator.com) so we can take action.
You should understand that one way people improve the standards of a commons is by imposing social controls on those who violate norms which create a healthy society, such as by shilling. That is normal behavior on every forum I’ve ever seen.
When you allow there to be 100x more of this mindless slop than of anything else, the most any individual can do to resist the tide is to contribute to the voices trying to make antisocial behavior come with a cost.
It works, and because it works, people will continue to do it until you figure out how to keep a clean commons.
The people you claim have “allowed” this have maintained HN for many years - 13 in dang's case, the majority of its history. The primary reason this is a place where people want to participate is because of the guidelines that have been developed and refined since HN's inception, and that we spend hours each day upholding.
Generated comments and posts are banned, and we state this frequently. I spend hours each day evaluating submissions and Show HNs to determine whether they're human-authored or AI-generated. We welcome people to flag suspected cases and email us so we can ban accounts that post generated content.
There is a cohort of community members who have demonstrated a commitment to making HN better over several years through: (a) submitting good articles, (b) posting thoughtful comments, (c) observing the guidelines, (d) flagging bad submissions and comments, and (e) emailing us to point out guidelines breaches and to discuss the healthy functioning of the site. These are the people we listen to when they express concerns about HN's health, because they've established a track record of genuine contribution and care over several years.
From you, we see two comments prior to 2023, and little or none of the above kinds of actions. Instead: ragey fulmination, hyperbole, ascribing views to us without basis. All this while holding yourself up as HN's defender, having never undertaken the earnest, unglamorous, unseen work that other community members do to make this the place you now hold yourself up as seeking to heroically defend.
Please. If you really want HN to be better, you are most welcome to start doing the things that other community members quietly do every day to help make it better.
It's my sincerely held opinion that we're fostering a culture here that ignores the "human impact" of the technology that we're rushing to adopt.
I'm well aware that many members of this community have achieved "success" through software. This includes the rapid adoption of new computing paradigms, new technology stacks, new frameworks, etc.
I am fortunate to be employed. But around me, when I step out of my house, it's painful. People are hurting. They're unemployed. They're depressed. And the younger generation is even worse. They can't even afford to dream.
I live in a corporate world of forced smiles and fake enthusiasm. I would hate for that same culture to take root here. We need to be able to express significant doubt, or even cynicism against AI, without fear of backlash.
Firing people = smart cost cutting
Hiring people = strong vote of confidence in continued growth
Both the OP article and this Times of India article appear to be AI-generated summaries of the original article.
Craziness!
Is anyone really buying they laid off 4k people _because_ they really thought they’d replace them with an LLM agent? The article is suspect at best and this doesn’t even in the slightest align with my experience with LLMs at work (it’s created more work for me).
The layoff always smelled like it was because of the economy.
Of course, it could still happen, as maybe AI systems just need another few years to mature before trying to fully replace jobs like this.
But I feel like combining LLMs with other AI techniques could do so much more...
... As mentioned, am no expert, but it seems like one of the next major focuses for LLMs is verification of their answers, and, adding to this, giving LLMs a sense of when their results are right or wrong. Yeah, I feel like the ability of an LLM to introspect on how it got its answer might help it know whether that answer is right (I think Anthropic has been working on this for a while now), as might scoring the reliability of its information sources.
And they could also mix in a formal verification step, using some form of proof to show that the results are right (for those answers that lend themselves to formal verification).
Am sure all of this is currently being tried. So any AI experts out there, feel free to correct me. Thanks!
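A toy illustration of that verification idea, assuming the answer belongs to a mechanically checkable class; plain substitution stands in for a real proof step:

```python
# Accept an LLM's answer only if an independent checker confirms it. The
# question class here is quadratic roots; checking is just substitution,
# a stand-in for genuine formal verification.

def is_root(a: float, b: float, c: float, x: float, tol: float = 1e-9) -> bool:
    return abs(a * x * x + b * x + c) < tol

def accept_answer(a: float, b: float, c: float, proposed: list[float]) -> bool:
    return bool(proposed) and all(is_root(a, b, c, x) for x in proposed)

# A model claims the roots of x^2 - 3x + 2 are 1 and 2; the checker agrees:
assert accept_answer(1, -3, 2, [1.0, 2.0])
# A wrong claim (roots 1 and 3) is rejected instead of reaching the user:
assert not accept_answer(1, -3, 2, [1.0, 3.0])
```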
Interesting that you mentioned Knowledge Graphs, haven't heard about these in a long time. Just looked up the "Commonsense knowledge" page on Wikipedia and it seems like they're still being added to. Would you happen to know if they're useful yet and can do any real work? Or are they good enough to integrate with LLMs?
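For what it's worth, the common integration pattern is to use the graph as a grounding source: look up the relevant triples and inline them into the prompt. A minimal sketch, with the graph contents and prompt format invented for illustration:

```python
# Hypothetical KG + LLM integration: retrieve facts as triples and put them
# in the prompt so the model answers from stated facts rather than recall.

TRIPLES = [
    ("Slack", "acquired_by", "Salesforce"),
    ("Salesforce", "ceo", "Marc Benioff"),
]

def facts_about(entity: str) -> list[str]:
    return [f"{s} {p} {o}" for s, p, o in TRIPLES if entity in (s, o)]

def grounded_prompt(question: str, entity: str) -> str:
    facts = "\n".join(facts_about(entity))
    return f"Answer using only these facts:\n{facts}\n\nQ: {question}\nA:"

print(grounded_prompt("Who owns Slack?", "Slack"))
```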
https://timesofindia.indiatimes.com/technology/tech-news/aft...
It isn't regret, they are trying to sell their Agentforce product.
"Why Our Story on Salesforce’s Declining Trust in LLMs Hit a Nerve" - https://www.theinformation.com/articles/story-salesforces-de...
https://archive.is/7RXKb
Because they don't have 4000+ workers' worth of work to do?
No that's not what I'm saying. I'm saying that demand (for a product or service) is what drives the amount of labor that is performed, not the other way around.
If a company has maxed out the number of widgets they can sell in their market and adding new features will not change that, then adding more labor makes no sense.
It follows that making their existing labor more productive leads to layoffs.
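The arithmetic behind that claim is easy to sketch with invented numbers: hold demand fixed, and required headcount falls as per-worker output rises:

```python
import math

# Illustrative numbers only: with demand capped by the market, headcount is
# demand divided by per-worker output, so productivity gains mean fewer jobs.
demand = 100_000          # widgets the market will absorb per year
per_worker = 20           # widgets one worker produces per year
print(math.ceil(demand / per_worker))  # 5000 workers needed

per_worker = 25           # a 25% productivity improvement...
print(math.ceil(demand / per_worker))  # ...drops that to 4000
```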
stop. reading. evals.
Salesforce has a vested interest in maintaining its seat-based licenses, so it's not in favor of mass layoffs.
Internally, Salesforce is pushing Agentforce, full stop.
https://timesofindia.indiatimes.com/technology/tech-news/mic...
He also uses Cultural Revolution tactics, turning the young against the old. I imagine the AI house of cards will collapse soon, and he'll be remembered as the person who enshittified Windows after the board fires him.