It Is Worth It to Buy the Fast CPU
Mood: heated
Sentiment: mixed
Category: other
Key topics
The article argues that buying a fast CPU is worth it for developers, but the discussion reveals diverse opinions: some question the value of top-of-the-line CPUs, while others highlight the importance of other factors like RAM and storage.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 2h after posting
- Peak period: 149 comments (Day 1)
- Avg / period: 80 comments
Based on 160 loaded comments
Key moments
- Story posted: Aug 24, 2025 at 2:03 AM EDT (3 months ago)
- First comment: Aug 24, 2025 at 4:11 AM EDT (2h after posting)
- Peak activity: 149 comments in Day 1, the hottest window of the conversation
- Latest activity: Aug 26, 2025 at 12:23 AM EDT (3 months ago)
Certainly not ahead of the curve when considering server hardware.
Server hardware is not very portable. Reserving a c7i.large costs about $0.14/hour; that would equal the cost of an MBP M3 64GB in about two years.
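A quick sanity check of that comparison (a minimal sketch; the always-on usage and the laptop price point are assumptions for illustration):

```python
# Back-of-envelope: reserved c7i.large vs. a one-time laptop purchase.
# $0.14/hour is the figure from the comment; running 24/7 is assumed.
RATE_PER_HOUR = 0.14
HOURS_PER_YEAR = 24 * 365  # 8,760 hours

for years in (1, 2, 3):
    cloud_cost = RATE_PER_HOUR * HOURS_PER_YEAR * years
    print(f"{years} year(s): ${cloud_cost:,.0f}")
# 1 year(s): $1,226
# 2 year(s): $2,453  <- roughly a high-spec MacBook Pro
# 3 year(s): $3,679
```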
Apple have made a killer development machine; I say this as a person who does not like Apple and macOS.
On top of that, when you look at price vs. performance, they are way behind.
Apple may have made good strides in single-core CPU performance, but they have definitely not made killer development machines imo.
It's not like objective benchmarks disproving this sort of statement don't exist.
I would agree with the idea that faster compile times can significantly improve productivity. 30s is long enough for a developer to get distracted and go off and check their email, look at social media, etc. Basically, turning 30s into 3s can keep a developer in flow.
The critical thing we're missing here is how increasing the CPU speed will decrease the compile time. What if the compiler is IO-bound? Or memory-bound? Removing one bottleneck gets you to the next bottleneck, not necessarily all the performance gains you want.
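One way to check before spending money: compare the CPU time a build actually consumes against wall-clock time multiplied by core count. A minimal sketch, assuming a hypothetical `make -j8` build command:

```python
import os
import subprocess
import time

# Run the build; the command here is a placeholder for your own.
start = time.monotonic()
subprocess.run(["make", "-j8"], check=True)
wall = time.monotonic() - start

# os.times() includes CPU seconds consumed by waited-for child processes.
t = os.times()
child_cpu = t.children_user + t.children_system

cores = os.cpu_count() or 1
utilization = child_cpu / (wall * cores)
print(f"wall {wall:.1f}s, cpu {child_cpu:.1f}s, utilization {utilization:.0%}")
# Near 100%: CPU-bound, so a faster CPU should help.
# Well below 100%: likely IO- or memory-bound, or the build graph
# serializes, and a faster CPU alone won't deliver the hoped-for gains.
```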
I think just having LSP give you answers 2x faster would be great for staying in flow.
Applies to git operations as well.
The days when 30-second pauses for the compiler were the slowest part are long over.
It gets ridiculous quickly, really.
And don't get me started on the cloud ERP software the rest of the company uses...
https://github.com/rui314/mold?tab=readme-ov-file#why-is-mol...
The larger point is that the fastest CPU on paper may not be faster for your workload, so benchmark before spending money. Your workload may be different.
I've seen a test environment that has most assets local but a few shared services and databases accessed over a VPN that is evidently a VIC-20 connected over dialup.
The dev environment can take 20 seconds to render a page that takes under 1 second on prod. Going to a newer machine with twice the RAM bought no meaningful improvement.
They need a rearchitecture of their dev system far more than faster laptops.
There’s your problem. If your expectation was double-digit milliseconds in prod, then non-prod and its VPN also wouldn’t be an issue.
Now in the TFA they compare a laptop to a desktop, so I guess the title should be "you should buy two computers".
You do need a good SSD though. There is a new generation of PCIe 5.0 SSDs that came out that seems like it might be quite a bit faster.
It is of course more expensive but that allows them to offer the latest and greatest to their employees without needing all the IT staff to manage a physical installation.
Then your actual physical computer is just a dumb terminal.
Then realistically in any company you'll need to interact with services and data in one specific location, so maybe it's better to be colocated there instead.
In which movie? "Microsoft fried movie"? Cloud sucks big time. Not all engineers are web developers.
The desktop latency has gotten way better over the years and the VMs have enough network bandwidth to do builds on a shared network drive. I've also found it easier to request hardware upgrades for VDIs if I need more vCPUs or memory, and some places let you dispatch jobs to more powerful hosts without loading up your machine.
I've also seen it elsewhere in the same industry. I've seen AWS workspaces, custom setups with licensed proprietary or open-source tech, fully dedicated instances or kubernetes pods.. All managed in a variety of ways but the idea remains the same: you log into a remote machine to do all of your work, and can't do anything without a reliable low-latency connection.
Maybe that’s an AMD (or even Intel) thing, but doesn’t hold for Apple silicon.
I wonder if it holds for ARM in general?
* Apple: 32 cores (M3 Ultra)
* AMD: 96 cores (Threadripper PRO 9995WX)
* Intel: 60 cores (Xeon w9-3595X)
I wouldn’t exactly call that low, but it is lower for sure. On the other hand, the stated AMD and Intel CPUs are borderline server grade and wouldn’t be found in a common developer machine.
For AMD/Intel, laptop, desktop, and server CPUs are usually based on different architectures and don't have that much overlap.
Core count used to be a big difference, but the ARM processors in the Apple machines certainly match the lower-end workstation parts now. To exceed that, you're spending big money to get high core counts in the x86 space.
Proper desktop processors have lots and lots of PCIe lanes. The current cream of the crop, the Threadripper PRO 9000 series, has 128 PCIe 5.0 lanes. A frankly enormous amount of fast connectivity.
The M2 Ultra, the closest thing to a workstation processor in Apple's current lineup (at least in a comparable form factor, in the Mac Pro), has 32 lanes of PCIe 4.0 connectivity, enhanced by being slotted into a PCIe switch fabric on the Mac Pro. (This, I suspect, is actually why there hasn't been a rework of the Mac Pro to use the M3 Ultra: they'll ditch the switch fabric for direct wiring on the next one.)
Memory bandwidth is a closer thing to call here. Using the Threadripper PRO 9000 series as an example, we have 8 channels of 6400 MT/s DDR5 ECC. According to Kingston, the bus width of a DDR5 channel is 64 bits, so that gets us (6400 × 64) / 8 = 51,200 MB/s per channel, or 409.6 GB/s when all 8 channels are loaded.
On the M4 Max the reported bandwidth is 546 GB/s, but I'm not so certain how this is calculated, as the maths doesn't quite stack up from the information I have (8533 MT/s and a 64-bit bus width point towards 68,264 MB/s per channel; the reported speed doesn't neatly slot into those numbers). (One way the numbers do reconcile is sketched after this comment.)
In short, the memory bandwidth bonus that workstation processors traditionally have is met by the M4 Max, but the PCIe extensibility is not.
In the Mac world, though, that's usually not a problem, as you're not able to load up a Mac Pro with a bunch of RTX Pro 6000s and have it be usable in macOS. You can, however, load your machine with some high-bandwidth NICs or HBAs, I suppose (but I've not seen what's available for this platform).
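Incidentally, the M4 Max figure does stack up if the 8533 MT/s transfers run across a 512-bit aggregate bus (8 × 64-bit LPDDR5X channels) rather than a single 64-bit channel; a quick check of both numbers:

```python
def peak_bandwidth_gbs(mts, bus_bits, channels):
    """Peak bandwidth in GB/s: transfers/s x bytes per transfer x channels."""
    return mts * 1e6 * (bus_bits / 8) * channels / 1e9

# Threadripper PRO 9000: 8 channels of DDR5-6400, 64 bits per channel.
print(peak_bandwidth_gbs(6400, 64, 8))   # 409.6, as computed above

# M4 Max: 8533 MT/s, modeled as 8 x 64-bit channels (512-bit total bus).
print(peak_bandwidth_gbs(8533, 64, 8))   # ~546.1, matching the reported figure
```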
It's not that it's worse than a "real" desktop chip. In a way it's better: you get almost comparable performance with way lower power usage.
Also, the M4 Max has worse multithreaded performance than e.g. the 14900K, which is architecturally ancient in relative terms and also costs a fraction as much.
So, in a way, slow computers are always a software problem, not a hardware problem. If we always wrote software to be as performant as possible, and if we only ran things that were within the capability of the machine, we'd never have to wait. But we don't do that; good optimization takes a lot of developer time, and being willing to wait a few minutes nets me computations that are a couple orders of magnitude larger than what the machine can do in real time.
To be fair, things have improved on average. Wait times are reduced for most things. Not as fast as hardware has sped up, but it is getting better over time.
Limiting the number and size of monitors. Putting speedbumps (like assessments or doctor's notes) on ergo accessories. Requiring special approval for powerful hardware. Requiring special approval for travel, and setting hotel and airfare caps that haven't been adjusted for inflation.
To be fair, I know plenty of people that would order the highest spec MacBook just to do web development and open 500 chrome tabs. There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs, which is just a small fraction of one year's salary for a developer.
I don't think Google and Facebook are cheap for developers. I can speak firsthand from my past Google experience. You have to note that the company has like 200k employees; there need to be some controls, and not all of the company are engineers.
Hardware -> for the vast majority of stuff, you can build with Blaze (think Bazel) on a build cluster and cache, so local CPU is not as important. Nevertheless, you can easily order other stuff should you need to. Sure, if you go beyond the standard issue, your cost center will be charged and your manager gets an email. I don't think any decent manager would block you. If they do, change teams. Some powerful hardware that needs approval is blanket-whitelisted for certain orgs that recognize such need.
Trips -> Google has this interesting model: you have a soft cap for trips, and if you don't hit the cap, you pocket half of the trip's unused credit in your account, which you can choose to spend later when you are over the cap or want to get something slightly nicer the next time. Also, they have clear and sane policies on mixing personal and corporate travel. I encourage everyone to learn about and deploy things like that in their companies. The caps are usually not unreasonable, but if you do hit them, it is again an email to your management chain, not some big deal. Never seen it blocked. If your request is reasonable and your manager is shrugging about this stuff, that should reflect on them being cheap, not the company policy.
I have a pretty high end MacBook Pro, and that pales in comparison to the compute I have access to.
Sure, I’ve stopped using em-dashes just to avoid the hassle of trying to educate people about a basic logical fallacy, but I reserve the right to be salty about it.
1 or 2 bed gamer things
1) Em-dashes
2) "It's not X, it's Y" sentence structure
3) Comma-separated list that's exactly 3 items long
>3) Comma-separated list that's exactly 3 items long
Proper typography and hamburger paragraphs are canceled now because of AI? So much for what I learned in high school English class.
>2) "It's not X, it's Y" sentence structure
This is a pretty weak point because it's n=1 (you can check OP's comment history and it's not repeated there), and that phrase is far more common in regular prose than some of the more egregious ones (eg. "delve").
I read Google is now issuing Chromebooks instead of proper computers to non-engineers, which has got to be corrosive to productivity and morale.
"AI" (Plus) Chromebooks?
They eventually became so cheap they blanket paused refreshing developer laptops...
Proper ergo is a cost-conscious move. It helps keep your employees able to work, which saves on hiring and training. It reduces medical expenses, which affects the bottom line because large companies are usually self-insured; they pay a medical insurance company only to administer the plan, not for insurance --- claims are paid from company money.
Don’t worry, they’ll tell you
Apple have long thought that 8GB of RAM is good enough for anything, and will continue to think so for some time yet.
All this at my company would be a call or chat to the travel agent (which, sure, kind of a pain, but they also paid for dedicated agents so wait time was generally good).
So people started slacking off, because "you have to love your employees"?
Equality doesn't have to mean uniformity.
Some people would minimize the amount spent on their core hardware so they had money to spend on fun things.
So you’d have to deal with someone whose 8GB RAM cheap computer couldn’t run the complicated integration tests but they were typing away on a $400 custom keyboard you didn’t even know existed while listening to their AirPods Max.
I've been on teams where corporate hardware is all max spec, 4-5 years ahead of common user hardware, and provided phones are all flagships replaced every two years. The product works great for corporate users, but not for users with earthly budgets. And they wonder how competitors swallow market share in low-income countries.
The developer integration tests don't need to run on a low-spec machine.
At one place I had a $25 no question spending limit, but sank a few months trying to buy a $5k piece of test equipment because somebody thought maybe some other tool could be repurposed to work, or we used to have one of those but it's so old the bandwidth isn't useful now, or this project is really for some other cost center and I don't work for that cost center.
Turns out I get paid the same either way.
Where did this idea about spiting your fellow worker come from?
That seems unreasonably short. My work computer is 10 years old (which is admittedly the other extreme, and far past the lifecycle policy, but it does what I need it to do and I just never really think about replacing it).
It depends what you're working on. My work laptop is 5 years old, and it takes ~4 minutes to do a clean compile of a codebase I work on regularly. The laptop I had before that (which would now be around 10 years old) would take ~40 minutes to compile the same codebase. It would be completely untenable for me to do the job I do with that laptop (and indeed I only started working in the area I do once I got this one).
You're underestimating the scope of time lost by losing a few percent in productivity per employee across hundreds of thousands of employees.
You want speed limits, not speed bumps. And they should be pretty high limits...
After I saw the announcement, I immediately knew I needed to try out our workflows on the new architecture. There was just no way that we wouldn't have x86_64 as an implicit dependency all throughout our stack. I raised the issue with my manager and the corporate IT team. They acknowledged the concern but claimed they had enough of a stockpile of new Intel machines that there was no urgency and engineers wouldn't start to see the Apple Silicon machines for at least another 6-12 months.
Eventually I do get allocated a machine for testing. I start working through all the breakages but there's a lot going on at the time and it's not my biggest priority. After all, corporate IT said these wouldn't be allocated to engineers for several more months, right? Less than a week later, my team gets a ticket from a new-starter who has just joined and was allocated an M1 and of course nothing works. Turns out we grew a bit faster than anticipated and that stockpile didn't last as long as planned.
It took a few months before we were able to fix most of the issues. In that time we ended up having to scavenge under-specced machines from people in non-technical roles. The amount of completely avoidable productivity wasted on people swapping machines would have easily reached into the person-years. And of course myself and my team took the blame for not preparing ahead of time.
Budgets and expenditure are visible and easy to measure. Productivity losses due to poor budgetary decisions, however, are invisible and extremely difficult to measure.
> And of course myself and my team took the blame for not preparing ahead of time.
If your initial request was not logged and then able to be retrieved by yourself in defence, then I would say something is very wrong at your company.
But regardless, I already left there a few years back.
You are suggesting a level of due process that is wildly optimistic for most companies. If you are an IC, such blame games are entirely resolved behind closed doors by various managers and maybe PMs. Your manager may or may not ask you for supporting documentation, and may or may not be able to present it before the "retrospective" is concluded.
For a single person, slight improvements added up over regular, e.g., daily or weekly, intervals compound to enormous benefits over time.
XKCD: https://xkcd.com/1205/
Saving 1 second/employee/day can quickly be worth $10+/employee/year (or even several times that). But you rarely see companies optimizing their internal processes based on that kind of perceived benefit.
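The arithmetic behind that figure, under assumed inputs (230 working days/year, $150/hour fully loaded employee cost):

```python
# Annual value of saving one second per employee per day.
# Both constants below are assumptions for illustration.
WORKING_DAYS_PER_YEAR = 230
LOADED_COST_PER_HOUR = 150

hours_saved = 1 * WORKING_DAYS_PER_YEAR / 3600  # one second per working day
print(f"${hours_saved * LOADED_COST_PER_HOUR:.2f}/employee/year")  # ~$9.58
```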
Water cooler placement in a cube farm comes to mind as a surprisingly valuable optimization problem.
Then some period of time later they start looking at spending in detail and can't believe how much is being spent by the 25% or so who abuse the possibility. Then the controls come.
> There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs,
You would think, but in the age of $6,000 fully specced MacBook Pros, $2,000 monitors, $3,000 standing desks, $1500 iPads with $100 Apple pencils and $300 keyboard cases, $1,000 chairs, SaaS licenses that add up, and (if allowed) food delivery services for “special circumstances” that turns into a regular occurrence it was common to see individuals incurring expenses in the tens of thousands range. It’s hard to believe if you’re a person who moderates their own expenditures.
Some people see a company policy as something meant to be exploited until a hidden limit is reached.
There also starts to be some soft fraud at scales higher than you’d imagine: When someone could get a new laptop without questions, old ones started “getting stolen” at a much higher rate. When we offered food delivery for staying late, a lot of people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.
Ehh. Neither of these are soft fraud. The former is outright law-breaking, the latter…is fine. They stayed till they were supposed to.
This is the soft fraud mentality: If a company offers meal delivery for people who are working late who need to eat at the office and then people start staying late (without working) and then taking the food home to eat, that’s not consistent with the policies.
It was supposed to be a consolation if someone had to (or wanted to, as occurred with a lot of our people who liked to sleep in) stay late to work. It was getting used instead for people to avoid paying out of pocket for their own dinners even though they weren’t doing any more work.
Which is why we can’t have nice things: People see these policies as an opportunity to exploit them rather than use them as intended.
This isn’t about fraud anymore. It’s about how suspiciously managers want to view their employees. That’s a separate issue (but not one directed at employees).
This is why I call it the soft fraud mentality: When people see some fraudulent spending and decide that it’s fine because they don’t think the policy is important.
Managers didn’t care. It didn’t come out of their budget.
It was the executives who couldn’t ignore all of the people hanging out in the common areas waiting for food to show up and then leaving with it all together, all at once. Then nothing changed after the emails reminding them of the purpose of the policy.
When you look at the large line item cost of daily food delivery and then notice it’s not being used as intended, it gets cut.
If you start trying to tease apart the motivations people have even if they are following those rules, you are going to end up more paranoid than Stalin.
> So if you are astonished that people optimize for their financial gain, that’s concerning.
I’m not “surprised” nor “astonished” nor do you need to be “concerned” for me. That’s unnecessarily condescending.
I’m simply explaining how these generous policies come to and end through abuse.
You are making a point in favor of these policies: Many will see an opportunity for abuse and take it, so employers become more strict.
The idea that a company offering food in some capacity can be seen as generous is, at best, confusing and possibly naïve. A company does this because it expects such a policy will extract more work for less pay. There is no benevolence in the relationship between a company and an individual — only pure, raw self-interest.
In my opinion, the best solution is not to offer benefits at all, but simply to overpay everyone. That’s far more effective, since individuals then spend their own money as they choose, and thus take appropriate care of it.
Yes, but some also have a moral conscience and were brought up to not take more than they need.
If you are not one of these types of people, then not taking complete advantage of an offer like free meals probably seems like an alien concept.
I try to hire more people like this; it makes for a much stronger workforce when people are not all out to get whatever they can for themselves and look out for each other's interests more.
As you mentioned, setting policy that isn’t abused is hard. But abuse isn’t fraud—it’s abuse—and abuse is its own rabbit hole that covers a lot of these maladaptive behaviors you are describing.
I call the meal expense abuse “soft fraud” because people kind of know it’s fraud, but they think it’s small enough that it shouldn’t matter. Like the “eh that’s fine” commenter above: They acknowledged that it’s fraud, but also believe it’s fine because it’s not a major fraud.
If someone spends their employer’s money for personal benefit in a way that is not consistent with the policies, that is legally considered expense fraud.
There was a case local to me where someone had a company credit card and was authorized to use it for filling up the gas tank of the company vehicle. They started getting in the habit of filling up their personal vehicle’s gas tank with the card, believing that it wasn’t a big deal. Over the years their expenses weren’t matching the miles on the company vehicle and someone caught on. It went to court and the person was liable for fraud, even though the total dollar amount was low five figures IIRC. The employee tried to argue that they used the personal vehicle for work occasionally too, but personal mileage was expensed separately so using the card to fill up the whole tank was not consistent with policy.
I think people get in trouble when they start bending the rules of the expense policy thinking it’s no big deal. The late night meal policy confounds a lot of people because they project their own thoughts about what they think the policy should be, not what the policy actually is.
Note that employers do this as well. A classic one is a manager setting a deadline that requires extreme crunches by employees. They're not necessarily compensating anyone more for that. Are the managers within their rights? Technically. The employees could quit. But they're shaving hours, days, and years off of employees without paying for it.
If a company policy says you can expense meals when taking clients out, but sales people started expensing their lunches when eating alone, it’s clearly expense fraud. I think this is obvious to everyone.
Yet when engineers are allowed to expense meals when they're working late and eating at the office, and people who are neither working late nor eating at the office start expensing their meals, that's expense fraud too.
These things are really not gray area. It seems more obvious when we talk about sales people abusing budgets, but there’s a blind spot when we start talking about engineers doing it.
Engineers are very highly paid. Many are paid more than $100/hr if you break it down. If a salaried engineer paid the equivalent of $100/hr stays late doing anything, expenses a $25 meal, and during the time they stay late you get the equivalent of 20 minutes of work out of them- including in intangibles like team bonding via just chatting with coworkers or chatting about some bug- then the company comes out ahead.
That you present the above as "expense fraud" is a fundamentally penny-wise, pound-foolish way to look at running a company. Like you say, it's not really a gray area. It's a feature, not a bug.
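For what it's worth, the breakeven in that comment checks out with its own numbers:

```python
# Breakeven on a late-night meal, using the figures from the comment above.
HOURLY_RATE = 100         # equivalent rate for the salaried engineer
MEAL_COST = 25
EXTRA_MINUTES = 20        # productive time gained by staying late

value_of_extra_work = HOURLY_RATE * EXTRA_MINUTES / 60
print(f"${value_of_extra_work:.2f} of work for a ${MEAL_COST} meal")
# $33.33 of work for a $25 meal: the company comes out ahead.
```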
Luckily that comes down to the policy of the individual company and is not enforced by law. I am personally happy to pay engineers more so they can buy this sort of thing themselves and we don't open the company to this sort of abuse. Then it's a known cost, and the engineers can decide for themselves if they want to spend that $30 on a meal or something else.
It can be a win for both sides for the employees to work an extra 30-90 minutes and have some team bonding and to feel like they’re getting a good deal. (Source: I did this for years at a place that comp’d dinner if you worked more than 8 hours AND past 6 PM; we’d usually get more than half the team staying for the “free” food.)
I have worked in places where the exact opposite of what you describe happens. As OP says, people just stop working at 6 and just start reading reddit or scrolling their phones. No team bonding and chat because everyone is wiped out from a hard day. Just people hanging around, grabbing their food when it arrives, and leaving.
We too had more than half the team staying for the "free" food, but they definitely didn't do much work whilst they were there.
I'm making the case that mandatory unpaid overtime is effectively wage theft. It is legal in the US because half of jobs there are "exempt" from the usual overtime protections. There's no ethical reason for that, just political ones.
At any rate, I think people who want to crack down on meal expenses out of a sense of justice should get at least as annoyed by employers taking advantage of their employees in technically allowed ways.
A better option is for leadership to enforce culture by reinforcing expectations and removing offending employees if need be to make sure that the culture remains intact. This is a time sink, without a doubt. For leadership to take this on, it has to believe that the unmeasurable benefit of a good company culture outweighs the drag on leadership's efficiency.
Company culture will always be actively eroded in any company, and part of the job of leadership is to enforce culture so that it can be a defining factor in the company's success for as long as possible.
If an employee or team is not putting in the effort desired, that's a separate issue and there are other administrative processes for dealing with that.
peanuts compared to their 500k TC
I do think a lot of this comment section is assuming $500K TC employees at employers with infinite cash to spend, though.
Exactly. I personally have never been in a meeting which I thought was absolutely necessary. Except maybe new fire regs.
Two, several tens of thousands are in the 5%-10% range. Hardly "peanuts". But I suppose you'll be happy to hear "no raise for you, that's just peanuts compared to your TC", right?
I paid a premium for my home height-adjustable desk because the frame and top are made in America, the veneer is much thicker than competitors, the motors and worm gears are reliable, and the same company makes coordinating office furniture.
The same company sells cheap imported desks too. Since my work area is next to the dining table in my open-plan apartment, I considered the better looks worth the extra money.
If someone's unstable motorized desk tips over and injures someone at the office, it's a big problem for the company.
A cheap desk might have more electrical problems. Potential fire risk.
Facilities has to manage furniture. If furniture is a random collection of different cheap desks people bought over the years they can't plan space without measuring them all. If something breaks they have to learn how to repair each unique desk.
Buying the cheapest motorized desk risks more time lost to fixing or replacing it. Saving a couple hundred dollars but then having the engineer lose part of a day to moving to a new desk and running new cables every 6 months while having facilities deal with disposal and installation of a new desk is not a good trade.
Alex St. John, Microsoft Windows 95 era: created DirectX annnnd also built an alien spaceship.
I dimly recalled it as a friend in the games division telling me about someone getting a 5 and a 1 review score in close succession.
Facts I could find (yes, I asked an LLM):
- 5.0 review: Moderately supported. St. John himself hosted a copy of his Jan 10, 1996 Microsoft performance review on his blog (the file listing still exists in archives). It reportedly shows a 5.0 rating, which in that era was the rare top-box mark.
- Fired a year later: Factual. In an open letter (published via GameSpot) he states he was escorted out of Microsoft on June 24, 1997, about 18 months after the 5.0 review.
- Judgment Day II alien spaceship party: Well documented as a plan. St. John's own account (quoted in Neowin, Gizmodo, and others) describes an H.R. Giger–designed alien-ship interior in an Alameda air hangar, complete with X-Files cast involvement and a Gates "head reveal" gag.
- Sunk cost before cancellation: Supported. St. John says the shutdown came "a couple of weeks" before the 1996 event date, after ~$4.3M had already been spent/committed (≈$1.2M MS budget + ≈$1.1M sponsors + additional sunk costs). Independent summaries repeat this figure ("in excess of $4 million").
So:
- 5.0 review — moderate evidence
- Fired 1997 — factual
- Alien spaceship build planned — factual
- ≈$4M sunk costs — supported by St. John's own retrospective and secondary reporting
It’s like your friend group and time choosing a place to eat. It’s not your friends, it’s the law of averages.
But also, when I tell one of my reports to spec and order himself a PC, there should be several controls in place.
First, I should give clear enough instructions that they know whether they should be spending around $600, $1,500, or $6,000.
Second, although my reports can freely spend ~$100 no questions asked, expenses in the $1,000+ region should require my approval.
Third, there is monitoring of where money is going; spending where the paperwork isn't in order gets flagged and checked. If someone with access to the company Amazon account gets an above-ground pool shipped to their home, you can bet there will be questions to be answered.