Intel Arc Pro B50 GPU Launched at $349 for Compact Workstations
Source: guru3d.com (Tech story, high profile) · Posted 4 months ago · Active 4 months ago
Key topics: GPU, Intel Arc, Workstation Graphics
Intel has launched the Arc Pro B50 GPU, a $349 compact workstation graphics card with 16GB VRAM, sparking discussion on its performance, features, and market positioning.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion. First comment: 44m after posting. Peak period: 68 comments in the 0-6h window. Average per period: 17.8. Comment distribution based on 160 loaded comments.
Key moments
- 01 Story posted: Sep 7, 2025 at 6:06 PM EDT (4 months ago)
- 02 First comment: Sep 7, 2025 at 6:51 PM EDT (44m after posting)
- 03 Peak activity: 68 comments in the 0-6h window, the hottest stretch of the conversation
- 04 Latest activity: Sep 11, 2025 at 2:58 AM EDT (4 months ago)
ID: 45162626 · Type: story · Last synced: 11/20/2025, 8:28:07 PM
Selected comments from the live Hacker News thread follow; read the primary article or the original thread for full context.
Also, do these support SR-IOV, as in handing slices of the GPU to virtual machines?
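(For anyone wanting to check on a Linux host: SR-IOV capability shows up in sysfs. A minimal sketch, assuming the standard kernel sysfs layout; whether the B50's driver actually exposes it is exactly the open question.)

    # Sketch: list PCI devices on a Linux host that advertise SR-IOV, and how many
    # virtual functions each supports. Paths are the standard kernel sysfs layout.
    from pathlib import Path

    for dev in Path("/sys/bus/pci/devices").iterdir():
        total_vfs = dev / "sriov_totalvfs"
        if total_vfs.exists():
            vendor = (dev / "vendor").read_text().strip()   # 0x8086 == Intel
            print(f"{dev.name} vendor={vendor} max_vfs={total_vfs.read_text().strip()}")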
Is HDMI seen as a “gaming” feature, or is DP seen as a “workstation” interface? Ultimately HDMI is a brand that commands higher royalties than DP, so I suspect this decision was largely driven by cost. I wonder what percentage of the target audience has HDMI-only displays.
(Note that some self-described “open” standards are not royalty-free, only RAND-licensed by somebody’s definition of “R” and “ND”. And some don’t have their text available free of charge, either, let alone have a development process open to all comers. I believe the only thing the phrase “open standard” reliably implies at this point is that access to the text does not require signing an NDA.
DisplayPort in particular is royalty-free—although of course with patents you can never really know—while legal access to the text is gated[2] behind a VESA membership with dues based on the company revenue—I can’t find the official formula, but Wikipedia claims $5k/yr minimum.)
[1] https://hackaday.com/2023/07/11/displayport-a-better-video-i...
[2] https://vesa.org/vesa-standards/
As someone who has toyed with OS development, including a working NVMe driver, that's not to be underestimated. I mean, it's an absurd idea, graphics is insanely complex. But documentation makes it theoretically possible... a simple framebuffer and 2d acceleration for each screen might be genuinely doable.
https://www.x.org/docs/intel/ACM/
[0] https://www.amazon.co.uk/ASUS-GT730-4H-SL-2GD5-GeForce-multi...
I assume you have to pay HDMI royalties for DP ports which support the full HDMI spec, but older HDMI versions were supersets of DVI, so you can encode a basic HDMI compatible signal without stepping on their IP.
Converting from DisplayPort to HDMI is trivial with a cheap adapter if necessary.
HDMI is mostly used on TVs and older monitors now.
Not cheap though. And also not 100% caveat-free.
I have a Level1Techs HDMI KVM and it's awesome, and I'd totally buy a DisplayPort one once it has built-in EDID cloners, but even at their super-premium price point, that's just not something they're willing to do yet.
1. https://www.store.level1techs.com/products/p/14-display-port...
Only now are DisplayPort 2 monitors coming out
Otherwise HDMI would have been dead a long time ago.
https://www.theregister.com/2024/03/02/hdmi_blocks_amd_foss/
> Is HDMI seen as a “gaming” feature
It's a TV content protection feature. Sometimes it degrades the signal so you feel like you're watching TV. I've had a monitor/machine combination that identified my monitor as a TV over HDMI and switched to YCbCr just because it wanted to, with assorted color bleed on red text.
I like to Buy American when I can, but it's hard to find out which fabs various CPUs and GPUs are made in. I read Kingston does some RAM here and Crucial some SSDs. Maybe the silicon is fabbed here, but everything I found is "assembled in Taiwan", which made me feel like I should get my dream machine sooner rather than later.
Apologies for the video link. But a recent pretty in depth comparison: https://youtu.be/kkf7q4L5xl8
I have a service that runs continuously and reencodes any videos I have into h265 and the iGPU barely even notices it.
I'll have to consider pros and cons with Ultra chips, thanks for the tip.
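(For reference, a background re-encode like the one described above is typically a thin wrapper around ffmpeg's Quick Sync encoder. A rough sketch with illustrative paths and settings, not the commenter's actual service:)

    # Sketch: batch re-encode videos to H.265/HEVC on an Intel iGPU via ffmpeg's
    # Quick Sync (QSV) encoder. Directories and quality settings are illustrative.
    import subprocess
    from pathlib import Path

    SRC = Path("/srv/videos/incoming")   # hypothetical input directory
    DST = Path("/srv/videos/hevc")       # hypothetical output directory
    DST.mkdir(parents=True, exist_ok=True)

    for src in SRC.glob("*.mp4"):
        out = DST / src.name
        if out.exists():
            continue
        subprocess.run(
            ["ffmpeg", "-hwaccel", "qsv", "-i", str(src),
             "-c:v", "hevc_qsv",            # hardware HEVC encode
             "-global_quality", "25",       # rough quality target
             "-c:a", "copy",                # leave audio untouched
             str(out)],
            check=True,
        )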
There really is no such thing as "buying American" in the computer hardware industry unless you are talking about the designs rather than the assembly. There are also critical parts of the lithography process that depend on US technology, which is why the US is able to enforce certain sanctions (and due to some alliances with other countries that own the other parts of the process).
Personally I think people get way too worked up about being protectionist when it comes to global trade. We all want to buy our own country's products over others but we definitely wouldn't like it if other countries stopped buying our exported products.
When Apple sells an iPhone in China (and they sure buy a lot of them), Apple is making most of the money in that transaction by a large margin, and in turn so are you since your 401k is probably full of Apple stock, and so are the 60+% of Americans who invest in the stock market. A typical iPhone user will give Apple more money in profit from services than the profit from the sale of the actual device. The value is really not in the hardware assembly.
In the case of electronics products like this, almost the entire value add is in the design of the chip and the software that is running on it, which represents all the high-wage work, and a whole lot of that labor in the US.
US citizens really shouldn't envy jobs where people sit at an electronics bench doing repetitive assembly work for 12 hours a day in a factory, or wish we had more of those jobs in our country. They should instead be focused on making higher education more available/affordable so that they stay on top of the economic food chain, where most/all citizens are doing high-value work, rather than making education expensive and begging foreign manufacturers to open satellite factories to employ our uneducated masses.
I think the current wave of populist protectionist ideology is essentially blaming the wrong causes for declining affordability and increasing inequality for the working class. Essentially, people think that bringing the manufacturing jobs back and reversing globalism will right the ship on income inequality, but the reality is that the reason equality was so good for Americans in the mid-century was that the wealthy were taxed heavily, European manufacturing was decimated in WW2, and labor was in high demand.
The above of course is all my opinion on the situation, and a rather long tangent.
EDIT: I did wonder what the closest thing to artisan silicon is, thought of the POWER9 CPUs, and found out those are made in the USA. The Talos II is also manufactured in the US: the IBM POWER9 processors are fabbed in New York, while the Raptor motherboards are made in Texas, which is also where their systems are assembled.
https://www.phoronix.com/review/power9-threadripper-core9
I randomly thought of paint companies as another example, with Sherwin-Williams and PPG having US plants.
The US is still the #2 manufacturer in the world, it's just a little less obvious in a lot of consumer-visible categories.
I’m pretty sure the US military market has been doing exactly this for decades already. The military budget is over twice the size of Apple’s revenue.
The CHIPS act is essentially doing the same kind of thing that helped Taiwan get so good at semiconductors in the first place. Whether it’s been as effective remains to be seen.
It clocks in at 1503.4 samples per second, behind the NVidia RTX 2060 (1590.93 samples / sec, released Jan 2019), AMD Radeon RX 6750 XT (1539, May 2022), and Apple M3 Pro GPU 14 cores (1651.85, Oct 2023).
Note that this perf comparison is just ray-tracing rendering, useful for games, but might give some clarity on performance comparisons with its competition.
>Overall the Intel Arc Pro B50 was at 1.47x the performance of the NVIDIA RTX A1000 with that mix of OpenGL, Vulkan, and OpenCL/Vulkan compute workloads both synthetic and real-world tests. That is just under Intel's own reported Windows figures of the Arc Pro B50 delivering 1.6x the performance of the RTX A1000 for graphics and 1.7x the performance of the A1000 for AI inference. This is all the more impressive when considering the Arc Pro B50 price of $349+ compared to the NVIDIA RTX A1000 at $420+.
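(Back-of-the-envelope on the figures quoted above, just to make the value proposition explicit; this is pure arithmetic on the quoted numbers, not an additional benchmark.)

    # Perf-per-dollar from the quoted Phoronix result: B50 ~1.47x the RTX A1000
    # on Linux, at $349 vs $420 list price.
    b50_perf, b50_price = 1.47, 349        # A1000 normalized to 1.0
    a1000_perf, a1000_price = 1.00, 420
    advantage = (b50_perf / b50_price) / (a1000_perf / a1000_price)
    print(f"B50 perf-per-dollar advantage over A1000: {advantage:.2f}x")  # ~1.77x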
Toss in a 5060 Ti into the compare table, and we're in an entirely different playing field.
There are reasons to buy the workstation NVidia cards over the consumer ones, but those mostly go away when looking at something like the new Intel. Unless one is in an exceptionally power-constrained environment, yet has room for a full-sized card (not SFF or laptop), I can't see a time the B50 would even be in the running against a 5060 Ti, 4060 Ti, or even 3060 Ti.
I seem to recall certain esoteric OpenGL things like lines being fast was a NVIDIA marketing differentiator, as only certain CAD packages or similar cared about that. Is this still the case, or has that software segment moved on now?
For me (not quite at the A1000 level, but just above -- still in the prosumer price range), a major one is ECC.
Thermals and size are a bit better too, but I don't see that as $500 better. I actually don't see (m)any meaningful reasons to step up to an Ax000 series if you don't need ECC, but I'd love to hear otherwise.
I don't get why there are people trying to twist this story or come up with strawmen like the A2000 or even the RTX 5000 series. Intel is coming into this market competitively, which as far as I know is a first, and it's also impressive.
Coming into the gaming GPU market had always been too ambitious a goal for Intel; they should have started by competing in the professional GPU market. It's well known that Nvidia and AMD have always been price gouging this market, so it's fairly easy to enter it competitively.
If they can enter this market successfully and then work their way up the food chain, that seems like a good way to recover from their initial fiasco.
We could just as well compare it to the slightly more capable RTX A2000, which was released more than 4 years ago. Either way, Intel is competing with the EoL Ampere architecture.
There are huge markets that do not care about SOTA performance metrics but need to get a job done.
With 16GB everybody will just call it another in the long list of Intel failures.
Given the high demand for graphics cards, is this a plausible scenario?
Given how young and volatile this domain still is, it doesn't seem unreasonable to be wary of it. Big players (Google, OpenAI and the like) are probably pouring tons of money into trying to do exactly that.
But Intel is still lost in its hubris, and still thinks it's a serious player and "one of the boys", so it doesn't seem like they want to break the line.
This makes it mysterious since clearly CUDA is an advantage, but higher VRAM lower cost cards with decent open library support would be compelling.
Plenty of people use eg 2, 4 or 6 3090s to run large models at acceptable speeds.
Higher VRAM at decent (much faster than DDR5) speeds will make cards better for AI.
Intel and even AMD can’t compete or aren’t bothering. I guess we’ll see how the glued 48GB B60 will do, but that’s still a relatively slow GPU regardless of memory. Might be quite competitive with Macs, though.
People actually use loaded out M-series macs for some forms of AI training. So, total memory does seem to matter in certain cases.
Are there any performance bottlenecks with using 2 cards instead of a single card? I don't think any of the consumer Nvidia cards use NVLink anymore, or at least they haven't for a while now.
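(A short answer in code form: without NVLink, the usual approach is to shard layers across cards and let activations cross PCIe at the split points. A minimal sketch with the Hugging Face stack; the model name and dtype are illustrative, and it assumes the accelerate package is installed.)

    # Sketch: spread a model that doesn't fit on one GPU across all visible GPUs.
    # Only activations at the layer boundaries cross PCIe, so the interconnect is
    # rarely the bottleneck for single-stream inference.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "meta-llama/Llama-3.1-70B-Instruct"   # illustrative large model
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",            # place layers across GPU 0..N-1
        torch_dtype=torch.float16,
    )
    inputs = tok("The Arc Pro B50 is", return_tensors="pt").to(model.device)
    print(tok.decode(model.generate(**inputs, max_new_tokens=32)[0]))

With this kind of pipeline split, each token only exercises one GPU at a time, so extra cards mostly buy capacity rather than speed; tensor-parallel runtimes trade more PCIe traffic for real parallelism, which is where the lack of NVLink starts to show.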
- fewer people care about VRAM than HN commenters give the impression of
- VRAM is expensive and wouldn't make such cards profitable at the HN-desired price points
Businesswise? Because Intel management are morons. And because AMD, like Nvidia, don't want to cannibalize their high end.
Technically? "Double the RAM" is the most straightforward (that doesn't make it easy, necessarily ...) way to differentiate as it means that training sets you couldn't run yesterday because it wouldn't fit on the card can now be run today. It also takes a direct shot at how Nvidia is doing market segmentation with RAM sizes.
Note that "double the RAM" is necessary but not sufficient.
You need to get people to port all the software to your cards to make them useful. To do that, you need to have something compelling about the card. These Intel cards have nothing compelling about them.
Intel could also make these cards compelling by cutting the price in half or dropping two dozen of these cards on every single AI department in the US for free. Suddenly, every single grad student in AI will know everything about your cards.
The problem is that Intel institutionally sees zero value in software and is incapable of making the moves they need to compete in this market. Since software isn't worth anything to Intel, there is no way to justify any business action that isn't just "sell (kinda shitty) chips".
Prices from Newegg on 16GB+ consumer cards, sold by Newegg and in stock.
My first software job was at a place doing municipal architecture. The modelers had and needed high end GPUs in addition to the render farm, but plenty of roles at the company simply needed anything with better than what the Intel integrated graphics of the time could produce in order to open the large detailed models.
In these roles the types of work would include things like seeing where every pipe, wire, and plenum for a specific utility or service was in order to plan work between a central plant and a specific room. Stuff like that doesn’t need high amounts of VRAM since streaming textures in worked fine. A little lag never hurt anyone here as the software would simply drop detail until it caught up. Everything was pre-rendered so it didn’t need large amounts of power to display things. What did matter was having the grunt to handle a lot of content and do it across three to six displays.
Today I’m guessing the integrated chips could handle it fine but even my 13900K’s GPU only does DisplayPort 1.4 and up to only three displays on my motherboard. It should do four but it’s up to the ODMs at that point.
For a while Matrox owned a great big slice of this space but eventually everyone fell to the wayside except NVidia and AMD.
I guess it's a boon for Intel that NVidia repeatedly shoots their own workstation GPUs in the foot...
I.e. maybe Nvidia say "if we're going to fuse some random number of cores such that this is no longer a 3050, then let's not only fuse the damaged cores, but also do a long burn-in pass to observe TDP, and then fuse the top 10% of cores by measured TDP."
If they did that, it would mean that the resulting processor would be much more stable under a high duty cycle load, and so likely to last much longer in an inference-cluster deploy environment.
And the extra effort (= bottlenecking their supply of this model at the QC step) would at least partially justify the added cost. Since there'd really be no other way to produce a card with as many FLOPS/watt-dollar, without doing this expensive "make the chip so tiny it's beyond the state-of-the-art to make it stably, then analyze it long enough to precision-disable everything required to fully stabilize it for long-term operation" approach.
Such an appliance could plug into literally any modern computer — even a laptop or NUC. (And for inference, "running on an eGPU connected via Thunderbolt to a laptop" would actually work quite well; inference doesn't require much CPU, nor have tight latency constraints on the CPU<->GPU path; you mostly just need enough arbitrary-latency RAM<->VRAM DMA bandwidth to stream the model weights.)
(And yeah, maybe your workstation doesn't have Thunderbolt, because motherboard vendors are lame — but then you just need a Thunderbolt PCIe card, which is guaranteed to fit more easily into your workstation chassis than a GPU would!)
https://www.gigabyte.com/Graphics-Card/GV-N5090IXEB-32GD
The thing you linked is just a regular Gigabyte-branded 5090 PCIe GPU card (that they produced first, for other purposes; and which does fit into a regular x16 PCIe slot in a standard ATX chassis), put into a (later-designed) custom eGPU enclosure. The eGPU box has some custom cooling [that replaces the card's usual cooling] and a nice little PSU — but this is not any more "designing the card around the idea it'll be used in an enclosure" than what you'd see if an aftermarket eGPU integrator built the same thing.
My point was rather that, if an OEM [that produces GPU cards] were to design one of their GPU cards specifically and only to be shipped inside an eGPU enclosure that was designed together with it — then you would probably get higher perf, with better thermals, at a better price(!), than you can get today from just buying standalone peripheral-card GPU (even with the cost of the eGPU enclosure and the rest of its components taken into account!)
Where by "designing the card and the enclosure together", that would look like:
- the card being this weird nonstandard-form-factor non-card-edged thing that won't fit into an ATX chassis or plug into a PCIe slot — its only means of computer connection would be via its Thunderbolt controller
- the eGPU chassis the card ships in, being the only chassis it'll comfortably live in
- the card being shaped less like a peripheral card and more like a motherboard, like the ones you see in embedded industrial GPU-SoC [e.g. automotive LiDAR] use-cases — spreading out the hottest components to ensure nothing blocks anything else in the airflow path
- the card/board being designed to expose additional water-cooling zones — where these zones would be pointless to expose on a peripheral card, as they'd be e.g. on the back of the card, where the required cooling block would jam up against the next card in the slot-array
...and so on.
It's the same logic that explains why those factory-sealed Samsung T-series external NVMe pucks can cost less than the equivalent amount of internal m.2 NVMe. With m.2 NVMe, you're not just forced into a specific form-factor (which may not be electrically or thermally optimal), but you're also constrained to a lowest-common-denominator assumption of deployment environment in terms of cooling — and yet you have to ensure that your chips stay stable in that environment over the long term. Which may require more-expensive chips, longer QC burn-in periods, etc.
But when you're shipping an appliance, the engineering tolerances are the tolerances of the board-and-chassis together. If the chassis of your little puck guarantees some level of cooling/heat-sinking, then you can cheap out on chips without increasing the RMA rate. And so on. This can (and often does) result in an overall-cheaper product, despite that product being an entire appliance vs. a bare component!
The hottest one on the consumer market
> The eGPU box has some custom cooling
Custom liquid cooling to tame the enormous TDP
> and a nice little PSU
Yeah, an 850W one.
>were to design one of their GPU cards specifically and only to be shipped inside an eGPU enclosure that was designed together with it
And why would they do so?
Do you understand that it would drive the price up a lot?
> at a better price(!)
With lower production/sales numbers than a regular 5090 GPU? No way. Economics 101.
> the card being this weird nonstandard-form-factor non-card-edged thing
Even if we skip the small-series nuances (which make this a non-starter on price alone), there is little that some other 'nonstandard form factor' can do for the cooling - you still need the RAM near the chip... and that's all. You've just designed the same PCIe card for the sake of making it incompatible.
> won't ... plug into a PCIe slot
Again - why? What would this provide that the current PCIe GPU lacks? BTW you still need the 16 lanes of PCIe, and you know which connector provides the most useful and cost-effective way to get them? A regular x16 PCIe connector. The one you ditched.
> the card being shaped less like a peripheral card and more like a motherboard
You don't need to 're-design it from scratch'; it's enough not to be constrained by a 25cm length limit to get proper airflow along a properly oriented radiator.
> why those factory-sealed Samsung T-series external NVMe pucks
Lol: https://www.zdnet.com/article/why-am-i-taking-this-samsung-t...
That's a bold claim when their acceleration software (IPEX) is barely maintained and incompatible with most inference stacks, and their Vulkan driver is far behind it in performance.
Why would you bother with any Intel product with an attitude like that, gives zero confidence in the company. What business is Intel in, if not competing with Nvidia and AMD. Is it giving up competing with AMD too?
In many cases where 32GB won't be enough, 48 wouldn't be enough either.
Oh and the 5090 is cheaper.
Foundry business. Per the latest report on discrete graphics market share, Nvidia has 94%, AMD 6%, and Intel 0%.
I may still have another 12 months to go. But in 2016 I made a bet against Intel engineers on Twitter and offline suggesting GPU is not a business they want to be in, or at least too late. They said at the time they will get 20% market share minimum by 2021. I said I would be happy if they did even 20% by 2026.
Intel is also losing money; they need cash flow to compete in the foundry business. I have long argued they should have cut off the GPU segment when Pat Gelsinger arrived; it turns out Intel bound themselves to GPUs by all the government contracts and supercomputers they promised to deliver. Now that they have delivered all or most of it, they will need to think about whether to continue or not.
Unfortunately, unless the US points guns at TSMC, I just don't see how Intel will be able to compete, as Intel needs to be in a leading-edge position in order to command the margins required for Intel to function. Right now, in terms of density, Intel 18A is closer to TSMC N3 than N2.
If Nvidia gets as complacent as Intel became when it had the market share in the CPU space, there is opportunity for Intel, AMD and others in Nvidia's margins.
They may not have to, frankly, depending on when China decides to move on Taiwan. It's useless to speculate—but it was certainly a hell of a gamble to open a SOTA (or close to it—4 nm is nothing to sneeze at) fab outside of the island.
That's the correct call in my opinion. Training is far more complex and will span multiple data centers soon. Intel is too far behind. Inference is much simpler and likely a bigger market going forward.
That's how you get things like good software support in AI frameworks.
Inference is vastly simpler than training or scientific compute.
I want hardware that I can afford and own, not AI/datacenter crap that is useless to me.
[1] https://www.maxsun.com/products/intel-arc-pro-b60-dual-48g-t...
Kind of. It's more like two 24GB B60s in a trenchcoat. It connects to one slot, but it's two completely separate GPUs and requires the board to support PCIe bifurcation.
And lanes. My board has two PCIe x16 slots fed by the CPU, but if I use both they'll only get x8 lanes each. Thus if I plugged two of these in there, I'd still only have two working GPUs, not four.
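(If you want to see what your slots actually negotiated, the link width and speed are readable from sysfs on Linux; a small sketch:)

    # Sketch: report negotiated vs. maximum PCIe link width for display-class
    # devices, to spot an x16 slot silently running at x8.
    from pathlib import Path

    def read(dev, name):
        p = dev / name
        return p.read_text().strip() if p.exists() else "n/a"

    for dev in Path("/sys/bus/pci/devices").iterdir():
        if not read(dev, "class").startswith("0x03"):   # 0x03xxxx = display controllers
            continue
        print(dev.name,
              "width", read(dev, "current_link_width"), "/", read(dev, "max_link_width"),
              "speed", read(dev, "current_link_speed"))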
The biggest Deepseek V2 models would just fit, as would some of the giant Meta open source models. Those have rather pleasant performance.
In theory, how feasible is that?
I feel like the software stack might be like a Jenga tower. And PCIe limitations might hit pretty hard.
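(A first-order feasibility check is just parameter count times bytes per weight, plus some headroom; the parameter counts, card counts and overhead factor below are illustrative assumptions, not measurements.)

    # Rough VRAM sizing: weights at a given quantization width plus ~20% headroom
    # for KV cache and buffers. Parameter counts and card counts are illustrative.
    def vram_needed_gb(params_billions, bits_per_weight, overhead=1.2):
        return params_billions * (bits_per_weight / 8) * overhead

    budget_gb = 4 * 48                      # e.g. four 48GB devices
    for params, bits in [(236, 4), (236, 8), (70, 16)]:
        need = vram_needed_gb(params, bits)
        fits = "fits" if need <= budget_gb else "does not fit"
        print(f"{params}B @ {bits}-bit: ~{need:.0f} GB -> {fits} in {budget_gb} GB")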
> "Because 48GB is for spreadsheets, feed your rendering beast with a buffet of VRAM."
Edit: I guess it must just be a bad translation or sloppy copywriting, and they mean it's not just for spreadsheets rather than that it is...
Therefore I can install Proxmox VE and run multiple VMs, assigning a vGPU to each of them for video transcoding (IP camera NVR), AI and other applications.
https://github.com/Upinel/PVE-Intel-vGPU
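(Under the hood, SR-IOV vGPU setups like this typically boil down to the standard sysfs flow: create some number of virtual functions on the GPU, then pass each one through to a VM. A hedged sketch of that step; the PCI address and VF count are placeholders, and it needs root plus a driver that actually supports SR-IOV.)

    # Sketch: create SR-IOV virtual functions for a GPU so each VM gets its own VF.
    # PCI address and VF count are placeholders; run as root.
    from pathlib import Path

    pci_addr = "0000:03:00.0"                # hypothetical GPU address
    dev = Path("/sys/bus/pci/devices") / pci_addr

    total = int((dev / "sriov_totalvfs").read_text())
    count = min(4, total)                    # ask for up to 4 VFs

    (dev / "sriov_numvfs").write_text("0")   # reset before changing the count
    (dev / "sriov_numvfs").write_text(str(count))

    vfs = sorted(p.resolve().name for p in dev.glob("virtfn*"))
    print(f"created {len(vfs)} VFs:", vfs)   # these addresses get passed through to the VMs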
I would happily buy 96 GB for $3490, but this makes very little sense.
107 more comments available on Hacker News