Intel's E2200 "Mount Morgan" IPU at Hot Chips 2025
Posted 4 months ago · Active 4 months ago
chipsandcheese.com · Tech · story
calm · mixed
Debate: 60/100
Key topics
Intel
IPU
Datacenter Hardware
Semiconductor Industry
Intel's E2200 'Mount Morgan' IPU, a 24-core Neoverse N2 design built on a TSMC process, was discussed at Hot Chips 2025, with commenters weighing its potential against concerns about Intel's strategy and software support.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
- First comment: 1h after posting
- Peak period: 9 comments in the 2-4h window
- Avg per period: 3.8 comments
- Comment distribution: 34 data points (based on 34 loaded comments)
Key moments
- 01 Story posted: Sep 10, 2025 at 6:21 PM EDT (4 months ago)
- 02 First comment: Sep 10, 2025 at 7:31 PM EDT (1h after posting)
- 03 Peak activity: 9 comments in the 2-4h window (hottest window of the conversation)
- 04 Latest activity: Sep 11, 2025 at 4:19 PM EDT (4 months ago)
ID: 45204838 · Type: story · Last synced: 11/20/2025, 12:47:39 PM
Additionally, the second and third rounds of desktop parts released on 10nm (aka "Intel 7") are now known to have pushed clocks and voltages somewhat beyond the limits of the process, leading to embarrassing reliability problems and microcode updates that hurt performance. Intel has squeezed everything they can out of 10nm and have mostly put it behind them, so talking about it as if they only recently ramped production gets where they are in the lifecycle completely wrong.
Color me confused
(I miss having these kinds of convos on twitter as networkservice ;)
https://static.googleusercontent.com/media/research.google.c...
Conventionally this is done in software with a hypervisor that emulates network devices for VMs (virtio/vmxnet3, etc.) and does some sort of network encapsulation (VLAN, VXLAN, etc.). Similar things are done for virtual block storage (virtio-blk, NVMe, etc.) to attach to remote drives.
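To make that software path concrete, here is a minimal Python sketch (not from the article or the thread) of the VXLAN encapsulation step: prepend an 8-byte VXLAN header to each inner Ethernet frame and carry it over UDP port 4789. This is the kind of per-packet work a hypervisor vswitch does today and an IPU is meant to offload. The VNI and remote VTEP address below are placeholders.

```python
# Hypothetical sketch of software VXLAN encapsulation (RFC 7348), the kind of
# per-packet work a hypervisor vswitch does and an IPU/DPU offloads.
import socket
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN port

def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VXLAN header to an inner Ethernet frame.

    Header layout: flags (1 byte, 'I' bit set), 3 reserved bytes,
    24-bit VNI, 1 reserved byte. The result becomes the UDP payload;
    the outer Ethernet/IP/UDP headers come from the sending stack.
    """
    flags = 0x08  # VNI-present flag
    header = struct.pack("!B3s3sB", flags, b"\x00" * 3, vni.to_bytes(3, "big"), 0)
    return header + inner_frame

if __name__ == "__main__":
    inner = bytes(64)                      # placeholder inner Ethernet frame
    payload = vxlan_encapsulate(inner, vni=5001)
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # 192.0.2.10 is a placeholder remote VTEP address (TEST-NET-1)
    sock.sendto(payload, ("192.0.2.10", VXLAN_UDP_PORT))
```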
If the IaaS clients are high bandwidth or running their own virtualization stack, the infrastructure provider has nowhere to put this software. You can do the infrastructure network and storage isolation on the network switches with extra work but then the termination of the networking and storage has to be done in cooperation with the clients (and you can't trust them to do it right).
Here, the host just sees PCI-attached network interfaces and directly attached NVMe devices that pop up as defined by the infrastructure. These cards are the compromise where you let everyone have bare metal but keep your software-defined network and storage. In advanced cases you could even dynamically shape traffic to shift bandwidth between network and storage priorities.
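As a hedged illustration of that host-side view (a minimal sketch assuming a Linux bare-metal host, not anything from the article): the tenant only sees ordinary PCI functions, and enumerating them looks the same whether local hardware or an IPU is behind them.

```python
# Hypothetical sketch, assuming a Linux bare-metal host: walk sysfs and list
# the PCI functions a tenant would see -- plain network controllers and NVMe
# devices, regardless of whether they are backed by local hardware or an IPU.
from pathlib import Path

PCI_DEVICES = Path("/sys/bus/pci/devices")

def pci_class(dev: Path) -> int:
    # sysfs exposes the class code as a hex string, e.g. "0x010802"
    return int((dev / "class").read_text().strip(), 16)

def tenant_visible_devices():
    nics, nvme = [], []
    for dev in sorted(PCI_DEVICES.iterdir()):
        cls = pci_class(dev)
        if cls >> 16 == 0x02:        # base class 0x02: network controller
            nics.append(dev.name)
        elif cls == 0x010802:        # mass storage / NVM controller / NVMe prog-if
            nvme.append(dev.name)
    return nics, nvme

if __name__ == "__main__":
    nics, nvme = tenant_visible_devices()
    print("network interfaces:", nics)   # may be functions presented by an IPU
    print("nvme controllers:", nvme)     # may be remote storage surfaced as NVMe
```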
Whether or not that's a good thing, well, people have their opinions, but they're considered a national security necessity.
https://en.wikipedia.org/wiki/William_Knox_D%27Arcy