Rethinking the Linux Cloud Stack for Confidential VMs
Posted 5 months ago · Active 4 months ago
lwn.net · Tech story
Sentiment: skeptical / mixed
Debate: 80/100
Key topics
Confidential Computing
Cloud Security
Trusted Execution Environments
The article discusses rethinking the Linux cloud stack for confidential VMs, sparking a debate among commenters about the feasibility and effectiveness of confidential computing in the cloud.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 1h after posting
- Peak period: 25 comments in the 6-12h window
- Avg per period: 8 comments
- Comment distribution: 48 data points (based on 48 loaded comments)
Key moments
- 01 Story posted: Aug 23, 2025 at 7:39 AM EDT (5 months ago)
- 02 First comment: Aug 23, 2025 at 9:07 AM EDT (1h after posting)
- 03 Peak activity: 25 comments in 6-12h, the hottest window of the conversation
- 04 Latest activity: Aug 26, 2025 at 3:43 PM EDT (4 months ago)
ID: 44995234 · Type: story · Last synced: 11/20/2025, 4:41:30 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
https://www.atlanticcouncil.org/blogs/geotech-cues/how-the-c...
It only fools people who want to be fooled, or who genuinely have no idea.
Confidential computing is trying to solve the very problem you are worried about. It is a way of providing compute as a service without the customer having to blindly trust the compute provider. It moves the line from "the host can do anything it wants" to "we're screwed if they are collaborating with Intel to bake a custom backdoor into their CPUs".
To me that sounds like a very reasonable goal. Go much beyond that, and the only plausible attacker is going to be the kind of people who'll simply drag you to a black site and apply the big wrench until you start divulging encryption keys.
is a kinda insane argument even at a surface level
Do all servers have debug back doors? Of course they do. Every piece of hardware has some form of JTAG debugging that can bypass all aspects of security and magic math, no matter what fancy proprietary name Stan the car salesman pushes. To access those debugging features they have to physically access my servers, and that is not going to happen.
It is essentially by definition more secure than a VM anywhere.
I wouldn't "fully" trust it without going on-prem though. But trust isn't binary either; container < VM < hosted machine < on-prem machine. That's all there is to this.
In many ways, incident detection and automated-recovery is more important than casting your servers in concrete.
Emulated VM can create read-only signed backing images, and thus may revert/monitor states. RancherVM is actually pretty useful when you dig into the architecture.
Best policy is to waste as much of the irrational adversary's time and money as possible, and interleave tantalizing payloads of costly project failures. Adversaries eventually realize the lame prize is just not worth the effort, or steal things that will ultimately cost them later. =3
https://en.wikipedia.org/wiki/Xbox_system_software#System
This was nice as a developer because we were not forced to patch our games when the overlay or underlying operating system of the console changed. In fact, on The Division 1 we shipped with a patched/modified version of the SDK; this wasn't possible on PlayStation.
Consequently, while the Xbox was marginally faster in a hardware sense, it was slower in reality. It even had the advantage of us using native rendering SDKs (PlayStation's OpenGL "with additions" was very much a bolted-on second-class citizen), and still we had higher quality and more consistent frame times on PlayStation.
No free lunch.
However, I feel that “confidential computing” is some kind of story to justify something that’s not possible: keep data ‘secure’ while running code on hardware maintained by others.
Any kind of encryption means that there is a secret somewhere and if you have control over the stack below the VM (hypervisor/hardware) you’ll be able to read that secret and defeat the encryption.
Maybe I’m missing something, though I believe that if the data is critical enough, it’s required to have 100% control over the hardware.
Now go buy an Oxide rack (no, I didn't invest in them).
The CPU itself can attest that it is running your code and that your dedicated slice of memory is encrypted using a key inaccessible to the hypervisor. Provided you still trust AMD/Intel to not put backdoors into their hardware, this allows you to run your code while the physical machine is in possession of a less-trusted party.
It's of course still not going to be enough for the truly paranoid, but I think it provides a neat solution for companies with security needs which can't be met via regular cloud hosting.
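The attestation flow sketched above can be illustrated with a toy verifier. This is emphatically not the real SEV-SNP/TDX API: a genuine report is signed by a CPU-fused key chaining up to the vendor's root certificate, whereas here an HMAC key merely stands in for that chain, and all names are made up for illustration:

```python
# Toy sketch of remote attestation: the CPU binds a launch measurement
# to a verifier-supplied nonce and signs it; the tenant checks freshness,
# the signature, and the expected measurement. HMAC stands in for the
# vendor's certificate chain (an assumption, not the real mechanism).
import hashlib
import hmac
import os

VENDOR_KEY = b"stand-in-for-cpu-fused-attestation-key"

def cpu_sign_report(measurement: bytes, nonce: bytes) -> dict:
    """What the (trusted) CPU does: sign measurement || nonce."""
    body = measurement + nonce
    return {
        "measurement": measurement.hex(),
        "nonce": nonce.hex(),
        "sig": hmac.new(VENDOR_KEY, body, hashlib.sha256).hexdigest(),
    }

def verify_report(report: dict, expected_measurement: bytes, nonce: bytes) -> bool:
    """What the tenant does before releasing secrets to the VM."""
    if bytes.fromhex(report["nonce"]) != nonce:
        return False  # stale or replayed report
    body = bytes.fromhex(report["measurement"]) + nonce
    good_sig = hmac.new(VENDOR_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(report["sig"], good_sig):
        return False  # not signed by the vendor-rooted key
    return bytes.fromhex(report["measurement"]) == expected_measurement

# Usage: tenant picks a fresh nonce, host returns the CPU-signed report.
expected = hashlib.sha256(b"my-vm-image").digest()
nonce = os.urandom(16)
assert verify_report(cpu_sign_report(expected, nonce), expected, nonce)

# A host that launched a tampered image cannot produce a passing report.
tampered = hashlib.sha256(b"backdoored-image").digest()
assert not verify_report(cpu_sign_report(tampered, nonce), expected, nonce)
```

The point of the nonce is that the hypervisor cannot replay an old report from an honest launch; the point of the vendor-rooted signature is that it cannot forge a new one.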
The threat model for these technologies can also sometimes be sketchy (lack of side-channel protection for Intel SGX, or lack of integrity verification for AMD SEV, for example).
The code running this validation itself runs on hardware I may not trust.
It doesn’t make any sense to me to trust this.
If you control the hardware, you trust them blindly.
[edit: Took out the host bios, it's not part of the chain of trust, clarified it's only the host CPU firmware you care about]
AMD and Intel have both certainly had a bunch of serious security-relevant bugs like Spectre.
Even when running on bare metal, I think the concept of measurements and attestations that attempt to prove the machine hasn't been tampered with is valuable, unless perhaps you also have direct physical control (e.g. it's in a server room in your own building).
Looking forward to public clouds maturing their support for Nvidia's confidential computing extensions as that seems like one of the bigger gaps remaining
Yes, there are degrees of risk, and you can pretend that the risks of third parties running hardware for you are so reduced/mitigated by "confidential computing" that it's "secure enough".
I understand things can be a trade-off. Yet I still feel 'confidential computing' is an elaborate justification that decision makers can point to, to keep the status quo and even do more things in the cloud.
Ultimately it's harder to get multiple independent parties to collude than a single entity, and for many threat models that's enough.
Whether today's solutions are particularly good at delivering this, I don't know (slides linked in another comment suggest not so good), but I'm glad people are dedicating effort to trying to figure it out
Read the Apple docs - they are very well written and accessible for the average HN reader.
For example, a direct link to his keynote slides from ESA 3S conference last year (PDF): https://indico.esa.int/event/528/attachments/5988/10212/Keyn...
Their timeline stops at 2023, though, and we're in the back half of 2025 now: has anything changed significantly in the past couple of years? (I genuinely don't know.)
Neither the CPU nor Secure Boot has a reliable way to tell if the hardware was modded to allow bus snooping, or to fake a crash that keeps the memory on a refresh loop.
Don't put things in the cloud if your threat model doesn't allow you to trust the cloud provider, or whoever has the power to compel your cloud provider to do things.
Even then, many secure enclaves have been compromised by people with enough time and motivation.
To me, those trust boundaries are in the same place.