New Attacks Are Diluting Secure Enclave Defenses From Nvidia, AMD, and Intel
Posted 2 months ago · Active 2 months ago
arstechnica.com · Tech · story
controversial · mixed
Debate: 80/100
Key topics
Hardware Security
Secure Enclaves
DRM
TEE
New physical attacks are compromising secure enclave defenses in Nvidia, AMD, and Intel chips, sparking debate about the purpose and implications of these security features.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 10m after posting
Peak period: 43 comments in 0-6h
Avg per period: 6.6
Comment distribution: 53 data points (based on 53 loaded comments)
Key moments
1. Story posted: Oct 29, 2025 at 9:44 AM EDT (2 months ago)
2. First comment: Oct 29, 2025 at 9:54 AM EDT (10m after posting)
3. Peak activity: 43 comments in 0-6h (the hottest window of the conversation)
4. Latest activity: Nov 1, 2025 at 8:13 AM EDT (2 months ago)
ID: 45746753 · Type: story · Last synced: 11/20/2025, 3:50:08 PM
If an attacker with time and resources has physical access, you are doomed.
These things are often used because of contractual requirements. Mainstream media, including video games, is often contractually protected: you must not let it run or play on any device without sufficient hardware protections. So vendors have to include these protection systems even if they don't want to. If the systems were useless, this might end.
You might have mistaken it for, say, Intel ME and the AMD equivalent.
https://www.netspi.com/blog/executive-blog/hardware-and-embe...
https://github.com/ProjectLOREM/RayVLite
https://media.ccc.de/v/25c3-2896-en-chip_reverse_engineering
https://www.youtube.com/watch?v=Pp4TPQVbxCQ
Therefore requiring physical access is still low complexity in this context.
With that said, I'd rather see it broken than not, considering it's mostly used for negative stuff, and it isn't open enough to evaluate if it actually is secure enough.
In general, non-Android non-ChromeOS Linux is not good at this sort of thing: half a dozen sandboxing frameworks exist, but none of them are particularly secure.
Also, suppose you want to load an obscure kernel module that reads an obscure filesystem format. How do you sandbox the module?
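For readers who have not used the frameworks alluded to above, here is a minimal sketch of that style of user-space sandboxing, using bubblewrap (bwrap) driven from Python. The bind paths and flags are illustrative and will vary by distribution, and nothing here speaks to how much protection the sandbox actually provides.

```python
# Minimal sketch: run an untrusted binary under bubblewrap (bwrap),
# one of the user-space sandboxing frameworks mentioned in the thread.
# Paths and flags are illustrative; adjust for your distro.
import subprocess

def run_sandboxed(cmd):
    bwrap = [
        "bwrap",
        "--ro-bind", "/usr", "/usr",       # read-only system binaries/libs
        "--symlink", "usr/lib64", "/lib64",
        "--symlink", "usr/bin", "/bin",
        "--proc", "/proc",                 # fresh /proc
        "--dev", "/dev",                   # minimal /dev
        "--tmpfs", "/tmp",                 # private /tmp
        "--unshare-all",                   # new PID/net/IPC/user namespaces
        "--die-with-parent",
    ]
    return subprocess.run(bwrap + cmd, check=False)

if __name__ == "__main__":
    run_sandboxed(["/usr/bin/id"])
```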
There are no frameworks that use the secure enclave for this purpose either. Its purpose is copyright protection and preventing the user from removing features like advertisements and telemetry, not making your system safer.
> Also, suppose you want to load an obscure kernel module that reads an obscure filesystem format. How do you sandbox the module?
You should use microkernels.
Of course, the obvious solution is to not run malware. Android's need for security partly comes from the fact that the primary repository/store distributes tons of dubious code that it grants network access and keeps up to date. If you stick to e.g. F-Droid and turn off automatic updates, you don't find yourself in this adversarial position.
Like I said, the Android team does not think so. Nor does the ChromeOS team, which uses SELinux to sandbox the browser, something no other non-Android Linux distro does (except possibly secureblue, which sadly almost no one uses).
Not only that; it has many purposes. I'm also the administrator of my computer, and some things I want to be unchangeable by software unless I myself unlock them; for example, I don't want anyone to be able to boot or install other OSes than the ones I've installed myself. The secure enclave and secure boot are perfect for this: even if my computer gets malware, it won't be able to access them, and even if someone gets physical access to my computer, they won't be able to boot their OS from a USB.
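As a concrete illustration of the Secure Boot side of that setup, the sketch below checks whether the firmware currently enforces Secure Boot on a Linux machine by reading the standard SecureBoot variable from efivarfs. The GUID is the EFI global-variable GUID; the script is illustrative only.

```python
# Sketch: check whether UEFI Secure Boot is enforced on a Linux machine
# by reading the SecureBoot variable from efivarfs. The first 4 bytes of
# an efivarfs file are the variable attributes; the payload follows.
from pathlib import Path

EFI_GLOBAL_GUID = "8be4df61-93ca-11d2-aa0d-00e098032b8c"
VAR = Path(f"/sys/firmware/efi/efivars/SecureBoot-{EFI_GLOBAL_GUID}")

def secure_boot_enabled() -> bool | None:
    """Return True/False, or None if the system did not boot via UEFI."""
    try:
        data = VAR.read_bytes()
    except FileNotFoundError:
        return None
    # data[0:4] = attributes (little-endian u32), data[4] = 0 or 1
    return len(data) >= 5 and data[4] == 1

if __name__ == "__main__":
    state = secure_boot_enabled()
    print({True: "Secure Boot: enabled",
           False: "Secure Boot: disabled",
           None: "Not a UEFI boot (or efivarfs not mounted)"}[state])
```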
Any feature controlled by the owner of the computer is good; features controlled by anyone else like the manufacturer can be bad. And note that in this viewpoint, leasing makes you temporary owner.
But also: TPMs could be used to prevent evil maid attacks and to make it uneconomical for thieves who stole your device to also steal your data. They make it possible for devices to remotely attest to their owners that the OS has not been compromised, which is relevant to enterprise IT environments. There are a lot of good uses for this technology; we just need to solve the political problems of aggressive copyright, Tivoization, etc.
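To make the evil-maid point concrete, here is a hedged sketch of PCR-sealing with the tpm2-tools CLI (assumed installed), driven from Python: a small secret is sealed so the TPM will only release it while the measured-boot PCR values match the ones recorded at seal time. File names, the PCR selection, and the secret file are illustrative, not a prescribed setup.

```python
# Sketch: seal a small secret to the TPM so it can only be unsealed while
# the machine's measured-boot state (selected PCRs) matches today's values.
# Assumes the tpm2-tools CLI is installed; file names are illustrative.
import subprocess

PCRS = "sha256:0,7"  # firmware + Secure Boot state; choose per threat model

def run(*cmd):
    subprocess.run(cmd, check=True)

def seal(secret_path: str):
    # Primary key under the owner hierarchy.
    run("tpm2_createprimary", "-C", "o", "-c", "primary.ctx")
    # Build a policy digest bound to the current PCR values (trial session).
    run("tpm2_startauthsession", "-S", "trial.ctx")
    run("tpm2_policypcr", "-S", "trial.ctx", "-l", PCRS, "-L", "pcr.policy")
    run("tpm2_flushcontext", "trial.ctx")
    # Seal the secret (must be small, roughly <=128 bytes) under that policy.
    run("tpm2_create", "-C", "primary.ctx", "-L", "pcr.policy",
        "-i", secret_path, "-u", "seal.pub", "-r", "seal.priv")

def unseal() -> bytes:
    run("tpm2_load", "-C", "primary.ctx",
        "-u", "seal.pub", "-r", "seal.priv", "-c", "seal.ctx")
    run("tpm2_startauthsession", "--policy-session", "-S", "unseal.ctx")
    run("tpm2_policypcr", "-S", "unseal.ctx", "-l", PCRS)
    out = subprocess.run(
        ["tpm2_unseal", "-c", "seal.ctx", "-p", "session:unseal.ctx"],
        check=True, capture_output=True)
    run("tpm2_flushcontext", "unseal.ctx")
    return out.stdout

if __name__ == "__main__":
    seal("disk-key.bin")  # hypothetical secret file
```

If an evil maid swaps the bootloader or changes firmware settings, the selected PCR values change, the unseal policy check fails, and the secret stays locked.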
No need for the keys or decryption to touch easily intercepted and rowhammered RAM.
IMO Amazon is the obvious choice for TEE because they make billions selling isolated compute
If you built a product on Intel or AMD and need to pivot, take a look at AWS Nitro Enclaves
I built up a small stack for Nitro: https://lock.host/ has all the links
MIT everything, dev-first focus
AWS will tell you to use AWS KMS to manage enclave keys
AWS KMS is ok if you are ok with the AWS root account being able to get to the keys
If you want to lock your TEE keys so even root cannot access them, I have something in the works for this
Write to: hello@lock.host if you want to discuss
And so there is no case where you find a Nitro TEE online and the owner is not AWS
And it is practically impossible to break into AWS and perform this attack
The trust model of TEE is always: you trust the manufacturer
Intel and AMD broke this because now they say: you also trust where the TEE is installed
AWS = you trust the manufacturer = full story
So, working as intended.
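For anyone evaluating that suggestion, the sketch below shows the basic Nitro Enclaves workflow using the stock nitro-cli tool rather than the commenter's lock.host stack: build an Enclave Image File from a Docker image, note the PCR measurements (which KMS attestation policies can later pin), and launch it on an enclave-enabled EC2 instance. The image name, sizes, and output field names are assumptions based on recent nitro-cli versions, not taken from the thread.

```python
# Sketch of the basic AWS Nitro Enclaves workflow using nitro-cli on an
# EC2 instance with enclave support enabled. Image names, resource sizes,
# and JSON field names are illustrative assumptions.
import json
import subprocess

def build_enclave(docker_uri: str, eif_path: str) -> dict:
    # Convert a Docker image into an Enclave Image File (EIF); the output
    # includes the PCR measurements later used in attestation policies.
    out = subprocess.run(
        ["nitro-cli", "build-enclave",
         "--docker-uri", docker_uri, "--output-file", eif_path],
        check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

def run_enclave(eif_path: str) -> dict:
    out = subprocess.run(
        ["nitro-cli", "run-enclave", "--eif-path", eif_path,
         "--cpu-count", "2", "--memory", "512"],
        check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

if __name__ == "__main__":
    measurements = build_enclave("hello-enclave:latest", "hello.eif")
    print("PCR0:", measurements["Measurements"]["PCR0"])
    info = run_enclave("hello.eif")
    print("Enclave ID:", info["EnclaveID"])
```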