Modern iOS Security Features – A Deep Dive Into SPTM, TXM, and Exclaves
Source: arxiv.org · Posted 3 months ago · Active 3 months ago
Key topics: iOS Security, Apple Security Features, Operating System Design
The HN community discusses a research paper on modern iOS security features, including SPTM, TXM, and Exclaves, with comments praising Apple's security efforts and debating the complexity of iOS security.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement · First comment: 4h after posting · Peak period: 8 comments in 18-21h · Avg per period: 4
Comment distribution: 32 data points (based on 32 loaded comments)
Key moments
1. Story posted: Oct 13, 2025 at 2:23 PM EDT (3 months ago)
2. First comment: Oct 13, 2025 at 6:51 PM EDT (4h after posting)
3. Peak activity: 8 comments in 18-21h, the hottest window of the conversation
4. Latest activity: Oct 15, 2025 at 1:15 AM EDT (3 months ago)
ID: 45571688 · Type: story · Last synced: 11/20/2025, 6:56:52 PM
Read the primary article or dive into the live Hacker News thread when you're ready.
Not only are they willing to develop hardware features and plumb them through the entire stack, they're willing to look at in-the-wild (ITW) exploits and work on ways to mitigate them. PPL was super interesting: they decided it wasn't 100% effective, so they ditched it and came up with other things.
Apple's vertical integration makes it 'easy' to do this compared to Android, where they have to convince the CPU guys at Qualcomm or MediaTek to build a feature, convince the Linux kernel to take it, get it into AOSP, get it into upstream LLVM, etc.
Pointer authentication codes (PAC) are a good example: Apple said f-it, we'll do it ourselves. They maintained a downstream fork of LLVM, built full support, and studied in-the-wild bypasses and fixed those up.
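To make the PAC idea concrete, here is a deliberately simplified model of how it works. Real PAC computes a keyed MAC in hardware (using the QARMA cipher, with keys in system registers) and packs it into the unused upper bits of a 64-bit pointer; this Python sketch only illustrates the sign/authenticate flow, and all names and constants are invented for illustration.

```python
# Simplified, hypothetical model of ARMv8.3 Pointer Authentication (PAC):
# a keyed MAC over (pointer, context) is stored in the spare upper bits
# of a pointer. Real PAC uses QARMA in hardware; HMAC stands in here.
import hashlib
import hmac

PAC_SHIFT = 48                   # assume a 48-bit virtual address space
PAC_MASK = (1 << 16) - 1         # 16 spare bits hold the truncated MAC
KEY = b"per-process-secret"      # real keys live in privileged registers

def compute_pac(ptr: int, context: int) -> int:
    msg = ptr.to_bytes(8, "little") + context.to_bytes(8, "little")
    return int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest()[:2], "little")

def sign(ptr: int, context: int) -> int:
    """Embed a truncated MAC in the upper bits (roughly like PACIA)."""
    return ptr | (compute_pac(ptr, context) << PAC_SHIFT)

def authenticate(signed: int, context: int) -> int:
    """Strip and verify the MAC (roughly like AUTIA); fail on mismatch."""
    ptr = signed & ((1 << PAC_SHIFT) - 1)
    if (signed >> PAC_SHIFT) & PAC_MASK != compute_pac(ptr, context):
        raise ValueError("pointer authentication failed")
    return ptr

ret_addr = 0x1000_4A80
signed = sign(ret_addr, context=0xDEAD)
assert authenticate(signed, context=0xDEAD) == ret_addr
```

The point of the context argument is that a signed pointer stolen from one location (say, one stack frame) fails authentication when replayed somewhere else, which is what makes naive ROP-style reuse of signed return addresses hard.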
https://developer.android.com/ndk/guides/arm-mte
https://source.android.com/docs/security/test/memory-safety/...
This is because MTE facilitates finding memory bugs and fixing them - but it also consumes (physical!) space and power. If enough folks run it with, say, Chrome, you get to find and fix most of its memory bugs, and it benefits everyone else (minus the drawbacks, since everyone else has MTE off or not present).
Trade-offs, basically. At least on Pixel you can decide on your own.
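The mechanism behind MTE can be sketched as follows. This is a hypothetical software model, not real hardware behavior: actual MTE stores a 4-bit tag per 16-byte granule in dedicated tag memory (the physical cost mentioned above), puts the expected tag in bits 56-59 of the pointer, and faults on mismatching loads and stores.

```python
# Simplified, hypothetical model of ARM Memory Tagging Extension (MTE):
# every 16-byte granule carries a 4-bit tag, and a pointer carries the
# tag it expects. Mismatches (use-after-free, small overflows) fault.
import random

GRANULE = 16
memory_tags = {}                 # granule index -> 4-bit tag

def granules(addr: int, size: int):
    return range(addr // GRANULE, (addr + size + GRANULE - 1) // GRANULE)

def mte_alloc(addr: int, size: int) -> int:
    """Tag the allocation's granules and return a tagged pointer."""
    tag = random.randrange(16)
    for g in granules(addr, size):
        memory_tags[g] = tag
    return addr | (tag << 56)

def mte_free(addr: int, size: int) -> None:
    """Retag freed granules so stale pointers no longer match."""
    for g in granules(addr, size):
        memory_tags[g] = (memory_tags[g] + 1) % 16   # real MTE picks randomly

def mte_load(ptr: int) -> None:
    """Fault if the pointer's tag doesn't match the granule's tag."""
    addr, tag = ptr & ((1 << 56) - 1), (ptr >> 56) & 0xF
    if memory_tags.get(addr // GRANULE) != tag:
        raise MemoryError("tag check fault (SIGSEGV under real MTE)")

p = mte_alloc(0x1000, 32)
mte_load(p)                      # in-bounds access with matching tag: fine
mte_free(0x1000, 32)
# mte_load(p) would now raise: a use-after-free caught by the tag check
```

Because real tags are only 4 bits, detection of an out-of-bounds access to a differently-owned granule is probabilistic (~15/16), which is why running it at scale on a population of devices is what actually shakes the bugs out.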
PS: make sure you remove that pesky "USB accessories while locked allowed" profile that Configurator likes to sneak in.
Running something in the kernel is unavoidable if you want to actually show stuff to the user.
1. Attacker sends an iMessage containing a PDF.
2. iMessage, like most modern messaging apps, displays a preview - which means running the PDF loader.
3. The PDF loader has support for the obsolete-but-part-of-the-PDF-standard image codec JBIG2.
4. Apple's JBIG2 codec has an exploitable bug, giving the attacker remote code execution on the device.
This exploit was purchased by NSO, who sold it to a bunch of Middle Eastern dictatorships, who promptly used it on journalists.
https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i...
* https://googleprojectzero.blogspot.com/2022/03/forcedentry-s...
[0] https://xkcd.com/1200/
What they do is against your interests; it serves to keep their monopoly on the App Store.
Their commitment to privacy goes beyond marketing. They actually mean it. They staffed their security team with top hackers from the Jailbreak community… they innovated with Private Relay, private mailboxes, trusted compute, multi-party inference…
I’ve got plenty of problems with Apple hypocrisy, like their embrace of VPNs (except for traffic to Apple Servers) or privacy-preserving defaults (except for Wi-Fi calling or “journaling suggestions”). You could argue their commitment to privacy includes a qualifier like “you’re protected from everyone except for Apple and select telecom partners by default.”
But that’s still leagues ahead of Google whose mantra is more like “you’re protected from everyone except Google and anyone who buys an ad from Google.”
I found out about this when I was wiresharking all outbound traffic from my router and saw my phone making these weird requests.
Apple actually does warn you about this in the fine print (“About WiFi calling and privacy…”) next to the toggle in Settings. But I didn’t realize just how intrusive it was.
I know my mobile ISP can triangulate my location already, but I don’t want to offer them even more data about every public IP of every WiFi network I connect to, even if I’m not roaming at the time.
In theory it makes it easier to catch stuff that you can’t simply catch with static analysis and it gives you some level of insight beyond simply crashing.
Is this duct tape over historical architectural decisions that assumed trust? Could we design something with less complexity if we designed it from scratch? Are there any operating systems that are designed this way?
Any sufficiently secure system is, by design, also secure against its primary user. In the business world this takes the form of protecting the business from its own employees in addition to outside threats.
https://medium.com/@tunacici7/sel4-microkernel-architecture-...
It's missing "the rest of the owl", so to speak, so it's a bit of a stretch to call it an operating system for anything more than research.
Yes, it's all making up for flaws in the original Unix security model and the hardware design that C-based system programming encourages.
> Could we design something with less complexity if we designed it from scratch? Are there any operating systems that are designed this way?
Yes, capability architecture, and yes, they exist, but only as academic/hobby exercises so far as I've seen. The big problem is that POSIX requires the Unix model, so if you want to have a fundamentally different model, you lose a lot of software immediately without a POSIX compatibility shim layer -- within which you would still have said problems. It's not that it can't be done, it's just really hard for everyone to walk away from pretty much every existing Unix program.
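The difference between the Unix model and the capability model described above can be shown in a few lines. In the Unix/POSIX model a process has ambient authority (it can try to open any path, and the kernel consults permissions); in an object-capability design, a component can only act on references it was explicitly handed, and can attenuate them before delegating. All classes and names below are invented for illustration.

```python
# Hypothetical sketch of object capabilities vs. ambient authority:
# the untrusted component receives a reference, not a name, and cannot
# escalate a read-only capability back into a writable one.

class FileCap:
    """A full-power capability to one file-like object."""
    def __init__(self, contents: str = ""):
        self._contents = contents

    def read(self) -> str:
        return self._contents

    def write(self, data: str) -> None:
        self._contents = data

class ReadOnlyCap:
    """Attenuated capability: forwards reads, refuses writes."""
    def __init__(self, inner: FileCap):
        self._inner = inner

    def read(self) -> str:
        return self._inner.read()

    def write(self, data: str) -> None:
        raise PermissionError("read-only capability")

def untrusted_renderer(cap) -> str:
    # No open(), no paths, no ambient authority: the renderer can use
    # only the capability it was given.
    return cap.read().upper()

doc = FileCap("hello")
result = untrusted_renderer(ReadOnlyCap(doc))   # renderer never sees FileCap
```

The POSIX-compatibility problem mentioned above is visible even here: any shim that re-introduces `open("/any/path")` for legacy programs re-introduces ambient authority inside the shim.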
SPTM exists to fix a more fundamental problem with OS security: who watches the watchers? Regular processes have their memory accesses constrained by the kernel, but what keeps the kernel from unconstraining itself? The answer is to take the part of the kernel responsible for memory management out of the kernel and put it in some other, higher layer of privilege.
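The shape of that design can be sketched in code. This is a hypothetical model of the general idea (a higher-privileged monitor owns page-table updates and enforces invariants the kernel cannot bypass), not Apple's actual SPTM frame-type system; the types and rules below are invented for illustration.

```python
# Hypothetical sketch of a page-table monitor: the kernel cannot edit
# translations directly, it must call the monitor, which enforces
# invariants such as W^X and "page-table frames are never writable".

FREE, KERNEL_DATA, PAGE_TABLE, CODE = "free", "kernel_data", "page_table", "code"

class Monitor:
    def __init__(self):
        self.frame_type = {}     # physical frame -> type
        self.mappings = {}       # virtual page -> (frame, perms)

    def retype(self, frame, new_type):
        # A frame must be unmapped before it can change role.
        if any(f == frame for f, _ in self.mappings.values()):
            raise PermissionError("retype of a mapped frame denied")
        self.frame_type[frame] = new_type

    def map_page(self, vpage, frame, perms):
        ftype = self.frame_type.get(frame, FREE)
        if "w" in perms and "x" in perms:
            raise PermissionError("W^X violation denied")
        if ftype in (PAGE_TABLE, CODE) and "w" in perms:
            raise PermissionError(f"writable mapping of {ftype} frame denied")
        self.mappings[vpage] = (frame, perms)

mon = Monitor()
mon.retype(0x100, PAGE_TABLE)
mon.map_page(0xFFFF0000, 0x200, "rw")   # ordinary kernel data: allowed
# mon.map_page(0xFFFF1000, 0x100, "rw") would raise: even a fully
# compromised kernel cannot forge its own translations
```

The security win is that a kernel write primitive no longer implies a page-table write primitive: the attacker now also needs a bug in the (much smaller) monitor.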
SPRR and GLs are hardware features that exist solely to support SPTM. If you didn't have those, you'd probably need to use ARM EL2 (hypervisor) or EL3 (TrustZone secure monitor / firmware), and also put code signing in the same privilege ring as memory access. You might recognize that as the design of the Xbox 360 hypervisor, which used PowerPC's virtualization capability to get a higher level of privilege than kernel-mode code.
If you want a relatively modern OS that is built to lock out the user from the ground-up, I'd point you to the Nintendo 3DS[1], whose OS (if not the whole system) was codenamed "Horizon". Horizon had a microkernel design where a good chunk of the system was moved to (semi-privileged) user-mode daemons (aka "services"). The Horizon kernel only does three things: time slicing, page table management, and IPC. Even security sensitive stuff like process creation and code signing is handled by services, not the kernel. System permissions are determined by what services you can communicate with, as enforced by an IPC broker that decides whether or not you get certain service ports.
The design of Horizon would have been difficult to crack, if it wasn't for Nintendo making some really bad implementation decisions that made it harder for them to patch bugs. Notably, you could GPU DMA onto the Home Menu's text section and run code that way, and it took Nintendo years to actually move the Home Menu out of the way of GPU DMA. They also attempted to resecure the system with a new bootloader that actually compromised boot chain security and let us run custom FIRMs (e.g. GodMode9) instead of just attacking the application processor kernel. But the underlying idea - separate out the security-relevant stuff from the rest of the system - is really solid, which is why Nintendo is still using the Horizon design (though probably not the implementation) all the way up to the Switch 2.
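The IPC-broker idea at the heart of that design can be sketched as follows. This is a hypothetical illustration of the pattern (permissions are the set of service ports the broker will hand out), not the actual 3DS implementation; the service names loosely follow the Horizon convention, and the allowlists are invented.

```python
# Hypothetical sketch of a Horizon-style IPC broker: a process has no
# ambient system-call surface; what it may do is determined by which
# service ports the broker is willing to hand it.

class PortBroker:
    def __init__(self):
        self.services = {}       # service name -> handler
        self.allowlist = {}      # process name -> set of allowed services

    def register_service(self, name, handler):
        self.services[name] = handler

    def grant(self, process, service):
        self.allowlist.setdefault(process, set()).add(service)

    def get_port(self, process, service):
        if service not in self.allowlist.get(process, set()):
            raise PermissionError(f"{process} may not reach {service}")
        return self.services[service]

broker = PortBroker()
broker.register_service("fs:USER", lambda op: f"fs handled {op}")
broker.register_service("ps:ps", lambda op: f"process manager handled {op}")
broker.grant("game", "fs:USER")          # the game gets filesystem access only

port = broker.get_port("game", "fs:USER")
saved = port("read save")
# broker.get_port("game", "ps:ps") would raise: no capability to manage
# processes, so even code execution in the game can't spawn new code
```

Note how this mirrors the capability discussion earlier in the thread: the broker is the single place where authority is handed out, so compromising one daemon yields only that daemon's service set.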
[0] In practice, Apple has to be trustworthy. Because if you can't trust the person writing the code, why run it?
[1] https://www.reddit.com/r/3dshacks/comments/6iclr8/a_technica...
It’s been designed with lower user trust since day one, unlike other OSes of the era (consumer Windows, Mac’s classic OS).
Just how much you can trust the user has changed over time. And of course the device has picked up a lot of capabilities and new threats, such as always-on networking in various forms and the fun of a post-Spectre world.
I think there's also inherent trust in "hardware security", but as we all know it's all just hardcoded software at the end of the day, and complexity will bring bugs more frequently.
https://sneak.berlin/20231005/apple-operating-system-surveil...
[0] https://security.apple.com/blog/apple-security-bounty-evolve...