Introduction to GrapheneOS
Posted 4 months ago · Active 4 months ago
Source: dataswamp.org
Key topics
GrapheneOS
Android
Security
Privacy
The introduction to GrapheneOS sparked a lively discussion on its security features, user experience, and compatibility with Google services, highlighting both its benefits and drawbacks.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion; first comment after 4 days
- Peak period: 119 comments (Day 5)
- Average per period: 20 comments
- Comment distribution: 160 data points
- Based on 160 loaded comments
Key moments
1. Story posted: Sep 10, 2025 at 12:32 PM EDT (4 months ago)
2. First comment: Sep 14, 2025 at 11:46 AM EDT (4 days after posting)
3. Peak activity: 119 comments in Day 5, the hottest window of the conversation
4. Latest activity: Sep 22, 2025 at 8:27 AM EDT (4 months ago)
ID: 45200133 · Type: story · Last synced: 11/20/2025, 8:23:06 PM
It really is sad that there isn't any ROM with Graphene's permission and sandboxing features while still leaving the user in control. IIRC it's theoretically possible since they publish the code, but one assumes it would be a non-trivial effort:
In the verified boot threat model, an attacker controls persistent state. If you have persistent root access as a possibility then verified boot doesn't work since persistent state is entirely trusted.
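The quoted threat model can be sketched as a toy example. This is an illustrative Python model, not any real bootloader code: the chain of trust only boots an image whose hash matches a read-only root of trust, so anything an attacker writes to persistent state fails verification.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Read-only root of trust: the hash the device was signed/fused with.
GOOD_OS_IMAGE = b"os-image-v1"
TRUSTED_OS_HASH = digest(GOOD_OS_IMAGE)

def boot(persistent_os_image: bytes) -> str:
    # The bootloader never trusts persistent state directly; it only
    # boots an image whose hash matches the read-only root of trust.
    if digest(persistent_os_image) != TRUSTED_OS_HASH:
        return "refused: verification failed"
    return "booted"

print(boot(GOOD_OS_IMAGE))           # booted
print(boot(b"os-image-v1+rootkit"))  # refused: verification failed
```

Persistent root access breaks this picture because it creates attacker-controllable persistent state (e.g. on the unverified data partition) that the rest of the system then treats as trusted.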
A userdebug build of AOSP or GrapheneOS has a su binary and an adb root command providing root access via the Android Debug Bridge via physical access using USB. This does still significantly reduce security, particularly since ADB has a network mode that can be enabled. Most of the security model is still intact. This is not what people are referring to when they talk about rooting on Android, they are referring to granting root access to apps via the UI not using it via a shell.
The same is true even of an operating system such as QubesOS. And it's a minimal risk.
Not providing optional root access on GOS makes it only useful if you have a constrained application in mind for the phone. I don't have time to compile GOS with root so I just use LineageOS instead.
But you could always do both. Compile it, but preinstall a specific set of apps as root, no su.
> If you have the UI layer able to grant root access, it has root access itself and is not sandboxed. If the UI layer can grant it, an attacker gaining slight control over it has root access. An accessibility service trivially has root access. A keyboard can probably get root access, and so on. Instead of a tiny little portion of the OS having root access, a massive portion of it does.
Android has an established way to handle permission dialogs that require the user to confirm their approval, including use of fingerprint/PIN/password to authenticate. If it's good enough to unlock and decrypt the device, it's good enough to control root access. Besides which, I think
> An accessibility service trivially has root access.
is hitting https://xkcd.com/1200/ ; an a11y service already has access to everything inside the sandbox (including all your sensitive data), and the system settings that control permissions and sandboxing.
> In the verified boot threat model, an attacker controls persistent state. If you have persistent root access as a possibility then verified boot doesn't work since persistent state is entirely trusted.
I'm tentatively willing to agree, but I don't see the point? 1. If an attacker controls persistent state, don't they already control all the other permissions, including what security domains exist and what permissions are given to apps. 2. You don't have to persist it; even just one-off root access is quite useful.
> A userdebug build of AOSP or GrapheneOS has a su binary and an adb root command providing root access via the Android Debug Bridge via physical access using USB. This does still significantly reduce security, particularly since ADB has a network mode that can be enabled. Most of the security model is still intact. This is not what people are referring to when they talk about rooting on Android, they are referring to granting root access to apps via the UI not using it via a shell.
Agreed, that's not what I want.
With the advent of choicejacking I don't think I want to trust permission dialogs anymore.
> including use of fingerprint/PIN/password to authenticate
IMO if you have the UI layer able to grant root access at all, even with requiring re-authentication, it still already has root access itself and is therefore not sandboxed.
So you're using a version of Android patched to remove all permissions? After all, in your threat model all apps can get permission to use the microphone and camera, make phone calls, access fine-grained location information, read and write files at will, etc. Frankly, I'm not sure what they'd get out of root at this point.
> IMO if you have the UI layer able to grant root access at all, even with requiring re-authentication, it still already has root access itself and is therefore not sandboxed.
Likewise, surely this applies to any permission system, and every other permission. The system UI controls every other permission in the system; if we assume it compromised, then everything else is already lost.
A permission that allows them to hide that they have access to everything, including other apps' data?
Would this have been easier or more possible if Android had a full capability-based security model?
[0] This was later obsoleted by the OS adding that feature natively, which is an interesting angle to consider; directly supporting the things people root for definitely helps, but you're unlikely to ever get everything so it's not a panacea.
For what it's worth, my understanding is that this has always been the position of GrapheneOS too. Given the resources and enough benefit/cost to allocate, the project would rather integrate or implement usability features at the OS level instead of encouraging people to expose attack surface. Specifically because GrapheneOS is a project meant to be primed to defend some of the most intimate and personal aspects of a person's life.
Yes, I'm sure it is. But I don't consider that a tolerable tradeoff, and I believe we could have a system that has most of the best of both worlds.
I don't understand why you insist on this massive risk being laid on everyone.
GOS publishes pretty detailed documentation. They don't explain step by step how to build an OS with root specifically, instead assuming that users who know the immense risks also have the skills they need to achieve it without handholding.
https://www.chromium.org/chromium-os/developer-library/guide...
So if I don't use sudo then the problem with root is solved?
Look, if your media player or game can just steal your ssh keys, or slightly modify your changes to your code, or inject a script into your startup sequence, that's not very safe, is it?
And that's even without having access to root (imagine if someone had written a malware like Heartbleed or Shellshock, which then could quietly persist, patch your firmware, or actually do anything it wants?)
I hope you're at least running your laptop with selinux in enforcing mode :)
The availability of application sandboxen and the availability of root access are two entirely separate security concerns.
If the GUI stack is vulnerable, then those sandboxes could be broken out of. The idea behind not allowing an app to access root is to remove the attack surface introduced by the GUI stack. An alternative interface to a GUI would be some physical connection (like usb-c). So accessing root exclusively via a console port or USB would be safer in theory.
This is true regardless of whether it's a phone or a PC.
Desktops are unfortunately waaaay behind something like GrapheneOS or iOS in terms of sandboxing. The closest in the desktop world is Qubes OS, but that's not a realistic alternative to normal OSes for the common user.
I very much don't want to have some external device to have root access to my computer.
If iOS type sandboxing where I can't access most of the data at all is ahead, I'm glad to be behind.
I don't want adb root access? I want to be able to run apps with root access.
Are you saying that the Qubes OS security model is worse than the GrapheneOS one?
By the way, personal attacks are against the HN Guidelines.
GrapheneOS is designed so you don't need root to run apps or manage the device. Compartmentalization is on a per-app level. And you already know how Qubes does compartmentalisation.
This sounds like great news to me, thank you.
GOS is not running a flavour of mainline Linux, but Android. They're nevertheless planning on moving to virtualisation as well https://discuss.grapheneos.org/d/24154-grapheneoss-roadmap-r...
For now it's as good as it gets.
Do you have any statistics to show about how secure a micro-kernel is? I can't believe it can be better than this: https://www.qubes-os.org/security/qsb/
An attacker can do cross-guest exploits via the network too, which would not be considered within the scope of that list. As an example, if an attacker could exploit the network VM via a Linux kernel TCP/IP vulnerability, then exploit connected virtual machines from there with the same vulnerability. Network driver vulnerabilities are another way to get that initial foothold. It wouldn't be a hole in the architecture implemented by QubesOS but it would still give them control over everything that matters to the user.
> GOS publishes pretty detailed documentation. They don't explain step by step how to build an OS with root specifically, instead assuming that users who know the immense risks also have the skills they need to achieve it without handholding.
It really sounds like you call it very easy, then promptly turn around and say that it's not easy but that's okay because it should be hard. You're also conflating the ability to assess security risks with the ability to build Android from source and modify it in the process, even though these skills are mostly unrelated.
> I don't understand why you insist on this massive risk being laid on everyone.
Largely, I don't agree that it's a "massive risk" in the first place. I don't believe that user-controlled root access is a problem, and I certainly don't believe that a default-off option to enable root access constitutes a problem.
You either build a debug image, so you just have it, or you add your own patches adding this capability (in exactly the same way the project modifies stock aosp), and build it.
Use your own keys to sign and you're golden.
The assumption is you know what you're doing, and then it's very easy. If you don't, then you likely shouldn't.
I am not really "conflating" these in a way you suggest: it's not just about building the image but deeper understanding that will bring both.
It's not disconnected from the project, but it's inherently within the project. SURE you can consider these two separate skills, but within the context of "getting the root on the GOS build" it's one. If you don't know how to make it happen, you don't have a skill to safely use it.
And lastly, it's okay if you don't consider it a massive risk. I do.
Now let's consider the risks of that, - https://cybernews.com/security/rooted-android-ios-devices-su... - https://www.talsec.app/blog/what-is-rooting-and-how-to-prote...
For you it's not a risk, okay, I guess. I mean, if you're a security researcher with a considerable reputation, you can certainly argue with authority, but I don't see the angle.
You argue from the position of convenience and capabilities. Is the risk high? The consensus is that it is. I agree, you don't, I'm okay with it.
It is my understanding that that only gives root to adb, not apps, so no.
> or you add your own patches adding this capability (in exactly the same way the project modifies stock aosp), and build it.
If we're at the point of patching source trees, then no, we've left the realm of "very easy" behind. Installing Magisk is easy. Building Android from source, let alone patching it, is not.
> It's not disconnected from the project, but it's inherently within the project. SURE you can consider these two separate skills, but within the context of "getting the root on the GOS build" it's one. If you don't know how to make it happen, you don't have a skill to safely use it.
I really disagree. Knowing when to click the allow button or not is a separate skill from building/patching a ROM from source.
> Now let's consider the risks of that, - https://cybernews.com/security/rooted-android-ios-devices-su... - https://www.talsec.app/blog/what-is-rooting-and-how-to-prote...
I'd love to, but you'll have to mention what they might be. Both of those links treat root as nearly synonymous with compromise but never bother to explain how that compromise would occur, just 1. root 2. ??? 3. malware. That's fear-mongering, not a threat model.
> I mean, if you're a security researcher with a considerable reputation, you can certainly argue with authority, but I don't see the angle.
Or, we could avoid Appeal to Authority and talk threat models. The only one I've seen yet in this thread is people claiming that malware can fake out permission dialogs and that this is a problem for root permissions but somehow leaves the rest of Android's permission model in a usable state, which is... an interesting claim.
> Is the risk high? The consensus is that it is. I agree, you don't, I'm okay with it.
Many people making vague claims might technically be a "consensus" but it's not actually meaningful. If you've got an actual threat model, let's hear it, otherwise there's not much point to this.
Here is a recent report of widespread advanced malware looking to see if a device is rooted - https://www.lookout.com/threat-intelligence/article/badbazaa...
Here is a report of malware using root - https://zimperium.com/blog/new-advanced-android-malware-posi...
Root does not only provide privilege escalation, it also provides attractive options for exploit persistence on a device, something which is difficult to achieve on modern Android and iOS.
Okay? I do actually think that should be blocked (good root is invisible), but I'm not seeing a problem.
> Here is a report of malware using root
To quote the article:
> In addition to collecting the messages using the Accessibility Services, if root access is available, the spyware steals the WhatsApp database files by copying them from WhatsApp’s private storage.
Note that it already uses a11y features to do the same thing regardless, but also this is another case of conveniently skipping all the important details. Seriously - "if root access is available, the spyware steals" - how did it get root access? If the "vulnerability" is that the malware asks the user for root access and the user gives it, that is not a vulnerability. A system where malware needs permission to do bad things is perfectly fine.
You are moving from a small handful of processes that get root access, are heavily constrained by SELinux policies, and are nowhere near userspace, to putting root access behind a weak UI prompt. That is the ability to modify the system at runtime. If the system can be modified and the bar to that modification is trivially bypassable, privilege escalation becomes monumentally easier for an attacker. Because the system can be modified *it cannot be trusted*.
See: github.com/chenxiaolong/avbroot
(Google Wallet runs fine for storing cards and tickets and whatnot, you just can't pay with it)
Google will not allow using their service for tap-to-pay.
I'm okay with losing access to Google Wallet while using GrapheneOS (I can just use plain old credit cards), but I would like to have the option to revert it in the future.
I have not reverse engineered the Titan M2 security chip myself, but surely it uses eFuse or OTP memory for anti-rollback protection mechanisms and such.
These are really basic hardware security primitives. I'm curious why you're under the impression Pixels wouldn't use eFuse.
The Pixel 6 is only mentioned in regards to anti-rollback protection. This has nothing to do with unlocking and later relocking the bootloader. Pixels have always supported relocking the bootloader with a custom root of trust, i.e. custom AVB signing keys used by a custom, user-installed operating system.
https://source.android.com/docs/security/features/verifiedbo...
> The Xbox 360, Nintendo Switch, Pixel 6 and Samsung Galaxy S22 are known for using eFuses this way.[8]
Anti-rollback protection is a security feature, eFuses are hardware primitives that can be used to implement it. Bootloader locking is another security feature that can be implemented with eFuses.
If you have any data denying the use of eFuses in the Pixel 6, please share it, that is what I was interested in this sub-thread. I really did not understand the relevance and the correctness of your comment.
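For readers unfamiliar with the mechanism being debated here: anti-rollback via eFuses can be modeled as a one-way monotonic counter. The following is a hypothetical Python sketch of the general technique, not the Titan M2's actual (non-public) design. Fuses can only be burned, never reset, so the minimum acceptable OS version can only ever increase.

```python
class AntiRollbackFuses:
    """Toy model of one-time-programmable eFuses used as a monotonic
    minimum-version counter (unary: number of burned fuses = version)."""

    def __init__(self, width: int = 8):
        self.bits = [0] * width  # all fuses intact

    @property
    def min_version(self) -> int:
        return sum(self.bits)

    def burn_up_to(self, version: int) -> None:
        if version > len(self.bits):
            raise ValueError("fuse bank exhausted")
        for i in range(version):
            self.bits[i] = 1  # burning is irreversible in hardware

def may_boot(fuses: AntiRollbackFuses, image_version: int) -> bool:
    # The bootloader rejects any image older than the burned minimum.
    return image_version >= fuses.min_version

fuses = AntiRollbackFuses()
assert may_boot(fuses, 1)      # fresh device boots anything
fuses.burn_up_to(3)            # installing v3 burns fuses up to 3
assert not may_boot(fuses, 2)  # older image now permanently rejected
assert may_boot(fuses, 3)
```

This is exactly the "discounting the ability to flash previous versions of Android" caveat above: the counter prevents downgrades without permanently disabling device features.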
I thought you claimed that Pixels also used eFuses to disable certain features after unlocking the bootloader once, like Samsung devices do. That's why I pointed out that Pixel devices have always had support for relocking the bootloader with a custom root of trust.
Your response to this comment https://news.ycombinator.com/item?id=45244933 made it seem that way, because you appeared to disagree that "Pixel devices don't have anything like the Samsung Knox eFuse, which blows after running a third-party bootloader".
I guess that was a misunderstanding.
On Samsung devices, blowing the Knox eFuse permanently disables features tied to Knox (e.g. Samsung Pay, Secure Folder). ("can never go back to a state where it passes all checks")
Pixels do not have an equivalent eFuse that permanently disables features (discounting the ability to flash previous versions of Android). Restoring stock firmware and relocking the bootloader will give you a normal Pixel.
Indeed it may be true today that "restoring stock firmware and relocking the bootloader will give you a normal Pixel", I completely understand what you mean.
But that is NOT the same thing as "Pixels do not have eFuses to flag devices that have been modified before". Please share data supporting this claim if you have it.
It is possible that existing Pixels have such eFuses that internally flag your device (perhaps bubbling up to the Google Play Integrity APIs) but they don't kill device features per Google's good will.
My question is 100% about the hardware inside the Titan M2 and how it is used by Google. I don't think the answer is public, and anyone who has reverse engineered it to such detail won't share the answer either.
There's no such thing for Pixels, and it also doesn't void the manufacturer warranty.
I don't know if there's any good solution to this, since all this seems to be necessary for the security model.
EDIT: Wait, isn't this what A/B partitions are for? (ie, you can brick one partition and still boot from the other) Also, shouldn't it be possible to flash an image signed with the correct keys without unlocking the bootloader and wiping the user data?
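The A/B scheme mentioned in the edit works roughly like this toy model (illustrative Python; real Android implements this in the boot_control HAL, with each slot carrying a priority, a retry counter, and a "successful" flag):

```python
from dataclasses import dataclass

@dataclass
class Slot:
    name: str
    priority: int
    tries_remaining: int
    successful: bool

def pick_slot(slots):
    """Boot the highest-priority slot that is either marked successful
    or still has boot attempts left."""
    bootable = [s for s in slots if s.successful or s.tries_remaining > 0]
    if not bootable:
        return None  # both slots dead: device needs recovery/flashing
    return max(bootable, key=lambda s: s.priority)

def failed_boot(slot):
    # The bootloader decrements the retry counter on each failed attempt.
    if slot.tries_remaining > 0:
        slot.tries_remaining -= 1

a = Slot("a", priority=1, tries_remaining=0, successful=True)   # known good
b = Slot("b", priority=2, tries_remaining=2, successful=False)  # new update
slots = [a, b]

print(pick_slot(slots).name)  # b: the new update is tried first
failed_boot(b); failed_boot(b)
print(pick_slot(slots).name)  # a: automatic fallback to the good slot
```

So a bad update on one slot indeed falls back to the other; what A/B does not help with is the separate question of flashing a differently-signed image without unlocking and wiping.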
Equating control to root is an outdated way of thinking that comes from a time before the principle of least privilege existed. The way UNIX did things should not be put on a pedestal.
It's true only if the user is the threat to the user, e.g. a user with low IQ but high curiosity, but such a user usually cannot install GrapheneOS.
Users know about this problem and know how to mitigate it. Get out of my way, please.
Antivirus scanners are essentially useless on modern mobile OSes because they are limited to accessing the same things a malicious app or file would be.
No, I cannot rely on the app sandbox. If someone else controls a device, then this device can be used against me.
Your antivirus, scanner or not, is useless on your device for you.
My antivirus on my device is useful for me. It works fine on GrapheneOS (Pixel 6), but it's banned by Google on the Pixel 5, which Google no longer supports with security updates. WTF?
Also, no matter how technical you are, it's almost impossible for you to detect zero-click 0-days, to which you are more vulnerable than people without root privileges. By running a rooted OS you actually become an easier and less costly target than people without a rooted OS.
I doubt that user-controlled root access is a significant variable in the face of zero-days; LineageOS+Magisk is more likely to resist attack than vendor ROMs that are lagging security updates by months.
Providing app-accessible root compromises the security of the OS even for people not using it since it provides root access to a substantial portion of the OS and provides a way to maintain persistent root access for an attacker. A quick tapjacking vulnerability exploit is all that's required to gain full control over the device with no way to detect or eliminate it. The attacker has root so they control all the user interfaces, etc. and can hide it. They can hide what happened and block an attempt at revoking it. The idea that it only impacts people negatively if they use it poorly is wrong. Using it at all is using it poorly anyway, since the right way to implement anything is not giving root access to an application. App-accessible root access is used as an insecure shortcut to implement features without proper security models where components are given the privileges they need to function and are split up to reduce attack surface.
For example, in Android, there's an isolated netd process with CAP_NET_ADMIN for configuring the network but it can't load eBPF programs itself, only bpfloader which it only does via predefined programs. This avoids a compromise of netd being able to compromise the kernel via eBPF. Similarly, a VPN service app providing features like local filtering and/or an actual VPN does not have CAP_NET_ADMIN or other highly privileged access. User interfaces in the OS configuring firewall functionality and other network configuration do it via netd. A common use of app-accessible root is giving root access to a GUI application to manage firewall rules directly rather than having a tiny privileged component doing it and then the GUI only being given the privilege of configuring rules through that in a structured way. Principle of least privilege, isolation, etc. are basic security concepts violated by this whole approach.
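The least-privilege split described above can be sketched abstractly. This is a hypothetical Python model, not Android code: the names `FirewallBroker` and `FirewallGui` are invented for illustration. The point is that the UI's only capability is one narrow, validated operation, standing in for the way a GUI should talk to a tiny privileged component like netd rather than hold root itself.

```python
class FirewallBroker:
    """The only privileged component. It exposes one narrow, structured
    operation instead of arbitrary root-level commands."""

    def __init__(self):
        self.rules = {}  # uid -> "allow" | "deny"

    def set_rule(self, uid: int, action: str) -> None:
        # Strict validation: a compromised caller cannot smuggle in
        # anything beyond a per-UID allow/deny decision.
        if not isinstance(uid, int) or uid < 0:
            raise ValueError("invalid uid")
        if action not in ("allow", "deny"):
            raise ValueError("invalid action")
        self.rules[uid] = action

class FirewallGui:
    """Unprivileged UI: its entire capability is calling set_rule().
    Compromising it yields rule-setting, not root."""

    def __init__(self, broker: FirewallBroker):
        self._broker = broker

    def block_app(self, uid: int) -> None:
        self._broker.set_rule(uid, "deny")

broker = FirewallBroker()
FirewallGui(broker).block_app(10123)
print(broker.rules)  # {10123: 'deny'}
```

Compare this with root-based firewall apps: there, compromising the GUI yields full root, because the interface between GUI and privilege is a shell rather than a structured API.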
Giving the user root access is not the same as giving apps root access. The user having a root access shell is not nearly as harmful as having apps able to request it.
Apps can and will coerce users into doing things they shouldn't. Root access is inherently not required by someone like a firewall configuration GUI and not the right way for the implementation to be made. That's an example of an insecure implementation leading people to believe it requires giving broad root access to the OS and the app when it's not needed by a well written implementation. It's similar to apps demanding a permission like Contacts and refusing to work without it despite it not being required, which is why GrapheneOS provides Contact Scopes and similar features for overruling the demands from the apps. App accessible root access goes against the Android and GrapheneOS privacy and security approach to an extreme.
Nubia was hacked remotely. It received no updates for years, so it was an easy target. I unlocked the Nubia and plan to install LineageOS on it when my Pixel 5 dies.
Pixel 5 was hacked from close distance via WiFi or BT.
Pixel 6 with Graphene is not hacked yet.
Lack of root doesn't protect me.
However, I use SafeDot to monitor phone access to the microphone, camera, and GPS, so I'm alerted when it starts to beep, which creates problems for spies; that's why SafeDot was banned by Google at the request of the CIA. I cannot fix this, because Google controls my phone instead of me. SafeDot still works on a Pixel 6 with GrapheneOS, with a warning notification about its "unsafety" though.
There needs to be some escape hatch that you can use, even if your grandma doesn't have access to it.
The point is that there should always be some supported workflow instead of the user having to go out of their way and modify the base system.
https://news.ycombinator.com/item?id=45101400
They have not said anything like that. In fact there are plenty of things about the current GrapheneOS + Pixel end result that they would change if they had the resources and support to do so. They have repeatedly praised or highlighted improvements in iOS and other less mainstream operating systems.
QubesOS is a completely different project with different goals and constraints. GrapheneOS have praised the isolation model of Qubes repeatedly, but have always said it is a strong option for laptops. Older laptop operating systems (Windows/macOS/desktop Linux distros) do not aim to provide protections against threats comparable to what the newer mobile operating systems have done.
>They have not said anything like that.
Quote (https://news.ycombinator.com/item?id=30769666):
> Librem 5 has incredibly poor hardware/firmware security and it isn't possible for us to work around that at a software level. It's missing the basic hardware and firmware security features that are required.
The reality is that Librem 5 is secure according to a different threat model than the one GrapheneOS follows. This doesn't make it "incredibly" insecure, unless you believe that only you can define good threat models.
They have expressed interest in open hardware, well-designed open source secure elements, open source blob-free firmware with proper signature verification, open source greenfield kernel and OS projects, hardware kill switches with a proper threat model etc.
Why should anyone expect them to throw away everything they have accomplished to start several steps backward on platforms that don't achieve any of these things?
I don't expect them to throw away anything. I just wanted to get a statement from them concerning what's a reasonable goal and what isn't. But still:
- to break from their life-threatening dependence on Google? https://news.ycombinator.com/item?id=45208925;
- to allow more people to benefit from better security, not just those who can afford a Pixel?
> open source blob-free firmware
Where did you find that?
> open source greenfield kernel and OS projects
I'm not sure what you're talking about but not GrapheneOS, which depends on a bunch of proprietary drivers and firmware.
> hardware kill switches with a proper threat model
Like those on Librem 5? Or do you mean some other device? I'm not aware of any other device with usable kill switches.
Please contact the OEMs/manufacturers and ask them why they cannot support reasonable requirements like: a minimum of five years of full device support (OS, firmware, security patches etc.), hardware accelerated virtualisation, isolated radios (Wi-Fi/Cellular/Bluetooth etc.), a decent secure element implementation with support for throttling, proper Wi-Fi privacy support etc. (https://grapheneos.org/faq#future-devices)
You are clearly passionate about this, and you are not alone. But I have never understood why people expect GrapheneOS to compromise their goals and values, rather than wanting others to improve to meet them. GrapheneOS is completely free.
>to break from their life-threatening dependence on Google?
They have had talks with at least one OEM in the years past before the recent round of communications, trying to secure support for GrapheneOS on non-Pixel hardware. They have had builds in the past for Samsung hardware and development boards to explore alternative platforms and interest in minimal blob platforms which they had to abandon because those were not suitable for the project long term.
They have always been completely willing to support non-Pixel devices that meet their requirements, but none have been available or validated so far.
It's not appropriate at all to say they have no interest in breaking their dependence on Google Pixels. Personally, I think it's bizarre that numerous companies like Fairphone, Punkt and OSOM can come and go without ever seriously attempting to meet the reasonable requirements set by GrapheneOS and offer alternative options.
>to allow more people benefit from a better security, not just those who can afford a Pixel?
GrapheneOS is not a hardware manufacturer with the benefits of economies of scale. If they could magically create an affordable device that met their standards they would do it...
As you already understand, GrapheneOS is a project with limited resources. They put those resources to use in significantly improving the privacy/security for an operating system (OS) that guards regular people's most personal data and device. Any device they support that lowers that standard means much much more time spent on device support and much less time on important work that improves on what you can have now (GrapheneOS + Pixel 8 and above).
GrapheneOS is also open source, so nothing stops you or I from adapting parts of it for a less suitable platform and releasing that for others to use. DivestOS for example used code and ideas from GrapheneOS in their own project, and GrapheneOS were always supportive of the project and even offered the founding developer a role in the development team after they stopped working on DivestOS.
>Where did you find that?
As I've said above, it is something they experimented with in the past. But the reality is that they are intimately familiar with some past and present projects around that area including Raptor Engineering stuff, SiFive, Betrusted, coreboot, OpenTitan, Tropic Square etc. GrapheneOS has always been deeply interested in quality open source software and open hardware, unfortunately they just never had opportunities to support those things while improving on the progress they have already made.
>I'm not sure what you're talking about but not GrapheneOS, which depends on a bunch of proprietary drivers and firmware.
I did not say Pixels are a blob-free platform. I said GrapheneOS have expressed interest in open source kernel and OS projects even outside of GrapheneOS itself.
>Like those on Librem 5? Or do you mean some other device? I'm not aware of any other device with usable kill switches.
No, I don't mean usability, I mean physical privacy switches with a proper threat model. GrapheneOS have stated they would be 100% interested in usable privacy switches for specific goals like stopping audio recording or location detection but it is a lower priority than other ideas they have to improve privacy/security on the device as a whole.
This is plain wrong: https://source.puri.sm/Librem5/community-wiki/-/wikis/Freque...
(And I think I already told you that once. Upd: @amosbatto did: https://news.ycombinator.com/item?id=30769589, and you ignored that.)
Yes, because the FSF endorsement matters, https://news.ycombinator.com/item?id=25504641. Why does it matter at all how the updates are implemented? The fact is, the updates are possible and are being performed via other means.
> Crucially, there aren't cellular radio firmware updates.
This is false:
https://forums.puri.sm/t/librem-5-firmware-updates/20604
https://forums.puri.sm/t/updating-firmware-on-the-librem-5/2...
https://source.puri.sm/Librem5/librem5-fw-jail
I think protecting people from themselves is a noble goal that is often overlooked, even if many will disagree with me.
Indeed, this is where we disagree. "If you are protected by a steel door, but you don't have the key, you aren't safe: You're imprisoned."
See also: https://news.ycombinator.com/item?id=45081344
GrapheneOS is focused on privacy and security overall including protecting applications and the OS from exploitation in general. GrapheneOS does use sandboxing and compartmentalization to improve security. Hardware-based virtualization is one of the GrapheneOS hardware requirements (https://grapheneos.org/faq#future-devices) and is used through Android's virtualization framework. It's provided by pKVM on Pixels and Gunyah on Snapdragon. Making more use of virtualization beyond isolating system services via microdroid and running a desktop OS via Android's virtual machine management app (Terminal) is planned and being gradually worked on. It's part of what we work on overall, not the whole picture or primary focus. It will be a bigger focus over time as hardware improves to make it more viable.
Smartphones didn't have a lot of memory for virtualization until recently and GrapheneOS needs memory for other protections too. The Pixel 6 was the first Pixel with CPU hardware virtualization support and the Pixel 10 is the first with native GPU hardware virtualization support not requiring proxying to the host for GPU acceleration. Secure GPU acceleration is quite important for making it into a highly usable feature, especially on a phone, so the hardware was not ready yet and still isn't on most other devices. QubesOS largely doesn't have that available either, but laptop or desktop hardware is more powerful.
Why would you need that if you don't run any untrusted apps in a trusted VM? Also, you don't have any private information in the untrusted VMs. It might only be helpful in the context of security in depth, but this barrier for attackers is much lower than the virtualization itself.
> data extraction in the After First Unlock state
By whom? A physical attacker?
> Hardware-based virtualization is one of the GrapheneOS hardware requirements
Qubes doesn't force the user to have it. Could GrapheneOS also allow using devices which don't support it? It would make millions of people more secure, not less. And it would make GrapheneOS more popular, too. You could name it "GrapheneOS lite" if you're afraid of a false security message.
> Applications and guest operating systems are just as vulnerable to exploitation
Which exploitation? Where would it come from?
A user's web browser, messaging app, etc. getting exploited is going to result in an attacker getting their data from it and the OS. Containing it within a VM limits the damage to that VM, which depends entirely on how the user has split things up. It's not a substitute for protecting against exploitation of applications or operating systems in the first place; it only contains a successful exploit within the compromised VM.
> By whom? A physical attacker?
It's in reference to situations where disk encryption matters to prevent data extraction. One of the two purposes of verified boot is as part of an overall approach to protecting against data being extracted from the device.
> Qubes doesn't force the user to have it.
No, it does require it. It works without extra features for properly containing devices, etc. but does require hardware virtualization support in the CPU. They do have mandatory hardware features.
> Could GrapheneOS also allow using devices which don't support it?
Without hardware-based virtualization? It could allow it, but then the functionality we build resembling how QubesOS does compartmentalization won't be available to users on those devices. There are much more important security features in our requirements than this. Hardware-based virtualization support was added there because any devices we'll support in the future are going to have it anyway since it's a standard feature on Snapdragon. We avoid adding features to the list which would rule out supporting a 2026 or later Snapdragon device. Memory tagging was an essential feature which is game changing for security when deployed throughout the OS and for apps, which is why it was added as a requirement despite Snapdragon temporarily losing it. Snapdragon had a very early implementation of memory tagging and then lost it due to their custom cores prioritizing performance and cost over security.
> Which exploitation? Where would it come from?
Remote attacks on the applications and OS running in the VMs. Alternatively, supply chain compromises or other forms of attacks against the applications, etc. These are the reasons why QubesOS is providing compartmentalization in the first place. However, it does not protect what's inside each VM from attacks against what's in that VM. It protects other VMs after that VM gets compromised. It depends entirely on the user dividing things up well and limits the damage of a compromise rather than avoiding it.
For example, if the user's main everyday web browser instance, the one they use for most things, gets remotely compromised, then they're going to have all their logins, passwords and other data in that VM obtained by the attacker, but other VMs will be safe. It's containing the damage, but it's still very bad. If someone divided up their web browsing heavily between VMs, they'll limit the damage more, but it could be one of their most important instances which got exploited.
For example, a journalist may run an email client and web browser related to their work in a VM which may be targeted, get exploited, and now a lot of their most important work related data is available to their attacker. The compartmentalization will likely protect their personal life, etc. but the same exploit could be used against their email client / web browser used for personal usage too. The exploit not being prevented leaves it open for use against their other compartments too.
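The damage-limiting argument above can be made concrete with a toy model (illustrative Python only; the compartment names and secrets are made up and have nothing to do with how GrapheneOS or QubesOS actually store data): compromising one compartment leaks exactly the secrets assigned to it, and nothing placed elsewhere.

```python
# Toy model of compartmentalization: secrets are partitioned across
# compartments (VMs). Fully compromising one compartment leaks its
# secrets but not those held in other compartments.
compartments = {
    "work":     {"work-email-password", "source-contacts"},
    "personal": {"bank-password", "personal-email-password"},
    "browsing": {"forum-password"},
}

def compromise(name):
    """What an attacker learns by fully compromising one compartment."""
    return set(compartments[name])

leaked = compromise("work")
total = set().union(*compartments.values())
survived = total - leaked

# The work secrets are lost (the compromise is still very bad for the
# journalist), but the other compartments' secrets survive.
print(sorted(leaked))
print(sorted(survived))
```

The model also shows the limits the comment describes: nothing here prevented the compromise itself, and if the attacker's exploit works against the software in the other compartments too, only the user's habits keep those from falling next.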
I still enjoy Harry Potter despite controversy around what J.K. Rowling has said on some topics.
At checkout they looked at me like I was up to no good when I said I didn’t want to give them my name, address, and phone number just to purchase the device. I didn’t set up a plan. They said it was for “restocking” or something.
Fortunately they accepted obviously fake info. These front line sales people just don’t care as long as they can say they followed the policy.
The user containers are very helpful. I have to have TikTok for work, so I put it in a container all by itself with a VPN on a kill switch. And for one app that needs Google Play services, I have it in a container with that.
The duress passcode is super clever, too. You enter a different device passcode and it just wipes the device.
You mean different user accounts? Those are available on stock Android, too.
Samsung also has "secure folder" which isolates apps and files and presumably uses multiple users to do the isolation.
Huh, I didn't realize they had added additional functionality not present on stock Android. Thanks!
Also, what if you ever want to share a file across user profiles?
It also shows that profiles can't really prevent an app from correlating profiles on the same device, by listening on a local socket.
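The loopback-correlation point can be illustrated with a toy sketch (plain Python on a desktop, not Android code; `DEVICE_ID` and the roles are hypothetical): if nothing firewalls the loopback interface between profiles, an app in one profile can listen on 127.0.0.1 and an app in another can connect and read a shared identifier, linking the two identities.

```python
import socket
import threading

DEVICE_ID = "device-1234"  # hypothetical identifier two apps want to correlate

# "Profile A" app: listens on loopback and hands out its identifier.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # any free port
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    conn.sendall(DEVICE_ID.encode())
    conn.close()

t = threading.Thread(target=serve)
t.start()

# "Profile B" app: connects to the same loopback port and reads the
# identifier, correlating the two otherwise-separate profiles.
with socket.create_connection(("127.0.0.1", port)) as c:
    got = c.recv(64).decode()

t.join()
srv.close()
print(got)  # both "profiles" now hold the same identifier
```

On a real device the apps would additionally need to agree on a port, but the sketch shows why network-level isolation matters on top of per-profile data separation.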
You can share with file synchronisation apps like Syncthing/Ouisync [0], exploit a temporary weakness in the isolation model with Inter Profile Sharing [1], or simply copy the files over to an external storage device and transfer them that way.
[0] https://github.com/Catfriend1/syncthing-android
[0] https://github.com/equalitie/ouisync
[1] https://github.com/VentralDigital/InterProfileSharing
There are apps like Inter Profile sharing (appID: digital.ventral.ips).
[1]: https://f-droid.org/packages/me.zhanghai.android.files/
User profiles (secondary profiles, private space) don't enhance this sandboxing. The apps already were sandboxed. What they do, though, is aid in isolation in a number of ways. They allow the use of a separate VPN slot which can help split up identities, they restrict IPC to communication with apps within the same profile (not other profiles), they have a separate clipboard, user data and non-global settings, and they have distinct encryption keys and can be put at rest on demand without rebooting the phone (not possible for the Owner profile).
Our more prominent 2-factor fingerprint authentication feature is also relevant when switching between users a lot.
I'm sorry but what? Your job demands what apps you have installed on your PRIVATE phone!?
Sadly, biometric authentication as 2fa is not sufficient for that.
Edit: "experts" > "workers"
?
How would they even do that? As part of the machine that checks for counterfeit notes? They don't always use that, right?
No but when you took that cash out of an ATM, it logged the serial numbers on the bills it gave you. Then when Best Buy deposited that cash at the bank they again scanned that serial number and can make an assumption that you spent that money at Best Buy.
What that information is used for, who knows? But the flow of cash is definitely logged somewhere, for some reason!
But yes, your bank could know you were at Best Buy, maybe.
If you have knowledge of a withdrawal and even a rough ballpark of that amount, then you can probably determine it was a phone purchase. If you're a big company like Google or Facebook, you're going to be pretty good at that regardless of the prior knowledge (which can then be inferred backwards). The tracking is not just limited to what information your phone sends out but what information other devices get. It's good to mix up your fingerprints and all that, but this only goes so far.
The social graph is a pretty critical tool for those doing the tracking, and that graph isn't just composed of other humans. Every device is constantly talking to every other device. Snoop on your radios and look at what they're doing. Things like WiFi and Bluetooth are constantly pinging things around you and this can be used for tracking if you know where certain MAC addresses are physically located. This won't work anymore, but like 15 years ago Samy Kamkar made a tool to do exactly that[0], because while they were mapping the streets they also recorded all the SSIDs, MACs, and whatever else they could get. So if you have a device like a router that is constantly connecting to something that's saying it is a phone, and you can see that that device is at a location at specific hours and you can reidentify someone by that. Especially when a device that normally fit a pattern stops fitting that pattern.
I mean some of this sounds crazy but I feel like 10 years ago we had more posts and conversations about things like [0]. Where people were doing things like tracking their friends' sleeping schedules[1], exploiting Facebook ads to microtarget and prank your friends[2], or spending $1k to geolocate your friends[3]. While it's become more difficult to exploit this information from the user side, the capabilities haven't gone away. They've only grown over the last decade and been placed behind more expensive walls. Funny enough, it is a time of the internet I missed. These things were fun, scary, motivating, and made us talk more openly about the implications of surveillance capitalism. We've only just become used to it, while the severity has significantly magnified. I mean when I deleted my Facebook account in like 2016/2017 I did a takeout and found that they accurately were able to geolocate my photos to where I was standing inside a specific room of my house, by aggregating the GPS information with the WiFi information (you have neighbors?).
I feel like we need to bring these conversations back. But I'm not sure how best we do them while being productive and not turning towards apathy. No one's going to kill the beast overnight, but I want to stress that it's at least better to reduce exposure. Apathy tends to come from the interpretation that it is binary. You're either fucked or not, and we're only fucked. But there's a big difference between a floor covered in shit and being neck deep in shit. I don't want to be in either situation, but if I had to choose then that's a very easy choice. It's also easier to clean up. So I guess... can we get more people to start normalizing things like Signal and Firefox? Or pick some other tools, I don't care. But encrypted communications and non-chromium based browsers (sorry Brave and Opera) do a lot to help. At worst they send a signal to these big companies that we care. Maybe all they see is money, but they'll care about your privacy if it is more profitable than not caring. They go with the tides, even if they don't really believe it. So they can be reined in, but people mostly don't know how to send a signal.
[0] https://sa.my/androidmap/
[1] https://medium.com/@sorenlouv/how-you-can-use-facebook-to-tr...
[2] https://ghostinfluence.com/the-ultimate-retaliation-pranking...
[3] https://www.wired.com/story/track-location-with-mobile-ads-1...
I mean, this is only true if you pull out money once just for this one purchase. If you pull out money regularly and make most of your purchases with that cash, it would be incredibly difficult. If I pull out $500-$1000 per week in cash, with occasional $1,500-$2,000 weeks, there would be very little ability to know if I was buying a phone on a $2,000 week or a car repair. If a $1,500 week was a regular week with a fancy dinner, or a low expense week with a phone purchase.
It would be like trying to tell what you bought with the credit card you use for all your expenses by only looking at the size of the payment each month from your bank account.
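The "regular cash user" argument can be sketched numerically (toy Python with made-up withdrawal amounts, purely illustrative): a week that includes a phone purchase sits near the edge of ordinary withdrawal variance rather than standing out as an obvious outlier.

```python
import statistics

# Hypothetical weekly cash withdrawals over a few months (dollars).
# One $2,000 week quietly includes a ~$1,000 phone purchase.
weeks = [500, 750, 600, 1000, 550, 1500, 700, 900, 2000, 650, 800, 1500]

mean = statistics.mean(weeks)
stdev = statistics.stdev(weeks)

# How unusual does the phone-purchase week look to an observer who
# only sees withdrawal amounts?
phone_week = 2000
z = (phone_week - mean) / stdev
print(f"mean={mean:.0f} stdev={stdev:.0f} z-score={z:.2f}")
# A z-score in the low 2s is the edge of normal variation for this
# spender, versus someone who only withdraws cash for one-off purchases,
# where a single large withdrawal is trivially linkable to the purchase.
```

The same reasoning is why the credit-card analogy in the next comment holds: aggregate totals leak far less than itemized transactions.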
A physical pop-up? The online google store requires a google account that has your personal info already..
I use a google account for convenience for some purposes, and host my own email (out of principle, not exactly super interesting material). It would be nice if when I enter the 'duress' password it erased everything except the gmail related activity.
That's a thing in the US? Here, clerks in various stores ask me for postal code but nothing else and I could refuse giving that info.
Loving it.