Ex-WhatsApp Cybersecurity Head Says Meta Endangered Billions of Users
Posted: 4 months ago
Active: 4 months ago
Source: theguardian.com (Tech, story, high profile)
Sentiment: heated, negative
Debate score: 85/100
Key topics
WhatsApp
Meta
Data Security
Privacy
Surveillance
A former WhatsApp cybersecurity head alleges that Meta endangered billions of users by allowing unrestricted access to user data, sparking controversy and criticism about Meta's data handling practices.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 3m after posting
Peak period: 88 comments in 0-6h
Average per period: 20 comments
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
1. Story posted: Sep 8, 2025 at 5:26 PM EDT (4 months ago)
2. First comment: Sep 8, 2025 at 5:28 PM EDT (3m after posting)
3. Peak activity: 88 comments in 0-6h (hottest window of the conversation)
4. Latest activity: Sep 12, 2025 at 2:36 PM EDT (4 months ago)
ID: 45174221 | Type: story | Last synced: 11/20/2025, 7:50:26 PM
FTA:
> Attaullah Baig, who served as head of security for WhatsApp from 2021 to 2025, claims that approximately 1,500 engineers had unrestricted access to user data without proper oversight, potentially violating a US government order that imposed a $5bn penalty on the company in 2020.
From the article: > including contact information, IP addresses and profile photos
I can confirm this, I used to work at WhatsApp.
Academics have also reverse engineered it, and though there are some weaknesses, it's not a lie that WhatsApp is E2EE. Here are some analyses I just found:
- https://eprint.iacr.org/2025/794.pdf
- https://i.blackhat.com/USA-19/Wednesday/us-19-Zaikin-Reverse...
You're still just blindly trusting this is the case. You can't verify the encryption or any of the code.
It would be trivial to actually encrypt the message and send it out, then store an unencrypted version locally and quietly exfiltrate it later.
They have to already be storing an unencrypted version locally, because you can see the messages. So unless you're analyzing packets on the scale of months or years, you cannot possibly know that it isn't being exfiltrated at some point.
Take it a step further: put the exfiltration behind a flag, and then when the NSA asks, turn on the flag for that person. Security researchers will never find it.
Remember, kids: End to end encryption is useless if the "ends" are fully controlled by an (untrustworthy) third party.
If I were Evil-Tim-Cook, I'd have a deal with the FBI (and other agencies) where I'd hand over some user's data, in return for them keeping that secret and occasionally very publicly taking Apple to court demanding they expose a specific user and intentionally losing - to bolster Apple's privacy reputation.
The FBI wants its investigations to go to court and lead to convictions. Any evidence gained in this way would be exposed as coming from Apple; notwithstanding parallel construction:
* https://en.wikipedia.org/wiki/Parallel_construction
As for other agencies, I'm sure many have exploits to attack these devices and get spyware on them, and so may not need Apple's assistance.
For instance, if someone shared something incriminating in a group chat and got arrested, and that info was only shared in the group chat, they'd have to silence everyone in that group chat to ensure that the channel still seemed secure. I don't think our government, at least, is that competent or careful.
But also, people wayyyy overhype how much Apple tries to come off as privacy-forward. They sell ads and don't even allow you to deny apps access to the internet, and for the most part their phone security seems more focused on denying you control over your own phone than on denying a third party access to it. I think they just don't want the hassle of complying with warrants. Stuff like Pegasus would only be so easy to sell if you couldn't lean on the company to gain access, and I think it'd be difficult for hundreds of countries to conspire to obscure legal pressure. Finally, Apple generally has little to gain from reading your data, unlike other tech giants with perverse incentives.
Of course this is all speculation, but I do trust iMessage much more than I trust anything coming out of Meta, and most of what comes out of Google.
Corrupt investigators can use parallel construction to pretend that the key breakthrough in the case was actually something legal.
Clearly, you are underestimating the intelligence and capabilities of the US government. They have a lot of money. Like... A lot of money.
“Only” is doing an incredible amount of work there.
Unless you concoct something incriminating solely for the purpose of testing this, the incriminating thing discussed in the group chat previously happened in the real world. Ripples of information were created there and can be found (parallel construction).
If they fail in parallel construction, they always have the option to continue. For the vast majority of cases where opsec isn't 100% foolproof, we hear about them. For the few cases where it was foolproof, we just don't hear about them.
Apple is part of PRISM, so there's approximately a 100% chance that anything you send to Apple via message, cloud, or whatever else gets passed on to the NSA and consequently to any agency that wants it. The entire mass data collection they are doing is probably unconstitutional and thus illegal, but any time it gets challenged in court it gets thrown out for lack of standing - nobody can prove it was used against them, so they don't have the legal standing to sue.
And the reason is that its usage is never acknowledged in court. Instead there is parallel construction. [1] For instance, imagine the NSA finds out somebody is, e.g., muling some drugs. They tip off the police, the police find the car in question and create some reason to pull it over - perhaps it was 'driving recklessly.' They coincidentally find the cache of drugs after searching the car because the driver was 'behaving erratically', and then this 'coincidence' is how the evidence is introduced into court.
----
So getting back to Apple: they probably want to have their cake and eat it too. By giving the NSA et al. all they want behind the scenes, they maintain those positive relations (and compensatory $$$ from the government), but by genuinely fighting in court against its normalization (which would allow the evidence to be introduced directly), they implicitly lie to their users that their data is being kept protected. So it's this sort of strange thing where it's a facade, but simultaneously also real.
[1] - https://en.wikipedia.org/wiki/Parallel_construction
It's kind of wild that this is the part of the deep state MAGA just forgot about.
* Recovery Keys
* Recovery Contact (someone who holds your recovery key in key escrow)
You probably mean outside of the USA; it's huge in Europe/UK
(which doesn't contradict your main point)
The USA is special because it is the (only?) country where iPhone has more users than Android.
If you give someone your number, they’ll text you on WhatsApp.
Blue bubble isn't really a thing ever mentioned in France either, not enough iPhone market share.
Nobody uses iMessage. People with iPhone use WhatsApp too.
The user experience of iMessage used to be subpar, and now everyone has WhatsApp installed anyway; the feature set is the same and it works on all phone brands, so nobody feels like switching.
Russia: Telegram
Taiwan: Line
Japan: Line
By contrast, WhatsApp is best known to me for being used in Europe, Australia, and India.
For business comms, drop Instagram and move WhatsApp to first.
For Singapore it seems LinkedIn messages are the go to IM for business.
Europe P2P: Telegram is number one by a huge margin, then WhatsApp. B2B: WhatsApp, period.
YES!
> According to the 115-page complaint, Baig discovered through
> internal security testing that WhatsApp engineers could “move
> or steal user data” including contact information, IP addresses
> and profile photos “without detection or audit trail”.
That isn't really the breach you're making it out to be. Profile photos, unless made private/contacts only, are already publicly visible, and so is "contact information".
Of course these are useful to intelligence services, but this doesn't mean that Baig found they don't have true end-to-end encryption.
It also makes me wonder about Google's change to Android security patches - the move to a quarterly cadence, under the guise of "making it easier for OEMs", is really just so that Paragon and other nation-state spyware vendors have access to the vulnerabilities for at least 4 months before they get patched.
Skeletons keep piling up while PR tries to dismiss them.
Corporate communications has playbook damage control responses, and this quote seems to be suggesting that the quoted response is one of them (it's "familiar").
Whether "former employees" are sketchily operating from playbooks, who knows. Because PR playbook-sounding statements don't have a lot of credibility.
Or the PR team undermines their own credibility with a stock and specious fact-free non-response.
I think the point of these is to dodge the even guiltier look of "no comment", and to signal to their shareholders that there won't be any potentially costly cooperative engagement from their side.
They don’t expect to be believed.
It was bought as a power play, consolidation of tech power. Why would I trust them to do the right thing?
I'm guessing there will be some tricky legal wording in their T&C that wouldn't rule them out from being an intermediate entity that can see messages.
From enabling genocide in Myanmar, to interfering with elections, to giving user data to third parties in violation of its own data policies, to straight-up weird stuff like pirating/torrenting books to train their steaming pile of garbage called Llama, to having sex chatbots be weird to children.
And then there are the even weirder decisions of Zuck, the biggest loser of all:
- VR didn't seem to catch on
- the metaverse is a giant smelly pile of poo and he sunk millions into it
- he is hiring AI engineers at absurd money in a rapidly cooling bubble market
- he immediately started ass-kissing the orange stain that calls himself president
Is he purposefully trying to be a caricature of a cartoon villain, a grotesque loser, and his company an emblem of evil? Or is it just cluelessness?
He sunk tens of billions.
Estimates (because we don't have "Reality Labs" broken out before 2019) put Zuck's Metaverse Misadventure & Boondoggle about $75B in the hole ($10B revenue on $85B spend) with no signs of a turnaround in revenue.
There are plans to turn things around with AR spectacles but decent ones are years off and will require entirely new investment with little re-use of that $75B Metaverse nonsense (Oculus acquisition, 5 generations of Quest R&D, Horizon Worlds, partnered and sponsored games and content, etc.)
The only real ROI will be the experience and staff gained. The rest will almost certainly land in the dustbin.
1) leave quietly and tell no one: con - no one on HN gets to talk about it. The next person needing money does it anyway.
2) leave loudly when you're still poor: con - you get blacklisted from tech and die from a preventable disease working at a gas station without insurance. The company implements the policy anyway.
3) leave loudly when you're rich: con - people accuse you of selling out the users.
4) Don't join Meta in the first place
I have consistently told recruiters from Meta to leave me alone. It is a company that has knowingly done massive harm to our culture and our children, and I have no interest in ever working with or for them.
This said, WhatsApp is not open source, so it's impossible for users to verify how the encryption works; users have to trust that it's properly end-to-end encrypted.
If you care about privacy (and you should), then you should use Signal instead of WhatsApp.
Well with WhatsApp they most definitely can, but it has never been a secret. WhatsApp always had access to the metadata (whereas Signal makes a lot of effort to reduce the metadata they have access to). In ~2016 WhatsApp integrated the Signal protocol to add end-to-end encryption, but did nothing about the metadata.
Again: if you care about privacy, use Signal.
If an app sends the message content in clear through the notifications, then it is badly designed, period.
If WhatsApp central servers could push a notification to your phone that contained your actual message content, it couldn't be E2EE.
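As a minimal sketch of that point (hypothetical field names and the Python `cryptography` library, not anything WhatsApp actually ships): an E2EE app can still show message content in notifications because the push payload carries only ciphertext, and the device decrypts it locally with a key the server never holds.

```python
# Illustrative sketch only (made-up names, not WhatsApp's implementation):
# the push relay only ever sees ciphertext; decryption happens on-device.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_push(plaintext: str, session_key: bytes) -> dict:
    """Runs on the sender's device; the server only relays this dict."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext.encode(), None)
    return {"nonce": nonce.hex(), "ciphertext": ciphertext.hex()}

def render_push_notification(payload: dict, session_key: bytes) -> str:
    """Runs on the recipient's device when the push arrives."""
    aead = AESGCM(session_key)  # 256-bit key held only by the two endpoints
    plaintext = aead.decrypt(
        bytes.fromhex(payload["nonce"]),
        bytes.fromhex(payload["ciphertext"]),
        None,  # no associated data in this sketch
    )
    return plaintext.decode()  # shown in the local notification UI

# Example round trip on a shared session key:
# key = AESGCM.generate_key(bit_length=256)
# print(render_push_notification(encrypt_for_push("hi", key), key))
```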
I don't even take this statement at face value. It's trivially easy to include models on the client side that do some message classification and treat the result as "metadata" that gives insight into the content of the message.
from here: https://www.courtlistener.com/docket/71293063/baig-v-meta-pl...
WhatsApp is way beyond just texting and calling; it is basically global infrastructure now, used daily by governments, NGOs, and billions of people. This is not a startup screw-up, it's a public utility that got seriously messed up. Heads need to roll. Stop playing god. Secure the platform or step aside.
> Company refused to allocate more than around 10 engineers to the Security team at any point
If true, this tells the story about the security culture at WhatsApp. Assuming a backlog of known weaknesses (as any established code base will have), and the velocity that 100 PMs and 1200 SWEs implies, how would you do anything as a security team besides stick your fingers in the figurative holes in the dike? The ensuing conflict between Baig and his superiors about not fixing things is surely going to result in an assessment of "poor performance", but it is likely just Baig giving a f** about user data.
There are very, very few apps I really trust. E.g. the only mechanism I trust for communicating passwords securely is GPG; I wouldn't even use Signal for that.
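For anyone curious what that workflow looks like concretely, here is a minimal sketch (hypothetical helper name and recipient, not a recommendation of specific tooling) that shells out to the gpg CLI from Python; it assumes gpg is installed and the recipient's public key has already been imported and verified out of band.

```python
# Illustrative sketch: encrypt a secret to a known GPG recipient via the gpg
# CLI. Assumes the recipient's key is imported and marked as trusted, so gpg
# won't prompt interactively.
import subprocess

def encrypt_for_recipient(plaintext: str, recipient: str) -> str:
    """Return an ASCII-armored OpenPGP message readable only by `recipient`."""
    result = subprocess.run(
        ["gpg", "--armor", "--encrypt", "--recipient", recipient],
        input=plaintext.encode(),
        capture_output=True,
        check=True,
    )
    return result.stdout.decode()

# Example with a hypothetical recipient: the armored output can be pasted into
# any channel; only the holder of the matching private key can decrypt it.
# print(encrypt_for_recipient("hunter2", "alice@example.com"))
```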
Onavo Protect, the VPN client from the data-security app maker acquired by Facebook back in 2013, has now popped up in the Facebook iOS app itself, under the banner “Protect” in the navigation menu. Clicking through on “Protect” will redirect Facebook users to the “Onavo Protect – VPN Security” app’s listing on the App Store.
https://techcrunch.com/2018/02/12/facebook-starts-pushing-it...
There is no oversight of these monstrosities of any sort. I doubt anyone would have issues with the thesis that Meta would implement anything that might curb their user numbers unless it was mandated.
Why would they? They are beholden to their shareholders first. If it isn't illegal then it isn't illegal, immoral perhaps but that is not illegal, unless it is illegal.
My learned friends are going to have to really get their bowling arms warmed up for this sort of skit. For starters, you need a victim ... err complainant.
And not every CEO begins life in their company with "if you need any info just ask, they trust me, dumb fucks"
If the company is so bad (it is), why does he want back?!
'Just pay me the salaries I "missed", and keep them coming.' The regulatory action is just "potential".
I have no sympathy for Meta, but this guy...
Even if nothing changes (the regulatory action is optional), he's happy to contribute (he insists, in fact). Even among people who don't want him there.
Any full remedy would require that his position be reinstated.
If he wins the right to be reinstated, he will be happy to negotiate a payment instead. He is made whole.
What about any of that lacks sensible motives?
> he will be happy to negotiate a payment instead.
This, indeed, sounds way more normal than wanting to keep working for the evil company, and in a toxic environment.
It hasn't occurred to me that one can change their mind and choose a different compensation after the court decision like that.
You don't negotiate with what you don't have yet. But the idea that he or they would actually want to resume working together is beyond unlikely. They will be happy to pay for him to go away, if that's the only way they can legally get rid of him.
https://www.cnbc.com/amp/2022/11/17/meta-disciplined-or-fire...
A related scheme is the existence of brokers who will, for a fee, recover banned or locked accounts. User pays the broker $X, broker pays their contact at Meta $Y, and using internal tooling suddenly a ban or suspension that would normally put someone in an endless loop of automated vague bullshit responses gets restored.
From people at Facebook circa 2018, I know that end user privacy was addressed at multiple checkpoints -- onboarding, the UI of all systems that could theoretically access PII, war stories about senior people being fired due to them marginally misunderstanding the policy, etc.
Note that these friends did not belong to WhatsApp, which was at that time a rather separate suborg.
The privacy violations and complete disregard for user data are too numerous to mention. There's a Wikipedia article that summarizes the ones we publicly know about.
Based on incentives alone, when the company's primary business model is exploiting user data, it's easy to see these events as simple side effects. When the CEO considers users of his products to be "dumb fucks", that culture can only permeate throughout the companies he runs.
Your comment talks about incentives, but you haven’t actually made a rational argument tying actual incentives to behaviour.
So whatever they claim publicly, and probably to their low-level employees, is just marketing to cover their asses and minimize the impact to their bottom line.
You claim it’s all talk, but it’s not much more effort to walk the walk. It doesn’t hurt profits to do it.
The problem is similar to that of government efforts to ban encryption: if you have a backdoor, everyone has a backdoor.
If Meta is collecting huge amount of user info like candy (they are) and using it for business purposes (they are), then necessarily those employees implementing those business purposes can do that, too.
You can make them pinky promise not to. That doesn't do anything.
Amazon has a similar problem with stalking via Ring cameras. You collect and store live feeds from every Ring camera? News flash: your employees can watch them, too! They're gonna use that to violate your customers!
Personally, it doesn't matter to me whether there are auditing systems in place if the data is readable in any way, shape or form.
I haven't touched a lot of these cybersecurity parts of the industry, especially the policy side, for a while…
… but I do recall that auditing was a stronger motivator than prevention. There were policies around checking the audit logs, not being able to alter audit logs, and ensuring that nobody really knew exactly what was audited (except for a handful of individuals, of course).
I could be wrong, but "observe and report" felt like the strongest security guarantee available inside the policies we followed (PCI-DSS Tier 1), and prevention was a nice-to-have on top.
That strategy doesn't help a victim who's being stalked by an employee who can use your system to find their new home address. Stalkers often don't care if they get fired (or worse), so the motivator doesn't work, because they aren't behaving rationally to begin with.
I'm not talking about small businesses here, but large corporations that have more than enough resources to do better than just auditing.
> crime happens but perpetrators will be punished
Societies can't prevent crime without draconian measures that stifle all of our freedoms to an extreme degree. Corporations can easily put barriers in place that make it much more difficult (or impossible) to gain unauthorized access to customer information. The entire system is under their control.
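As a minimal sketch of the two controls this sub-thread contrasts (entirely made-up names, not any company's real tooling): a hard authorization gate that denies the request outright is the prevention barrier, while an append-only record of each permitted access is the "observe and report" layer on top.

```python
# Illustrative sketch: prevention (a hard authorization gate) combined with
# observe-and-report (an append-only audit trail for permitted accesses).
import json
import time

AUDIT_LOG_PATH = "access_audit.jsonl"  # hypothetical append-only log

def fetch_user_record(requester: str, user_id: str, justification: str,
                      authorized_requesters: set) -> dict:
    if requester not in authorized_requesters:
        # Prevention: the request never reaches the data store at all.
        raise PermissionError(f"{requester} is not authorized to read user data")
    # Observe and report: every permitted access leaves an audit entry.
    with open(AUDIT_LOG_PATH, "a") as log:
        log.write(json.dumps({
            "ts": time.time(),
            "requester": requester,
            "user_id": user_id,
            "justification": justification,
        }) + "\n")
    return load_record(user_id)

def load_record(user_id: str) -> dict:
    # Stand-in for the real data-store read.
    return {"user_id": user_id}
```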
There may be some details with the implementation of this, but once we've got that check box, then things will be secure.
Or maybe trillions of dollars can't change digital physics. I don't care how much money you have, you can't make water not be wet.
Different culture from the blue app, or whatever they call it?
So not messages.
Counterpoint: he's a monopolist and scummy person (https://news.ycombinator.com/item?id=1692122) who refuses to stop (https://arstechnica.com/tech-policy/2019/09/snapchat-reporte...) from the early days onwards (https://news.ycombinator.com/item?id=1169354)
https://news.ycombinator.com/item?id=15007454
22 more comments available on Hacker News