Wanted to Spy on My Dog, Ended Up Spying on TP-Link
Key topics
The author reverse-engineered a TP-Link Tapo camera to monitor their dog, discovering security vulnerabilities and limitations in the process, sparking a discussion on IoT security and the frustrations of working with cloud-first devices.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 5m after posting
- Peak period: 120 comments in 0-12h
- Avg / period: 26.7
- Based on 160 loaded comments
Key moments
- Story posted: Sep 15, 2025 at 12:28 PM EDT (4 months ago)
- First comment: Sep 15, 2025 at 12:33 PM EDT (5m after posting)
- Peak activity: 120 comments in the 0-12h window (hottest window of the conversation)
- Latest activity: Sep 23, 2025 at 6:30 AM EDT (3 months ago)
The thing I've most been convinced of in the past 5 years of building as much 'iot/smart home' stuff out as possible in my house is that nearly every vendor is selling crap that has marginal usefulness outside of a 'party trick' in isolation. Building out a whole smart home setup is frustrating unless it's all from one vendor, but there isn't one vendor which does all of it well for every need.
On my phone I have apps for: Ecobee, Lutron, Hue, 4 separate camera vendors[1], Meross, and Smart Life. Probably a couple more that I'm forgetting.
Only Lutron and Hue are reasonable in that they allow pretty comprehensive control to be done by a hub or HomeKit so I never have to use those apps.
It's been years since Matter and Thread were supposedly settled upon as the new standards for control and networking, but instead of being full of compatible devices, the market is absolutely packed with cheap Wi-Fi devices, each of which is cloud-dependent and demands to be administered, and even used day-to-day, only through a pile-of-garbage mobile app whose main purpose is to upsell you on some cloud services.
[1] I admit the fact I have 4 is my fault for opportunistically buying cameras that were cheap rather than at least sticking with one vendor. But many people have a good excuse, perhaps one vendor makes the best doorbell camera, while another might make a better PTZ indoor camera.
Even if your hardware doesn't support local APIs, there's a good chance someone has made an HA integration to talk to their cloud API.
And if they haven’t, you can pretty trivially write your own and distribute it through HACS (I’ve got three integrations in HACS and one in mainline now)
My #1 wish would be for someone to build a HA-native voice assistant speaker. I'd pay $100 each for a smart speaker of the physical quality of the $30 Google Home Mini but which integrated directly with HA and used a modern LLM to decide what the user's intent was, instead of the Google Assistant or Siri nonsense which is like playing a text adventure whose preferred syntax changes hourly. I'd pay that plus a monthly fee to have that exist and just work.
This M5Stack ASR unit costs $7.50 and has a vocab of about 40-70 words. That's enough to turn lights and timers on/off. You might need to come up with your own command language, but all of the ASR is entirely local.
https://shop.m5stack.com/products/asr-unit-with-offline-voic...
Sadly, for family reasons I can't take on projects that require more than a few minutes, so I'm holding out hope for someone to bridge the gap between the "project boards that require writing a bunch of code to interface with Home Assistant and define all of its possible abilities and commands" and the "dumb as a post Google thing that you just plug in": a hardware device that is easy to connect to HA and starts out doing what the Google thing can do, but smart instead of stupid like the legacy voice assistants are.
or
https://www.home-assistant.io/voice_control/thirteen-usd-voi...
This is the way most apps work if they have a default password the user is supposed to change.
Some form of "enter the code on the device" or "scan the QR code on the device" could then mutually authenticate the app using proof-of-presence rather than hardcoded passwords. This can still be done completely offline with no "cloud" or other access, or "lock in"; the app just uses the device secret to authenticate with the device locally. Then the user can set a raw RTSP password if desired.
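The proof-of-presence idea above can be sketched concretely. This is illustrative only, not any vendor's actual scheme: the device generates a random secret at the factory (or first boot) and prints it as a QR code on the body; the app scans it and answers a challenge locally, with no cloud involved.

```python
import hashlib
import hmac
import secrets


def make_device_secret() -> str:
    # Factory/first-boot step: generate a random per-device secret and
    # render it as the QR code printed on the device body.
    return secrets.token_hex(16)


def app_response(scanned_secret: str, challenge: bytes) -> str:
    # The app proves physical presence: only someone who scanned the QR
    # code can answer the device's random challenge. Fully offline.
    return hmac.new(scanned_secret.encode(), challenge, hashlib.sha256).hexdigest()


def device_verifies(device_secret: str, challenge: bytes, response: str) -> bool:
    expected = hmac.new(device_secret.encode(), challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)
```

After a successful handshake the app could then set a user-chosen RTSP password, as the comment suggests.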
This way unprovisioned devices are not nearly as vulnerable to network-level attacks. I agree that this is Not Awful but it's also Not Good. Right now, if you buy this camera and plug it into a network and _forget_ to set it up, it's a sitting duck for the time window between network connection and setup.
But that also means that anyone with physical access can often easily get into the device. The complicated password provides an additional layer of the illusion of security, because people then figure "it's not a default admin password, it should be good". The fundamental problem seems to be "many people are bad at passwords and onboarding flows", so trying variations on shipped passwords seems to result in mostly the same problems.
It's hard to decide whether it's good or bad. It is definitely easier. Which I guess matters most in consumer grade routers.
That's the only real vulnerability here, and it's no big deal, but it is A Thing and there is definitely a better way to do this that doesn't lose the freedom of full-offline.
I used to sell a home networking device,[0] and I wouldn't do what you're describing. If there were an issue where the labels calculate the wrong password or the manufacturer screws up which device gets which label, you don't find out until months later when they're in customer hands and they start complaining, and now you have to unwind your manufacturing and fulfillment pipeline to get back all the devices you've shipped.
All that to protect against what attack? One where there's malicious software on the user's network that changes the device password before the user can? In that case, the user would just not use the camera because they can't access the feed.
[0] https://mtlynch.io/i-sold-tinypilot/
> I agree that would be nice, but it also doesn't sound all that practical for a small vendor.
Personalizing / customizing per device always introduces a huge amount of complexity (and thus cost). However, this is TP-Link we're talking about, who definitely have the ability to personalize credentials at scale on other product lines.
And again, to be clear, I'm not trying to argue that the current way is some horrible disaster from TP-Link, just advocating for a better solution where possible. I think the current system reads as fine, honestly, it sounds like typical cobbled together hardware vendor junk that probably has some huge amount of "real" vulnerability in it too, but this particular bit of the architecture doesn't offend me badly.
> now you have to unwind your manufacturing and fulfillment pipeline to get back all the devices you've shipped.
This can be avoided with some other type of proof-of-presence side channel which doesn't rely on manufacturing personalization - for example, a physical side-channel like "hold button to enable some PKI-based backup pairing or firmware update mode." For a camera, there should probably be an option to make this go away once provisioning is successful, since you don't want an attacker performing an evil maid attack on the device, but for pre-provisioning, it's a good option.
For a hardware product mass produced like this, they should already have a custom label that has the unique serial number on it which is also programmed into each device, so they should already have the infrastructure to do that (potentially as part of automated board testing/flashing).
Adding a randomly generated password is hardly more work once you have the ability to do that.
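As a minimal sketch of that idea (the field names and password length are assumptions, not anything TP-Link actually does): the flashing station already knows each unit's serial, so it can emit one extra random string per unit for the same label.

```python
import secrets
import string

# Label-friendly alphabet: uppercase + digits, minus easily-confused glyphs.
SAFE = "".join(c for c in string.ascii_uppercase + string.digits if c not in "O0I1")


def label_record(serial: str) -> dict:
    # One record per unit, produced at board test/flash time: the serial
    # that's already printed on the label, plus a fresh random default
    # password to print alongside it and program into the device.
    password = "".join(secrets.choice(SAFE) for _ in range(10))
    return {"serial": serial, "password": password}
```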
That's the wrong way to do it: just require the user to create a secret on first boot, and have factory-reset functionality for when they forget it.
It is better than a simple secret like 12345678, but it can go wrong too, as in the case of UPC UBEE routers, where the list of potential passwords can be narrowed down to roughly 60 possibilities using a googled generator [1] while knowing only the SSID.
It did require firmware reverse engineering to figure out [2][3], but it applies to most devices I've encountered. Users should ideally always change the default password regardless.
[1] https://upcwifikeys.com/UPC1236567
[2] https://deadcode.me/blog/2016/07/01/UPC-UBEE-EVW3226-WPA2-Re...
[3] https://web.archive.org/web/20161127232750/http://haxx.in/up...
Network devices can at least be monitored and discovered like this.
The fact that OP did all this work to find out the dog sleeps is pure hacker culture. Love to see it :)
At this point Android isn't meaningfully an open-source platform any more, and it hasn't been for years.
On the somewhat refreshing side, they are no longer being dishonest about it.
These are sort of orthogonal rants. People view this as some kind of corporate power struggle but in this context, GrapheneOS, for example also doesn't let you do this kind of thing, because it focuses on preserving user security and privacy rather than using your device as a reverse-engineering tool.
There is certainly a strong argument that limiting third-party app store access and user installation of low-privilege applications is an anticompetitive move, but by and large, that's a different argument from "I want to install Frida on the phone I do banking on," which just isn't a good idea.
The existence of device attestation is certainly hostile to reverse engineering, and that's by design. But from an "I own my hardware and should use it" perspective, Google continue to allow OEM unlock on Play Store purchased Pixel phones, and the developer console will allow self-signing arbitrary APKs for development on an enrolled device, so not so much has changed with next year's Android changes.
That's Google, not GrapheneOS.
* Market forces demand they provide both a website and an Android app.
* If both platforms are equally full of fraud, have the same features, and both have similar use, they cut out half the fraud even if they can only make one or the other fraud proof.
* But it isn't like that in reality: in reality, something more like 80% of their use and 90% of their fraud comes from mobile devices, and so cutting off that route immediately reduces their fraud-load by a lion's share.
Ergo, locking down the app is still in everyone's best interest, before we even get into the mobile app having features the desktop one does not (P2P payments, check deposit, etc.)
And this isn't just a weird theory / ivory tower problem: Device Takeover banking fraud on Android is _rampant_ (see Gigabud/GoldDigger).
If it's true that 90% of fraud comes from mobile despite all of the restrictions, what that tells me is that locking down devices doesn't actually prevent fraud.
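Taking the parent comment's hypothetical split (80% of use and 90% of fraud on mobile) at face value, the per-unit-of-use fraud rates work out like this. These numbers are the comment's illustration, not real statistics:

```python
# Share of usage and of fraud by platform, per the parent's hypothetical.
mobile_use, mobile_fraud = 0.80, 0.90
desktop_use, desktop_fraud = 0.20, 0.10

# Fraud share per unit of usage share: >1 means the platform carries
# disproportionately more fraud than its usage alone would predict.
mobile_rate = mobile_fraud / mobile_use      # 0.90 / 0.80 = 1.125
desktop_rate = desktop_fraud / desktop_use   # 0.10 / 0.20 = 0.5
```

So under those numbers mobile is over twice as fraud-dense per unit of use as desktop, which is the crux of the disagreement between the two comments.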
---
> before we even get into the mobile app having features the desktop one does not (P2P payments, check deposit, etc.)
I think it would be reasonable to disable those specific features on mobile while leaving the rest of the app accessible.
Actually, back when jailbreaking iOS was still actually feasible, I recall the Chase app doing exactly that. The app worked fine, but it wouldn't let me deposit checks, I had to go to a branch for that. A bit annoying, but I can mostly understand that one.
Statistics on mobile vs. desktop banking will really shock you; the mobile usage penetration is easily well upwards of 90% in many markets. There's also a skewed distribution for fraud-vulnerable users and scenarios.
> I think it would be reasonable to disable those specific features on mobile while leaving the rest of the app accessible.
I agree with you in an idealist sense; it would be awesome to be able to use GrapheneOS and have 80% app functionality instead of 0% app functionality. I also completely understand why nobody does it; supporting what's probably <0.001 (if not lower)% of legitimate users in exchange for development time and fraud risk isn't a particularly appealing tradeoff. If I were in a situation to advocate for such a trade-off, I probably would, but I don't think it's evidence of a sinister conspiracy that nobody does that.
But if my goal was to commit fraud, wouldn't I go to wherever it was easiest to commit fraud? The actual market penetration of each platform shouldn't matter.
But that's not really using it, is it? If the process of getting access to do whatever I want on my smartphone makes it cease to be a viable smartphone, can you really count that as being able to use it?
It's like if having your car fixed by a third party mechanic made it not street legal. It is still a car and it does still drive, but are you really still able to meaningfully use it?
And before anyone jumps on my metaphor with examples of where that's actually the case with cars, think about which cases and why. There are modifications that are illegal because they endanger others or the environment, but everything else is fair game.
Firmware which requires updates to be signed with a manufacturer key can still be open source. As long as its code is available publicly, under a license which lets the user create derivative works, it meets the definition. You can still make a version of it that doesn't contain that check, you just can't install that version on the device you bought from the original firmware developer. Some FIDO keys (and I think Bitcoin wallets) do this.
That's not universally true, it depends on the license we're talking about.
As an arbitrary counterexample, the LGPL specifically requires you to give end users of your thing a way to link your object code with their own modified version of the LGPL'd library.
If e.g. Slack required attestation that would be a different story. I need that for work.
Google Android as installed on 99% of stock Android phones never was open source. AOSP continues to be open source and is not affected by any changes made in the proprietary Google Android and Google Play Services ecosystem.
People would do well to stop conflating the two.
For Frida to work you need to root the device, which is impossible on ever more models, and there's an endless supply of very good rooting-detection SDKs on the market, not to mention Play Integrity.
This is the key thing, and the part that will change next year: previously, you could unpack, patch, and repack an APK with the Frida gadget and install it onto an Android device in Developer mode, while the device remained in a "Production" state (with only Developer mode enabled, and no root). Now, the device would either need to be removed from the Android Certified state (unlocked/rooted) or you would need to sign the application with your own Developer Console account and install it on your own device, like the way iOS has worked for years.
There's plenty of physical devices where it is possible, and Google publish official emulator images with root access for every Android version released to date. This part is still OK.
> there's an endless supply of very good rooting detection SDKs on the market, not to mention Play Integrity
Most root detection is beatable with Frida and the like.
Play Integrity & attestation (roughly: 'trusted computing' on your phone, which signs messages as 'from an unmodified certified device' in a way that the server can verify, to only allow connections from known-good devices) is a much larger problem. Best hope here is that a) it creates much work for most apps to bother and b) it eventually gets restricted as anti-competitive. It's literally them charging & setting rules on their competitors for how they get a certificate which allows phones they make to function with all the Android apps on the market, and pushing app makers to restrict their apps to not work on phones from competitors who don't play ball, so I don't think anti-competition pushback here is that implausible medium term.
Yup, but on, say, Samsung, kiss KNOX goodbye: it's fused off once you flash a non-Samsung image.
> and Google publish official emulator images with root access for every Android version released to date. This part is still OK.
Many apps will straight refuse to run in emulators unless you're lucky to snag a debug build that accidentally got pushed to production.
> Most of the root detection is beatable with Frida etc, mostly.
It's a cat and mouse game and frankly, I'm sick of it - and especially about the fact that it's either "accept that you'll need to wait X weeks until <Magisk plugin> gets an update" or "install some unofficial closed source fork that may or may not be laced with malware".
> Best hope here is that a) it creates much work for most apps to bother and b) it eventually gets restricted as anti-competitive.
Rooting detection used to be too much work, then SDKs cropped up that made it very easy, and that will be the case for remote-verifiable hardware attestation.
And restrictions from anti-trust? No way that will happen in the next three years in the US, and here in the EU it takes about 5-10 years until our parliament finally gets to work after a problem gets too much attention for their lazy asses to ignore. And even then, the lobby from banks, game studios ("them cheaters!!!" in f2p scam games) and other influential lobbyists will likely prevent any serious action.
For the (less common) cases where you want to use a non-rooted device (e.g. using Frida by injecting it into the APK via gadget) it gets trickier, but I think in practice there will still be a way for developers to build & install their own APKs with developer mode enabled. This will be tightened, but removing that restriction would effectively make Android development impossible so it seems very unlikely - I think they will block sideloading on all non-developer devices only, or allow you to add your own developer cert for development or similar (all of which would probably be fine for development & reverse engineering, while still being a massive pain for actual distribution of apps).
The larger issue is device attestation, which _could_ make all rooted/non-certified devices progressively less practical, as more apps attempt to aggressively detect unmodified devices. Right now that's largely limited to big financial apps, and has some downsides (you get a bunch of complaints from all 3 GrapheneOS users, and it requires a bunch of corresponding server work to be reliable) but it could become more widespread.
(yes i know the cover image is AI-generated, that's incidental to the content)
It's too bad people spend energy generating them now.
How do you mean?
Some quick back of the napkin math.
Creating a 'throwaway' banner image by hand: maybe 15 minutes on a 100W CPU in Photoshop.
Creating a 'throwaway' banner image by stable diffusion on a 600W GPU: in reality it's probably less than 20 seconds to generate, but let's round it up to one full minute of compute time.
The way I see it, the latter seems to spend less energy, regardless of whether you're talking about human energy or electrical energy. What's the issue here exactly?

The further we can go, the further we will go.
The more CPU power we get, the more JS heavy websites get.
The more images we can generate, the more we will generate.
The more we can do, the more we do, whether we should or not.
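The napkin math a few comments up, using only the wattages and durations stated there, works out to:

```python
# Energy (Wh) = power (W) * time (h), with the comment's numbers.
photoshop_wh = 100 * (15 / 60)  # 15 min on a 100 W machine -> 25 Wh
diffusion_wh = 600 * (1 / 60)   # 1 min on a 600 W GPU      -> 10 Wh
```

On those assumptions the generated image costs less than half the energy of the hand-made one, which is the point the replies then push back on (training costs, rebound effects, and so on).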
You are not accounting for the model training (which can't be ignored: first because you can't ignore fixed costs, and second because we keep training newer models, so amortizing doesn't quite work), the rebound effect, the subsidized bot crawling, etc.
I won't comment further on this; this discussion has been rehashed to death anyway, and in better ways than I can.
IMHO the better way is to not do meaningless cover images, and this is also true of stock, non-AI generated images (I'm not against art, so if it's your strength, by all means, please do meaningful or nice cover images).
Also, fantastic write-up
someone needs to make replacement firmware
ffmpeg can fake it, but it takes a few seconds to grab a frame from the video stream, and of course you can't run ffmpeg from your browser (or wait, can you now?)
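For the non-browser case, grabbing a single frame from an RTSP stream with ffmpeg looks roughly like this. The URL and output filename are placeholders, and a small helper builds the command so it can be run with subprocess:

```python
import subprocess


def snapshot_cmd(rtsp_url: str, out_path: str) -> list[str]:
    # Build an ffmpeg invocation that connects over TCP (more reliable
    # for RTSP) and writes exactly one decoded frame to out_path.
    return [
        "ffmpeg", "-y",
        "-rtsp_transport", "tcp",
        "-i", rtsp_url,
        "-frames:v", "1",
        out_path,
    ]


def take_snapshot(rtsp_url: str, out_path: str) -> None:
    # Blocks for a few seconds while ffmpeg negotiates the stream.
    subprocess.run(snapshot_cmd(rtsp_url, out_path), check=True)
```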
https://nvd.nist.gov/vuln/detail/CVE-2022-37255
Nice project, great to see the scripts doing good work in the wild. If you needed any extra additions or tweaks to get them working, I'd love to hear about it.
frida -U \
  -f com.tplink.iot

The BSD-based distributions sure are powerful, but with the power/heat budget to match.
But I don't like the limitations of BSD systems in terms of hardware compatibility and performance, so I build my router using a plain Linux distro (Debian).
sounds like the core of the issue was that Netgate hired a weirdo, and then botched how they handled it when the weirdo got -- go figure -- weird.
and it showed how FreeBSD handles commits badly and may have few (or no) code reviews
honestly makes me feel bad about using netgate boxes -- what else needs to be fixed?
And WIPO had to take the domain away from them: https://en.wikipedia.org/wiki/PfSense#OPNsense
The solution is nftables.
The solution is bpf.
The solution is emacs-m-x-butterfly-bpf.
;)
It's worth noting that Ubiquiti provides local admin support, and that the Ubiquiti Cloud data breach was actually a false story spread by a disgruntled internal engineer in an attempt to extort his employer.
I am always surprised by how many people give me their ISP chosen router name and ISP chosen password when I connect to their WiFi. I don't want to give my ISP that much control.
Coz I would absolutely 100% not be surprised for your average consumer.
For your average HN reader, I would hope they treat whatever their ISP gave them as just some dumb "switch"-type device that sits outside their trusted network and handles nothing but encrypted traffic. My ISP's device definitely does have Wi-Fi and such, which I disabled. I treat it as a bridge/modem and it's definitely not part of my "inner circle". Hasn't been in 25 years.
Comparing a residential router to a network operator’s router is spurious: those routers don’t perform any sort of filtering for the public internet traffic flowing through them.
What use is reducing the attack surface of a device which only ever initiates connections?
Edit: also, there are network operators that block customer traffic on certain ports like NetBIOS, SMB or SMTP, to name a few.
As for how the router that is theoretically not accepting incoming connections from the internet itself gets compromised in the first place: among other issues some routers can be RCEd by a webpage visited by someone inside the LAN[1]. That’s just one example, you can find tons of these if you search for router vulnerabilities. In practice out of date routers end up in botnets frequently.
It has nothing to do with network operators blocking SMB traffic; the attacker can communicate with the router via whatever C2 mechanism they put in the malware, which probably won’t even involve opening a port on the router. The SMB or what have you to the endpoint would be entirely within the LAN.
[1]: https://www.malwarebytes.com/blog/news/2023/02/arris-vulnera...
The edit was in response to "network operator's routers [...] don't perform any sort of filtering" and had nothing to do with C2 traffic?
Your point of course stands, the situation is terrible.
I think IoT demands a rethink of security.
Like sometimes I want IoT devices to just bloody connect, and if I have to use a published exploit that circumvents online only requirements I will do it.
But some people do genuinely have use cases for cloud speaking IoT stuff.
Really I think the device should ask at first run, and then burn in your response and act only in the selected mode. If you want it to require Cloud MFA, that's an option; if you want to piss Python at your lightbulb to make it blink, then that's where it lives permanently.
RTC setup section:
Main section:
Where:
* <Camera RTC name> is just any old short name you want to assign to the camera.
* <Camera name> is the main name for the camera that will be shown in the Frigate UI.
* <local camera account password> is something set individually on each camera (Settings > Advanced > Camera Account, set it to On and set up username/password > Account Information).
* <Tapo cloud password> is the password set up for the Tapo app (I'm not sure how necessary this is, since there's nowhere that the username is specified... this is the only bit I'm fuzzy on).
This is the basics that works for me for the Tapo cameras. There are a boatload of other settings specific to Frigate (but not specific to Tapo cameras).
This is nowhere near as cool a hack as the article, however.
I use Reolinks for outdoor. The Tapos are all indoor C210/C211 (cheap, but they do the job just fine).
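Pieced together, the settings described above might look roughly like the sketch below in a Frigate config with go2rtc. The IP address, stream path, and the exact tapo:// URL form are assumptions for illustration, not taken from the comment; the angle-bracket placeholders match the ones explained above:

```yaml
go2rtc:
  streams:
    <Camera RTC name>:
      - tapo://<Tapo cloud password>@192.168.1.50

cameras:
  <Camera name>:
    ffmpeg:
      inputs:
        - path: rtsp://<username>:<local camera account password>@192.168.1.50:554/stream1
          roles:
            - record
```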
Looks like the C402 has two different hardware versions[0] so maybe the old one doesn't work but the new one does? A firmware upgrade might also be worth trying. This reddit page suggests trying ONVIF as the go2RTC connection[1].
Good luck!
[0]: https://www.tp-link.com/us/support/download/tapo-c402/
[1]: https://www.reddit.com/r/frigate_nvr/comments/1liosei/tapo_c...
Thanks for your comments.
Thanks for the info, I just hacked away at various config suggestions until one combination worked.
Cheers!
But today I got a c402 (outdoor) thinking I could use it to capture my son's soccer practice. But that doesn't have the camera account option under advanced.
I love the price point of these devices but the functionality is all over the place.
If anyone knows a good outdoor camera, preferably with solar panel, that is cheap and has an rtsp stream, please let me know.
Why do you use rtsp for some streams and then tapo protocol for others? Are these all tapo cameras?
Go2rtc is working flawlessly now with these tapo cameras. Thanks so much. Incredible how easy it was. And integrated with frigate as well.
(TP-Link Firmware Decryption C210 V2 cloud camera bootloaders) https://watchfulip.github.io/28-12-24/tp-link_c210_v2.html?u...
Annoyingly when this is in use, I can't use ONVIF which seems like the only way to pan and tilt the camera using open tools. So if I want to use two way audio and also control the camera, I have to stop the process reading tapo:// stream, start onvif client and rotate, turn off onvif client and start streaming using tapo:// again
The Tapo C200 research project https://drmnsamoliu.github.io/ (https://news.ycombinator.com/item?id=37813013)
PyTapo: Python library for communication with Tapo Cameras https://github.com/JurajNyiri/pytapo (https://news.ycombinator.com/item?id=41267062)
15 more comments available on Hacker News