Kurt Got Got
Posted 3 months ago · Active 3 months ago
fly.io · Tech · story · High profile
Tone: calm/mixed · Debate: 60/100
Key topics: Phishing, Security, MFA
The CEO of Fly.io shares a story of getting phished and discusses the security measures they have in place, sparking a discussion on the importance of security and the limitations of MFA.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 54m · Peak period: 81 comments in 0-6h · Avg per period: 16
Based on 160 loaded comments
Key moments
1. Story posted: Oct 8, 2025 at 5:02 PM EDT (3 months ago)
2. First comment: Oct 8, 2025 at 5:56 PM EDT (54m after posting)
3. Peak activity: 81 comments in 0-6h, the hottest window of the conversation
4. Latest activity: Oct 11, 2025 at 1:51 AM EDT (3 months ago)
ID: 45520615 · Type: story · Last synced: 11/20/2025, 8:28:07 PM
We would like to think that we're the smart ones and above such low level types of exploits, but the reality is that they can catch us at any moment on a good or bad day.
Good write up
They literally admit they pay a Zoomer to make memes for Twitter. I think you are falling for the PR.
I have almost been got, a couple of times. I'm not sure, but I may have realized that I got got about 0.5 seconds after clicking[0], and was able to lock down before they were able to grab it.
[0] https://imgur.com/EfQrdWY
We shouldn't have, and we do take it seriously now.
https://apnews.com/article/myanmar-usaid-thailand-trump-rubi...
Wouldn't that also require convincing your customers to follow that account?
Use passkeys for everything, like Thomas says.
I’d like to write a follow-up that covers authentication apps/devices, but I need to do some research, and find free versions.
> They just rely on you being busy, or out, or tired, and just not checking closely enough
For example, Twitter relatively recently changed from authenticating on twitter.com to redirecting you to x.com to authenticate (interestingly, Firefox somehow still knows to auto fill my password, but not my username on the first page).
Now on a few occasions I’ve had to copy passwords in order to access things in a different browser, and I think I did encounter one site some years ago where autofill didn’t work, but I really do find autofill almost completely reliable.
You'll identify on id.me
People have just gotten used to this sort of thing unfortunately
For password safe users, auth being handled entirely on a different origin is completely fine, so long as the credentials are bound to (only used on, including initial registration) that origin. The hazard is only when login occurs via multiple domains—which in this case would mean if you had <input> elements on both tax.gov and id.me taking the same username and password, which I don’t believe you do. Your password safe won’t care if you started at https://tax.gov, the origin you created the credentials on was https://id.me, and so that’s the origin it will autofill for.
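The origin-binding behavior described above can be sketched in a few lines. This is a toy model of how a password safe keys credentials, not any particular manager's real implementation:

```python
from urllib.parse import urlsplit

class CredentialStore:
    """Toy origin-bound credential store (illustrative only)."""

    def __init__(self):
        self._by_origin = {}

    @staticmethod
    def origin(url: str) -> str:
        # An origin is scheme + host(:port); the path is irrelevant.
        parts = urlsplit(url)
        return f"{parts.scheme}://{parts.netloc}"

    def save(self, url: str, username: str, password: str) -> None:
        # Credentials are keyed by the exact origin they were created on.
        self._by_origin[self.origin(url)] = (username, password)

    def autofill(self, url: str):
        # Lookalike domains get nothing: no fuzzy matching, no substrings.
        return self._by_origin.get(self.origin(url))

store = CredentialStore()
store.save("https://id.me/login", "alice", "hunter2")

assert store.autofill("https://id.me/session") == ("alice", "hunter2")
assert store.autofill("https://tax.gov/login") is None        # different origin
assert store.autofill("https://id.me.evil.example/") is None  # lookalike host
```

The point of the exact-match lookup is that it never matters where the user *started*; only the origin that registered the credential will ever see it autofilled.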
In Texas I've had more than one site where you create the login on one site but use that same login on multiple different domains that are NOT directly connected to a single authentication site (id.me in the example).
For example, BitWarden has spent the past month refusing to auto fill fields for me. Bugs are really not uncommon at all, I'd think my password manager is broken before I thought I'm getting phished (which is exactly how they get you).
Luckily the only things I don't use passkeys or hardware keys for are things I don't care about, so I can't even remember what was phished. It goes to show, though, that that's what saved me, not the password manager, not my strong password, nothing.
Example: Citi bank has citibankonline.com, citi.com, citidirect.com, citientertainment.com, etc. Would you be suspicious of a link to citibankdirect.com? Would you check the certificate for each link going there, and trace it down, or just assume Citi is up to their shenanigans again and paste the password manually? It's a jungle out there.
What do you get from checking a certificate? Oh yeah, must really be citibank because they have a shitton of SANs? I'd guess most banks do have a cert with an organization name, but organization names can be misleading, and some banks might use LetsEncrypt?
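For what little it's worth, inspecting a certificate's SANs programmatically looks roughly like this. The `cert` dict below is fabricated example data in the shape Python's `ssl.SSLSocket.getpeercert()` returns:

```python
def san_dns_names(cert: dict) -> list[str]:
    """Extract DNS names from a getpeercert()-style certificate dict."""
    return [value for kind, value in cert.get("subjectAltName", ())
            if kind == "DNS"]

# Fabricated certificate data for illustration only.
cert = {
    "subject": ((("organizationName", "Citigroup Inc."),),),
    "subjectAltName": (("DNS", "citi.com"), ("DNS", "www.citi.com")),
}

assert san_dns_names(cert) == ["citi.com", "www.citi.com"]
# Note: none of this tells you whether "citibankdirect.com", presenting
# its own perfectly valid certificate, belongs to the same organization.
```

Which is exactly the comment's point: a valid certificate proves control of a domain, not that the domain belongs to who you think it does.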
Any site that wants to phish you will either just not show the passkey flow and hope you forget or show it and make it look like it failed and pop up a helpful message about being able to register a new Passkey once you're logged in. And Passkeys are so finicky in browsers that I'd buy it.
According to WebAuthn, this is not true. Such passkeys are considered "synced passkeys" which are distinct from "device bound" passkeys, which are supposed to be stored in an HSM. WebAuthn allows for an RP to "require" (scare quotes) that the passkey be device bound. Furthermore, the RP can "require" that a specific key store be used. Microsoft enterprise for example requires use of Microsoft Authenticator.
You might ask, how is this enforced? For example, can't KeepassXC simply report that it is a hardware device, or that it is Microsoft Authenticator?
The answer is, there are no mechanisms to enforce this. Yes, KeepassXC can do this. So while you are actually correct that it's possible, the protocol itself pretends that it isn't, which is just one of the many issues with passkeys.
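Concretely, the RP-side "requirements" live in the registration options the server sends. A sketch of the relevant WebAuthn fields (field names per the W3C spec; challenge and user fields elided):

```python
# Sketch of PublicKeyCredentialCreationOptions fields an RP sends during
# WebAuthn registration (illustrative subset, not a complete request).
creation_options = {
    "authenticatorSelection": {
        # Ask for a roaming (e.g. USB security key) authenticator...
        "authenticatorAttachment": "cross-platform",
    },
    # ...and ask the authenticator to attest to what hardware it is.
    "attestation": "direct",
}

# The catch the comment describes: the client and authenticator decide
# what to report back. A software authenticator can answer the request
# however it likes, and the backup-eligibility flags that distinguish
# synced from device-bound passkeys are likewise self-reported. These
# fields are requests, not guarantees.
assert creation_options["attestation"] == "direct"
```

So unless the RP cryptographically verifies an attestation statement against a trusted root (which most consumer RPs do not), "device bound" is a policy preference, not an enforced property.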
A few years ago, I managed to get our InfoSec head phished (as a test). No one is safe :)
For everyone reading though, you should try fly. Unaffiliated except for being a happy customer. 50 lines of toml is so so much better than 1k+ lines of cloudformation.
We will get to this though.
https://fly.io/blog/tokenized-tokens/
> Fly.io supports Google and GitHub as Identity Providers[1]
How about you just support SAML like a real enterprise vendor, so IdP-specific support isn't your problem anymore? I get it, SAML is hard, but it's really the One True Path when it comes to this stuff.
[1] https://fly.io/docs/security/sso/
I'm not exaggerating; you can use the search bar and find longer comments from me on SAML and XMLDSIG. You might just as well ask when we're going to implement DNSSEC.
My favourite slop-generator summarizes this as "While SAML is significantly more complex to implement than OIDC, its design for robust enterprise federation and its maturity have resulted in vendors converging on a more uniform interpretation of its detailed specification, reducing the relative frequency of non-standard implementation quirks when dealing with core B2B SSO scenarios." That being said, if your org is more B2C, maybe it makes sense you haven't prioritized this yet. You'll get there one day :)
Here is a major vulnerability we disclosed earlier this year:
https://workos.com/blog/samlstorm
Code-based 2FA, on the other hand, is completely useless against phishing. If I'm logging in, I'm logging in, and you're getting my 2FA code (regardless of whether it's coming from an SMS or an app).
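That phishability follows directly from how TOTP works: the code is a pure function of the shared secret and the clock, so nothing binds it to the site the user thinks they are typing it into. A minimal RFC 6238 implementation (stdlib only) makes this obvious:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP (SHA-1 variant). Output depends only on the
    shared secret and the current 30-second time window."""
    counter = struct.pack(">Q", unix_time // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    # RFC 4226 dynamic truncation: take 4 bytes at an offset given by
    # the low nibble of the last byte, mask the sign bit, then mod 10^d.
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Reference secret from the RFC 4226/6238 test vectors.
secret = b"12345678901234567890"
assert totp(secret, 0) == "755224"
assert totp(secret, 59) == "287082"

# A phishing proxy that relays this six-digit string to the real site
# within the validity window logs in exactly as the victim would.
```

Compare that with WebAuthn/passkeys, where the signed assertion includes the origin, so a relayed response from a lookalike domain fails verification.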
But it's also why sites that don't work well with a password manager are actively setting their users up to be phished.
Same with every site that uses sketchy domains, or worse redirects you to xyz.auth0.com to sign in.
The key here is the hacker must create the most incisive, scary email that will short circuit your higher brain functions and get you to log in.
I should have realized the fact that bitwarden did not autofill and take that as a sign.
... and specifically by using the link in the email, yes?
It's always possible to have issues, of course, and to make mistakes. But there's a risk profile to this kind of stuff that doesn't align well with how certain people work. Yet those same people will jump on these to fix it up!
Blaming some attribute about user as why they fell for a phishing attempt is categorically misguided.
> the 1Password browser plugin would have noticed that “members-x.com” wasn’t an “x.com” host.
But shared accounts are tricky here, like the post says it's not part of their IdP / SSO and can't be, so it has to be something different. Yes, they can and should use Passkeys and/or 1password browser integration, but if you only have a few shared accounts, that difference makes for a different workflow regardless.
"Properly working password managers" do not provide a strong defense against real world phishing attacks. The weak link of a phishing attack is human fallibility.
"I went to the link which is on mailchimp-sso.com and entered my credentials which - crucially - did not auto-complete from 1Password. I then entered the OTP and the page hung. Moments later, the penny dropped, and I logged onto the official website, which Mailchimp confirmed via a notification email which showed my London IP address:"
One of the most memorable things they shared is they'd throw USB sticks in the parking lot of the company they were pentesting, and somebody would always put the thing into a workstation to see what was on it and get p0wned.
Phishing isn't really that different.
Great reminder to setup Passkeys: https://help.x.com/en/managing-your-account/how-to-use-passk...
There are a _lot_ of drivers for devices on a default windows install. There are a _lot more_ if you allow for Windows Update to install drivers for devices (which it does by default). I would not trust all of them to be secure against a malicious device.
I know this is not how Stuxnet worked (instead using a vulnerability in how LNK files were shown in explorer.exe as the exploit), but that just goes to show how much surface there is to attack using this kind of USB stick.
And yeah, people still routinely plug random USBs in their computers. The average person is simultaneously curious and oblivious to this kind of threat (and I don't blame them - this kind of threat is hard to explain to a lay person).
They'll pick up the SD/TF card and put it into a card reader that they already have, and end up running something just by opening things out of curiosity to see what's on the card.
One could pull this same trick back in the days of floppy discs. Indeed, it was a standard caution three decades ago to reformat found or (someone else's) used floppy discs. Hell, at the time the truly cautious even reformatted bought-new pre-formatted floppy discs.
This isn't a USB-specific risk. It didn't come into being because of USB, and it doesn't go away when the storage medium becomes SD/TF cards.
I'm not, because I am talking about a USB-specific risk that has been described repeatedly throughout the thread. In fact, my initial response was to a comment describing that risk:
> A USB can pretend to be just about any type of device to get the appropriate driver installed and loaded. They can then send malformed packets to that driver to trigger some vulnerability and take over the system.
The discussion is not simply about people running malware voluntarily because they have mystery data available to them. It is about the fact that the hardware itself can behave maliciously, causing malware to run without any interaction from the user beyond being plugged in.
The most commonly described mechanism is that the USB device represents itself to the computer as a keyboard rather than as mass storage; then sends data as if the user had typed keyboard shortcuts to open a command prompt, terminal commands etc. Because of common controller hardware on USB keys, it's even possible for a compromised computer to infect other keys plugged into it, causing them to behave in the same way. This is called https://en.wikipedia.org/wiki/BadUSB and the exploit technique has been publicly known for over a decade.
A MicroSD card cannot represent anything other than storage, by design.
1. SD is not storage-only, see SDIO cards. While I don’t think windows auto-installs drivers for SDIO device on connection, it still feels risky.
2. It’s worth noting Stuxnet would have worked equally well on a bog-standard SD card, relying only on a malformed file ^^.
I wouldn’t plug a random microsd in a computer I cared about.
The enrichment facility had an air-gapped network, and just like our air-gapped networks, they had security requirements that mandated continuous anti-virus definition updates. The AV updates were brought in on a USB thumb drive that had been infected, because it WASN'T air-gapped when the updates were loaded. Obviously their AV tools didn't detect Stuxnet, because it was a state-sponsored, targeted attack, and not in the AV definition database.
So they were a victim of their own security policies, which were very effectively exploited.
I can't find any sources saying that..
(They will run programs, though. They always do.)
These devices want a physical interaction (this is called "User present") for most operations, typically signified by having a push button or contact sensor, so the attacker needs to have a proof of identity ready to sign, send that over - then persuade the user to push the button or whatever. It's not that difficult but it's one more step and if that doesn't work you wasted your shot.
Definitely use FIDO2, but understand that it's not foolproof. Malware, OAuth phishing, XSS, DNS hijacking, etc. will still pwn you.
But, haven’t there been bugs where operating systems will auto run some executable as soon as the USB is plugged in? So, just to be paranoid, I’d classify just plugging the thing in as “running random executables.” At least as a non-security guy.
I wonder if anyone has tried going to a local staples or bestbuy something, and slipping the person at the register a bribe… “if anyone from so-and-so corp buys a flash drive here, put this one in their bag instead.”
Anyway, best to just put glue in the USB ports I guess.
Good systems these days won't accept such a "keyboard" until it's approved by the user.
Working thus far on NetBSD, FreeBSD, and Linux. OpenBSD to come when I can actually get it to successfully install on the hardware that I have.
* https://jdebp.uk/Softwares/nosh/guide/user-virtual-terminal-...
In principle there's no reason that X11 servers or Wayland systems cannot similarly provide fine-grained control over auto-configuration instead of a just-automatically-merge-all-input-devices approach.
The first possible approach is to start off with a non-empty ruleset that simply uses the "old model" (q.v.) and then switch to "opt-in" before commissioning the machine.
The second possible approach is to configure the rules from empty having logged in via the network (or a serial terminal).
The third possible approach is actually the same answer that you are envisaging for the laptop. On the laptop you "know" where the USB builtin keyboard will appear, and you start off having a rule that exactly matches it. If there's a "known" keyboard that comes "in the box" with some other type of machine, you preconfigure for that whatever it is. You can loosen it to matching everything on one specific bus, or the specific vendor/product of the supplied keyboard wherever it may be plugged in, or some such, according to what is "known" about the system; and then tighten the ruleset before commissioning the machine, as before.
The fourth possible approach is to take the boot DASD out, add it to another machine, and change the rules with that machine.
The fifth possible approach is for there to be a step that is part of installation that enumerates what is present at installation time and sets up appropriate rules for it.
It feels to me more like OSes ought to be more secure. But USB devices are extremely convenient.
E.g. https://shop.hak5.org/products/usb-rubber-ducky
New USB-HID keyboard? Ask it to input a sequence shown on screen to gain trust.
Though USB could be better too; having unique gadget serial numbers would help a lot. Matching by vendor:product at least means the duplicate-gadget attack would need to be targeted.
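The opt-in policy being discussed can be sketched as an allowlist keyed on vendor:product plus serial (the field the comment wishes were reliably unique). This is purely illustrative, not tied to any OS's real device-authorization API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UsbIdentity:
    vendor: str   # e.g. "046d" (idVendor, hex)
    product: str  # e.g. "c31c" (idProduct, hex)
    serial: str   # often empty or non-unique on cheap hardware

# Devices the user has explicitly approved.
APPROVED: set[UsbIdentity] = set()

def should_authorize(dev: UsbIdentity) -> bool:
    """Opt-in policy: only previously approved devices are accepted.
    A cloned gadget must now forge vendor, product AND serial, which
    turns an opportunistic attack into a targeted one."""
    return dev in APPROVED

keyboard = UsbIdentity("046d", "c31c", "SN-0001")
APPROVED.add(keyboard)

assert should_authorize(keyboard)
# Same vendor:product but a different serial is rejected.
assert not should_authorize(UsbIdentity("046d", "c31c", "SN-9999"))
```

With non-unique serials, matching degrades to vendor:product only, which is exactly why the duplicate-gadget attack mentioned above remains possible against untargeted allowlists.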
One of my favorite quotes is from an unnamed architect of the plan in a 2012 article about Stuxnet/the cyber attacks on Iran's nuclear program:
"It turns out there is always an idiot around who doesn't think much about the thumb drive in their hand."
Relevant: https://www.schneier.com/blog/archives/2016/10/security_desi...
I know that at least on Linux mounting filesystems can lead to nasty things, so there's FUSE, but ... I have no idea what distros and desktop environments do by default. And then there's all the preview/thumbnail generators and metadata parsers, ...
The U stands for Universal, and it's awfully convenient, but it contributes to the security nightmare.
A CD we can just passively read the bytes off, but if we want our keyboards to just work when we plug them in, then it's going to be harder to secure a supposedly dumb storage device.
Never mind that that 10% is still 1500 people xD
It’s gone so far that they’re now sending them from our internal domains, so when the banner to warn me it was an external email wasn’t there, I also got got.
I got got when they sent out a phishing test email disguised as a survey of user satisfaction with the IT department. Honestly I couldn't even be mad about it - it looked like all those other sketchy corporate surveys complete with a link to a domain similar to Qualtrics (I think it was one or two letters off).
(Speaking as one of the technical users here. Of course, it wouldn't happen to ME! :P )
Are your phishing emails good? If so if you don't mind name dropping the company so I can make a pitch to switch to them.
Congrats on the loot, though! Your former company can't be all bad. ;)
This pisses me off when the company I work for has a website for the new application of the week. I couldn't even begin to tell you how many websites we have. They don't have a list of them anywhere.
These are so obviously useless. When the majority of your email has a warning banner, it stops being any sort of warning. It's like being at "code orange" for 20 years after 9/11; no-one maintained "heightened security awareness" for decades, it just became something else to filter.
All they've done is teach me to spot the phishing tests, because our email is configured to let the test bypass the banner.
So, of course, we got to a point as a company where no one opened any email or clicked any link ever. This caused HR pain every year during open-enrollment season, for other annual trainings, etc.
At one point they started putting “THIS IS NOT A PHISH” in big red letters at the top of the email body to get folks to open emails and handle paperwork.
So then our trainers stole the “NOT A PHISH” header and got almost the entire company with that one email.
When I see these sophisticated phishing messages I like to click through and check out how well-made the phishing site itself is, sometimes I fill their form with bogus info to waste their time. So I opened the link in a sandboxed window, looked around, entered nothing into any forms.
It turns out the email was from a pen testing firm my employer had hired, and it had a code baked into the url linked to me. So they reported that I had been successfully phished, even though I never input any data, let alone anything sensitive.
If that's the bar pen testing firms use to say that they've succeeded in phishing, then it's not very useful.
For what it's worth, all vendors I've worked with in that space report on both. I'm pretty sure even o365's built-in (and rather crude) tool reports both on "clicked link" and "submitted credentials". I'd estimate it's more likely your employer was able to tell the difference, but didn't bother differentiating between the two when assigning follow-up training, because just clicking is bad enough.
They have no import/export, so you are stuck in the iOS/Android ecosystem or have to do the passkey setup for all pages all over again.
Live. On stage. In minutes. People fall for it so reliably that you can do that.
When we ran it we got fake vouchers for "cost coffee" with a redeem link, new negative reviews of the company on "trustplot" with a reply link, and abnormal activity on your "whatapp" with a map of Russia, and a report link. They were exceptionally successful even despite the silly names.
The avenue for catching this is that the password manager’s autofill won’t work on the phishing site, and the user could notice that and catch that it’s a malicious domain
Obviously SSO-y stuff is _better_, but autofill seems important for helping to prevent this kind of scam. Doesn't prevent everything of course!
Since this attack happened despite Kurt using 1Password, I'm really not all that receptive to the idea that 1Password is a good answer to this problem.
We can always make mistakes of course. And yeah, sometimes we just haven't done something.
81 more comments available on Hacker News