I Almost Got Hacked by a 'Job Interview'
Key topics
The author narrowly avoided being hacked during a fake job interview. The thread highlights the growing threat of sophisticated scams targeting developers and sparked a discussion of security best practices and the risks of blockchain-related job offers.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 2h after posting
- Peak period: 123 comments in 0-12h
- Avg per period: 20
- Based on 160 loaded comments
Key moments
1. Story posted: Oct 15, 2025 at 8:56 AM EDT (3 months ago)
2. First comment: Oct 15, 2025 at 10:33 AM EDT (2h after posting)
3. Peak activity: 123 comments in 0-12h, the hottest window of the conversation
4. Latest activity: Oct 20, 2025 at 6:43 PM EDT (2 months ago)
But then again, aren't there obvious scams, and scams that are deemed legal? Like promising a car today that will be updated "next year" to be able to drive itself? Or all the enshittified industry's dark patterns, preying on you to click the wrong button?
Let's not downplay dark pattern strategies of some companies that actually do not benefit anyone in society.
I would say they just transition to something else where there is a lower risk with the same reward.
His intuition did.
But AI helped. He did not have to read and process the entire source code himself.
It is a little wild how many things expect to communicate with the internet, even if you tell them not to.
Example: the Cline plugin for VS Code has an option to turn off telemetry, but even then it tries to talk to a server on every prompt, even when using local Ollama.
In any case, even if your firewall protects you, you'll still have to treat the machine as compromised.
https://github.com/evilsocket/opensnitch/wiki/Rules#best-pra...
[1]: https://github.com/sandbox-utils/sandbox-venv [2]: https://github.com/sandbox-utils/sandbox-run
Embedded into this story about being attacked is (hopefully) a serious lesson for all programmers (not just OP) about pulling down random dependencies/code and just yolo'ing them into their own codebases. How do you know your real project's dependencies also don't have subtle malware in them? Have you looked at all of them? Do you regularly audit them after you update? Do you know what other SDKs they are using? Do you know the full list of endpoints they hit?
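If nothing else, you can start by enumerating what you've actually installed and what each package declares it depends on. A minimal sketch using only the Python standard library (this inventories declared requirements; it does not detect malware, and the function name is my own):

```python
from importlib import metadata

def dependency_inventory():
    """Map each installed distribution to the requirements it declares.

    A starting point for an audit: before digging into code, at least
    see which packages pull in which other packages.
    """
    inventory = {}
    for dist in metadata.distributions():
        name = dist.metadata["Name"]
        if name:  # skip broken or partial installs lacking metadata
            inventory[name] = list(dist.requires or [])
    return inventory
```

Printing `sorted(dependency_inventory().items())` gives a quick picture of the transitive surface area you've taken on; tools like `pip-audit` or `osv-scanner` can then check those names against known-vulnerability databases.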
How long do we have until the first serious AI coding agent poisoning attack, where someone finds a way to trick coding assistants into inserting malware while a vibe-coder who doesn't review the code is oblivious?
Any update to any project dependency on my workstation leaves me three options. Either I bet, pray, and hope that there's no malicious code in it; or I keep an isolated VM for every single project; or I just unplug the thing, throw it in the bin, and go do something truly lucrative and sustainable (plumber, electrician, carpenter) that lets me sleep at night.
That's not too hard to do with devcontainers. Most IDEs also support remote execution of some kind so you can edit locally but all the execution happens in a VM/container.
If each developer can audit some portion of their dep tree and reuse prior cached audits, maybe it’s tractable to actually get “eyeballs” on every bit of code?
Not as good as human audit of course, but could improve the Pareto-frontier for cost/effectiveness (ie make the average web dev no-friction usecase safer).
You can't, end of story. ChatGPT is nothing more than an unreliable sniff test even if there were no other problems with this idea.
Secondly, if you re-analyzed the same malicious script over and over again it would eventually pass inspection, and it only needs to pass once.
You’d need some probabilistic signal rather than a binary one. Eg if some user with zero reputation submits a single session saying “all good”, this would be a very weak signal.
If one of the Python contributors submits a batch of 100 reasoning traces all showing green, you’d be more inclined to trust that. And of course you would prefer to see multiple scans from different package managers, infra providers, and OS distributions.
No. That's not how this works.
It will have to involve identity (public key), reputation (white list?), and signing their commits and releases (private key). All the various package managers will need to be validating this stuff before installing anything.
Then your attestation can be a manifest: "here is everything that went into my product, and all of those components are also okay."
See SLSA/SBOM -> https://slsa.dev
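As a toy illustration of the manifest idea (not SLSA itself; a real attestation adds signing, identity, and provenance on top of this), one could hash every input file:

```python
import hashlib
from pathlib import Path

def build_manifest(root):
    """Record a sha256 digest for every file under root (a toy SBOM-style manifest)."""
    root = Path(root)
    return {
        str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
        for path in sorted(root.rglob("*"))
        if path.is_file()
    }

def verify_manifest(root, manifest):
    """True only if the tree still matches exactly what was attested."""
    return build_manifest(root) == manifest
```

A real system would then sign the serialized manifest with the producer's private key, so consumers can verify both integrity (nothing changed) and identity (who attested it), which is the public/private key scheme the comment above describes.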
Risk gets managed, not eliminated. There is no one "correct" approach as risk is a sliding scale that depends on your project's risk appetite.
I have no "hard rules" on how to appraise a dependency. In addition to the above, I also like to skim the issue tracker, skim code for a moment to get a feel for quality, skim the docs, etc. I think that being able to quickly skim a project and get a feel for quality, as well as knowing when to dig deeper and how deep to dig are what makes someone a seasoned developer.
And beware of anyone who has opinions on right vs. wrong without knowing anything about your project and its risk appetite. There's a whole range between "I'm making a microwave website" and "I'm making software that operates MRIs."
[1] https://david-gilbertson.medium.com/im-harvesting-credit-car...
[2] https://blog.qwertysecurity.com/Articles/blog3.html
https://lavamoat.github.io
https://hardenedjs.org
And I do sandbox everything, but it's complicated.
Many of these projects are set to compile only on the latest OSes, which makes sandboxing even more difficult and impossible in a VM, which is actually the red flag.
So I sandbox, but I never get to the point of actually being able to run it,
so they can just assume I'm incompetent and I can avoid having my computer and crypto messed up
https://github.com/skorokithakis/dox
You could have a command like "python3.14" that will run that version of Python in a Docker container, mounting the current directory, and exposing whatever ports you want.
This way you can specify the version of the OS you want, which should let you run things a bit more easily. I think these attacks rely largely on how much friction it is to sandbox something (even remembering the cli flags for Docker, for example) over just running one command that will sandbox by default.
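A sketch of such a wrapper (the `python:<version>` image name and the network-off default are my assumptions, not from the comment):

```python
import os

def docker_python_argv(version, args=(), ports=(), workdir="/work"):
    """Build a `docker run` argv that executes the given Python version
    against the current directory, exposing only the listed ports.

    With no ports requested, the container gets no network at all.
    """
    argv = [
        "docker", "run", "--rm", "-it",
        "-v", f"{os.getcwd()}:{workdir}",  # mount current dir into the container
        "-w", workdir,
    ]
    if ports:
        for port in ports:
            argv += ["-p", f"{port}:{port}"]
    else:
        argv += ["--network", "none"]  # default: fully offline sandbox
    argv += [f"python:{version}", "python"] + list(args)
    return argv

# To use: save a tiny script named `python3.14` on your PATH that runs
#   subprocess.call(docker_python_argv("3.14", sys.argv[1:]))
```

The point is exactly the one the comment makes: once sandboxing is a single command with a safe default, there's no friction left to skip it.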
I develop everything on Linux VMs; they have a desktop, editors, build tools... It simplifies backups and management a lot. The host OS does not even have a browser or a PDF viewer.
Storage and memory is cheap!
whether you'd be able to find the backdoor in those or not, might depend on your skills as a security expert.
Supply-chain attacks are real, and they're here. Attackers attack core developers, then get their code into repositories. As happened this year to the npm package eslint-config-prettier, and last year to the Cyberhaven Chrome extension. Attackers use social engineering to get developers to hand over control of lesser-used packages, which they then compromise. As happened in 2021 with the npm package ua-parser-js, and separately with the Chrome extension The Great Suspender. (I'm picking on Chrome because I wanted examples that impact non-developers. I'm only picking on npm because it turned up quickly when I looked for examples.)
The exact social engineering attack described by the OP is also not new. https://www.csoonline.com/article/3479795/north-korean-cyber... was published last year, and describes this being used at scale by North Korea. Remember, even if you don't have direct access to anything important, a sophisticated attacker may still find you useful as part of a spearphishing campaign aimed at someone else. Because a phishing attack that actually comes from a legitimate friend's account may succeed, where a faked message would not. And a company whose LinkedIn shows real developers, is more compelling than one without.
I mean we had Shai-Hulud about a week ago - we don't need AI for this.
https://search.sunbiz.org/Inquiry/CorporationSearch/SearchRe...
~~Scammers probably got access to the guy's account.~~ (how to make strikethrough...)
He changed his LinkedIn to a different company. I guess check verifications when you get messages from "recruiters."
Unfortunately(?) you can't: https://news.ycombinator.com/formatdoc
also, got blocked by the 'Chief Blockchain Officer' when I asked for a comment.
did you try commenting under one of their posts
like this one here - https://www.linkedin.com/posts/symfa-global_sometimes-the-fi...
A real company wouldn't be scamming candidates.
It could be a real company where someone hijacked an e-mail account to pose as someone from the company, though.
This might be the fourth or fifth time I've seen this type of post this week; is this now a new form of engagement farming?
I've never encountered an Indian IT worker who does that, but I'd say a majority of Chinese IT workers go by an English name.
Also I've gotten the impression that at least a few of my coworkers in Bangalore with anglicized names are Christian. I haven't pried to confirm, but in a couple cases their names don't fit the pattern of being adopted for working with foreigners (e.g. their last name is biblical).
I get that the author might be self-conscious about his English writing skills, but I would still much rather read the original prompt that the author put into ChatGPT, instead of the slop that came out.
The story - if true - is very interesting of course. Big bummer therefore that the author decided to sloppify it.
David, could you share as a response to this comment the original prompt used? Thanks!
so I am not able to share the full chat because i used Claude with google docs integration. but here's the google doc i started with
https://docs.google.com/document/d/1of_uWXw-CppnFtWoehIrr1ir...
this and the following prompt
```
help me turn this into a blog post.
keep things interesting, also make sure you take a look at the images in the google doc
```
with this system prompt
```
% INSTRUCTIONS
- You are an AI Bot that is very good at mimicking an author writing style.
- Your goal is to write content with the tone that is described below.
- Do not go outside the tone instructions below
- Do not use hashtags or emojis
% Description of the authors tone:
1. *Pace*: The examples generally have a brisk pace, quickly moving from one idea to the next without lingering too long on any single point.
2. *Mood*: The mood is often energetic and motivational, with a sense of urgency and excitement.
3. *Tone*: The tone is assertive and confident, often with a hint of humor or sarcasm. There's a strong sense of opinion and authority.
4. *Style*: The style is conversational and informal, using direct language and often incorporating lists or bullet points for emphasis.
5. *Voice*: The voice is distinctive and personal, often reflecting the author's personality and perspective with a touch of wit.
6. *Formality*: The formality is low, with a casual and approachable manner that feels like a conversation with a friend.
7. *Imagery*: Imagery is used sparingly but effectively, often through vivid metaphors or analogies that create strong mental pictures.
8. *Diction*: The diction is straightforward and accessible, with a mix of colloquial expressions and precise language to convey ideas clearly.
9. *Syntax*: The syntax is varied, with a mix of short, punchy sentences and longer, more complex structures to maintain interest and rhythm.
10. *Rhythm*: The rhythm is dynamic, with a lively beat that keeps the reader engaged and propels the narrative forward.
11. *Perspective*: The perspective is often first-person, providing a personal touch and direct connection with the audience.
12. *Tension*: Tension is present in the form of suspense or conflict, often through challenges or obstacles that need to be overcome.
13. *Clarity*: The clarity is high, with ideas presented in a straightforward manner that is easy to understand.
14. *Consistency*: The consistency is strong, maintaining a uniform style and tone throughout each piece.
15. *Emotion*: Emotion is expressed with intensity, often through passionate or enthusiastic language.
16. *Humor*: Humor is present, often through witty remarks or playful language that adds a light-hearted touch.
17. *Irony*: Irony is occasionally used to highlight contradictions or to add a layer of complexity to the narrative.
18. *Symbolism*: Symbolism is used subtly, often through metaphors or analogies that convey deeper meanings.
19. *Complexity*: The complexity is moderate, with ideas presented in a way that is engaging but not overly intricate.
20. *Cohesion*: The cohesion is strong, with different parts of the writing working together harmoniously to support the overall message.
```
Looks like the commonality is that the second word in the pair is often one of (in, out, up, down).
Another way to look at it: the verb doesn't magically grow together and apart if you use it in different tenses (past, present, future). "I am setting up" (present) is two words - therefore the "set up" in "I set up a script yesterday" and "I did not set up" also needs to be two words.
Genuine question: does this formulation style work better than a plain, direct "Mimic my writing style. Use the tone that is described below"?
I haven't updated this prompt in like a year or so. I actually made it for Claude 3.
But the google doc is genuinely good stuff.
(The LLM output was more or less unreadable for me, but your original was very easy to follow and was to-the-point.)
Thanks for sharing your process. This is interesting to see
So much for AI improving efficiency.
You could have written a genuine article several times over. Or one article and proofread it.
The LLM doesn’t know anything you didn’t tell it about this scenario, so all it does is add more words to say the same thing, while losing your authorial voice in the process.
I guess to put it a bit too bluntly: if you can’t be bothered writing it, what makes you think people should bother reading it?
* You had the headline spot on. Then you explained what you thought might be the reason for it.
* Then you pondered about why the OP might have done it.
* Finally you challenged the op to all but admitting his sins, by asking him to share the incriminating prompt he used.
---
(my garbage wasn't written by AI, but I tried my best to imitate its obnoxious style).
I've not been using much LLM output recently, and generally I ask it to STFU and just give me what I asked as concisely as possible. Apparently this means I've seriously gotten out of practice on spotting this stuff. This must be what it looks like to a lot of average people ... very scary.
Advice for bloggers:
Write too much; write whatever comes out of your fingers until you run out of things to write. It shouldn't be too hard to just write whatever comes out if you save your self-criticism for later.
If you're trying to explain something and you run out of things to write before you manage to succeed at your goal, do a bit more research. Not being able to write much about a topic is a good indication that you don't understand it well enough to explain it.
Once you have a mess which somehow gets to the point, cut it way down, think critically about any dead meat. Get rid of anything which isn't actually explaining the topic you want.
Then give it to an LLM, not to re-write, but to provide some editorial suggestions, fix the spelling mistakes, the clunky writing. Be very critical of any major suggestions! Be very critical of anything which no longer feels like it was written by _you_.
At this point, edit it again, scrutinise it. Maybe repeat a subset of the process a couple of times.
This is _enough_ you can post it.
If you want to write a book, get a real editor.
Do not get ChatGPT to write your post.
that's one of my key takeaways from all the comments here. a lot of people actually like the og - pre ai content I wrote more than the blog article it became. i just have to be confident in my own writing I guess.
btw, how do you have Arch in your name and have a Fiancee? sounds fishy :) /s
This "slop" reads perfectly fine to me, and obviously a lot of others, except those who have now been conditioned to watch out for it and react negatively about it.
Think about it, why react negatively? The text reads fine. It is clear, even with my usual lack of attention I found it engaging, and read to the end. In fact, it doesn't engage in the usual hubris style prose that a lot of people think makes them look smarter.
2. It's immediately recognized as AI Slop which makes people question its veracity, or intent
3. If the author can't take the time and effort to create a well-crafted article, it's insulting to ask us to take the time and effort to read it.
4. Allowing this style of writing to become accepted and commonplace leads to a death of variety of styles over time and is not good for anyone. For multiple reasons.
5. A lot of people are cranking out shit just for money, so maybe they wrote this just for money and maybe it's not even true (related to point 3)
> If you think it reads fine, you don't read good prose.
Is reductive and not in good spirit.
Good day.
I found the AI version to be really clear and it helped me understand everything that was going on.
I found the Google doc version harder to read and slightly difficult to understand.
And when I read the Google doc, I understood that I would have preferred the Google doc as well :-D
What's HN policy on obviously LLM written content -- Is it considered kosher?
> The Bottom Line
Are there any moderators left at LinkedIn?
Don't expect LinkedIn to care much about policing messages or paid invitations; many profiles are fake. At most you report people, and if LI gets enough complaints they take the profile down. (Presumably the scammers just create another profile.) I think LI would care much more about being paid with a bad CC.
I suspect LI is doing AI moderation by this point. Maybe we could complain to their customer-service AI about their moderation AI...
You can report abuse and flag it for someone to review, though.
Click "More" button -> "About this profile", RED FLAGS ALL OVER.
- Joined May 2025
- Contact information: updated less than 6 months ago
- Profile photo: updated less than 6 months ago
Funny thing: this profile has the LinkedIn Verified checkmark and was verified by Persona?! That might be a red flag for the Persona service itself; it may contain serious flaws, and cybercriminals may be relying on that checkmark to scam more people.
Basically, don't trust any profile with less than 1 yr of history, even if its work history dates way back and it has a Persona checkmark. That should do it.
[1] https://www.linkedin.com/in/mykola-yanchii-430883368/overlay...
From an attacker standpoint, if an attacker gains access to any email address with @example.com, they could pretend to be the CEO of example.com even if they compromised the lowest level employee.
Apple / Google developer program uses Dun&Bradstreet to verify company and developer identities. That's another way. But LinkedIn doesn't have that feature (yet).
Bad idea.
I never had my work e-mail address on LinkedIn, but then I made the mistake of doing this, and LinkedIn sold my work e-mail address to several dozen companies that are still spamming me a year later.
Seasoned accounts are a positive heuristic in many domains, not just LinkedIn. For example, I sometimes use web.archive.org to check a company's domain to see how far back they've been on the web. Even here on HN, young accounts (green text) are more likely to be griefing, trolling, or spreading misinformation at a higher rate than someone who has been here for years.
Yep. This is how the 3 major credit bureaus in the United States verify your identity. Your residence history and your presence on the distributed Internet are the HARDEST to fake.
Instead you need:
- five years of address history
- a recent utility bill or a council tax bill that has your full address
- maybe a bank statement
- passport or driving license
It just so happens that Experian, etc. have all of that, and even background checking agencies will depend on it.
(Except maybe the sorts of idiots who write job descriptions requiring 10+ years of experience with some tech that's only 2 years old, and the recruiters who blindly publish those job openings. "Mandatory requirements: 10+ years experience using ChatGPT. 5+ years experience deploying MCP servers.")
so.. most of them?
Anyway the problem is not a hiring person expecting it, it's systems written with not enough thought that will expect it for them, and flag the people as untrustworthy who do not match expectations.
Only if you don’t plan ahead. I can’t remember which book/movie/show it was from, but there was a character who spent decades building identities by registering for credit cards, signing up for services, signing leases, posting to social media, etc so that they could sell them in the future. Seems like it would be trivial to automate this for digital only things.
There are probably more ways this can fail.
https://en.wikipedia.org/wiki/Shelf_corporation
When I was 18, with little to no credit, trying to do things, financial institutions would often hit me with security questions like this.
But I was incredibly confused, because many of the questions had no valid answer. Somehow these institutions got the idea that I was my stepmother or something and started asking me about addresses and vehicles she owned before I ever knew her.
Though if your stepmom shares your name (not unlikely if OP is a girl with a common name), it isn't a surprise that they would mix you up.
I've found for the most part account age/usage is not considered at all in major online service providers.
I've straight up been told by Google, Ebay and Amazon that they do not care about account age/legitimacy/seasoning/usage at all and it is not even considered in various cases I've had with these companies.
They simply don't care about customers at all. They are only looking at various legal repercussions balanced against what makes them the most money and that is their real metric.
Ebay: Had a <30day old account make a dispute against me that I did not deliver a product that was over $200 when my account was in good standing for many years with zero disputes. Ebay told me to f-off, ebay rep said my account standing was not a consideration for judgement in the case.
Google: Corporate account in good standing for 8+ years, mid five figure monthly spending. One day locked the account for 32 days with no explanation or contact. At day 30 or so a CS rep in India told me they don't consider spending or account age in their mystery account lockout process.
Amazon: Do I even need to...
I'm considering going back to school to write a "Google Fi 2016-2023: A Case Study in Enshittification" thesis but I'm not sure what academic discipline it fits under.
(I'll say it again for those in the back, if you're looking for ideas, there's arbitrage in service.)
So, just hire one of those "account aging" services?
Because if you expect people to go there keeping everything up to date, posting new stuff, and tracking interactions for 3 years before they can hope to get any gain from the account... that's not reasonable.
What?
You only need to create an account once.
Update it when you're searching for a new job.
You don't need to log in or post regularly. Few people do that.
355 more comments available on Hacker News