A New Threat: Being Replaced by Someone Who Knows AI
Posted about 2 months ago · Active about 2 months ago
wsj.com · Tech · story
Tone: heated, mixed
Debate
80/100
Key topics
AI Adoption
Productivity
Job Security
A company is threatening to replace employees who don't use AI, sparking debate among HN commenters about the effectiveness and implications of AI adoption in the workplace.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2m after posting
Peak period: 25 comments in 0-2h
Avg / period: 7.3
Comment distribution: 44 data points
Based on 44 loaded comments
Key moments
1. Story posted: Nov 7, 2025 at 7:15 PM EST (about 2 months ago)
2. First comment: Nov 7, 2025 at 7:17 PM EST (2m after posting)
3. Peak activity: 25 comments in 0-2h, the hottest window of the conversation
4. Latest activity: Nov 8, 2025 at 7:31 PM EST (about 2 months ago)
ID: 45852859 · Type: story · Last synced: 11/20/2025, 5:39:21 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
1: https://www.ft.com/content/68011c4a-8add-4ac5-b30b-b4127aee4...
If you are not leveraging the best existing tools for your job (and understanding their limitations) then your output will be lower than it should be and company leadership should care about that.
Claude cuts my delivery time at my job by roughly 50%, not to mention things that get done that would never have been attempted before. LLMs do an excellent job seeding literature reviews and summarizing papers. It would be a pretty bad move for someone in my position not to use AI, and pretty unreasonable of leadership not to recognize this.
However, if you were leadership in this scenario, and you see that people using various AI tools are systematically more productive than the people who aren't, what would you do?
Most of the anti-AI people have conceded it sometimes works but they still say it is unreliable or has other problems (copyright etc). However there are still a few that say it doesn't work at all.
Because I use things like computers, applications, search engines, and websites that regularly return the wrong result or fail.
But nuanced and effective AI use, even today with current models, is incredible for productivity in my experience
I do however find it useful for getting an overview of dense chunks of confusing code.
You have to get quite sophisticated to use AI for most higher-value tasks, and the ROI is much less clear than for just helping you write boilerplate. For example, using AI to help optimise GPU kernels by having it try lots of options autonomously is interesting to me, but not trivial to actually implement. Copilot is not gonna cut it.
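As a rough illustration of what "trying lots of options autonomously" means here, the sketch below is a minimal autotuning loop. The parameter names (`block_size`, `unroll`) and the workload are hypothetical stand-ins; a real harness would launch and time an actual GPU kernel for each candidate configuration.

```python
import itertools
import time

def run_kernel(block_size: int, unroll: int, n: int = 200_000) -> int:
    """Stand-in workload; a real harness would launch a GPU kernel
    compiled with these parameters and time it on-device."""
    step = block_size * unroll
    total = 0
    for i in range(0, n, step):
        total += i
    return total

def autotune(search_space):
    """Exhaustively time every candidate config, keep the fastest."""
    best_cfg, best_time = None, float("inf")
    for cfg in search_space:
        start = time.perf_counter()
        run_kernel(*cfg)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_cfg, best_time = cfg, elapsed
    return best_cfg, best_time

if __name__ == "__main__":
    space = itertools.product([32, 64, 128, 256], [1, 2, 4])
    cfg, t = autotune(space)
    print(f"best config: {cfg} ({t:.6f}s)")
```

The interesting (and non-trivial) part in practice is having the model propose the search space and kernel variants rather than enumerating them by hand, which is exactly where a plain autocomplete tool like Copilot falls short.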
For some reason, big companies often tolerate people being horribly inefficient doing their job. Maybe it is starting to change?
I have a prompt which opens scans of checks placed on a matching invoice (EDIT: note that the account line is covered when the scan is made, so as to preclude any personally identifying information being in the scan) and writes a one-line move command renaming the file to include the amount and date of the check, the invoice ID#, and various other information. This lets the file be used to track that the check was entered/deposited. I then copy a folder full of files as their file paths, paste that text into Notepad, find-and-replace the filenames into tab-separated text, and paste it into Excel to total up against the adding machine tape (and to check overall deposits).
On Monday, it worked to drag multiple files into Copilot and run the prompt. On Tuesday, Copilot was updated so that processing multiple files became the bailiwick of "Copilot Pages Mode": after launching, you have to get into that mode with a prompt and a button press, and only 20 files can be processed at a time. Even though the prompt removes the files after processing, it only allows running a couple of batches, so for reliability I've found it necessary to quit and restart after each batch. Even that only works five or six times; after that, Copilot stops allowing files to upload and generates an error when one tries, until it resets the next day and a few more can be processed.
I've been trying various LLM front-ends, but Jan.ai only has this on their roadmap for v0.8, and the other two I tried didn't pan out --- anyone have an LLM which will work for processing multiple files?
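The renaming-and-totaling workflow described above can be sketched as a small script that skips the Notepad find-and-replace step entirely. All filenames and field names here are hypothetical, and the metadata extraction from the scan (the part the LLM does) is assumed to have already happened.

```python
import csv
from pathlib import Path

def renamed_path(scan: Path, amount: str, date: str, invoice_id: str) -> Path:
    """Build the new filename encoding the check details, so the file
    itself records that the check was entered/deposited."""
    return scan.with_name(f"{invoice_id}_{date}_{amount}{scan.suffix}")

def write_deposit_tsv(rows, out_path: Path) -> None:
    """Write tab-separated text that Excel can open directly, replacing
    the paste-into-Notepad / find-and-replace step."""
    with out_path.open("w", newline="") as f:
        writer = csv.writer(f, delimiter="\t")
        writer.writerow(["invoice_id", "date", "amount"])
        writer.writerows(rows)

if __name__ == "__main__":
    p = renamed_path(Path("scan001.png"), "125.00", "2025-11-07", "INV-42")
    print(p.name)  # INV-42_2025-11-07_125.00.png
```

Batching through a chat front-end's file uploader is the fragile part; driving a local model (or an API) from a script like this sidesteps the 20-file limits and daily resets entirely.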
I know a startup founder whose company is going through a bit of a struggle - they hired too many engineers, they haven't gotten product-market fit yet, and they are down to <1 year of runway.
The founder needed to do a layoff (which sucks in every dimension) and made the decision to go all-in on AI-assisted coding. He basically said "if you're not willing to go along, we're going to have to let you go." Many engineers refused and left, and the ones that stayed are committed to giving it a shot with Claude, Codex, etc.
Their runway is now doubled (2 years), they've got a smaller team, and they're going to see if they can throw enough experiments at the wall over the next 18 months to find product-market fit.
If they fail, it's going to be another "bad CEO thought AI could fix his company's problems" story.
But if they succeed....
(Curious what you all would have done in this situation btw...!)
Not meaning to sound accusatory, just asking. Was it the tools provided that they didn’t like? Ideological reasons not to use AI? Was the CEO being too prescriptive with their day to day?
I guess I find it hard to imagine why someone would dig in so much on this issue that they’d leave a job because of it, but 1) I don’t know the specifics of that situation and 2) I like using AI tooling at work for stuff.
1) I don't really like these AI tools; I write better code anyway, and they just slow me down.
2) I like these tools; they make me 10% faster, but they're more like spell check / autocomplete for me than life-changing, I don't want to go all in on agentic coding, etc., and I still want to hand-write everything. And:
3) I am no longer writing code, I am using AI tools (often in parallel) to write code and I am acting like an engineering manager / PM instead of an IC.
For better or for worse, and there is much to debate about this, I think he wanted just the (3) folks and a handful of (2) folks to try and salvage things otherwise it wasn’t worth the burn :(
This is especially so after I saw someone trying to use AI after I had provided simple and clear manual steps. Instead, they tried to do something different in a scenario it did not fit at all, and the AI really did not understand that the solution would never have worked.
I use AI daily and frankly I love it while thinking of it from the context of "I write some rough instructions and it can autocomplete an idea for me to an extremely great degree". AI literally types faster than me and is my new typewriter.
However, if I had to use it for every little thing, I'd do it. The problem though is when it reaches a point where I have to use it to replace critical thinking for something I really don't know yet.
The problem here is that these LLMs can and will churn out absolute trash. If this was done under mandate, the only thing I'd be able to respond with when that trash is being questioned is "the AI did it" and "idk, I was using AI like I was told".
It literally falls into the "above my pay-grade" category when it comes down as a mandate.
I really hope there's more nuance to articles like these though. I really hope these companies mandating AI use are doing so in a way that considers the limitations.
This article does not really clue me, the reader, in as to whether that is the case, though.
The best use cases are for code that’s clearly not an end product. You can just try way more ideas and get a sense of which are likely to pan out. That is tremendously valuable. When I start reading the code they produce, I quickly find many ways I would have written it differently though.