I Do Not Want to Be a Programmer Anymore
Posted 3 months ago · Active 3 months ago
mindthenerd.com · Tech story
Heated / mixed debate (85/100)
Key topics: Artificial Intelligence, Programming, Career Development
The author expresses frustration with the increasing reliance on AI in programming, sparking a debate among commenters about the role of AI in software development and the skills required to effectively work with it.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: N/A
- Peak period: 111 comments (Day 1)
- Avg / period: 28.8
- Comment distribution: 115 data points (based on 115 loaded comments)
Key moments
- 01 Story posted: Oct 5, 2025 at 9:53 AM EDT (3 months ago)
- 02 First comment: Oct 5, 2025 at 9:53 AM EDT (0s after posting)
- 03 Peak activity: 111 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: Oct 16, 2025 at 10:58 PM EDT (3 months ago)
ID: 45481490 · Type: story · Last synced: 11/20/2025, 6:56:52 PM
How are you handling this shift? Do you find yourself spending more time explaining “why not” than actually building?
You take the input, mostly ignore it, and move on. YMMV on that strategy, but if you are deft with it then you can dodge a lot of bullshit.
It does require that the things you do decide to do pan out though. You’ll need results to back it up.
I’m not really questioning that it happens, I’m stating how much energy it now takes to keep pushing back (or approving) and how easy it is to just agree with an AI output these days. Just my experience and two cents.
Really? That's your solution?
A better way to put it is don't run anything in production that you don't have the knowledge to understand yourself. You should be able to code review anything that is written by an LLM, and if you don't have sufficient knowledge to do this, don't feel tempted to run the code if you're not responsible enough to maintain it.
A "C" student.
An embedded engineer.
For example: Ed Nite says he doesn't want to be a programmer anymore. Who is Ed Nite? Is he even a programmer at all?
As far as I can tell, Ed Nite: Programmer doesn't really exist, must be a pen name. As far as his content, he mostly talks about being a writer and using AI. There's no real technical content to speak of. He doesn't link to a Github or work record. I found a youtube page of his with a single AI video on it from 6 months ago. As far as I can tell Ed Nite was invented 6 months ago to start blogging about blogging, self improvement and AI at mindthenerd.com.
So do I trust him? No. Assume AI and move on.
I could see this kind of AI astroturfing being a real problem communities face in the future, where you just scrape the top posts on a community and then generate blog content related to those, then post your content back at the community.
Rinse and repeat and you don't have to be a programmer anymore.
People would respect this more if it were content lovingly created for years, and then the author went "hey, maybe I can promote this on HN?". But artificially promoting worthless slop content is going to generate this reaction.
Whereas really playing a piano or performing live or driving an F1 car or writing a long essay takes some real effort and talent. That's what makes it interesting.
And yes, anyone can generate this kind of AI content nowadays.
https://hnrankings.info/45481490/
He's Nick Heidfeld to Schumacher's 2006 win.
Usually this kind of content doesn't reach HN because the antibodies kill it sooner. If you're arguing the antibody-bypassing succeeded here, ok... but that's not a solid defense of AI slop.
Anyone can do slop.
Not anymore.
This makes me feel you are not being forthright, and you are trying to take us for fools. What's worse, you are trying to profit off it.
TFA you submitted today tries to hide it better, but there's no reason to be quoting Marcus Aurelius twice in one blog post until you realize they're affiliate links... which is like every link on your blog.
I'm not going to speak for everyone, but personally I'd prefer this style of content not be posted here.
Even the primary anecdote of the post simply doesn't seem realistic to me. Why would you ask an LLM whether a domain name idea is good or terrible? That's an entirely subjective opinion question with no right or wrong answer! And chatbots are widely known for being sycophantic anyway, so the response will just depend on how the question was framed.
OP, if you're actually writing this stuff entirely by hand, you've internalized AI writing style to a disturbing degree.
For me, AI is still a time-saver, like other grammar tools, so I can focus on the message.
I had hoped people would focus more on the message than the craft and tools, but I understand now. Lesson learned, and I’ll keep working on it.
I'm listening to my audience and improving my writing as I chronicle my life experiences.
I don’t use AI to push sales. In fact, I shut down AdSense within minutes of approval because monetizing isn’t my goal right now. Yes, I use affiliate links to books I’ve personally found useful (and love quoting them) and hope others will too, but I’ve been debating removing those as well, the same way I did with ads, if they hurt the reader’s experience. I’m thinking the affiliate links don’t bother readers, but I may be wrong.
I’m new to this space and still learning how to be authentic online. This community is actually the only place I share my writing, and as you can see I’m stumbling a lot, but I’m listening, learning, and I genuinely appreciate everyone’s feedback, yours included.
To be clear, there was always filler content on the internet, but with AI this is exploding.
https://docs.google.com/document/d/1MxGi273kK-8lKSIrgOQTPWYn...
The blog's direction is still unclear for me. For now, I just want to share experiences and ideas, and if even one person finds them useful, that’s enough.
I’m not looking to profit from it. In fact, I turned AdSense off almost as quickly as it got approved. (Case in point: when I got started in April, ChatGPT suggested I apply and I foolishly did.) One morning I woke up to see my blog plastered with ads, forgetting I had applied. I nearly fell out of bed in horror and shame. I turned them off.
Absolute low-tier AI slop.
At first I leaned on AI pretty heavily as my “editor-in-chief” to save time. Later, it’s become more of an opinionated buddy I bounce ideas off. The narrative has always been mine, though, I’ve always understood what I (or it) was writing. I still use AI when it saves me time, but what matters most is the story and the message.
I’m learning as I go, and my newer posts are less AI-shaped and more in my own voice. It’s a process I don’t regret. Thanks for your comment.
Just write your thoughts. I don’t care if it has mistakes or bad grammar. We only have so much time on this planet for each other.
That's exactly what I think. If I wished to read AI, I would ask AI itself to give me something to read.
That’s really the point of this piece: whatever you get from AI, make sure it aligns with your own thinking instead of just surrendering to it. I came to write this piece because I’ve noticed I question it less than I did in the beginning, and that’s where I have to be careful.
Thank you for your comment.
Read a book about writing, think about writers whose writing touched you, discover the voice you want to have, the people you want to reach. Human connection is the point.
Hand edit a piece until you are satisfied, then run your default AI loop on the original. Observe with clear eyes what was lost in the process. What it missed that you discovered in the process of thinking deeply about your own thoughts.
That said, I do love writing. I still have boxes of old fiction drafts from before the internet was even a thing. For me, it’s the story that matters most. If depending heavily on AI early on came across as the wrong approach, I apologize. I’ve learned from it, and I’m working to improve. You’re absolutely right that time and words should be cherished.
I think it would improve your writing if you go off at tangents, use weird idioms, prefer obscure references to cliches, attempt implausible feats of lateral thinking, and yell at people and call them wrong. This may also make you unpopular, but that's a detail.
If your blog starts off as AI-slop, it will always be AI-slop.
In the case in the article the author believed they were the expert, and believed their wife should accept their argument on that basis alone. That isn't authority; that's ego. They were wrong so they clearly weren't drawing on their expertise or they weren't as much of an expert as they thought, which often happens if you're talking about a topic that's only adjacent to what you're an expert in. This is the "appeal to authority" logical fallacy. It's easy to believe you're the authority in question.
...we’ve allowed AI to become the authority word, leaving the rest of us either nodding along or spending our days explaining why the confident answer may not survive contact with reality.
The AI aspect is irrelevant. Anyone could have pointed out the flaw in the author's original argument, and if it was reasoned well enough he'd have changed his mind. That's commendable. We should all be like that instead of dogmatically holding on to an idea in the face of a strong argument against it. The fact that argument came from some silicon and a fancy random word generator just shows how cool AI is these days. You still have to question what it's saying though. The point is that sometimes it'll be right. And sometimes it won't. Deciding which it is lies entirely with us humans.
In my experience, motivated reasoning rules the day. People have an agenda beyond their reasoning, and if your proposal goes against that agenda, you'll never convince them with logic and evidence. At the end of the day, it's not a marketplace of ideas, but a war of conflicting interests. To convince someone requires not the better argument, but the better politics to make their interests align with yours. And in AI there are a lot of adverse interests you're going to be hard pressed to overcome.
“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”
If my wife had made the same arguments in the same polished way, I probably would’ve caved just as fast. But she didn’t, AI did... and what struck me wasn’t the answer, it was how fast my own logic switched off, as if I’d been wrong all along.
That’s what feels new to me: sitting in a meeting for hours while a non-tech person confidently tells execs how “AI will solve everything”, and everyone nods along. The risk isn’t just being wrong, it’s expertise getting silenced by convincing answers, until we stop asking the right questions.
Again, this is my own reflection and experience, others may not feel this way. Thanks for your comment.
That’s really what the piece was about, how quickly I found myself giving up my own judgment to AI.
What would be better is to ask: give me three good arguments for this, and then: give me three good arguments against this, and finally compare the arguments yourself without asking the AI tool which is better.
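The two-pass pattern the commenter describes can be sketched in a few lines. This is just an illustration, not anyone's real tooling: `ask` is a stand-in for whatever LLM client you actually use, and `build_prompts`/`debate` are hypothetical names invented for this sketch.

```python
def build_prompts(proposal: str) -> dict:
    """Build symmetric for/against prompts, so the model is never
    asked to pick a winner itself."""
    return {
        "for": f"Give me three good arguments for this:\n{proposal}",
        "against": f"Give me three good arguments against this:\n{proposal}",
    }

def debate(proposal: str, ask) -> dict:
    """Run both prompts through `ask`, any callable that sends a
    prompt to an LLM and returns its text response."""
    prompts = build_prompts(proposal)
    # Deliberately no third call asking "which side wins":
    # weighing the two lists is left to the human.
    return {side: ask(prompt) for side, prompt in prompts.items()}
```

The point of the design is the missing third call: the tool supplies raw arguments for each side, and the comparison stays with you.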
I question that assertion. The other party has to be willing to engage too.
Don’t spend your time analyzing or justifying your position on an AI-written proposal (which by definition someone else did not spend time creating in the first place). Take the proposal, give it to YOUR AI, and ask it to refute it. Maybe nudge it in your desired direction based on a quick skim of the original proposal. I guarantee you the original submitter probably did something similar in the first place.
When people do this in their relationships, marriages fail, friendships are lost, children forget who you were without the veil.
There are already stories like this cropping up every day. Do you really not understand that connecting with other flawed, unpolished people is its own reward? There is beauty and value in those imperfections.
I’m 100% an AI skeptic but also I won’t invest time and emotion replying to communications where it’s clear the other side a priori also didn’t invest at all. Let machines deal with machines and let me deal with humans.
And believe you me, I reserve this for contexts where I’m not jeopardizing my real interaction with someone I care about.
> Do you really not understand that connecting with other flawed, unpolished people is its own reward?
Oh I understand that perfectly - the above sentence shows that it’s you who didn’t understand what I said in the first place.
That said, I do have the advantage of effectively absolute financial security, so I'm privileged enough to be choosy about who I interact with. I do understand that sometimes there's no real choice but to wade through slop in the pursuit of a paycheck.
Everyone wants to be a “programmer” but in reality, no-one wants to maintain the software and assume that an “AI” can do all of it i.e Vibe coding.
What they’re really signing up for is the increased risk that someone more experienced will break their software left and right, costing them $$$ until they end up paying that person to fix it.
A great time to break vibe coded apps for a bounty.
The domain name incident absolutely isn’t a strong enough case to justify pivoting a career.
The clients suggesting features and changes might be a reason to pivot a career, but towards programming and away from product/system development. I mean, let the client make the proposal, accept the commission at a lower rate that doesn’t include what you’d have charged to design it, and then build it. AI ought to help get through these things faster, anyway, and you’ve saved time on design by outsourcing to the client. In theory, you should have spare time for a break, a hobby, or to repeat this process with the next client that’s done the design work for you.
I agree with all the points about agency, confidence, experience (the author used “authority”). We must not let LLMs rob us of our agency and critical thinking.
The client will still blame you when it doesn’t meet their real needs. And rightfully so, as much as a doctor would still be blamed if he followed a cancer treatment plan the patient brought in from ChatGPT.
Sorry that wasn’t clear.
Shit, if LLMs have solved that unsolved problem in computer science, naming things, our profession really is over.
Every article on this website looks to be almost wholly AI generated. Pure slop.
Trust me, put in the work and you'll thank yourself for it, you'll learn to enjoy the process and your content will be more interesting.
I quickly realized that wasn’t the best approach and that I needed to respect my readers’ time. If you look at my newer articles, you’ll see they’re shifting, becoming more me.
I’m lightly revising my earlier posts, but honestly I won’t change them much. I think it’s valuable for readers to see the progression, stumbles and all. Thanks for your comment, it might even become the subject of my next article.
What is the point of you?
Please don't bother copy-pasting my comment into your AI to prompt it for a response. I want to know what YOU value in your life, not some premasticated, overly-positive nonsense.
I think a lot of people are not in the habit of doing this (just look at politics) so they get easily fooled by a firm handshake and a bit of glazing.
I loved this article. It put in words a subliminal brewing angst I’ve been feeling as I happily use LLMs in multiple parts of my life.
Just yesterday, a debate between a colleague and me devolved into both of us quipping “well, the tool is saying…”, as we both tried to out-authoritate each other.
As an example and not the real name, but in true HN style, imagine losing sight in one eye, still learning to code like anyone else, and wanting to share that story. You might come up with something like TheCyclopsCoder.com. (Totally made up just now, no comments needed.)
I debated it, since I worried it could alienate or offend blind coders. She disagreed, felt it was genuine.
I hope that helps feed your curiosity, and who knows, one day I might just promote her site if she moves forward with it.
What the future holds for 99.999% of humanity who isn't an owner or somehow locating a lucrative niche specialty is more or less globally flattening into similar states of declining real wages for almost everyone. Meanwhile, megacorp capital owners and their enabling corrupt government regimes are more and more resembling racketeering and organized crime syndicate aristocracies with extreme wealth distribution disparities that generally aren't getting any better.
The situation of greater desperation for income invariably drives people to non-ideal choices:
a. Find a new field of work that pays less money
b. Sacrifice ethics to work at companies that cause greater harm in exchange for more money
c. Assume the on-going risks of launching a business or private consulting practice
d. Stay and agree to greater demands for productivity, inconvenience, bureaucracy, and micromanagement for less pay
e. Give up looking for work, semi-retire, and move somewhere, like another state or country, where the cost of living is cheaper
Don't explain. Don't argue. Simply confirm that the person fully understands what they're asking for despite using AI to generate it. 99% of the time the person doesn't. 50% of the time the person leaves the conversation better off. The other 50% the lazy bastards get upset and they can totally fuck off anyway and you've dodged a bullet.
I stopped reading after the first paragraph or two.
7 more comments available on Hacker News