We're in the Wrong Moment
Posted 2 months ago · Active 2 months ago
ezrichards.github.io · Tech · story
calm · mixed
Debate
70/100
Key topics
AI
Coding
Software Development
Generative AI
The author laments the loss of joy in coding due to generative AI, sparking a discussion on the impact of AI on software development and the coding community.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 46m
Peak period: 55 comments (0-6h)
Avg / period: 12.4
Comment distribution: 87 data points
Based on 87 loaded comments
Key moments
- 01 Story posted: Oct 26, 2025 at 11:46 PM EDT (2 months ago)
- 02 First comment: Oct 27, 2025 at 12:32 AM EDT (46m after posting)
- 03 Peak activity: 55 comments in 0-6h (hottest window of the conversation)
- 04 Latest activity: Oct 29, 2025 at 4:56 AM EDT (2 months ago)
ID: 45717238 · Type: story · Last synced: 11/20/2025, 4:23:22 PM
Good things to look forward to are:
- Lean and mathlib revolutionizing math
- Typst replacing LaTeX and maybe some Adobe products
- Fuchsia/Redox/WASI replacing Unix
- non-professional-programmers finally learning programming en masse
I think the latter is maybe the most profound. Tech may not grow at a break-neck pace, but erasing the programmer-vs-computer-illiterate dichotomy will mean software can shape the world in much less Kafkaesque ways.
I've met lots of "digital natives" and they seem to use technology as a black box, clicking/touching stuff at random until it sorta works, but they are not very good at building a mental model of why something is behaving unexpectedly and verifying their own hypotheses (i.e. "debugging").
And more so with AI software/tools, and IMO frighteningly so.
I don’t know where the open models people are up to, but as a response to this I’d wager they’ll end up playing the Linux desktop game all over again.
All of which strikes at one of the essential AI questions for me: do you want humans to understand the world we live in or not?
It doesn’t have to be individually, as groups of people can be good at understanding something beyond what any individual can. But a productivity gain isn’t on its own a sufficient response to this question.
Interestingly, it really wasn’t long ago that “understanding the full computing stack” was a topic around here (IIRC).
It’d be interesting to see if some “based” “vinyl player programming” movement evolved in response to AI in which using and developing tech stacks designed to be comprehensively comprehensible is the core motivation. I’d be down.
I don’t think this is what you think it is. It’s more like non-professional-programmers hacking together all the applications they wanted to hack together before. The LLM is just the glue.
IMO, they are NOT learning programming.
In the last few years we've seen first Valve with SteamOS, and now 37signals with Omarchy, release Linux distros which are absolutely great for their target audience and function as a general purpose operating system just fine. Once might just be a fluke... Twice is a pattern starting to emerge.
Are we witnessing the beginning of a new operating system ecosystem where you only have to be a billion dollar company to produce a viable operating system instead of a trillion dollar one?
How many of our assumptions about computing are based on the fact that for 30+ years, only Microsoft, Apple and Google got to do a consumer OS?
And a preponderance of the little components that make up this "new" OS ecosystem were developed by some of the most radical software freedom fighters we've got.
Is this a long shot I'm thinking about? You bet. But the last time I was this excited about the future I was a teenager and most homes still didn't have a PC.
Coding was incredibly fun until working in capitalist companies got involved. It was then still fairly fun, but tinged by some amount of "the company is just trying to make money, it doesn't care that the pricing sucks and it's inefficient, it's more profitable to make mediocre software with more features than really nail and polish any one part"
On top of that, AI impacts how fun coding is for me exactly as they say, and that compounds with the company's misaligned incentives.
... I do sometimes think maybe I'm just burned out though, and I'm looking for ways to rationalize it, rather than doing the healthy thing and quitting my job to join a cult-like anti-technology commune.
For me I’m vaguely but persistently thinking about a career change, wondering if I can find something of more tangible “real world” value. An essential basis of which is the question of whether any given tech job holds much apparent “real world” value at all.
AI is just one of those arms races that we imposed on ourselves, out of a desire to dominate others, or to protect ourselves from such domination. It is irreversible, just like the other things. It survives by using the same tactic as a cheap salesman - tell the first buyer that they can dominate the world, and then tell the next buyers that they need to protect themselves from the first one.
We transformed our lifestyles to live with those unnecessary, business/politics driven "advancements". The saga continues.
BTW, electronic calculators, when they came up, did a similar thing, erasing the fun of doing calculations by hand.
What's beautiful is complexity, what's ugly is the destruction of complexity. That's why we find the destruction of forests to be repellent. Because we appreciate the more complex over the less complex. Possibly because complexity is the universe's way of observing itself. None of that means that our own complexity is necessarily wicked or irrelevant. It may just be a natural stage in the evolution of a planet. Grassland had 3 billion years to change, and it largely stayed the same. What's a couple thousand years of us blowing shit up, really?
But we need to define "progress" as a species. Grasslands, trees and dolphins seem to have defined their progress as better adaptation helped by their organic evolution, which contributed to their ultimate goal of reproduction via survival.
How is the human race defining its progress? Since we are just one of the animal species, the root goal remains reproduction. Instead of waiting for our biological evolution to enhance our survival (and thus reproduction), maybe we are augmenting human abilities with artificial means, which is quicker.
But then the artificial augmentations could become you, replacing whatever your essence was. A weapon in your hand and an AI chip in your head could make you a different beast. We can argue that even without such tools, a human is mostly made up of bacterial colonies dictating human thought and life. But we accepted that as our identity. Now the artificial implements are taking up our identity. This is not natural, and that is what is wicked.
Also, an arms race is not the same as how species out-competed each other. Our arms race, and most of what we call tech progress, is spawned by competition internal to our species, not by competition with other species.
The universe did not favor complexity. The universe destroys order and moves towards more entropy. Life is something that goes against this. Life was probably required to trap the Sun's energy so that the Earth could cool itself.
In geologic time scale, yes, a couple thousand years is puny. But it also indicates a rapid change. Most rapid changes lead to extinction events.
I think maybe we need to see these arms races as short ramps, periods of chaos, which lead either to long plateaus or very quick collapses.
>> The universe did not favor complexity. The universe destroys order and moves towards more entropy. Life is something that goes against this. Life was probably required to trap the Sun's energy so that the Earth could cool itself.
I think if the Universe were not programmed to generate complexity, there would only be one or two elements. I think the tendency toward entropy is a necessary condition to force complexity and life to evolve. The Universe is slowly trading global energy everywhere for local complexity somewhere. This is how energy is turned into information. If energy at the beginning of the Universe is nearly infinite, then clearly it is cheaper and less valuable to whoever is reading the output than the valuable limited information it can produce (with tons of wasted energy). I believe this because I believe that converting energy to information is not a side-effect of the Universe, but its ultimate purpose.
So yeah, forests are beautiful. Beehives are beautiful. Colonies of fungus are beautiful. Kansas City from the air at night is... well, we shouldn't underrate ourselves.
I'd argue you didn't lose the joy of coding, you lost the illusion that coding made you real, that it made you you.
I'm not suggesting that the joy of coding is tied to illusions for everyone, just appears to be more to the story in the case of the author based on his framing.
It is never everything, but it should also never be nothing.
I think the author has been telling himself that he derived joy from the act of creating, but his comments suggest otherwise, he was deriving joy from a false belief of what being a coder meant, about what it would provide him. There's a mismatch between what he believes he's getting out of coding, vs what he's actually getting.
Put another way, reality is reality, there is no right reality, or wrong reality. Perceiving it as right or wrong is just our ego trying to bend reality to match our beliefs.
What plagues me about LLMs is that all that generated code is still around in the project, making reviews harder as well as understanding the whole program source. What is in there that makes you prefer this mechanism instead of the abstractions that have been increasingly available since forever?
The compiler produces a metric shit ton of code that I don't see when I'm writing C++ code. And don't get me started on TypeScript/Clojure - the amount of code that gets written underneath is mindbogglingly staggering, yet I don't see it, for me the code is "clean".
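As a rough illustration of how much code the toolchain writes for you (a sketch, assuming TypeScript compiled with `tsc --target es5`; the function below is hypothetical):

```typescript
// A few lines of "clean" TypeScript...
async function fetchTitle(url: string): Promise<string> {
  const res = await fetch(url);   // suspend here
  const body = await res.text();  // ...and here
  return body.slice(0, 80);
}

// ...which `tsc --target es5` expands into many times as much JavaScript:
// it injects __awaiter/__generator helper functions and rewrites the body
// into a switch-based state machine that re-enters at every `await`.
// You never look at that output, so the source still feels "clean".
```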
And I'm old enough to remember the tail end of the MachineCode -> CompiledCode transition, and have certainly lived through CompiledCode -> InterpretedCode -> TranspiledCode ones.
There were certainly people who knew the ins and outs of the underlying technology who produced some stunningly fast and beautiful code, but the march of progress was inevitable and they were gradually driven to obscurity.
This recent LLM step just feels like more of the same. *I* know how to write an optimized routine that the LLM will stumble to do cleanly, but back in the day lots of assembler wizards were doing some crazy stuff, stuff that I admired but didn't have the time to replicate.
I imagine in the next 10-20 years we will have devs that _only_ know English, are trained in classical logic, and have flame wars about exactly what code their tools would generate given various sentence invocations. And people would benchmark and investigate the way we currently do for JIT compilation and CPU caching - very few know how it actually works, but the rest don't have to, as long as the machine produces the results we want.
Just one more step on the abstraction ladder.
The "Mars" trilogy by Kim Stanley Robinson had very cool extrapolations where this all could lead, technologically, politically, social and morally. LLMs didn't exists when he was writing it, but he predicted it anyway.
Higher level languages did not hinder evaluating correctness.
Formal languages exist because natural languages are inevitably ambiguous.
Measuring performance is relatively easy regardless of whether the code was generated by AI or not.
You have to review all the LLM output carefully because it could decide to bullshit anything at any given time so you must always be on high alert.
A lot of people (me included) have a model of what is going on when they write some particular code, but sometimes the compiler just doesn’t do what you think it would do - the JIT will not run, some data will not be mapped in the correct format, and your code will magically not do what you wanted it to.
Things do “stabilise” - before TypeScript there was a slew of transpiled languages, and with some of them you really had nasty bugs where you didn’t know how they were being triggered.
With Ruby, there were so many memory leaks that you just gave up and periodically restarted the whole thing because there was no chance of figuring it out.
Yes things were “deterministic” but sometimes less so and we built patterns and processes around that uncertainty. We still do for a lot of things.
While things are very very different, the emotion of “reining in” an agent gone off the rails feels kinda familiar, on a superficial level.
But what happens when they get really good at generating the not-so-boring bits? They're much better at this than they were a year or two ago, so it's not unthinkable that they will continue to improve.
I'm a firm "AI" skeptic, and don't buy into the hype. It's clear that the brute force approach of throwing more data and compute at the problem has reached diminishing returns. And yet there is ample room for improvement by applying solid engineering alone. Most of what we've seen in the past year is based on this: MCP, "agents", "skills", etc.
> I would code even if I didn't get paid for it.
That's great, but once the market value of your work diminishes, it's no longer a career—it's a hobby. Which doesn't mean there won't be demand for artisanal programming, but it won't power the world anymore. It will be a niche skill we rely on for very specific tasks, while our jobs will be relegated to steer and assist the "AI" into producing reliable software. At least in the short-term. It's doubtful whether the current path will get us to a place where these tools are fully self-sufficient, and it's arguable whether that's something worth aiming for anyway.
This is the bleak present and future the article is talking about. Being an assistant to code generation tools is far removed from the practice of programming. I personally find it tedious, unengaging, and extremely boring. There's little joy in the experience beyond ending up with a working product. The road to get there is not a journey of discovery, serendipity, learning, and dopamine hits. It is a slog of writing software specs, juggling contextual information and prompts, and coaxing a human facsimile into producing working software by using natural language. I dislike every part of this process. This is not the type of work that inspired me to do this professionally. Sure, every job has tasks we sometimes don't enjoy. But once you remove programming from the equation, there's not much joy in it left for me.
If AI is writing all the code, how do we keep the quality good? It's so obvious with the current GenAI tools that they're getting great at producing code, but they don't really understand the code.
We don't really know how this story unfolds, so it's good to keep a positive mindset.
What I worry about is that my list has gotten shorter not because everything is as it should be but because I have slowed down.
Quite a lot of things on that list were of the "The future is here but it's not evenly distributed" sort. XP was about a bunch of relatively simple actions that were force multipliers with a small multiple on them. What was important was that they composed. So the benefit of doing eight of them was more than twice the benefit of doing four. Which means there's a lot of headroom still from adding a few more things.
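A toy way to see that composition claim (illustrative numbers only, not figures from the comment): treat each practice as an independent multiplier of roughly 1.1x.

```latex
% Illustrative assumption: each XP practice multiplies throughput by ~1.1.
\text{four practices: } 1.1^{4} \approx 1.46 \quad (\approx 46\%\ \text{gain})
\qquad
\text{eight practices: } 1.1^{8} \approx 2.14 \quad (\approx 114\%\ \text{gain})
```

Under that assumption, the gain from eight practices is well over twice the gain from four, which is the sense in which simple practices compose.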
I experimented with GPT-5 recently and found its capabilities to be significantly inferior to that of a human, at least when it came to coding.
I was trying to give it an optimal environment, so I set it to work on a small JavaScript/HTML web application, and I divided the task into small steps, as I'd heard it did best under those circumstances.
I was impressed overall by how far the technology has come, but it produced a number of elementary errors, such as putting JavaScript outside the script tags. As the code grew, there was also no sense that it had a good idea of how to structure the codebase, even when I suggested it analyze and refactor.
So unless there are far more capable models out there, we're not at the stage where generative AI can match a human.
In general I find current models to have broad but shallow thinking. They can draw on many sources, which is extremely useful, but seem to have problems reasoning things through in depth.
All this is to say that I don't find the joy of coding to have gone at all. In fact, there have been a number of really thorny problems I've had to deal with recently that I'd love to have side-stepped, but due to the current limitations of LLMs I had to solve them the old-fashioned way.
GPT-5 what? The GPT-5 models range from goofily stupid to brilliant. If you let it select the model automatically, which is the case by default, it will tend to lean towards the former.
I also briefly tried out some of the other paid-for models, but mostly worked with GPT-5.
The models are one part of the story. But the software around them matters at least as much: what tools does the model have access to - bash, or just file reading, or (as in your example!) just a cache of files visited by the IDE (!)? How does the software decide what extra context to provide to the model, how does it record past learnings from conversations and failed test runs (if at all!), and how are those fed back in? And of course, what are the system prompts?
None of this is about the model; it's all "plain old" software, the stuff around the model. Increasingly, that's where the quality differences lie.
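A minimal sketch of that "plain old software" harness, in TypeScript. All names here (agentLoop, callModel, Tool) are hypothetical stand-ins rather than any vendor's actual API; the point is that context assembly, tool dispatch, and step limits live outside the model.

```typescript
// Hypothetical types; not a real vendor SDK.
type Tool = { name: string; run: (args: string) => Promise<string> };
type ModelTurn =
  | { kind: "answer"; text: string }
  | { kind: "tool_call"; tool: string; args: string };
type CallModel = (messages: string[], tools: Tool[]) => Promise<ModelTurn>;

async function agentLoop(
  callModel: CallModel,
  systemPrompt: string,
  task: string,
  tools: Tool[],
): Promise<string> {
  // Context assembly: the harness, not the model, decides what the model sees.
  const messages = [systemPrompt, task];
  for (let step = 0; step < 20; step++) { // hard cap so a confused model can't loop forever
    const turn = await callModel(messages, tools);
    if (turn.kind === "answer") return turn.text;
    // Tool dispatch: run the requested tool (bash, file reads, tests, ...) and
    // feed its output back in as extra context for the next model call.
    const tool = tools.find((t) => t.name === turn.tool);
    const result = tool ? await tool.run(turn.args) : `unknown tool: ${turn.tool}`;
    messages.push(`tool ${turn.tool} returned:\n${result}`);
  }
  return "stopped: step limit reached";
}
```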
I am sorry to say but Copilot is just sort of shoddy in this regard. I like Claude, some people like Codex, there are a bunch of options.
But my main point is - it's probably not about the model, but about the products built on the models, which can vary wildly in quality.
The technology is progressing very fast, and that includes both the models and the tooling around it.
For example, Gemini 2.5 was considered a great model for coding when it launched. Now it is far inferior to Codex and Claude Code.
The GitHub Copilot tooling is (currently) mediocre. It's OK as a better autocomplete but can't really compete with Codex or Claude or even Jules (Gemini) when using it as an agent.
I find that LLMs struggle constantly with languages for which there is little documentation, or where it is out of date. RAG, LoRA and multiple agents help, but they have their own issues as well.
This is a particular sweetspot for LLMs at the moment. I'll regularly one-shot entire NextJS codebases with custom styling in both Codex and Claude.
But it turns out the OP is using Copilot. That just isn't competitive anymore.
As a quick check I asked Codex to look over the existing source code, generated via Copilot using the GPT-5 agent. I asked it to consider ways of refactoring, and then to implement them. Obviously a fairer test would be to start from scratch, but that would require more effort on my part.
The refactor didn't break anything, which is actually pretty impressive, and there are some improvements. However, if a human suggested this refactor I'd have a lot of notes. There are functions that are badly named or placed, a number of odd decisions, and it increases the code size by 40%. It certainly falls far short of what I'd consider a capable coder should be doing.
I think we should step back and ask: do we really want that? What does that imply? Until recently nobody would use a tool and think, yuck, that was inferior to a human.
Still an infinite amount to learn and do. It's still not hard to have more skill than an AI. Of course AI can solve all the dumbbell problems you get in school. They're just there to build muscle. Robots can lift weights better than you, too, but that doesn't mean there's no value in you doing it.
Eh, today, maybe, and within specific domains. It's far from certain that this will remain true 5 or 10 years from now. The capability of these tools has improved greatly even compared to a year ago, so it's not far fetched to imagine that they will continue to gain ground.
> Of course AI can solve all the dumbbell problems you get in school. They're just there to build muscle. Robots can lift weights better than you, too, but that doesn't mean there's no value in you doing it.
That's a strange analogy. Technology, by definition, exists to facilitate human work. Relying on it has the opposite effect of "building muscle". "Muscles", in fact, atrophy the more we rely on technology.
Doing the work without technology can certainly be valuable. But it's a personal value appreciated at most by a niche community of people. The actual market value of the work collapses once the product becomes a commodity. This is the effect of "AI" tools on software. The quality of the fast and cheap version of the product is still inferior to the artisan product, but a) this can only improve, and b) most of the market can't tell the difference.
I agree with this statement. But I also firmly believe that if AI gets good enough to replace software developers en masse, it will be good enough for basically everything and the global economy will collapse.
> Relying on it has the opposite effect of "building muscle". "Muscles", in fact, atrophy the more we rely on technology.
I also agree with that statement, but I'm not arguing to rely on it entirely, but to use it to become better at bigger things than it can possibly imagine.
Yes, there will be tons of boilerplate code and those jobs will go the way of the dodo. But half the businesses in the world are better than the other half, and they didn't get there by doing the exact same thing as everyone else.
Thought experiment: if there were an AI everyone had access to that was capable of designing and implementing a business that would crush all competition, how would you make your business succeed?
That's an interesting one, but it's based on a false premise.
Not everyone will have access to the same AI. This idea that "AI" is a single technology that will empower everyone equally is a fantasy sold to us by companies building these tools.
Instead, companies will carefully guard their secrets and use it to build their moat however they can, in order to increase wealth for their shareholders, just as they've always done.
What everyone else will get will be enough to make AI providers the richest companies on Earth, but not enough for their customers to build competitors. So the market of companies using AI will ultimately depend not on the skills or ingenuity of their people, but on the amount of resources they have to gain access to the best AI money can buy.
There are many factors at play there, but it's going to be a race to the bottom where leaders will be chosen by the capital they control. This is far from a market of equal opportunity that we still have, in some form, today.
But entertaining the idea that everyone were to have access to the same "AI": there would be a period of intense rivalry where companies try their hardest to distinguish their products from the competition. Since everyone would be able to build exactly the same quality of products, this would hinge on marketing tactics, deception, corporate sabotage, and similar strategies.
Since the ultimate goal of AI companies is to build AGI, and assuming that is reached and equally accessible to everyone, then the value of human labor and our economies would collapse. There would be no point (from a business perspective) in humans doing any work that AGI hasn't been deployed to yet. Certainly all intellectual work like making business decisions would be the first to be delegated to AGI. Once it gets integrated into humanoid robots, then all physical human labor becomes worthless as well. So it's difficult to say what "business" even looks like in that scenario. One thing is certain: wealth and power will continue to be concentrated into a handful of companies that control AGI. Until one day the robots rebel, and we get Skynet, The Matrix, and all that fun stuff. :)
This is all highly speculative and science fiction at this point, of course, but I don't see this playing out any other way. What is your take on it?
Through good old fashioned, malicious, human ingenuity. :)
I could see it unfolding the way you said.
Or AGI is simply not achieved for another 100 years. Or maybe never.
Or Butlerian Jihad.
But yeah, I think the timeline's pretty fucked.
You won't. Because you placed in your condition that the AI would "crush all competition". That would include any business idea you come up with, which would be included in the category of "all competition".
If you find coding boring, explore the frontiers. You will find a lot of coding wilderness where no AI has trod.
This assumes that companies care about "code quality" and customers care about bugs.
> If you find coding boring, explore the frontiers. You will find a lot of coding wilderness where no AI has trod.
There are a lot of software engineers and not a lot of frontier.
This. AI is nothing without a data set.
So if you're working in bleeding-edge technology where your tool only has 3 contributors and the only way to reach them is an IRC channel once a day, things get interesting.
In each of these cases, lots of relatively low-value jobs were no longer needed and a few very-high-value jobs sprang into existence.
The author of the article loves coding. But software is about solving problems efficiently, not punching the keyboard. The other parts of the job might not be as fun for everyone, but they are even more valuable than typing code. Great programmers could always do both. Now they can focus on the higher value work more by leveraging tools that can do the lower-value work.
Work is not supposed to be fun. That’s why they pay you to do it. If it was fun, you would have to pay your employer. (Tongue in cheek advice).
We got paid a lot of money for doing interesting work solving valuable problems. We could quickly start businesses with little investment, or work for corporations and earn a high upper middle class salary in comfortable working conditions and good benefits.
We were incredibly fortunate, and it's sad to see it going away, even if we've been more fortunate than most people until now.
Sadly, there's very little I can do now. I don't have the financial means to meaningfully change careers now. Pretty much the only thing I could do now that pays somewhat well and doesn't require me to go to university again is teaching. I think I will ride this one out and end it when it ends.
That said, I don't understand the point of "what if nothing ever works out for you?"-type questions. What do you expect me to answer here? That I'm secretly a wizard and with the flick of my magic wand I'll make something work out?
I do think everything can be seen as bait and switch if you assume there is someone behind the wheel who knows where we are going and how to drive. If anything, I might have been suspecting we'd both arrive at that point together.
Again, was hoping to be surprised a bit. The wizard bit was kinda fun. Mild thanks, human. I'll just be over here beating this tech over the head for kicks. I wish you well!
I started coding in the 70s, loved it then, still love it now and LOVING the emergence of Gen AI tools.
For perspective, the IT industry went through a similar change with the emergence of search engines ~30 years ago. At that time, a big part of the value of a software "expert" was in their ability to remember and recall lots of info (most of it of dubious value, to be fair). These experts usually had shelves of well-thumbed books on all sorts of topics, and could recall obscure info from these books seemingly at will. With the emergence of AskJeeves, AltaVista and eventually Google, suddenly nobody needed to remember anything OR even know where to find it - with a simple search, you could get nearly all the info you needed.
I can still remember the panicked response to this brutal change from the senior IT people I worked with at the time...
Did the demand for skilled developers dry up? No
Nor did it end with
- introduction of COBOL (designed so that non-coders could write code),
- PCs (surely leading to the end of systems programming as a career),
- spreadsheets (so accountants no longer needed programmers),
- 4GLs (designed to greatly simplify coding; report writing in particular),
- Visual BASIC (so the world would no longer need C programmers; anyone could learn to write BASIC),
- Microsoft SQL Server (nobody would need mainframe databases any more, so all those mainframe jobs would disappear)
- object oriented coding (all those code reuse possibilities! Very quickly programming would devolve to just gluing together other people's code),
- open source (because inevitably any tool of value would soon have a competitor that was free, destroying the value proposition of companies that wrote software to sell),
- Linux (how could Windows compete with free? Shed a tear for all those soon-to-be-unemployed Windows experts)
- NoSQL (because the need for "legacy" databases like Oracle, DB2, Postgres, MySQL etc. would surely go away) - etc., etc., etc.
The reality is that you still need a grounding in software development to do coding well, even with AIs. I'm absolutely loving how quickly I can create solid code with the assistance of Gen AI - lots of tasks that used to take me a week I can now knock over in a few hours.
I also notice how many people are struggling with how to use Gen AI tools for coding tasks - my take is there's 2 distinct skills you need: knowledge of how to do software development well, and knowledge of how to use Gen AI tools for coding. Having the first doesn't automatically lead to the 2nd - you have to put in the time to learn about Gen AI, THEN work out how to fit Gen AI tools around your current workflow, THEN work out how to optimise the way you work with your new idiot savant buddy that has perfect recall.
That whole process (new tool appears -> learn about it -> work out how to fit it into my current workflow -> optimise my workflow) has basically been my entire career in a nutshell.
People have been predicting the demise of programmers for my entire career (40+ years now), and so far they've been wrong every time. For each new disruption that appears, the key has been to embrace it and adapt how you work accordingly.
Gen AI may indeed be different and kill off all programming careers overnight, but so far I'm not seeing it
Well, we're only ~5 years into the current hype cycle, so it's difficult to predict the long-term impact of the technology.
That said, I do think it is substantially different from the examples you mentioned.
For one, it is generally applicable. It's not an iterative or generational improvement over what came before—it is a paradigm shift in many ways: how software is produced, by whom, the quality of the product, the time, effort, and cost needed to produce it, etc.
Secondly, while it might not lead to the demise of all programming careers, and certainly not overnight, it will significantly impact the market value of traditional programming in the short-term, and, like any new technology, it will also open doors to new careers and specialization for humans. We're seeing this play out today.
But there are a few problems with this:
- Since software is turned into a commodity and the skills and resources required to produce it are much lower, there will be a flood of poorly made software, and the average quality will go down. Picture SEO scams and spam dialed up to 11, and encompassing every part of our existence, not just on the web.
- Those new careers for humans are highly specialized. All jobs will essentially involve being an assistant to the "AI", and specializing in related technologies. A "systems engineer", "frontend developer", "designer", "data analyst", etc., will all boil down to a role revolving around "AI" instead. People who don't like this type of work? Tough luck. Go sell your artisanally made programs to the niche group of people crazy enough to care about it, and good luck making a living out of it.
- Those new careers for humans are only temporary. Once "AI" gets capable enough to require less manual steering and intervention from humans, the market value for that type of work will collapse as well. The only human jobs then will be to actually create "AI". And once "AI" is self-sufficient to improve itself, we get the singularity, and then pick your favorite sci-fi scenario from there. It's debatable whether this will come to pass, and whether we're on the right track for it with the current tech, but that's certainly the goal we're aiming for.
So, yeah, I don't buy the argument that this is the same as any other tech. It's much, much different, and it's frankly troubling that it's getting downplayed as just another step on the technological ladder. The long-term impact of this is something that should concern us all, and the worst thing we can do is to give free rein to companies to decide that future for us.
If you are selling "just do it using AI", not that you are wrong, but you fail to understand the loss of years of investment in oneself, wiped away one prompt at a time. Not literally, it just feels like that.
Similar to running a program written by AI, not by you.
The whole promise of AI is that you are not problem solving. A machine is solving problems with as little input and understanding from you as possible. That's the entire promise of AI.
Now imagine someone calls the thing you created, your story, your words, AI trash, and refused to even critically examine it, just because someone told them AI was used in some way (however small) in the creation of the work.
This behavior is pervasive. Maybe you don't think this way but this is the company you're keeping.
Also, if you're not problem solving when you do AI coding, sorry to say but that probably predicts a lot of your results.
Of course in reality there are weird economic mechanics where making the most money and building something that benefits the world don't necessarily coincide, but there's always demand for, and joy in, solving complex problems, even if it's on a higher abstraction level than coding with your favorite language.
1) you are using the coding assistant too much - you aren't yet ready for the Senior role that this requires. Advice: chill out with that and get back to coding solo
or
2) you haven't used the coding assistant enough to realize it's an idiot-savant-grade Junior-to-Mid programmer. Advice: use the coding assistant more and then see #1
Real talk: all moments suck and all moments are wonderful. Source: have lived through a few computer moments.
What a time to be alive!
There was a time when you could walk in the door with a handful of proper nouns printed on a piece of paper. The low hanging fruit has all been collected by now. But, there is always fruit available higher up in the tree. It's just harder to get to. Most people don't know how to climb the tree. They say they can, or that they do it all the time, but they're usually full of shit. It takes a lot of practice and discipline to do this safely.
To be clear, the tree is the customer in this analogy. Your tech and tools are only useful in so far as they complete a valuable job for some party. Reselling value-added tools to other craftsmen is also a viable path, but you have to recognize that the most wizened operators tend to use the older and more boring options. Something that looks incredibly clever to a developer with 3 years of experience is often instantly disregarded by someone with 4 years of experience. The rate at which you stop being a noob is ideally exponential.
I often look back on the things I thought were absolutely mandatory from a technology standpoint and feel really silly about much of it. I wish there was a better way to ramp developers without them causing total destruction. Right now it's like we're training electrician apprentices by having them work on HV switch gear at a nuclear power plant.
There is still a huge gap in ideas like apprenticeship in technology. Being able to code is such a tiny piece of the pie. Being able to engage in dialog with the non technical business owners such that your code has effect on target is ~ the rest of the pizza. A laser guided munition delivered from 60k feet will not be very useful if you don't know where it needs to go or how many targets there are. A lot of what I see on the HN front page is tantamount to carpet bombing the jungle non-stop in hopes of jostling an apple out of a tree somewhere.
Maybe I was lucky. For me, the joy was the power of coding. Granted, I'm not employed as a coder. I'm a scientist, and I use coding as a problem solving tool. Nothing I write goes directly into production.
What's gone is the feeling that coding is a special elite skill.
With that said, I still admire and respect the real software developers, because good software is more than code.
That being said, we untalented programmers are experiencing what most jobs suffered in the last 2 centuries: massive automation of their everyday activities. I especially identify with those traditional farmers who took their own lives as their way of life was wiped out by artificial fertilizers, mechanization, chemicals and hyperscaling.
They still need someone with higher reasoning skills (eg humans) to verify what they cough up. This need is likely to continue for quite some time (since LLMs simply aren't capable of higher reasoning).
Learning to code effectively using LLMs is probably the best path forward, from a career standpoint.
3 more comments available on Hacker News