AI Coding Made Me Faster, but I Can't Code to Music Anymore
Key topics
The rise of AI-assisted coding has brought about an unexpected casualty: the ability to code to music. As one developer lamented, their newfound reliance on AI tools has made background noise unbearable, sparking a lively debate about the trade-offs of coding with AI. While some commenters commiserated about the loss of musical coding companionship, others shared clever workarounds, such as using system mixers to balance music and meeting audio or opting for "brutal" music to maintain focus. The discussion also veered into humorous tangents, with one commenter joking that the ultimate plan is to turn humans into music-hating robots.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 1h after posting
- Peak period: 74 comments (48-60h after posting)
- Avg / period: 17.8 comments
- Based on 160 loaded comments
Key moments
- Story posted: Aug 27, 2025 at 1:10 AM EDT (4 months ago)
- First comment: Aug 27, 2025 at 2:15 AM EDT (1h after posting)
- Peak activity: 74 comments in the 48-60h window, the hottest stretch of the conversation
- Latest activity: Aug 31, 2025 at 12:44 PM EDT (4 months ago)
EQing the music played sounds interesting. I’ll look at the options. I tend to just have it at a level low enough that I can hear all speakers on the call.
I also wonder what type of simple CRUD apps people are building that see such a performance gain? They must be building well-understood projects or be incredibly slow developers for LLMs to have such an impact, as I can't relate to this at all.
But I certainly wouldn’t assume that other people’s jobs are simple or boring just because they don’t look like yours.
Which is absolutely nothing to be ashamed of. But people shouldn't expect these gains for their job if they work in less understood environments.
Most non-CRUD AI code implementations are flawed/horrendous.
But for the rest of us, who have a mix of common/boring and uncommon/interesting tasks, accelerating the common ones means spending more time (proportionally) on less common tasks.
Unfortunately we don’t seem to be great at classifying tasks as common or uncommon, and there are bored engineers who make complex solutions just to keep their brains occupied.
In my experience, it seems the people who have bad results have been trying to get the AI to do the reasoning. I feel like if I do the reasoning, I can offload menial tasks to the AI, and little annoying things that would take one or two hours start to take a few minutes.
That very quickly adds up to some real savings.
The ones who know what they want to do, how it should be done, but can't really be arsed to read the man pages or API docs of all the tools required.
These people can craft a prompt (prompt engineering :P) for the LLM that gets good results pretty much directly.
LLMs are garbage in, garbage out. Sometimes the statistical average is enough; sometimes you need to give it more details to use the available tools correctly.
Like the fact that `fd` has the `--exec` and `--exec-batch` parameters, so there's no need to use xargs or pipes with it.
90% of what the average (or median) coder does isn't in any way novel or innovative. It's just API Glue in one form or another.
The AI knows the patterns and can replicate the same endpoints and simple queries easily.
Now you have more time to focus on the 10% that isn't just rehashing the same CRUD pattern.
I hear this from people extolling the virtue of AI a lot, but I have a very hard time believing it. I certainly wouldn't describe 90% of my coding work as boilerplate or API glue. If you're dealing with that volume of boilerplate/glue, isn't it incumbent upon you to try and find a way to remove that? Certainly sometimes it isn't feasible, but that seems like the exception encountered by people working on giant codebases with a very large number of contributors.
I don't think the work I do is innovative or even novel, but it is nuanced in a way I've seen Claude struggle with.
It's the connectors that are 90-95% AI chow, just set it to task with a few examples and it'll have a full CRUD interface for your data done while you get more snacks.
Then you can spend _more_ of your limited time on the 10% of code that matters.
That said, less than 50% of my actual time spent on the clock is spent writing code. That's the easiest part of the job. The rest is coordinating and planning and designing.
I assumed you were only talking about the actual code. It still seems really odd. Why is there so much unavoidable boilerplate?
There's a certain amount of code I need to write just to add the basic boilerplate of receiving the data and returning a result from the endpoint before I can get to the meat of it.
IIRC there are no languages where I can just open an empty file and write "put: <business logic>" and it magically knows how to handle everything correctly.
If it doesn't, then I feel like even in the JavaScript world of 2015 you could write "app.put("/mypath", business_logic)" and that would do the trick, and that was a very immature language ecosystem.
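As a rough sketch of how little routing boilerplate a modern framework demands, here is what that looks like in Python with Flask (the framework choice and the `business_logic` stub are illustrative, not something from the thread):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def business_logic(payload: dict) -> dict:
    # Hypothetical stand-in for the part of the endpoint that actually matters.
    return {"received": payload}

@app.put("/mypath")  # Flask 2.x shorthand for route(..., methods=["PUT"])
def my_endpoint():
    payload = request.get_json()             # receive the data
    return jsonify(business_logic(payload))  # return a result
```

The real work, as the reply below points out, tends to live in the parts this sketch waves away: validation, timestamps, authentication, and error handling.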
> IIRC there are no languages where I can just open an empty file and write "put: <business logic>" and it magically knows how to handle everything correctly.
Are you sure it's done correctly? Take something like timestamps, or validations: it's easy to get those wrong.
I set up a model in DBT that has 100 columns. I need to generate a schema for it (old tools could do this) with appropriate tests and likely data types (old tools struggled with this). AI is really good at this sort of thing.
Then you have to QA it for ages to discover the bugs it wrote, but the initial perception of speed never leaves you.
I think I'm overall slower with AI, but I could be faster if I had it write simple functions that I could review one by one, and have the AI compose them the way I wanted. Unfortunately, I'm too lazy to be faster.
Of course you need to check their work, but also the better your initial project plan and specifications are, the better the result.
For stuff with deterministic outputs it's easy to verify without reading every single line of code.
With that it can see any errors in the console, click through the UI, and take screenshots to analyse how it looks, giving it an independent feedback loop.
[0] https://github.com/microsoft/playwright-mcp
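For a sense of what that loop looks like under the hood, here is a minimal sketch using Playwright's Python API directly (the localhost URL and file name are placeholders); the MCP server linked above exposes roughly these capabilities to the model as tools:

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()

    # Collect console errors so the agent (or a human) can read them back.
    errors = []
    page.on("console", lambda msg: errors.append(msg.text) if msg.type == "error" else None)

    page.goto("http://localhost:3000")    # placeholder dev-server URL
    page.screenshot(path="ui-state.png")  # visual feedback on "how it looks"
    browser.close()

print("console errors:", errors)
```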
Their job is to do meetings, and occasionally add a couple of items to the HTML, which has been mostly unchanged for the past 10 years, save for changing the CSS and updating the js framework they use.
In the olden days, I'd imagine getting that right would take about a week and a half, and it was the part everyone hated about spinning up a new service.
With the LLM, I gave it a feedback loop: the ability to do an initial sign-in, integration-test running steps with log reading on the client side, and a deploy-and-log-reading mechanism for the server side.
I was going to write an overseer-y script for another LLM to trigger the trial-and-error script, but I ended up just doing that myself. What I skipped was needing to run any one of the steps; instead I got nicely parsed errors, so I could go look for wikis on what parts of the auth process I was missing and feed those wiki links and such to the trial-and-error bot. I skipped all the log reading/parsing needed to get to the next actionable chunk, and instead got to hang around in the sun for a bit while the LLM churned on test calls and edits.
I'm now on a cleanup step to turn the working code into nicely written code that I'd actually want committed, but getting to the working-code stage took very little of my own effort; only the problem solving and learning about how the auth works.
I had Claude Code build me a Playwright + Python-based scraper that goes through their movie section and stores the data locally in an SQLite database, plus a web UI for me to watchlist specific movies and add price ranges to be alerted when they change.
Took me maybe a total of 30 minutes of "active" time (4-5 hours real-time, I was doing other shit at the same time) to get it to a point where I can actually use it.
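The moving parts in that kind of personal utility are small. A stripped-down sketch of the scrape-and-store half might look like this (the store URL, CSS selectors, and schema are invented for illustration):

```python
import sqlite3
from playwright.sync_api import sync_playwright

STORE_URL = "https://example-store.test/movies"  # hypothetical shop URL

conn = sqlite3.connect("movies.db")
conn.execute("CREATE TABLE IF NOT EXISTS movies (title TEXT PRIMARY KEY, price REAL)")

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(STORE_URL)
    for card in page.query_selector_all(".movie-card"):  # invented selector
        title = card.query_selector(".title").inner_text()
        price = float(card.query_selector(".price").inner_text().replace("€", "").strip())
        # Upsert so repeated runs just refresh the price.
        conn.execute(
            "INSERT INTO movies VALUES (?, ?) "
            "ON CONFLICT(title) DO UPDATE SET price = excluded.price",
            (title, price),
        )
    browser.close()

conn.commit()
```

The watchlist and price-alert pieces then sit on top of the same table.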
Basically small utilities for limited release (personal, team, company-internal) is what AI coding excels at.
Like grabbing results from a survey tool, adding them to a google sheet, summarising the data to another tab with formulas. Maybe calling an LLM for sentiment analysis on the free text fields.
Half a day max from zero to Good Enough. I didn't even have to open the API docs.
Is it perfect? Of course not. But the previous state was one person spending half a day for _each_ survey doing that manually. Now the automation runs in a minute or so, depending on whether Google Sheets API is having a day or not =)
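For the Sheets half, something like gspread keeps the glue to a few lines (the spreadsheet name, tab, and hard-coded responses here stand in for the survey tool's real export):

```python
import gspread

# Hypothetical survey export; in practice this would come from the survey tool's API.
responses = [
    {"respondent": "a@example.com", "score": 4, "comment": "Works well"},
    {"respondent": "b@example.com", "score": 2, "comment": "Too slow"},
]

gc = gspread.service_account(filename="service-account.json")  # placeholder credentials
sheet = gc.open("Survey results")                              # placeholder spreadsheet name
sheet.worksheet("Raw").append_rows(
    [[r["respondent"], r["score"], r["comment"]] for r in responses]
)
```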
Agents finish, I queue them up with new low-hanging fruit while I architect the much bigger tasks, then fire those off -> review the smaller tasks. It really is a dance, but flow is much more easily achieved when I do get into it; hours really just melt together. The important thing is to put my phone away and block any and all social media or sites I frequent, because it's easy to get distracted when the agents are just producing code and you're sitting on the sidelines.
While programming, it's possible to get into a trance-like state where the program's logic is fully loaded and visible in your mind, and your fingers become an extension of your mind that wire you directly to the machine. This allows you to modify the program essentially at the speed of thought, with practically zero chance of producing buggy code. The programmer effectively becomes a self-correcting human interpreter.
Interrupting someone in this state is incredibly disruptive, since all the context and momentum is lost, and getting back into the state takes time and focus.
What you're describing is a general workflow. You can be focused on what you're doing, but there's no state loaded into memory that makes you more efficient. Interruptions are not disruptive, and you can pick up exactly where you left off with ease. In fact, you're constantly being interrupted by those agents running in the background, when they finish and you give them more work. This is a multitasking state, not flow.
So the article is correct. It's not possible to get into a flow state while working with ML tools. This is because it is an entirely different activity from programming that triggers different neural pathways.
When using ML tools you have no deep understanding of the behavior of the program, since you don't understand the generated code. If you bother to review the code, that is a huge context switch from anything you were doing previously. This doesn't happen during deeply focused programming sessions.
You may have been a software engineer for decades without ever experiencing the programming flow state. I'm not passing judgement.
Good nugget. Effective prompting, aside from context curation, is about providing the LLM with an approximation of your world model and theory, not just a local task description. This includes all your unstated assumptions, interaction between system and world, open questions, edge cases, intents, best practices, and so on. Basically distill the shape of the problem from all possible perspectives, so there's an all-domain robustness to the understanding of what you want. A simple stream of thoughts in xml tags that you type out in a quasi-delirium over 2 minutes can be sufficient. I find this especially important with gpt-5, which is good at following instructions to the point of pedantry. Without it, the model can tunnel vision on a particular part of the task request.
Without this it defaults to being ignorant about the trade-offs that you care about, or the relevant assumptions you're making which you think are obvious but really aren't.
The "simple stream" aspect is that each task I give to the LLM is narrowly scoped, and I don't want to put all aspects of the relevant theory that pertains just to that one narrow task into a more formal centralized doc. It's better off as an ephemeral part of the prompt that I can delete after the task is done. But I also do have more formal docs that describe the shared parts of the theory that every prompt will need access to, which is fed in as part of the normal context.
---
### The Fragility of Digital Memory in the Age of Infinite Storage
We live inside an archive that never closes. Hard drives hum like cathedrals of perfect recall, cloud servers drift like silent librarians in orbit, and every keystroke is another bone set in amber. Memory, once a trembling candle subject to drafts and time, now runs on battery backups and redundant arrays. Forgetting has been engineered out of the system.
And yet, the paradox: by remembering everything, we begin to lose the art of remembering at all. Human memory is a cracked mirror—crooked, selective, shimmering with distortions that make us who we are. Its gaps are the negative space where meaning lives. The story of a childhood is not its complete inventory but its torn edges, the blurred photograph, the half-forgotten lullaby whose missing notes we hum into being.
Digital memory, by contrast, is a hoarder’s attic with perfect climate control. Nothing rots, nothing fades, nothing dares to slip away. Every draft email unsent, every unflattering selfie, every midnight search query—all preserved in pristine sterility. The archive is so complete that it ceases to be a story. It becomes a warehouse of moments that never learned how to decay.
But memory without decay is less than human. Nostalgia itself might be the first compression algorithm: a lossy filter that turns clutter into resonance. Without the soft erasures of time, experience calcifies into raw data, and raw data has no mercy.
Perhaps what we need are systems that misremember. Databases that dream. Algorithms that allow certain files to fray at the edges, to grow fuzzy like old film reels, to tint themselves with the sepia of emotion rather than the fluorescence of metadata. A kind of deliberate forgetting—not loss as failure, but loss as design.
Because what is fragility if not the pulse behind memory’s worth? The hard drive never gasps, never sighs; only the human mind knows the ache of absence, the sweetness of something slipping away. If the future is an archive without end, perhaps our task is to reintroduce the possibility of disappearance. To let silence seep between the entries. To remind the machines that to truly remember, one must first learn how to forget.
---
Do you want me to lean this more toward *meditation* (open-ended, drifting) or *argument* (provoking design questions like “can forgetting be engineered”)?
> But memory without decay is less than human.
How is it less than human? By definition, the undecayed memory is more complete.
> Nostalgia itself might be the first compression algorithm: a lossy filter that turns clutter into resonance.
What is this even supposed to mean? I guess the idea is something here like "fuzzy" memory == "compression" but nostalgia is an emotional response - we're often nostalgic about clear, vivid memories, experiences that didn't lose their texture to time.
> Without the soft erasures of time, experience calcifies into raw data, and raw data has no mercy.
Eh... kinda. Calcifies is the wrong word here. Raw data doesn't have mercy, but lossily-compressed data is merciful? Is memory itself merciful? Or is it a mercy for the rememberer to be spared their past shames?
So much AI slop is like this: it's just words words words words without ideas behind them.
I suspect it doesn't matter how we feel about it mind you. If it's going to happen it will, whether we enjoy the gains first or not.
* setting aside whether this is currently possible, or whether we're actually trading away more quality than we realise.
That dumb attitude (which I understand you’re criticising) of “more more more” always reminds me of Lenny from the Simpsons moving fast through the yellow light, with nowhere to go.
https://www.youtube.com/watch?v=QR10t-B9nYY
> I suspect it doesn't matter how we feel about it mind you. If it's going to happen it will, whether we enjoy the gains first or not.
That is quite the defeatist attitude. Society becoming shittier isn’t inevitable, though inaction and giving up certainly helps that along.
The hypothetical that we're 8x as productive but the work isn't as fun isn't "society becoming shittier".
We are very well paid for very cushy work. It's not good for anyone's work to get worse, but it's not a huge hit to society if a well-paid cushy job gets less cushy.
And presumably people buy our work because it's valuable to them. Multiplying that by 8 would be a pretty big benefit to society.
I don't want my job to get less fun, but I would press that button without hesitation. It would be an incredible trade for society at large.
So I mean... Yeah
Is software more comfortable generally than many other lines of work? Yes probably
Is it always soft and cushy? No, not at all. It is often high pressure and high stress
All I can suggest is see a doctor as soon as possible and talk to them about it
I cannot remember events, conversations, or details about important things. I have partially lost my ability to code, because I get partway through implementing a feature and forget what pieces I've done and which pieces still need to be done
I can still write it, but the quality of my work has plummeted, which is part of why I'm off on leave now
1. 1 tablespoon of cold extracted cod liver oil EVERY MORNING
2. 30 min of running 3-4 times a week
3. 2-3 weight lifting sessions every week
4. regular walks.
5. cross-train on different intellectually stimulating subjects. Doing the same cognitive tasks over and over is like repetitive motion on your muscles.
6. regularly scheduled "fallow mind time." I set aside 30 min to an hour every day to just sit in a chair and let my mind wander. It's not meditation; I just sit and let my mind drift to whatever it wants.
7. while it should be avoided, in the event that you have to crunch, RESPECT THE COOLDOWN. take downtime after. don't let your nontechnical leads talk you out of it. thinking hard for extended periods of time requires appropriate recovery.
the human brain is a complex system and while we think of our mind as abstract and immaterial, it is in reality a whole lot of physical systems that grow, change and use resources the same way any other physical system in your body does. just like muscles need to recover after a workout to get stronger, so too does your brain after extended periods of deep thinking.
But I am struggling to remember things I did not used to struggle with
Going to an event on a weekend with my wife and completely forgetting that we ran into a friend there. Not just "oh yeah I forgot we saw them", like feeling my wife is lying to me when she tells me we saw them. Texting them to ask and they agree we saw each other
These are people I trust with my life so I believe they would not gaslight me, my own memory has just failed
Many examples like this, just completely blacking out things. Not forgetting everything, but blacking out large pieces of my daily life. Even big things
Disclaimer: talk to your doctor. I don’t know if your doctor can tell you whether this is a good idea, but it might help in some countries with good medical systems.
I've seen plenty enough people try, really try, to get into software development; but they just can't do it.
Software devs' jobs getting less cushy is no biggie; we can afford to amp up the efficiency. Teachers' jobs got "less cushy" -> not great for users/consumers or the people in those jobs. Doctors' jobs got "less cushy" -> not great for users/consumers or the people in those jobs. Even waiters', check-out staff's, and stockists' jobs at restaurants, groceries, and AMZ got "less cushy" -> not great for users/consumers or the people in those jobs. At least not when you need to call someone for help.
These things are not as disconnected as they seem. Businesses are in fact made up of people.
1. Efficiency measures as written to benchmark this coupling with economic productivity overall
2. Monetary assessments of value in the context of businesses spending money corresponding with social value
3. The gains of economic productivity being distributed across society to any degree, or the effect of this disparity itself being negligible
4. The negative externalities of these processes scaling less quickly than whatever we're measuring in our productivity metric
5. Aforementioned externalities being able to scale to even a lesser degree in lockstep with productivity without crashing all of the above, if not causing even more catastrophic outcomes
I have very little faith in any of these assumptions
You're right in general, but I don't think that'll save you/us from OP's statement. This is simple economic incentives at play. If AI-coding is even marginally more economically efficient (i.e. more for less) the "old way" will be swept aside at breathtaking pace (as we're seeing). The "from my cold dead hands" hand-coding crowd may be right, but they're a transitional historical footnote. Coding was always blue-collar white-collar work. No-one outside of coders will weep for what was lost.
I suspect we'll find that the amount of technical debt and loss of institutional knowledge incurred by misuse of these tools was initially underappreciated.
I don't doubt that the industry will be transformed, but that doesn't mean that these tools are a panacea.
I also specifically used the term "misuse" to significantly weaken my claim. I mean only to say that the risks and downsides are often poorly understood, not that there are no good uses.
On the scale I’ve been doing this (20 years), that hasn’t been the case.
Rails was massively more efficient for what 90% of companies were building. But it never had anywhere near a 90% market share.
It doesn’t take 1000 engineers to build CRUD apps, but companies are driven to grow by forces other than pure market efficiency.
There are still plenty of people using simple text editors when full IDEs have offered measurable productivity boosts for decades.
>(as we’re seeing)
I work at a big tech company. Productivity per person hasn’t noticeably increased. The speed that we ship hasn’t noticeably increased. All that’s happening is an economic downturn.
But AI seems to be different in that it claims to replace programmers, instead of augment them. Yes, higher productivity means you don't have to hire as many people, but with AI tools there's specifically the promise that you can get rid of a bunch of your developers, and regardless of truth, clueless execs buy the marketing.
Stupid MBAs at big companies see this as a cost reduction - so regardless of the utility of AI code-generation tools (which may be high!), or of the fact that there are many other ways to get productivity benefits, they'll still try to deploy these systems everywhere.
That's my projection, at least. I'd love to be wrong.
But no matter how hard cost cutters wanted to, they were never able to actually reduce the total number of devs outside of major economic downturns.
This feels like kicking someone when they’re down! Given the current state of corporate and political America, it doesn’t look likely there will be any pressure for anything but enshittification to me. Telling people at the coal face to stay cheerful seems unlikely to help. What mechanism do you see for not giving up to actually change the experience of people in 10 ish years time?
That isn't what they said tho. They said you have to do something, not that you should just be happy. Doing something can involve things that historically had a big impact in improving working conditions, like collective action and forming unions.
The opposite advice would be: "Everything's fucked, nothing you can do will change it, so just give up." Needless to say that is bad advice unless you are a targeted individual in a murderous regime or similar.
Realizing that attitude in myself at times has given me so much more peace of mind. Just in general, not everything needs to be as fast and efficient as possible.
Not to mention the times where in the end I spend a lot of time and energy in trying to be faster only to end up with this xkcd: https://xkcd.com/1319/
As far as LLM use goes, I don't need moar velocity! So I don't try to min-max my agentic workflow just to squeeze out X more lines of code.
In fact, I don't really work with agentic workflows to begin with. I more or less still use LLMs as tools external to the process, using them as interactive rubber duckies: things like deciphering spaghetti code, doing a sanity check on code I wrote (and being very critical of the suggestions they come up with), getting a quick jump start on stuff I haven't used in a while (how do I get started with X or Y again?), that sort of stuff.
Using LLMs in the IDE and other agentic use is something I have worked with. But to me it falls under what I call "lazy use" where you are further and further removed from the creation of code, the reasoning behind it, etc. I know it is a highly popular approach with many people on HN. But in my experience, it is an approach that makes skills of experienced developers atrophy and makes sure junior developers are less likely to pick them up. Making both overly reliant on tools that have been shown to be less than reliable when the output isn't properly reviewed.
I get the firm feeling that the velocity crowd works in environments where they are judged by the number of tickets closed. Basically "feature complete, tests green, merged, move on!". In that context, it isn't really "important" that the tests that are green were also touched by the thing itself, just that they are green. It is a symptom of a corporate environment where the focus is on these "productivity" metrics. From that perspective I can fully see the appeal of LLM-heavy workflows, as they most certainly will have an impact on metrics like "tickets closed" or "lines of code written".
It does when you are competing for getting and keeping employment opportunities.
If you're a salaried or hourly employee, you aren't paid for your output, you are paid for your time, with minimum expectations of productivity.
If you complete all your work in an hour... you still owe seven hours based on the expectations of your employment agreement, in order to earn your salary and benefits.
If you'd rather work in an output-based capacity, you'll want to move to running your own contracting business in a fixed-bid type capacity.
There's legal distinctions between part time and full time employment. Hence, you are expected to put in a minimum number of hours. However, there's nothing to say that the minimum expectation is the minimum for classification for full time employment.
If AI lets you get the job done in 1 hour when you otherwise would have worked overtime, you're still technically being paid to work more than that one hour, and I don't know of any employer that'll pay you to do nothing.
If the structures and systems that are in-place only facilitate life getting more difficult in some way, then it probably will, unless it doesn't.
Housing getting nearly unownable is a symptom of that. Climate change is another.
Correct. But it becoming shittier is the strong default, with forces that you constantly have to fight against.
And the reason is very simple: Someone profits from it being shittier and they have a lot of patience and resources.
My company has been preparing for this for a while now, I guess, as my backlog clearly has years' worth of work in it and positions of people who have left the org remain unfilled. My colleagues at other companies are in a similar situation. Considering round after round of layoffs, if I got ahead a little bit and found that I had nothing else to do, I'd be worried for my job.
> Society becoming shittier isn’t inevitable
Yes, I agree, but the deck is usually stacked against the worker, especially in America. I doubt this will be the issue that promotes any sort of collectivism.
That's stupid and detrimental to your mental health.
You do it in an hour, spend maybe 1-2 hours to make it even better and prettier and then relax. Do all that menial shit you've got lined up anyway.
I wish that the hype crowd would do that. It would make for a much more enjoyable and sane experience on platforms like this. It's extremely difficult to have actual conversations about subjects when there are crowds of fans involved who don't want to hear anything negative.
Yes, I also do realize there are people completely on the other side as well. But to be honest, I can see why they are annoyed by the fan crowd.
Exactly, IME the hype crowd is really the worst at this. They will spend 8h doing 8 different 1h tries at getting the right context for the LLM and then claim they did it in 1h.
They claim to be faster than they are. There's a lot of mechanical turking going about as soon as you ask a few probing questions.
Long term the craftsperson writing excellent code will win. It is now easier than ever to write excellent code, for those that are able to choose their pace.
If anything we'll see disposable systems (or parts of them), and the job of an SE will become even more like a plumber's: connecting prebuilt business logic to prebuilt systems libraries. When one of those fails, have AI whip up a brand new one instead of troubleshooting the existing one(s). After all, for business leaders it's the output that matters, not the code.
For 20+ years business leaders have been eager to shed the high overhead of developers via any means necessary while ignoring their most expensive employees' input. Anyone remember Dilbert? It was funny as a kid, and is now tragic in its timeless accuracy a generation later.
An earlier iteration of your reply said "Is that really winning?" The answer is no. I don't think any class of SE end up a winner here.
Maybe. I'm seeing the opposite - yes, the big ships take time to turn, but with the rise of ransomware and increasing privacy regulation around the world, companies are putting more and more emphasis on quality and avoiding incidents.
>In the capitalist mode of production, the generation of products (goods and services) is accomplished with an endless sequence of discrete, repetitive motions that offer the worker little psychological satisfaction for "a job well done." By means of commodification, the labour power of the worker is reduced to wages (an exchange value); the psychological estrangement (Entfremdung) of the worker results from the unmediated relation between his productive labour and the wages paid to him for the labour.
Less often discussed is Marx's view of social alienation in this context: i.e., workers used to take pride in who they are based on their occupation. 'I am the best blacksmith in town.' Automation destroyed that for workers, and it'll come for you/us, too.
AI slop code doesn't even work beyond toy examples.
Low quality software kills people.
Both will stay manual / require a high level of review; they're not what's being disrupted (at least in the near term) - it's the rest.
What was automated was the production of raw cloth.
This phenomenon is a general one… chainsaws vs hand saws, bread slicers vs hand slicing, mechanical harvesters vs manual harvesting, etc.
A large enough GDPR or SOX violation is the boogeyman that CEOs see in their nightmares.
The machines we’re talking about made raw cloth, not clothing, and it was actually higher quality in many respects because of accuracy and repeatability.
Almost all clothing is still made by hand one piece at a time with sewing machines still very manually operated.
“…the output of power looms was certainly greater than that of the handlooms, but the handloom weavers produced higher quality cloths with greater profit margins.” [1]
The same can be said about machines like the water frame. It was great at spinning coarse thread, but for high quality/luxury textile (ie. fine fabric), skilled (human) spinners did a much better job. You can read the book Blood in the Machine for even more context.
[0] https://en.wikipedia.org/wiki/Industrial_Revolution
[1] https://en.wikipedia.org/wiki/Dandy_loom
If your goal is to make 1000 of the exact same dress, having a completely consistent raw material is synonymous with high quality.
It’s not fair to say that machines produced some kind of inferior imitation of the handmade product, that only won through sheer speed and cost to manufacture.
When it is eventually made, though… it’s either aligned or we’re in trouble. Job cushiness will be P2 or P3 in a world where a computer can do everything economically viable better than any human.
They are brains. I think it's on you to prove they're the same, rather than assuming they're the same and then demanding proof they aren't!
But in fairness to human devs, most are still writing software that is leagues better than the dog shit AI is producing right now
That's why we should be against it but hey, we can provide more value to shareholders!
At my first job in Silicon Valley, I used to code right on the production floor totally oblivious to what was going on.
https://www.youtube.com/watch?v=DrA8Pi6nol8
Can we see this frontend code? For research purposes, of course.
96 more comments available on Hacker News