The Suck Is Why We're Here
Key topics
As the online discourse swirls around the notion that people are "cosplaying their lives," commenters are weighing in on the blurred lines between authenticity and performance in the digital age. Some, like roughly, are pointing out that this phenomenon extends far beyond the use of Large Language Models (LLMs), with others like jackyinger and trashburger chiming in that it's a timeless concept where people emulate the appearance of success without the substance. The debate takes an interesting turn with akoboldfrying's observation that while tools like Notion can be a "brain-multiplying lever," LLMs might be a "harmful brain-rotting exercise," sparking a discussion on the value of AI-assisted thinking versus genuine cognitive effort. With analogpixel's assertion that writing daily is a means to "remember how to think," the conversation is highlighting the tension between leveraging technology and maintaining intellectual autonomy.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h after posting
Peak period: 94 comments in 0-12h
Avg / period: 26.7 comments
Based on 160 loaded comments
Key moments
- Story posted: Jan 3, 2026 at 6:24 PM EST (5d ago)
- First comment: Jan 3, 2026 at 7:47 PM EST (1h after posting)
- Peak activity: 94 comments in 0-12h (hottest window of the conversation)
- Latest activity: Jan 9, 2026 at 10:59 AM EST (6h ago)
I'm always surprised when people say they use LLMs to do stuff in their Journal/Obsidian/Notion. The whole point of those systems is to make you think better, and then you just offload all of that to a computer.
A tale as old as time.
EDIT: Trying to stay on topic and score some po--, cargo I mean...
In your opinion, what is the differentiating factor?
I think AI is a great tool in certain circumstances, but this sounds like one of the clearest examples where it is brain rot.
I've found LLMs work reasonably well for this: copy-paste that blob of thoughts in and have them summarize the key points back to me in a more coherent form.
If I understand something well, I can write something coherent easily.
What you describe feels to me along the lines of studying for an exam by photocopying a textbook over and over.
To imagine LLMs have no use case here seems dishonest. If I don't understand a particularly hard part of the subject matter and the textbook doesn't expand on it enough, I can tell the LLM to break it down further with sources. I know this works because I've been doing it with Google (slowly, very slowly) for decades. Now it's just way more convenient to get to the ideas you want to learn about and expand them as far as you want to go.
In some cases yes I'll synthesize that myself into something more coherent. In other cases an LLM can offer a summary of certain themes I'm coming back to, or offer a pseudo-outsider's take on what the core themes being explored are.
If something is important to me I'll spend the time to understand it well enough to frame my own coherent argument, but if I'm doing extremely explorative thinking I'm OK with having a rapid process with an LLM in the loop.
I'm not using LLMs for my notes but "think better" has never been a goal for me.
Anyway, I casually mentioned he did a lot of his thinking in an oven and her curiosity was really piqued by that idea. Which is funny, because every time I mention it to someone, that's the bit that is most interesting to them. I'm not convinced that an AI would necessarily pick up on that detail being of note as much as a human would.
Best of luck to your mother-in-law in finding a way to deal with her voices, though. <3
I feel like this will get missed by the general public. What’s the point in generating writing or generating art if it gives next to zero feelings of accomplishment?
I could generate some weird 70s sci fi art, make an Instagram profile around that, barrage the algorithm with my posts and rack up likes. The likes will give that instant dopamine but it will never fill that need of accomplishing something.
I like LLMs to get me to reword something, since I struggle with that. But just like in programming I focus it on a specific sentence or two. Otherwise why am I doing it?
Getting promoted, getting a better job, generating sales leads, things of that nature. A depressing number of blogs or LinkedIn posts exist only because the author is under some vague belief that it’s part of what they’re supposed to be doing to get ahead in their career.
Did you see the NYC ball drop by any chance? It was plastered with ads. Ads on screen, ads on people, giant KIA ad below the ball that ruined the shot on purpose. Everything is a money grab now, because we are just eyeballs that see shit and buy it.
If you think it's just old-me remembering things differently, here is 2002: https://www.youtube.com/watch?v=iB6OzLUQE3I
US podcasts have a total valuation of $8-9B with a revenue of $1.9B; total K-12 spending is $950B a year (about 500x higher). Education receives nearly three orders of magnitude more money per year.
Most people sitting on a couch smoking weed on camera make little to nothing, while 3.8M teachers are paid an average of $65,000 per year.
You’re comparing one-in-a-million outlier podcasts to the average case teacher in order to reverse the overwhelming amount more we put into education, both in total and on average.
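(A quick check of the arithmetic behind the figures quoted above:)

```python
import math

# Figures as quoted in the comment above (not independently audited)
podcast_revenue = 1.9e9   # US podcast industry revenue per year, USD
k12_spending = 950e9      # total US K-12 spending per year, USD

ratio = k12_spending / podcast_revenue
print(f"{ratio:.0f}x")              # 500x -> "about 500x higher"
print(f"{math.log10(ratio):.1f}")   # 2.7  -> "nearly three orders of magnitude"
```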
I watch a few DJs on Twitch. They seem like they are having fun but I'm pretty confident many of them would stop if there was no money in it. Maybe after they don't need the money they'd still do it. They need the money now and it's a fun way to get it.
Similarly, I watch Veritasium and Kurzgesagt. They both put in lots of work with teams of people. I think they both enjoy it, but some of that enjoyment comes from "making a living at it". If that disappeared, if they didn't need the living from it, I wonder if they'd continue.
And the industry is gone. No one could produce figurines like that at any worldly price, probably for the last 100 years. The world is less for it, but it doesn’t matter, art follows different more efficient technologies and methods.
I sympathize with these artisans of the written word. But they’re all wrong, they’re dinosaurs who don’t know it. I myself was one, churning on high-value bespoke written work. The economic model is wrong, we’re the expert 1850s figurine crafters, adapt or … burn out I guess.
Art for its own sake. Say something. Experience having said something.
Economic value is the least of it. i get why economic value is the only thing that matters. we made the world this way. i get it.
but also: art for its own sake. say something. experience the saying of the something.
[1]: https://www.inprnt.com/gallery/canadianturtle/pacman-ukiyoe/
If you are talking about the background colour of a slide, that is not "art", it's a simple choice.
The portrait for your D&D character - if you used AI to generate it just because you need an image, any image, and you don't care beyond having a representation, then it may be difficult to classify that as "art". If you drew it, regardless of how bad it is, and you like and appreciate and connect with it, that is "art".
Of course, we may all have our own definitions of "art"
The existence of a sea of AI slop making it impossible to find or publicize writing is what will kill it.
It's purely a loss.
We've had LLMs for years. Image models and coding agents have gotten remarkably good, and their output is all over the place. So where is the AI writing? Outside of automated summaries, formulaic essays, and overly verbose LinkedIn posts, nowhere.
In Taiwan I've met indigenous woodworking artists. They sell stuff in markets all the time, plenty of it incredibly intricate. Incidentally, many temples here are also covered in beautifully layered granite carvings.
Writing is also peculiar in that it is easily referenceable with a deep history, so it serves as a way to compare one's own ideas to others. Memes are similar in principle, but tend towards esotericism and ephemerality in a balkanized internet.
What will actually happen, likely, is a complete death of writing. Not just that the craft is gone, but that art is gone.
What is the point of creating anything if it has no meaning? And likewise, there is no economic value to it either.
So there will simply be no art, and paradoxically any true art will simply be so ridiculously expensive and unaffordable that nearly nobody will benefit from it any more...
As the internet fills with slop, it'll only get harder to find the people who actually care about what they're putting online rather than just the views or the ad revenue, which is a shame, because those are the types of people who make the internet interesting.
That's how I feel with programming, and sometimes I feel like I'm taking crazy pills when I see so many of my colleagues using AI not only for their job, but even for their weekend programming projects. Don't they miss the feeling of..... programming? Am I the weird one here?
And when I ask them about it, they answer something like "oh but programming is the boring part, now I can focus on the problem solving" or something like that, even though that's precisely what they delegate to the AI.
Weird times.
It's either taking away the most important (or rewarding) thing I need to do (think) and just causing me more work, or it has replaced me.
AI. Is. Not. Useful.
AI is like delegating to a junior programmer that never learns or gets better.
>AI. Is. Not. Useful.
Why waste time writing things like this? What's the point?
We can agree all day long about the pitfalls of the technology, but you’ve never used it so you don’t know if it’s causing you more work or replacing you.
Take game programming: it takes an immense amount of work to produce a game, with problems at multiple levels of abstraction. Programming is only one aspect of it.
Even web apps are much, much more than the code backing them. UI/UX runs deep.
I'm having trouble understanding why you think programming is the entirety of the problem space when it comes to software. I largely agree with your colleagues; the fun part for me, at this point in my career, is the architecture, the interface, the thing that is getting solved for. It's nice for once to have line of sight on designs and be able to delegate that work instead of writing variations on functions I've written thousands if not tens of thousands of times. Often for projects that are fundamentally flawed or low impact in the grand scheme of things.
I don't know why people build houses with nail guns, I like my hammer... What's the point of building a house if you're not going to pound the nails in yourself?
AI tooling is great at getting all the boilerplate and bootstrapping out of the way... One still has to have a thoughtful design for a solution, to leave those gaps where you see things evolving rather than writing something so concrete that you're scrapping it to add new features.
If the boilerplate is that obvious, why not just have a blueprint for it and copy and paste it over using a parrot?
Also, I don't have a nail gun subscription, and the nail gun vendor doesn't get to see what I am doing with it.
LLMs are nothing like that. Using an LLM is more akin to managing outsourced software development. On the surface, it might look like you get ready-made components by outsourcing to them, but there is no contract about any standard, so you have to check everything.
Now if people would present it as "I'd rather manage an outsourcing process than do the creative thing," we would have no discussion. But hammers and nails aren't the right analogies.
You're going to have to tell us your definition of 'Using a LLM' because it is not akin to outsourcing (As I use it).
When I use Claude, I tell it the architecture, the libraries, the data flows, everything. It just puts the code down, which is the boring part, and it happens fast.
The time is spent mostly on testing, finding edge cases. The exact same thing if I wrote it all myself.
I don't see how this is hard for people to grasp?
This is a straw man argument. You have described one potential way to use an LLM and presented it as the only possible way. Even people who use LLMs will agree with you that your weak argument is easy to cut down.
You’re comparing a deterministic method of quickly installing a fastener with something that nondeterministically designs and builds the whole building.
I recently wrote a 17x3 Reed-Solomon encoder which is substantially faster on my 10yo laptop than the latest and greatest solution from Backblaze on their fancy schmancy servers. The fun parts for me were:
1. Finally learning how RS works
2. Diving in sufficiently far to figure out how to apply tricks like the AVX2 16-element LUT instruction
3. Having a working, provably better solution
The programming between (2) and (3) was ... fine ... but I have literally hundreds of other projects I've never shipped because the problem solving process is more enjoyable and/or more rewarding. If AI were good enough yet to write that code for me then I absolutely would have used it to have more time to focus on the fun bits.
It's not that I don't enjoy coding -- some of those other unshipped projects are compilers, tensor frameworks, and other things which exist purely for the benefit of programmer ergonomics. It's just that coding isn't the _only_ thing I enjoy, and it often takes a back seat.
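(For anyone curious about the trick in point 2: below is a minimal Python sketch of the nibble-split lookup idea behind the AVX2 instruction mentioned, VPSHUFB, which performs 32 parallel 16-entry byte-table lookups. The 0x11d reduction polynomial is an assumption; it's the one commonly used for Reed-Solomon over GF(2^8).)

```python
def gf_mul(a, b, poly=0x11d):
    """Multiply in GF(2^8): carry-less multiply, reduced modulo poly."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def make_luts(c):
    """Two 16-entry tables: c times every possible low/high nibble."""
    lo = [gf_mul(c, n) for n in range(16)]        # c * 0x0n
    hi = [gf_mul(c, n << 4) for n in range(16)]   # c * 0xn0
    return lo, hi

def mul_const(c, data):
    """Multiply every byte by the constant c using table lookups only --
    the scalar equivalent of one VPSHUFB pair handling 32 bytes."""
    lo, hi = make_luts(c)
    # GF multiplication distributes over XOR, and x = (x & 0x0F) ^ (x & 0xF0)
    return [lo[x & 0x0F] ^ hi[x >> 4] for x in data]

assert mul_const(0x1D, [0x53]) == [gf_mul(0x1D, 0x53)]
```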
I most often see people with (what I can read into) your perspective when they "think" by programming. They need to be able to probe the existing structure and inject their ideas into the solution space to come up with something satisfactory.
There's absolutely nothing wrong with that (apologies if I'm assuming too much about the way you work), but some people work differently.
I personally tend to prefer working through the hard problems in a notebook. By the time the problem is solved, its ideal form in code is obvious. An LLM capable of turning that obvious description into working code is a game changer (it still only works like 30% of the time, and even then only with a lot of heavy lifting from prompt/context/agent structure, so it's not quite a game changer yet, but it has potential).
https://www.usenix.org/system/files/fast19-zhou.pdf is a more modern paper that goes into some related problems of trying to reduce the number of XOR operations needed to encode data.
"I love complicated mathematical questions, and love doing the basic multiplication and division calculations myself without a calculator. I don't understand why people would use a calculator for this."
"I love programming, and don't understand why people would use C++ instead of using machine lamguage. You get deep down close to the hardware, such a good feeling, people are missing out. Even assembly language is too much of a cheat."
On the other hand - people still knit, I assume for the enjoyment of it.
How many people would still enjoy the process of knitting vs just inputting a pattern and a bunch of yarn in the Knit-o-matic?
But again my projects are more research than product, so maybe it’s different.
I suspect you've found a new hobby, not improved the existing one.
This I think I can explain, because I'm one of these people.
I'm not a programmer professionally for the most part, but have been programming for decades.
AI coding allows me to build tools that solve real world problems for me much faster.
At the same time, I can still take pride and find intellectual challenges in producing a high quality design and in implementing interesting ideas that improve things in the real world.
As an example, I've been working on an app to rapidly create Anki flashcards from Kindle clippings.
I simply wouldn't have done this over the limited holiday time if not for AI tools, and I do feel that the high-level decisions of how this should work were intellectually interesting.
That said, I do feel for the people who really enjoyed the act of coding line by line. That's just not me.
This phrase betrays a profoundly different view of coding to that of most people I know who actively enjoy doing it. Even when it comes to the typing it's debatable whether I do that "line by line", but typing out the code is a very small part of the process. The majority of my programming work, even on small personal projects, is coming up with ideas and solving problems rather than writing lines of code. In my case, I prefer to do most of it away from the keyboard.
If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically. It is not.
The joy of writing code is turning abstract ideas into solid, useful things. Whether you do most of it in your head or not, when you sit down to write you will find you know how you want to treat bills - is it an object under payroll or clients or employees or is it a separate system?
LLMs suck at conceptualizing schema (and so do pseudocoders and vibe coders). Our job is turning business models into schemata and then coding the fuck out of them into something original, beautiful, and useful.
Let them have their fun. They will tire of their plastic toy lawnmowers, and the tools they use won't replace actual thought. The sad thing is: They'll never learn how to think.
Drawing a sense of superiority out of personal choices or preferences is a really unfortunate human trait; particularly so in this case since it prevents you from seeing developments around you with clarity.
Think back on assembly programmers scoffing at C programmers.
Same arguments, probably same outcomes.
That's not the only way to use an LLM. One can instead write a piece of code and then ask the tool for analysis, but that's not the scenario that people like me are criticizing or concerned about -- and it's not how most people imagine LLMs will be used in the future, if models and tools continue to improve. People are predicting that the models will write the software. That's what people like me and the person I agreed with are criticizing and concerned about.
I'm uncomfortable with the idea not because it's outside of my area of comfort but because people don't understand code they read the way they understand code they write. This isn't a skill issue, in that people need to become better at reading and understanding code. Rather, it's how the human brain works. Writing the code familiarizes the writer with the problem space (the pitfalls, for instance). When you haven't written it, and you've instead just read it, then you haven't worked through the problems. You don't know the problem space or the reasons for the choices that the author made.
To put this another way: you can learn to read a language or understand it by ear without learning to speak it. The skills are related, but they're separate. People acquire and develop the skills they practice. Junior engineers and young people who learn to code with AI, and don't write code themselves, will learn, in essence, how to read, not how to write or speak; they'll learn how to talk to the AI models, and maybe how to read code. They will not learn how to write software.
So I take it you don't let coding agents write your boilerplate code? Do you instead spend any amount of time figuring out a nice way to reduce boilerplate so you have less to type? If that is the case, and as intellectually stimulating as that activity may be, it probably doesn't solve any business problems you have.
If there is one piece of wisdom I could impart, it's that you can continue enjoying the same problem solving you are already doing and have the machine automate the monotonous part. The trick is that the machine doesn't absorb abstract ideas by osmosis. You must be a clear communicator capable of articulating complex ideas.
Be the architect, let the construction workers do the building. (And don't get me started, I know some workers are just plain bad at their jobs. But bad workmanship is good enough for the buildings you work in, live in, and frequent in the real world. It's probably good enough for your programming projects.)
> If AI were a thing that could reliably pluck the abstract ideas from my head and turn them into the corresponding lines of code, i.e. automate the "line by line" part, I would use it enthusiastically. It is not.
... is exactly how this often works for me.
If you don't get any value out of this at all, and have worked with SOTA tools, we must simply be working in very different problem domains.
That said I have used this workflow successfully in many different problem domains, from simple CRUD style apps to advanced data processing.
Two recent examples to make it more concrete:
1) Write a function with parameter deckName that uses AnkiConnect to return a list of dataclasses with fields (...) representing all cards in the deck.
Here, it one-shots it perfectly and saves me a lot of time sifting through crufty, incomplete docs.
2) Implement a function that does trilinear interpolation on 3d instance segmentation. Input is a jnp array and resampling factor, output is another array. Write it in Jax. Ensure that no new instance IDs are created by resampling, i.e. the trilinear weights are used for weighted voting between instance IDs on each output voxel.
This one I actually worked out on paper first, but it was my first time using Jax and I didn't know the API and many of the parallelization tricks yet. The LLM output was close, but too complex.
I worked through it line by line to verify it, and ended up learning a lot about how to parallelize things like this on the GPU.
At the end of the day it came out better than I could have done it myself because of all the tricks it has memorized and because I didn't have to waste time looking up trivial details, which causes a lot of friction for me with this type of coding.
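(To make example 1 concrete: a sketch of what a one-shot solution plausibly looks like against the real AnkiConnect HTTP API. The dataclass fields are stand-ins, since the prompt's "(...)" field list is elided above.)

```python
from dataclasses import dataclass
import requests  # AnkiConnect serves JSON on localhost:8765 by default

@dataclass
class Card:
    card_id: int
    question: str
    answer: str
    due: int  # stand-in fields; the original prompt's list is elided

def anki(action, **params):
    reply = requests.post("http://127.0.0.1:8765",
                          json={"action": action, "version": 6,
                                "params": params}).json()
    if reply.get("error"):
        raise RuntimeError(reply["error"])
    return reply["result"]

def cards_in_deck(deck_name):
    card_ids = anki("findCards", query=f'deck:"{deck_name}"')
    return [Card(card_id=c["cardId"], question=c["question"],
                 answer=c["answer"], due=c["due"])
            for c in anki("cardsInfo", cards=card_ids)]
```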
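(And a reconstruction of the voting scheme example 2 describes — my own sketch, not the commenter's code, with the coordinate convention assumed. Each output voxel gathers its 8 trilinear corners; each corner's ID is scored by the summed weight of all corners sharing that ID, and the best-scoring ID wins, so resampling can never invent a new ID.)

```python
from itertools import product
import jax.numpy as jnp

def resample_instance_ids(ids, factor):
    """Trilinear-weighted voting among corner IDs; no new IDs created."""
    out_shape = tuple(int(round(s * factor)) for s in ids.shape)
    # Map each output voxel center back to input space (convention assumed).
    axes = [(jnp.arange(n) + 0.5) / factor - 0.5 for n in out_shape]
    p = jnp.stack(jnp.meshgrid(*axes, indexing="ij"), axis=-1)  # (d,h,w,3)
    c0 = jnp.floor(p).astype(jnp.int32)
    t = p - c0                                                  # trilinear fractions

    corner_ids, corner_w = [], []
    for off in product((0, 1), repeat=3):
        off = jnp.array(off)
        c = jnp.clip(c0 + off, 0, jnp.array(ids.shape) - 1)
        corner_ids.append(ids[c[..., 0], c[..., 1], c[..., 2]])
        corner_w.append(jnp.prod(jnp.where(off == 1, t, 1.0 - t), axis=-1))
    cid = jnp.stack(corner_ids, axis=-1)                        # (d,h,w,8)
    cw = jnp.stack(corner_w, axis=-1)                           # (d,h,w,8)

    # Score each corner's ID by the total weight of corners sharing that ID,
    # then keep the best-scoring ID per output voxel.
    same = cid[..., :, None] == cid[..., None, :]               # (d,h,w,8,8)
    scores = jnp.sum(same * cw[..., None, :], axis=-1)          # (d,h,w,8)
    best = jnp.argmax(scores, axis=-1)
    return jnp.take_along_axis(cid, best[..., None], axis=-1)[..., 0]
```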
If you can't / won't / don't read and write the code yourself, can I ask how you know that the code written for you is working correctly?
BTW, if it doesn't take you hours to test the failure modes, you're not thinking of enough failure modes.
The time savings in writing it myself has a lot to do with this. Plus I get to understand exactly why each line was written, with comments I wrote, not having to read its comments and determine why it did something and whether changing that will have other ramifications.
If you're doing anything larger than a sample React site, it's worth taking the time to do it yourself.
The main key in steering Claude this month (YMMV) is basically giving tasks that are localized, can be tested, and aren't too general. Then you kinda connect the dots in your head. Not always, but you can kinda get the gist of what works and what doesn't.
Also, as I said, I've been coding for a long time. The ability to read the code relatively quickly is important, and this won't work for early novices.
The time saving comes almost entirely from having to type less, having to Google around for documentation or examples less, and not having to do long debugging sessions to find brainfart-type errors.
I could imagine that there's a subset of ultra experienced coders who have basically memorized nearly all relevant docs and who don't brainfart anymore... For them this would indeed be useless.
I have not memorized all the docs to JS, TS, PHP, Python, SCSS, C++, and flavors of SQL. I have an intuition about what the question I need to ask is, if I can't figure something out, and occasionally an LLM will surface the answer to that faster than I can find it elsewhere... but they are nowhere near being able to write code that you could confidently deploy in a professional environment.
It was probably 2-3 hours of work screwing around figuring out issue fields, Python libraries, etc., that was low priority for my team but causing issues for another team who were struggling with some missing information. We never would have actually tasked this out, written a ticket for it, and prioritised it in normal development, but this way it just got done.
I’ve had this experience about 20 times this year for various “little” things that are attention sinks but not hard work - that’s actually quite valuable to us
const somecolor = '#ff2222'; /* oh wait, the user asked for it to be yellow. Let's change the code below to increase the green and red */
/* hold on, I made somecolor a const. I should either rewrite it as a var or wait, even better, maybe a scoped variable! */
hah. Sorry, I'm just making this shit up, but okay. I don't hire coders because I just write it myself. If I did, I would assign them *all* kinds of annoying small projects. But how the fuck would I deal with it if they were this bad? If it did save me time, would I want that going into my codebase?
> If it did save me time, would I want that going into my codebase?
Depends - and that's the judgement call. I've managed outsourcers in the pre-LLM days who, if left unattended, will spew out unimaginable amounts of pure and utter garbage, just as bad as looping an AI agent with "that's great, please make it more verbose and add more design patterns". I don't use it for anything I don't want to, but for the many things that just require writing some code that's getting in the way of the problem you actually want to solve, it's been a boon for me.
How do you know AI did the right thing then? Why would this take you 2-3 hours? If you’re using AI to speed up your understanding that makes sense - I do that all the time and find it enormously useful.
But it sounds like you’re letting AI do the thinking and just checking the final result. This is fine for throwaway work, but if you have to put your name behind it that’s pretty risky, since you don’t actually understand why AI did what it did.
Because I tested it, and I read the code. It was only like 40 lines of python.
> Why would this take you 2-3 hours?
It's multiple systems that I am a _user_ of, not a professional developer of. I know how to use Jira, I'm not able to offhand tell you how to update specific fields using python - and then repeat for Jenkins, perforce, slack. Getting credentials in (Claude saw how the credentials were being read in other scripts and mirrored that) is another thing.
> This is fine for throwaway work, but if you have to put your name behind it that’s pretty risky, since you don’t actually understand why AI did what it did.
As I said above, it's 30 lines of code. I did put my name behind it; it's been running on our codebase on every single checkin for 6 months, and has failed 0 times in that time (we have a separate report that we check in a weekly meeting for issues that were being missed by this process). Again, this isn't some massive complicated system - it's just gluing together 3/4 APIs in a tiny script in 1/10 of the time it would have taken me to do it. Worst case scenario is it does exactly what it did before - nothing.
If that's most of what you do, I can see how you'd not be that impressed.
I'd say though that even in such an environment, you'll probably still be able to extract tasks that are relatively self-contained, to use the LLM as a search engine ("where is the code that does X") or to have it assist with writing tests and docs.
"Convert the comments in this DOCX file into a markdown table" was an example task that came up with a colleague of mine yesterday. And with that table as a baseline, they wrote a tool to automate the task. It's a perfect example of a tool that isn't fun to write and it isn't a fun problem to solve, but it has an important business function (in the domain of contract negotiation).
I am under the impression that the people you are arguing with see themselves as artisans who meticulously control every bit of minutiae for the good of the business. When a manager does that, it's pessimistically called micromanagement. But when a programmer does that, it's craftsmanship worthy of great praise.
Not sure how this is so hard to understand. If you have closed source software, how do you know it's working?
But it can't actually generate working code.
I gave it a go over the Christmas holidays, using Copilot to try to write a simple program, and after four very frustrating hours I had six lines of code that didn't work.
The problem was very very simple - write a bit of code to listen for MIDI messages and convert sysex data to control changes, and it simply couldn't even get started.
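(For reference, the skeleton of the described task is small with a library like mido. The sysex-to-CC byte mapping below is purely hypothetical, since every device defines its own sysex layout, and the port names are made up.)

```python
import mido  # pip install mido python-rtmidi

# Hypothetical mapping: treat the last two sysex data bytes as
# (controller, value) and re-emit them as a control change on channel 0.
with mido.open_input("My Controller") as inp, mido.open_output("My Synth") as out:
    for msg in inp:                       # blocks, yielding incoming messages
        if msg.type == "sysex" and len(msg.data) >= 2:
            out.send(mido.Message("control_change", channel=0,
                                  control=msg.data[-2] & 0x7F,
                                  value=msg.data[-1] & 0x7F))
```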
I recently used Claude for a personal project and it was a fairly smooth process. Everyone I know who does a lot of programming with AI uses Claude mostly.
I've let Claude run around my code and asked it for help, etc. Once in a while it's able to diagnose some weird issues - like last month, it actually helped me figure out why PixiJS was creating undefined behavior after textures were destroyed on the GPU, in a very specific case. But the truth is, I wouldn't hire an intern or an employee to write my code because they won't be able to execute exactly what I have in mind.
Ironically, in my line of work, I spend 5x as many hours thinking about what to build and how to build it as I do coding it. The fun part is coding it. And, that's the only time I charge for. I may spend 10 hours thinking about how to do something, drawing diagrams, making phone calls to managers and CEOs, and I won't charge any of that time. When I'm ready to sit down and write the code:
I go to a bar.
I turn my phone off.
I work for 6 hours, have 4 drinks, and bill $300 per hour.
I don't suspect that the kind of coding I'm doing, which includes all the preparation and thought that went into it, and having considered all edge cases in advance, is going to be replaced by LLMs. Or by the children who use LLMs. They didn't have much of a purchase on taking my job before, anyway... but sadly the ones who are using this technology now have almost no hope of ever becoming proficient at their profession.
Coding is not making a thing that appears to work. It's craftsmanship. It's quite difficult to convince a client that something which appears to work as a demo is not yet suitable or ready for production. It may take 20 more hours before it's actually ready to fly. Managing their expectations on that score is a major part of the work as well.
However, these two things are different: the kind of work that feels fulfilling, meaningful and even beautiful, versus: delivering the needed/wanted product.
A vibe coded solution that basically works, for a quarter of the cost, has advantages.
https://x.com/lillybilly299/status/1865133434839990601
I feel kind of the same when I read about people wanting self-driving cars. What's the advantage of them? Why would it be helpful?
For a recent one, it is an interactive visualization of StarCraft 2 (https://github.com/stared/sc2-balance-timeline). Here I could have done it myself (and spent way more time than I want to admit on refactoring, so the code looks OK-ish), but it's unlikely I would have had enough time to do so. I had the very idea a few years ago, but it was just too much work for a side project. Now I did it - my focus was high-level, on WHAT I want to do, with constant feedback on how it looks, tweaking it a lot.
Another recent one is a "project for one": a Doom WAD launcher (https://github.com/stared/rusted-doom-launcher). Here I wouldn't have been able to do it, as I am not nearly as proficient in Rust, Tauri, WADs, etc. But I wanted to create a tool that makes launching custom Doom maps as easy as installing a game on Steam.
In both cases the pattern is the same - I care more about the result itself than its inner workings (OK, for the viz I DO care). Yes, it takes away a lot of the experience of coding oneself. But it is not something entirely different - people have asked the same questions before: "why use a framework instead of writing it yourself", "why use Python when you could have used C++", "why visit StackOverflow when you could have spent 2 days finding the solution yourself".
With side projects, it is OUR focus on what we value. For someone it is writing low-level machine code by hand, even if it won't be that useful. For someone else, making a cute visual. For someone else, having an MVP that "just works" to test a business idea.
Yes, balance updates make the game live.
For watching current games, I cannot recommend anyone better than Lowko (https://www.youtube.com/@LowkoTV) - he covers the main matches and makes commentary in a style I like.
In a sense it's like SQL or MiniZinc: you define the goal, and the engine takes care of how to achieve it.
Or maybe it's like driving: we don't worry about spark advance, or often manual clutches, anymore, but LLMs are like Waymo where your hands aren't even on the steering wheel and all you do is specify the destination, not even the route to get there.
it's outsourcing to an unreliable body shop where they barely speak English and the weekly attrition rate is 300%
YES, and in motorsports where all kinds of driver-assistance have become available in recent decades, from anti-lock brakes to traction control and obviously far more now, the rulemakers work hard to distinguish exactly what constitutes a necessary safety advance to allow, and what removes too much of the component of driver skill.
Take away too many elements of driver skill and it becomes an entirely different sport, autonomous car racing, and indeed there are starting to be such races, e.g.,[0].
For the question of using AI vs skill, maybe the best analogy is lifting weights — if the goal is to lift the most iron, bring a forklift, but if the goal is to exercise your muscles to stimulate strength gain, bring your gym clothes and chalk.
Except with writing, the 'AI forklifts' often fail to lift all the iron, and if you don't watch closely, they go lift the cars in the parking lot instead, as in the article, failing to make the right connections, being confident when the author would be tentative and vice-versa, and generally failing to actually work with any real understanding.
[0] https://www.smithsonianmag.com/smart-news/worlds-first-race-...
I mean we're programmers. Even though it's much more popular these days the very nature of what we do makes us "weird". At least compared to the average person. But weird isn't bad.
(Why are people doing it if they find it so boring? And why side projects?! I know it pays well, but there are plenty of jobs that do. I mean, my cousin makes more as a salesman and spends his days at golf courses. He's very skilled, but his job is definitely easier)
I also can't comprehend people when they say this. For starters, it's like saying "I want to learn an instrument but skip the scales, that way I can focus on playing songs." The fun part can't really happen without the hard part.
Second, how the fuck do you do the actual engineering when you're not writing the code? I mean sure, I can do a lot at the high level but 90% of the thinking happens while writing. Hell, 90% of my debugging happens while writing. It feels like people are trying to tell me that LLMs are useful because "typing speed is the bottleneck". So I'm left thinking "how the fuck do you even program?" All the actual engineering work, discovering issues, refining the formulation, and all that happens because I'm in the weeds.
The boring stuff is where the best learning and great ideas come from. Isn't a good programmer a lazy one? I'd have never learned about something like functors and template metaprogramming if I didn't ever do the boring stuff like write a bunch of repetitive functions thinking "there's got to be a better way!" No way is an LLM going to do something like that because it's a dumb solution until a critical mass is reached and it becomes a great solution. There's little pressure for that kind of progress when you can generate those functions so fast (I say little because there's still pressure from an optimization standpoint but who knows if an LLM will learn that unprompted)
Honestly coding with LLMs feels like trying to learn math by solely watching 3Blue1Brown videos. Yeah, you'll learn something but you'll feel like you learned more than you actually did. The struggle is part of the learning process. Those types of videos can complement the hard work but they don't replace it.
AI tools allow me to do a lot of stuff within a short time, which is really motivating. They also automatically keep a log of what I was doing, so if I don't manage to work on something for weeks, I can quite easily get back in and read my previous thinking.
It can also get very demotivating to read 10 Stack Overflow discussions from Google searches that don't solve my problem. This can knock me out of 'the zone' and make it extremely hard to continue. With AI tools, I can rephrase my question if the answer isn't exactly what I was looking for and steer towards a working solution. I can even easily get in-depth explanations of provided solutions to figure out why something doesn't work.
I also have random questions pop up in my brain throughout the day. These distract me from my task at hand. I can now pop a question into an AI tool and have it research the answer, instead of being distracted for an hour reading up on brake pads or cake recipes or the influence of nicotine on driving ability.
Yes. I think it depends on one's goals.
You can ask, in the same vein, why use Python instead of C? Isn't the real joy of programming in writing effective code with manual memory management and pointers? Isn't the real joy in exploring 10 different libraries for JSON parsing? Or in learning how to write a makefile? Or figuring out a mysterious failure of your algorithm due to an integer overflow?
TBH I am not sure AI is better either (see https://youtube.com/shorts/QZCHax14ImA), but it's probably gonna get figured out.
Not to say it's useless garbage; there is some value for sure, but it's nowhere near as good as some people represent it to be. It's not an original observation, but people end up in a "folie à deux" with a chatbot and churn out a bunch of mediocre stuff while imagining they're breaking new ground and doing some amazing thing.
I like programming. Quite a bit. But the modern bureaucratic morass of web technologies is usually only inspiring in the small. I do not like the fact that I have to balance so many different languages and paradigms to get to my end result.
It would be a bit like a playwright aficionado saying “I really love telling stories through stage play” only to discover that all verbs used in dialogue had to be in Japanese, nouns are a mix of Portuguese and German, and connecting words in English. And talking to others to put your play on, all had to be communicated in Faroese and Quechua.
If it's a language I don't particularly enjoy, though, so much the better that the AI types more of it than me. Today I decided to fix a dumb YouTube behavior that has been bugging me for a while. I figured it would be a simple matter of making a Greasemonkey script that does a fetch() request formed from dynamic page data, grabs out some text from the response, and replaces some other text with that. After validating the first part in the console, I told ChatGPT to code it up and also make sure to cache the results. Out comes a nice little 80 lines or so of JS, similar to how I would have written it, setting up the MutationObserver and handling the cache map and a promises map. It works except in one case where it just needs to wait longer before setting things up, so I have it write that setTimeout loop part too, another several lines, and now it's all working.
I still feel a little bit of accomplishment because my problem has been solved (until YouTube breaks things again anyway), the core code flow idea I already had in mind worked (no need for API shenanigans), and I didn't have to type much JavaScript. It's almost like using a much higher level language. Life is too short to write much code in x86 assembly, or JavaScript for that matter, and I've already written enough of the latter that I feel like I'm good.
The amount of people who apparently just want the end result and don't care about the process at all has really surprised me. And it makes me unfathomably sad, because (extremely long story short) a lot of my growth in life can be summed up as "learning to love the process" -- staying present, caring about the details, enjoying the journey, etc. I'm convinced that all that is essential to truly loving one's own life, and it hurts and scares me to both know just how common the opposite mindset is and to feel pressured to let go of such a huge part of my identity and dare-I-say soul just to remain "competitive."
When writing code in exchange for money the goal is not to write code, it's to solve a problem. Care about the code if you want but care about solving the problem quickly and effectively more. If LLMs help with that you should be using them.
On personal projects it depends on your goal. I usually want the tool more than whatever I get from writing code. I always read whatever an LLM spits out to make sure I understand it and confirm it's correct but why wouldn't I accelerate my personal tool development as well?
But, I almost never do something "for the programming". Programming is just an ingredient to make the thing I actually want. This is why I use Solidworks and not OpenSCAD for most 3D modeling, for example. I've learned many things from it but I can't honestly say I'm in it for the programming.
I've played with using LLMs for code generation in my own projects, and whilst it has sometimes been able to solve an issue - I've never felt like I've learned anything from it. I'm very reluctant to use them for programming more as I wouldn't want my own skills to stagnate.
Our company is "encouraging" use of LLMs through various carrots and sticks; mostly sticks. They put out a survey recently asking us how we used it, how it's helped, etc. I'll probably get fired for this (I'm already on the short list for RIFs due to being remote in a pathological RTO environment and being easily the eldest developer here, but...), but I wrote something like:
"Most of us coders, especially older ones, are coders because we like coding. The amount of time and money being put spent to make coders NOT CODE is incredible."
It depends on what you're trying to do. I mean if the point of doing anything is a "feeling of accomplishment" why hire anyone to do anything you could do yourself. Why hire a builder to build your home? Why hire a mechanic to fix your car? Why pay a neighborhood kid to mow your lawn? Why hire a photographer for your wedding? Why hire a cook to make a meal? People hire others because even if they could do it themselves, they don't enjoy it but they need or still want the outcome for some reason or another.
Would you want to hire someone to write your blog for you? No, you probably wouldn't if it's a personal blog, so likewise you probably wouldn't want to use an AI for it either. But if it's a marketing blog like almost every business seems to have on their website these days, full of listicles and vague "did you know" marketing? Sure, it's probably already outsourced anyway, so why not use an AI.
You probably don't want to be using an AI to generate artwork if you're aiming to make a painting that expresses your inner feelings. But if you're making a game and you suck at painting or drawing, you might hire it out, using an AI in that case isn't any different.
But precisely, "AI" is _NOT_ fixing my car or building my home or photographing my wedding!! It's writing a sludge of plausible-looking but empty slop that contaminates everything on the web, it's attempting to automate the visual arts, it's generating fake video that's getting harder and harder to distinguish from real one. It's automating things that SHOULD NOT be automated, and it's NOT automating things that should!
We've been down this road before. We're in the phase of new technology where it's good enough, and available enough for anyone to do anything with it. And like most things when we hit that phase, most of what people do with it is going to be pretty awful. CGI in general, drum machines, and electronic music generation are examples. And just like eventually people found new ways to use these things to create new forms of art that people enjoyed, people will find ways to use AI in the same way.
If you're into 3d modeling and blender, the suck of painstakingly crafting reality from digital star dust is why you're here, and you don't understand why someone would want to automate that away. But likewise, if someone else is into physical modeling, the suck of creating life and reality out of sometimes literal garbage is why they're there, and they don't understand why you would want to automate that away.
Yes AI allows trash makers and grifters to dump out more trash and grift, but it also allows more people who were previously prevented from expressing themselves in a way they enjoyed by a lack of time, money or skills to express themselves anew.
[1]: https://www.youtube.com/watch?v=DSRrSO7QhXY
These are negative externalities, indeed, but the producer of the "goods" here does not feel those effects.
"photocamera gives no feelings of accomplishment of creating a picture"
and yet photography is an art of its own, and painting also has not disappeared
---
or heck, "taking digital photos gives zero feelings of accomplishment because you didn't do the developing in a darkroom"
$$$
For me, the fun in programming also depends a lot on the task. Recently, I wanted Python configuration classes that can serialize to YAML, but I also wanted to automatically create an ArgumentParser that fills some of the fields. `hydra` from Meta does that, but I wanted something simpler. I asked an agent for a design but did not like the convoluted parsing logic it created. I finally designed something by hand by abusing the metadata fields of the dataclass.field calls. It was deeply satisfying to get it to work the way I wanted.
But after that, do I really want to create every config class and fill every field by myself for the several scripts/classes that I planned to use? Once the initial template was there, I was happy to just guide the agent to fill in the boilerplate.
I agree that we should keep the fun in programming/art, but how we do that depends on the what, the who, and the when.
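(A guess at the shape of the design described, since the actual code isn't shown: stash CLI information in each dataclass field's metadata, then derive an ArgumentParser from the class automatically. The class and field names here are illustrative.)

```python
import argparse
from dataclasses import dataclass, field, fields

@dataclass
class TrainConfig:  # illustrative config class
    lr: float = field(default=1e-3, metadata={"cli": True, "help": "learning rate"})
    epochs: int = field(default=10, metadata={"cli": True, "help": "epoch count"})
    notes: str = field(default="", metadata={"cli": False})  # yaml-only field

def make_parser(cfg_cls):
    """Build an ArgumentParser from the fields flagged in metadata."""
    parser = argparse.ArgumentParser()
    for f in fields(cfg_cls):
        if f.metadata.get("cli"):
            parser.add_argument(f"--{f.name}", type=f.type, default=f.default,
                                help=f.metadata.get("help", ""))
    return parser

args = make_parser(TrainConfig).parse_args(["--lr", "3e-4"])
cfg = TrainConfig(**vars(args))  # TrainConfig(lr=0.0003, epochs=10, notes="")
```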
The issue is the loss of control and intimate knowledge about my own work.
The value, I expect, to some people, is that if they can monetize that, then it's worthwhile to them, while letting them spend less time on it than if they had to do it themselves (or maybe they aren't artists and couldn't do it themselves, period).
I personally find this kinda dishonest, uncreative, and not something I'd care to look at, but that's just me.
The other day, my wife needed to divide something, and rather than get up and walk to the next room to grab her phone, she did it on pen and paper longhand.
At first I was amazed that she bothered instead of grabbing her phone to do it.
Then it occurred to me that, while more people than I expect probably remember how to divide by hand correctly, I don't think I've actually seen someone do it in years, perhaps since my school days.
I do agree with the author that art is a human endeavor and mastery requires practice... But I'm less optimistic that mass adoption of the easy way will let masters stand out. More likely, they'll just be buried under the deluge of slop the public craves.
111 more comments available on Hacker News