The Deadline Isn't When AI Outsmarts Us – It's When We Stop Using Our Own Minds
Posted 3 months ago · Active 3 months ago
theargumentmag.com · Tech story · High profile
Sentiment: calm, mixed · Debate score: 80/100
Key topics: Artificial Intelligence, Cognitive Atrophy, Technology Impact
The article discusses how AI may lead to cognitive atrophy if humans rely too heavily on it, and the HN discussion explores the nuances of this issue, with some commenters sharing their experiences and concerns.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 3h after posting
Peak period: 69 comments in 0-6h · Avg per period: 17.8
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Oct 5, 2025 at 7:08 AM EDT (3 months ago)
2. First comment: Oct 5, 2025 at 9:53 AM EDT (3h after posting)
3. Peak activity: 69 comments in 0-6h (hottest window of the conversation)
4. Latest activity: Oct 8, 2025 at 3:46 PM EDT (3 months ago)
Stepping back - the way fundamental technology gets adopted by populations always has a distribution between those that leverage it as a tool, and those that enjoy it as a luxury.
When the internet blew up, the population of people that consumed web services dwarfed the population of people that became web developers. Before that when the microcomputer revolution was happening, there were once again an order of magnitude more users than developers.
Even old tech - such as written language - has this property. The number of readers dwarfs the number of writers. And even within the set of all "writers", if you were to investigate most text produced, you'd find that the vast majority of it falls into that long tail of insipid banter, gossip, diaries, fanfiction, grocery lists, overwrought teenage love letters, etc.
The ultimate consequences of this tech will depend on the interplay between those two groups - the tool wielders and the product enjoyers - and how that manifests for this particular technology in this particular set of world circumstances.
That's a great observation!
'Literacy' is defined as the ability to both read and write. People as a rule can write; even if it isn't a novel worth publishing, they do have the ability to encode a text on a piece of paper. It's a matter of quality rather than ability (at least in most developed countries, though even there, there are still people who cannot read or write).
So I think you could fine-tune that observation to 'a limited number of people provide most of the writing'. Observing, for instance, Wikipedia or any bookstore would seem to confirm that. If you take HN as your sample base, it holds true there too. If this goes for one of our oldest technologies, it should not be surprising that on a forum dedicated to creating businesses and writing, the ability to both read and write is taken for granted. But it shouldn't be.
The same goes for any other tech: the number of people using electronics dwarfs the number of circuit designers, the number of people using buildings dwarfs the number of architects, and so on, all the way down to food consumption versus farmers and fishers.
Effectively this says: 'we tend to specialize' because specialization allows each to do what they are best at. Heinlein's universal person ('specialization is for insects') is an outlier, not the norm, and probably sucks at most of the things they claim to have ability for.
This is quoted elsewhere in this thread (https://news.ycombinator.com/item?id=45482479). Most of the items are things you will be doing at some point in your life, that are socially expected of every human as part of human life, or that you do daily. It also only says you should be able to do them; you don't need to be good at them. But should the case arise that you are required to do one of them, you should be able to deal with it.
Well, the current vision seems to be for readers to scroll AI TikTok and for writers to produce AI memes. I'm not sure who really benefits here.
That's my primary problem as of now. It's not necessarily used as some luxury tool or some means of entertainment. It's effectively trying to outsource knowledge itself. Using ChatGPT as a Google substitute has consequences for readers, and using it to cut corners has even worse consequences for writers. I don't think we've had tech like this that can be argued as dangerous on both sides of the aisle simultaneously.
On the contrary, all tech is like this. It is just the first time that the knowledge workers producing the tech are directly affected so they see first hand the effects of their labor. That really is the only thing that is different.
So let's not just handwave it as "nothing special" and actually demonstrate why this isn't special. Most other forms of technological progress have shown obvious benefits to producers and consumers. Someone is always harmed in the short term, yes. But society's given them ways to either retire or seek new work if needed. I'm not seeing that here.
Sorry, but my comment wasn't about you in particular. It was about the tech domain in general. I know absolutely nothing about you so I would not presume to make any statements about you in that sense.
> But society's given them ways to either retire or seek new work if needed. I'm not seeing that here.
No, not really. For the most part they became destitute and at some point they died.
What you are not seeing is that this is the end stage of technological progress, the point at which suddenly a large fraction of the people is superfluous to the people in charge. Historically such excess has been dealt with by wars.
Having opportunity doesn't mean they will seize it. I will concede that if you are disrupted and in your 50s (not old enough to retire, yet at an age where it becomes difficult to be re-hired unless you're management), you get hit especially hard.
But it's hard to see the current landscape of jobs now and suggest that boomers/older Gen X had nothing to fall back on when these things happen. These generations chided millennials and Gen Z for being "too proud to work a grill". Nowadays you're not even getting an interview at McDonald's after submitting hundreds of applications. That's not an environment that lets you "bounce back" after a setback.
>Historically such excess has been dealt with by wars.
Indeed. We seem to be approaching that point, and it's already broken out in several places. When all other channels are exhausted, humans simply seek to overthrow the ones orchestrating their oppression.
In this case that isn't AI. At least not yet. But it's a symptom of how they've done this.
Well, in that sense everybody has opportunity. But I know quite a few people who definitely would not survive their line of employment shutting down. A lot of them have invested decades in their careers and have life complications, responsibilities and expenses that stop them from simply 'seizing opportunity'. For them it would be the end of the line, hopefully social security would catch them but if not then I have no idea how they would make it.
But speaking in macroeconomics, most people have the capacity to readjust if needed. I had to do so these last few years (and yes, am thankful I am "able bodied" and have a family/friend network to help me out when at my lowest points). And the market really sucks, but I eventually found some things. Some related to my career, some not.
But I'm 30. In the worst worst cases, I have time and energy to pivot. The opportunities out there are dreadful all around, though.
Some people benefit from the relaxing effects of a little bit of alcohol. It helped humanity get through ages of unsafe hygiene by acting as a sanitizer and preservative.
For some people, it is a crutch that inhibits developing safe coping mechanisms for anxiety.
For others it becomes an addiction so severe, they literally risk death if they don't get some due to withdrawal, and death by cirrhosis if they keep up with their consumption. They literally cannot live without it or with it, unless they gradually taper off over days.
My point isn't that AI addiction will kill you, but that what might be beneficial might also become a debilitating mental crutch.
Better analogy is processed food.
It makes calories cheaper, it’s tasty, and in some circumstances (e.g. endurance sports or backpacking) it materially enhances what an ordinary person can achieve. But if you raise a child on it, to where it’s what they reach for by default, they’re fucked.
I was building a little roguelike-ish sort of game for myself to test my understanding of Raylib. I was using as few external resources as possible outside of the cheatsheet for functions, including avoiding AI initially.
I ran into my first issue when trying to determine line of sight. I was naively simply calculating a line along the grid and tagging cells for vision if they didn't hit a solid object, but this caused very inconsistent sight. I tried a number of things on my own and realized I had to research.
All of the search results I found used raycasting, but I wanted to see if my original idea had merit, and didn't want to do raycasting. Finally, I gave up my search and gave Copilot a function to fill in, and it used Bresenham's line algorithm. It was exactly what I was looking for, and it also taught me why my approach didn't work consistently: there's a small margin of error when calculating a line across a grid, which Bresenham accounts for.
Most people, however, won't take interest in why the AI answer might work. So while it can be a great learning tool, it can definitely be used in a brainless sort of way.
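For reference, here's a minimal sketch of the idea in Python rather than the commenter's actual Raylib/C code; the grid layout and function names are illustrative assumptions. Bresenham carries an integer error term from cell to cell, which is exactly the fractional remainder that a naive "compute y from x and round" line loses:

    def bresenham_line(x0, y0, x1, y1):
        # Yield every grid cell on the line from (x0, y0) to (x1, y1).
        # The integer error term accumulates the fraction that naive
        # rounding discards, so cell coverage stays consistent.
        dx = abs(x1 - x0)
        dy = -abs(y1 - y0)
        sx = 1 if x0 < x1 else -1
        sy = 1 if y0 < y1 else -1
        err = dx + dy
        while True:
            yield (x0, y0)
            if x0 == x1 and y0 == y1:
                return
            e2 = 2 * err
            if e2 >= dy:  # error says a horizontal step is due
                err += dy
                x0 += sx
            if e2 <= dx:  # error says a vertical step is due
                err += dx
                y0 += sy

    def has_line_of_sight(grid, start, end):
        # grid[y][x] is truthy for solid cells (a hypothetical layout;
        # adapt to your own map structure).
        for x, y in bresenham_line(*start, *end):
            if (x, y) not in (start, end) and grid[y][x]:
                return False
        return True

Tagging visible cells is then just a matter of running this from the player's cell to each candidate cell.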
How different do you think:
- the code
- your improvement in knowledge
would have been if you had skipped Copilot and instead described your problem and asked for algorithmic help?
The value isn't objective and very much depends on end goals. People seem to trot out the "make games, not engines" line without realizing that engine programmers still exist.
That system, of course, doesn't rely on generative AI at all: all contributions to the system are appropriately attributed, etc. I wonder if a similar system could be designed for software?
It would have been nice if the author had not overgeneralized so much:
https://claude.ai/share/27ff0bb4-a71e-483f-a59e-bf36aaa86918
I’ll let you decide whether my use of Claude to analyze that article made me smarter or stupider.
Addendum: In my prompt to Claude, I seem to have misgendered the author of the article. That may answer the question about the effect of AI use on me.
And then:
> It would have been nice if the author had not overgeneralized so much
But you just fell into the exact same trap. The effect on any individual is a reflection of that person's ability in many ways and on an individual level it may be all of those things depending on context. That's what is so problematic: you don't know to a fine degree what level of competence you have relative to the AI you are interacting with so for any given level of competence there are things that you will miss when processing an AI's output. The more competent you are the better you are able to use it. But people turn to AI when they are not competent and that is the problem, not that when they are competent they can use it effectively. And despite all of the disclaimers that is exactly the dream that the AI peddlers are selling you. 'Your brain on steroids'. But with the caveat that they don't know anything about your brain other than what can be inferred from your prompts.
A good teacher will be able to spot their own errors; here the pupil is supposed to be continuously on the lookout for utter nonsense the teacher utters with great confidence. And the closer it gets to being good at some stuff, the more leeway it will get for the nonsense as well.
> Would I recall it all without my new crutch? Maybe not
This just seems like you’ve shifted your definition of “learning” to no longer include being able to remember things. Like “outsourcing your thinking isn’t bad if you simply expect less from your brain” isn’t a ringing endorsement for language models
We've turned out okay.
"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."
Second, Socrates was generally arrogant in stories. The attitude you see there was not a special disdain for reading; it was more of his general "I am better than everyone else anyway" attitude.
Some of the best thinking across history - Euclid, Newton, Einstein - happened in the pre-computer era. So, let alone AI, even computers are not necessary. Pen, paper, and some imagination/experimentation were sufficient.
Some in tech are fear mongering to seek attention.
> A few weeks ago, The Argument Editor-in-Chief Jerusalem Demsas asked me to write an essay about the claim that AI systems would take all of our jobs within 18 months. My initial reaction was … no?
[...]
> The problem is whether we will degrade our own capabilities in the presence of new machines. We are so fixated on how technology will outskill us that we miss the many ways that we can deskill ourselves.
[...]
> Students, scientists, and anyone else who lets AI do the writing for them will find their screens full of words and their minds emptied of thought.
You can still make tests and exams check what students know. There will be fewer "generate BS at home" tasks and more "come back with knowledge" ones, which will likely be an improvement.
They are the same picture!
I've been coding without my LLM for 2 hours and it's just more productive... Yes, it's good for getting things "working", but we still need to think and understand to solve harder problems.
My initial impressions blew me away, because generating new things is a lot simpler than fixing old things. Yes, it's still useful, but only when you know what you're doing in the first place.
I don't disagree in general but I've had a lot of success asking the LLMs to specifically fix these things and make things more maintainable when specifically prompted to do so. I agree debugging and getting things working it often needs supervision, guidance and advice. And architecture it often gets a little wrong and needs nudges to see the light.
I'm not great at this stuff and I got tired of reviewing things and generating suggestions to improve implementations (it got repetitive), but I am having good results with my latest project using simulated ecosystems with adversarial sub-projects.

There's the core project I care about with a maintainer agent/persona, an extension with an extension-developer agent/persona (the extensions provide common features built upon the core, from the perspective of a third-party developer), and an application developer that uses both the core and the extensions. I have them all write reports about challenges and review the sub-projects they consume, complaining about awkwardness and ways the other parties could be improved. Then the "owner" of each component reviews the feedback to develop plans for me to evaluate and approve.

Very often the "user" components end up complaining about complexity and inconsistency. The "owner" developers tend to add multiple ways of doing things when asked for new features, until specifically prompted to review their own code to reduce redundancy, streamline use, and improve maintainability. But they will do it when prompted, and I've been pretty happy with the code and documentation it's generating.
But my point remains specifically about the crappy code AI writes. In my experience, it will clean it up if you tell it to. There's the simple angle of complexity and it does an okay job with that. But there's the API design side also and that's what the second part is about. LLM will just add dumb ass hacks all over the place when a new feature is needed and that leads to a huge confusing integration mess. Whereas with this setup when I want to build an extension the API has been mostly worked out and when I want to build an application the API has mostly been worked out. That's the way it would work if I ran into a project on github I wanted to depend on.
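As an illustrative sketch of what such a setup might look like (this is a reconstruction, not the commenter's actual tooling; call_llm is a hypothetical stand-in for whatever chat-completion API you use):

    # Hypothetical multi-persona review loop. call_llm() is a placeholder,
    # not a real library function; wire it to the model API of your choice.

    PERSONAS = {
        "core maintainer": "You own the core library and guard its API surface.",
        "extension developer": "You build extensions on the core as a third party.",
        "application developer": "You consume both the core and the extensions.",
    }

    def call_llm(prompt):
        raise NotImplementedError("connect this to a chat-completion API")

    def gather_reports(project_summary):
        # Each persona files a complaint report about the components
        # it consumes: awkwardness, redundancy, inconsistency.
        reports = {}
        for name, role in PERSONAS.items():
            reports[name] = call_llm(
                role + "\n\nProject state:\n" + project_summary +
                "\n\nReport the redundant, awkward, or inconsistent parts "
                "of the components you depend on."
            )
        return reports

    def plan_fixes(reports):
        # The owner persona turns downstream complaints into a plan
        # that a human reviews and approves before any code changes.
        combined = "\n\n".join(f"[{k}]\n{v}" for k, v in reports.items())
        return call_llm(
            "You own the core library. Given these downstream reports, "
            "propose a plan to reduce redundancy and streamline the API "
            "for a human to approve:\n\n" + combined
        )

The interesting part isn't the plumbing but the role separation: the "user" personas generate friction reports that the "owner" persona would never volunteer about its own work.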
That's probably why they're not doing that. The core premise-- we will rely on AI so much that we will de-skill ourselves-- requires acknowledging that AI works.
No it doesn't require that because the vast majority of people aren't rational actors and they don't optimize for the quality of their work - they optimize for their own comfort and emotional experience.
They'll happily produce and defend low quality work if they get to avoid the discomfort of having to engage in cognitively strenuous work, in the same way people rationalize every other choice they make that's bad for them, the society, the environment, and anyone else.
But what does it matter? After the game of semantics is all said and done, the work is still being done to a lower standard than before, and people are letting their skills atrophy.
People are rational, and the example you give actually shows that; they prefer reduced workload, so they optimize for their own comfort and emotional experience. What isn't rational about that?
If people made decisions the way you described, by carefully considering and accepting trade-offs then I would agree that they are rational actors.
But people don't do that, they pick an outcome they prefer and try to rationalize it afterwards by claiming that trade-offs don't exist.
The article acknowledges that AI progress to date has worked. It snidely denies, without argument, that the field could keep working from here on out.
I've seen many other people who have essentially become meatspace analogues for AI applications. It's sad to watch this happen while listening to many of the same people complain about how AI will take their jobs, without realizing that they've _given_ AI their jobs by ensuring that they add nothing to the process. I don't really understand yet why people don't see that they are doing this to themselves.
2. https://pubmed.ncbi.nlm.nih.gov/25509828/
3. https://www.researchgate.net/publication/392560878_Your_Brai...
I’m pretty convinced it should be used to do things humans can’t do instead of things humans can do well with practice. However, I’m also convinced that Capital will always rely on Labor to use it on their behalf.
They're going to be rather surprised when this doesn't work as planned, for reasons that are both very obvious and not obvious at all. (Yet.)
Compare this body of work to the body of work that has consistently showed social media is bad for you and has done so for many years. You will see a difference. Or if you prefer to focus on something more physical, anthropogenic climate change, the evidence for the standard model of particle physics, the evidence for plate tectonics, etc.
I'm not saying we shouldn't be skeptical that these technologies might make us lazy or unable to perform critical functions of technical work. I think there is a great danger that these technologies essentially fulfill the promise of data science across industries, that is, a completely individualized experience to guide your choices across digital environments. That is not the world that I want to live in. But I also don't think that my mind is turning to mush because I asked Claude Code to write some code for a catboost model - code that would have taken me a few hours - just to try out some idea.
We don't practice much using the assembler either, or the slide rule. I also lost the skill to start an old Renault 12 that I owned 30 years ago; it is a complex process, believe me, and there were some owners who reached artist level at it.
In an interview setting, while in a meeting, if you're idling on a problem while traveling or doing other work, while you are in an area with weak reception, if your phone is dead?
There are plenty of situations where my problem solving does not involve being directly at my work station. I figured out a solution to a recent problem while at the doctor's office and after deciding to check the API docs more closely instead of bashing my head on the compiler.
>We don't practice much using the assembler either, or the slide rule.
Treating your ability to research and critically think as yet another tool is exactly why I'm pessimistic about the discipline of the populace using AI. These aren't skills you use 9-5 then turn off as you head back home.
Sad truth is, the future will most likely invalidate all "knowledge" besides critical thinking.
When you're unemployed, homeless, or cash strapped for other reasons, as has happened to more than a few HNers in the current downturn, and can't make your LLM payments.
And that doesn't even account for the potential of inequality, where the well-off can afford premium LLM services but the poor or unemployed can only afford the lowest grades of LLM service.
When computers were new, they were on occasion referred to as "electronic brains" due to their capacity for arithmetic.
Humans can, with practice, do arithmetic well.
A Raspberry Pi Zero can do arithmetic faster than the combined efforts of every single living human even if we all trained hard and reached the level of the current record holder.
Should we stop using computers to do arithmetic just because we can also do arithmetic, or should we allow ourselves to benefit from the way that "quantity has a quality all of its own"*?
* Like so many quotes, attributed to lots of different people. Funny how unreliable humans can be with information :P
Didn't we?
Geesh, doorbell again. Last mile problem? C'mon. Whoever solves the last 30 feet problem is the real hero.
Paintball gun, except with cream fillings instead of paint, and softer shells so they can be fired straight into people's mouths.
I still use StackOverflow and LLMs, but if those things were available when I was learning I would probably not have learnt as much.
The change with LLMs is that I can now just ask my hare-brained questions first and figure out why it was a stupid question later
(Mostly this ended up being weird Ubuntu things relating to usecases specific to robots... not normal programming stuff)
I can’t speak to whether this is a good approach for anyone else (or even for myself ~15 years later) but it served to ingrain in me the habit of questioning everything and poking at things from multiple angles to make sure I had a good mental model.
All that is to say, there is something to be said for answering “stupid” questions yourself (your own or other people’s).
Way back in "Ye olden days" (Apple ][ era) my first "computer teacher" was a teacher's assistant who had wrangled me (and a few other students) an hour a day each on the school's mostly otherwise un-used Apple ][e. He plopped us down in front of the thing with a stack of manuals, magazines, and floppy discs and let us have at it. "You wanna learn computer programming? You're gonna have to read..." :)
SO is infamous for treating question-askers badly.
I have used LLMs for some time, and have no intentions of ever going back to SO. I get tired of being insulted.
> The only stupid question is the one you don't ask.
- A poster on my old art teacher's studio wall.
StackOverflow intimidated me for reasons you say. What is it with the power trip that some of these forum mods have?
“Breaking the chain” is quite difficult, because it means not behaving in the manner that every cell in your body demands.
One of the reasons that I strive to behave, hereabouts, is that I feel the need to atone.
It can be quite difficult to hold my tongue/keyboard, though. I feel as if it’s good exercise.
A second major issue with SO is that answers decay over time. So a good answer back in 2014 is a junk answer today. Thus, I would get drive by downvotes on years old discussions, which is simply irritating.
So I quit SO and never bothered to answer another single question ever again.
SO has suffered from enshitification, and though I despise that term, it does sort of capture how sites like SO went from excellent resources into cesspools of filth and fools.
That LLMs are trained on that garbage is amusing.
With LLMs you can start with first principles, confirm basic knowledge (of course, it hallucinates but I find it's not that hard to verify things most of the time) or just get pointers where to dive deeper.
The problem with Stack Overflow is not that it makes you do the work—that’s a good thing—but that it’s too often too pedantic and too inattentive to the question to realise the asker did put in the work, explained the problem well, and the question is not a duplicate. The reason it became such a curmudgeonly place is precisely due to a constant torrent of people treating it like you described it, not putting in the effort.
It was horrible. Because it wasn't about "figuring things out for yourself." I mean, if the answer was available in a programming language or library manual, then debugging was easy.
No, the problem was you spent 95% of your debugging time working around bugs and unspecified behavior in the libraries and API's. Bugs in Windows, bugs in DLL's, bugs in everything.
Very frequently something just wouldn't work even though it was supposed to, you'd waste an entire day trying to get the library call to work (what if you called it with less data? what if you used different flags?), and then another day rewriting your code to use a different library call, and praying that worked instead. The amount of time utterly wasted was just massive. You didn't learn anything. You just suffered.
In contrast, today you just search from the problem you're encountering and find StackOverflow answers and GitHub issues describing your exact problem, why it's happening, and what the solution is.
I'm so happy people today don't suffer the way we used to suffer. When I look back, it seems positively masochistic.
TBF, hitting bugs in some framework you're using still happens. The problem wasn't eliminated, just moved to the next layer.
Those debugging skills are the most important part of working with legacy software (which is what nearly all industry workers work in). It sucks but is necessary for success in this metric.
My point is that I can frequently figure out how to work around them in 5 minutes rather than 2 days, because someone else already did that work. I can find out that a different function call is the answer, or a weird flag that doesn't do what the documentation says, or whatever it is.
And my problem of it taking two days to debug something is eliminated, usually.
Guess I'm just dumb then. I'm still taking days to work around some esoteric, underdocumented API issues in my day-to-day work.
The thing is, these APIs are probably just as massive as old-school OS codebases, so I'm always tripping into new landmines. I can be doing high-level gameplay stuff one week. Then the next week I need to figure out how authoring assets works, and the week after I'm performing physics queries to manage character state. All in the same API, which must span tens of millions of lines of code at this point.
There are tons of obfuscated Java jar libraries out there that are not upgradeable that companies have built mission critical systems around only to find out they can't easily move to JVM 17 or 25 or whatever and they don't like hearing that.
That way of life is gone for me. I've got a smartphone and I doomscroll to my detriment. What's new and fascinating to me is the models themselves. https://fi-le.net/oss/ currently trending here is the tip of a whole new area of study and work.
I have not, but at the beginner level you don't really need it; there are tons of tutorials and language documentation that are easier to understand. Also, beginners feel absolutely discouraged from asking anything, because even if the question is not a real duplicate, you use all the terms wrong, get downvoted to hell, and then your question is marked as a duplicate of something that doesn't even answer your question.
Later, it's quite nice to ask for clarification of, e.g., the meaning of something specific in a protocol or the behaviour of a particular program. But quite quickly you stop getting satisfying answers, so you resort to just reading the source code of the actual program and are surprised how easy that actually is. (I mean, it's still hard every time you start with a new unknown program, but it's easier than expected.)
Also, when you implement a protocol, asking questions on Stack Overflow doesn't scale. Not because of the time you need to wait for answers; even if that were zero, it still takes too long and is deeply unsatisfying to develop a holistic enough understanding to write the code. So you start reading the RFCs and quickly appreciate how logical and understandable they are. You first curse how unstructured everything is, and then you recognize that the order follows what you need to write and you can just trust the text and write the algorithm down. Then you see that the order in which the protocol is described actually works quite well for async, and you wonder what the legacy code did, because not deviating from the standard is actually easier.
At some point you don't understand the standard, there will be no answer on StackOverflow, the LLM just agrees with you for every conflicting interpretation you suggest, so you hate everything and start reading other implementations. So no, you still need to figure out a lot for yourself.
(1)https://archive.org/details/borland-turbo-pascal-6.0-1990
However, on the intrinsic value of these new tools when developing habits and skills, I just think about the irreplaceable role that the (early) internet played in my own development and how incredible an LLM would have been. People are still always impressed whenever a precocious high schooler YouTubes his way to an MVP SaaS launch -- I hope and expect the first batch of LLM-accompanied youth to emerge will have set their sights higher.
I don't know about that. The early internet taught me I still needed to put in work to find answers. I still had to read human input (even though I lurked) and realize a lot of information was lies and trolls (if not outright scams). I couldn't just rely on a few sites to tell me everything and had to figure out how to refine my search queries. The early internet was like being thrown into the wilderness in many ways: you pick up survival skills as you go, even if no one teaches you.
I feel an LLM would temper all the curiosity I gained in those times. I wouldn't have the discipline to use an LLM the "right way". Clearly many adults today don't either.
Maybe it has something to do with the purveyors of these products
- claiming they will take the jobs
- designing them to be habit-forming
- advertising them as digital mentats
- failing to advertise risks of using them
Sure, having a real-time data source is nice for avoiding construction/traffic, and I'd use a real-time map, but going beyond that to be spoon fed your next action over and over leads to dependency.
Then I can pretty quickly see whether my idea was a good one or not. It's so easy and quick to build tiny bespoke tools now, that I'm building them left and right.
Some stay with me and I use them regularly, the others I forget. But I didn't have to spend hours and hours building them so the time-cost is not an issue.
Not to say that apps aren't useful in replacing the paper map, or doing things like adding up the times required (which isn't new - there used to be tables in the back of many maps with distances and durations between major locations).
I always feel like they aren't even trying. You just mark a point where you are and a point where you want to go, draw a straight line, take the nearest streets, and then you can optimize ad libitum.
https://www.npr.org/2011/07/26/137646147/the-gps-a-fatally-m...
and for the way this mindset erodes values and skills:
https://www.marinecorpstimes.com/news/your-marine-corps/2018...
(And of course, idiotic behaviour... but GPS doesn't cause that.)
Overall GPS has been an absolutely enormous benefit for society with barely any downside other than nostalgia for map reading.
The ballad of John Henry was written in the 1840s
“Does the engine get rewarded for its steam?” That was the anti-automation line back then
If you gave up anything that was previously called “AI” we would not have computers, cars, airplanes or any type of technology whatsoever anywhere
Sure, and it was wrong because it turns out the conductor does get rewarded. Given train strikes that had to be denied as recently as a few years ago, it's clear that's an essential role 150 years later.
With how they want to frame AI as replacing labor, who's being rewarded long term for its thinking? Who's really being serviced?
Humans haven’t figured out how to include all humans and ecological systems into the same “tribe” and therefore the infighting between artificially segregated human groups, disconnected with ecological “externalities” which prevents a sustainable cooperative solution.
So most likely it will continue to be a small number of humans dominating the rest with increasingly powerful tools that reduce the number of humans required to act in active domination or displacement roles.
Humanity long ago decided that it's everyone for themselves, and encoded "might makes right" into ritual, mythology, and organizational formations and operations.
The tool itself doesn't matter, but the people are falling into the same cycle once again. I can see LLM's used ethically and carefully managed to assist the populace. That's clearly not what's happening and is the entire reason I'm against them. It's tiring being dismissed as a luddite just because I don't want big tech to yet again recklessly ransack the populace with no oversight.
How do you think you can break the cycle?
Do you have a suggestion for what you are going to do about it?
I’ve blitzed through the formerly famous Tokyo subway system mindlessly without a clue.
I have utterly no idea what the different US highways in my area are, but it's never really affected me besides being unable to join in mundane discussions of traffic on 95 or whatever.
More or less at the same time I found “Human Being: Reclaim 12 Vital Skills We’re Losing to Technology”, and the chapter on navigation hit me so hard I put the book down and refused to read any more until my navigation skills improved.
They're quite good now. I sit on the toilet staring at the map of my city, which I now know quite well. I no longer navigate with my phone.
I'm scared about the chapter on communication, which I'm going through right now.
I do think we're losing those skills, and offloading more thinking to technology will further erode your own abilities. Perhaps you think you'll spend more time in high-cognition activities, but will you? Will all of us?
Don't leave us hanging: what were they saying?
When I can get a full time job again, I plan to. I was trying to learn how to 3d model before the tech scene exploded 3 years ago. I'm probably not trying to take back all 12 factors (I'm fine with where my writing is as of now, even if it is subpar), but I am trying to focus on what parts are important to me as a person and not take any shortcuts out of them.
I grew up with a glove box full of atlases in my car. On one job, I probably spent 30 minutes a day planning the ~4h of driving I'd do daily to different sites. Looking up roads in indexes, locating grid numbers, finding connecting roads spanning pages 22-23, 42-43, 62-63, and 64-65. Marking them and trying not to get confused with other markings I'd made over the past months. Getting to intersections and having no idea which way to turn because the angles were completely different from on the map (yes this is a thing with paper maps) and you couldn't see any road signs and the car behind you is honking.
What a waste of time. It didn't make me stronger or smarter or a better person. I don't miss it the same way I don't miss long division.
Yes, it did.
Planning routes isn't exactly rocket science. There's not much to learn. It just takes a lot of time. It's busywork.
And people can't go that far to begin with. That's the scary part.
These little things we think of as insignificant add up and give us our ability to think. They change how we perceive and navigate (no pun intended) the world. Letting one or two of these factors rust probably won't cost us, but how far off are we really from the WALL-E future where we automate away all our cognition, our spatial reasoning, and our curiosity?
I think we're nowhere close to WALL-E, nor are we headed in that direction. For everything that becomes easier, new harder skills become more important.
I'll ask point blank, then: what new "hard skills" are becoming more important in the short and mid terms that you see on the horizon? My biggest fears are that the technocrats very much want to raise a generation of "sheep" dependent on them to think. They don't need thinkers, only consumers.
And then communication, management, and people skills become more important each year. That's not stopping. It's only becoming more valuable, and a lot of people need to get a lot better at it.
Being an effective software developer is going to get much more challenging, skills-wise, over the next couple decades as productivity expectations rise exponentially.
And this is going to be the same in every knowledge work field. People will be using AI to orchestrate and supervise 20x the amount of work, and that's an incredibly demanding skill set.
I heard this a decade ago as well (replace AI agents with distributed cloud clusters). Instead, it seems like industry wants to kick out all the expertise and outsource as much grunt work as possible to maintain what is already there. So I'm not too optimistic that the industry will be looking for proper architects. We're pushing more CRUD than ever under the guise of cutting-edge tech.
We're not working smarter, we're trying to work cheaper. We'd need a huge cultural shift to really show me that this won't be even more true in 10 years. That's why I'm slowly trying to pivot to a role not reliant on such industry practices.
I'm just lucky in that I've always had a sense of direction ever since I was little. It's not a skill I've ever had to develop. There's nothing to "get better at". Some people just seem to be born with it, and I got lucky.
Maybe the place to draw the line is different for each individual and depends on if they're really spending their freed-up time doing something useful or wasting it doing something unproductive and self-destructive.
I can't say that I'm totally unaffected by contemporary technology, and my attention span seems to have suffered a little, but I think I'm mostly still intact. I read most days for pleasure and have started a book club. I deliberately take long walks without bringing my smartphone; it's a great feeling of freedom, almost like going back to a simpler time.
164 more comments available on Hacker News