MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline
Posted 4 months ago · Active 4 months ago
publichealthpolicyjournal.com · Research · story · High profile
skeptical · mixed
Debate: 80/100
Key topics
AI
Cognitive Decline
LLMs
A study from MIT found that using AI for essay writing tasks led to cognitive decline, sparking debate among commenters about the study's methodology and the implications of relying on AI for cognitive tasks.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 43m after posting
Peak period: 138 comments in 0-6h
Avg / period: 26.7
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
- 01 Story posted: Sep 3, 2025 at 8:06 AM EDT (4 months ago)
- 02 First comment: Sep 3, 2025 at 8:49 AM EDT (43m after posting)
- 03 Peak activity: 138 comments in 0-6h (hottest window of the conversation)
- 04 Latest activity: Sep 7, 2025 at 10:00 AM EDT (4 months ago)
ID: 45114753 · Type: story · Last synced: 11/22/2025, 11:47:55 PM
On that note, reading the ChatGPT-esque summary in the linked article gave me more brain damage than any AI I've used so far
Like everything else in our life, cognition is "use it or lose it". Outsourcing your decision making and critical thinking to a fancy autocomplete with sycophantic tendencies and no capacity for reasoning sure is fun, but as the study found, it has its downsides.
Over the last three years or so, I have seen more and more posts where the position just doesn't make sense. I mean, ten years ago, there were posts on HN that I disagreed with that I upvoted anyway, because they made me think. That has become much more rare. An increasing number of posts now are just... weird (I don't know a better word for it). Not thoughtful, not interesting (even if wrong), just weird.
I can't prove that any of them are AI-generated. But I suspect that at least some of them are.
I wouldn't call it "cognitive decline", more "a less deep understanding of the subject".
Try solving bugs in your vibe-coded projects... It's painful: you haven't learned anything while building it, and as a result you don't fully grasp how your creation works.
LLMs are tools, but also shortcuts, and humans learn by doing ¯\_(ツ)_/¯
This is pretty obvious to me after using LLMs for various tasks over the past years.
I am offended by coworkers who submit incompletely considered, visibly LLM generated code.
These coworkers are dragging my team down.
> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written.
> In contrast, 88.9% of Search and Brain-only users could quote accurately.
> 0% of LLM users could produce a correct quote, while most Brain-only and Search users could.
Reminds me of my coworkers who have literally no idea what Chat GPT put into their PR from last week.
Could a person, armed with ChatGPT, come up with a better solution to a real-world problem than without ChatGPT? Maybe that's what actually matters.
I think we need a return to the apprentice style of institution, where people try to create the best real-world solution possible with LLMs, 3D printers, etc., and then use recorded college courses the way our grandparents used books.
But how can they discuss the content if even the "writer" does not remember what they wrote?
Given that AI is literally just words on a monitor, just like the rest of the internet, I have a strong prior that it's not "reprogram[ming]" anyone's mind, at least not in any manner beyond what, e.g., heavy Reddit use might.
We have decades of research - brain scans, studies, experiments, imaging, stimuli responses, etc - proving that when a human no longer has to think about performing a skill, that skill immediately begins to atrophy and the brain adapts accordingly. It’s why line workers at McDonalds don’t actually learn how to properly cook food (it’s all been procedured-out and automated where possible to eliminate the need for critical thinking skills, thus lowering the quality of labor needed to function), and it’s why - at present - we’re effectively training a cohort of humans who lack critical thinking and reasoning skills because “that’s what the AI is for”.
This is something I've known about long before the current LLM craze, and it's why I've always been wary of, or hostile to, "aggressively helpful" tools like some implementations of autocorrect, or some driving aids: I am not just trying to do a thing quickly, I am trying to do it well, and that requires repeatedly practicing a skill in order to improve.
Studies like these continue to support my anxiety that we’re dumbing down the best technical generation ever into little more than agent managers and prompt engineers who can’t solve their own problems anymore without subscribing to an AI service.
My point is that I don't see LLM's effect on the brain as being anything more than the normal experience we have of living and that the level of drama the headline suggests is unwarranted. I don't believe in infohazards.
Might they result in skill atrophy? For sure! But it's the same kind of atrophy we saw when, e.g. transitioning from paper maps to digital ones, or from memorizing phone numbers to handing out email addresses. We apply the neurons we save by no longer learning paper map navigation and such to other domains of life.
The process has been ongoing since homo erectus figured out that if you bang a rock hard enough, you get a knife. So what?
Now, you could argue that, when we use AI, critical thinking skills are more important, because we have to check the output of a tool that is quite prone to error. But in actual use, many people won't do that. We'll be back at "Computers Do Not Lie" (look for the song on Youtube if you're not familiar with it), only with a much higher error rate.
Because of studies like this we know the burning of fossil fuels is a dead-end for us and our climate, and due to that have developed alternative methods of generating energy.
And the study actually proved that LLM usage reprograms your brain and makes you a dumbass. Social media usage does as well; those two things are not exclusive. If anything, their effects compound on an already pretty dumb and gullible population. So if your argument is 'but what about Reddit', that's a non-argument called 'whataboutism'. Look it up, and hopefully it will give you a hint as to why you are getting downvoted.
There have been three recent studies showing that:
- 1. 95% LLM projects fail in the enterprise https://fortune.com/2025/08/18/mit-report-95-percent-generat...
- 2. Experienced developers get 19% less productive when using an LLM https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
- 3. LLM usage makes you dumber https://publichealthpolicyjournal.com/mit-study-finds-artifi...
We have reached a stage where people on the internet mistake their opinion on a subject for being as relevant as a study on that subject.
If you don't have another study, or haven't done the science to disprove this one, how come you so easily dismiss a study that actually took time, data, and the scientific method to reach a conclusion? I feel we have to actively and firmly call out that kind of behavior and ridicule it.
If the Victorians had scientific studies showing that, you might have a point. Instead, you just have a flawed analogy.
And, why the scare quotes? If you can point to some actual flaws in the study, do so. If not, you're just dismissing a study that you don't agree with, but you have no actual basis for doing so. Whereas the study does give us a basis for accepting its conclusions.
N=54, students and academics only (mostly undergrad), impossible to blind, and, worst of all, the conclusion of the study supports a certain kind of anti-technology moralizing that people want to do anyway. I'd be shocked if it replicated, and even if it did, it wouldn't mean much concretely.
You could run the same experiment comparing paper maps versus Google Maps in a simulated navigation scenario. I'd bet the paper map group would score higher on various comprehension metrics. So what? Does that make digital maps bad for us? That's the implication of the article, and I don't think the inference is warranted.
But didn’t pocket calculators present the same risk / panic?
>Everyone Is Cheating Their Way Through College. ChatGPT has unraveled the entire academic project.
https://archive.ph/ZKZiY
https://nypost.com/2025/08/19/world-news/china-restricts-ai-...
"That’s because the Chinese Communist Party knows their youth learn less when they use artificial intelligence. Surely, President Xi Jinping is reveling in this leg up over American students, who are using AI as a crutch and missing out on valuable learning experiences as a result.
It’s just one of the ways China protects their youth, while we feed ours into the jaws of Big Tech in the name of progress."
https://www.scmp.com/tech/policy/article/3323959/chinas-soci...
Sure you do, and maybe it's really an actual benefit for ya. Not for most, though. For young folks still going through education, this is devastating. If I didn't have kids I wouldn't care (less quality competition at work), but I do (too young to be affected by it now, and by the time they are allowed to use these, frameworks for use and restrictions will be in place already).
But since maybe 30% of the folks here are directly or indirectly dependent on LLMs being pushed down every possible throat and then some, I expect much more denial and resistance to critique of their little pets or investments.
My optimistic take is that the rise of AI in education could cause more workplaces to move away from "must have xyz degree" and actually determine if the candidate has the skills needed.
For this reason, I don't feel as optimistic as you do. I worry instead that equality gaps will widen significantly: there will be the majority which abuses AI and graduates with empty brains, and there will be the minority who somehow manage to avoid doing that (e.g. lucky enough to have parents with sufficient foresight to take preventative measures with their children).
LLMs may end up being both educationally valuable in certain contexts for certain users, and totally unsuitable for developing brains. I would err towards caution for young minds especially.
Let's say I'm a writer of no skill who still wants attention. I could spend years learning to write better, but I still might not get any attention.
Or I could use AI to write something today. It won't be all that interesting, because AI still can't write all that well, but it may be better than I can do on my own, and I can get attention today.
If you care about your own growth (or even not dwindling) as a human, that's a trap. But not everyone cares about that...
Don’t sugarcoat it. Tell us how you really feel.
Probably both are true: you should try them out and then use them where they are useful, not for everything.
None of my professional life reflects that whatsoever. When used well, LLMs are exceptional at putting out large amounts of code of sufficient quality. My peers have switched entire engineering departments to LLM-first development and are reporting that the whole org is moving 2x as fast, even after they fired the 50% of devs who couldn't make the switch and didn't hire replacements.
If you think LLM coding is a fad, your head is in the sand.
It used to take me days or even multiple sprints to complete large-scale infrastructure projects, largely because of having to repeatedly reference Terraform cloud provider docs for every step along the way.
Now I use Claude Code daily. I use an .md to describe what I want in as much detail as possible and with whatever idiosyncrasies or caveats I know are important from a career of doing this stuff, and then I go make coffee and come back to 99% working code (sometimes there are syntax errors due to provider / API updates).
I love learning, and I love coding. But I am hired to get things done, and to succeed (both personally and in my role, which is directly tied to our organization's security, compliance, and scalability) I can't spend two weeks on my pet projects for self-edification. I also have to worry about the million things that Claude CAN'T do for me yet, so whatever it can take off of my plate is priceless.
I say the same things to my non-tech friends: don't worry about it 'coming for your job' yet - just consider that your output and perceived worth as an employee could benefit greatly from it. If it comes down to two awesome people but one can produce even 2x the amount of work using AI, the choice is obvious.
For this kind of low-stakes, easily verifiable task, it's hard for me to argue against using LLMs.
I have no doubt that volumes of code are being generated and LGTM'd.
But for it to be useful, you have to already know what you're doing. You need to tell it where to look. Review what it does carefully. Also, sometimes I find particular hairy bits of code need to be written completely by hand, so I can fully internalise the problem. Only once I've internalised hard parts of codebase can I effectively guide CC. Plus there's so many other things in my day-to-day where next token predictors are just not useful.
In short, it's useful, but no one's losing a job because it exists. Also, the idea of having non-experts manage software systems at any moderate-and-above level of complexity is still laughable.
https://edition.cnn.com/2025/08/27/us/alaska-f-35-crash-acci...
Like any new tool that automates a human process, humans must still learn the manual process to understand the skill.
Students should still learn to write all their code manually and build things from the ground up before learning to use AI as an assistant.
personally I think everyone should shut up
1. This is arXiv - before publication or peer review. Grain of salt.[0]
2. 18 participants per cohort
3. 54 participants total
Given the low N and the likelihood that this is drawn from 18-22 year olds attending MIT, one should expect an uphill battle for replication and for generalizability.
Further, they are brain scanning during the experiment, which is an uncomfortable/out-of-the-norm experience, and the object of their study is easy to infer if not directly known by the population (the person being studied using LLM, search tools, or no tools).
> We thus present a study which explores the cognitive cost of using an LLM while performing the task of writing an essay. We chose essay writing as it is a cognitively complex task that engages multiple mental processes while being used as a common tool in schools and in standardized tests of a student's skills. Essay writing places significant demands on working memory, requiring simultaneous management of multiple cognitive processes. A person writing an essay must juggle both macro-level tasks (organizing ideas, structuring arguments), and micro-level tasks (word choice, grammar, syntax). In order to evaluate cognitive engagement and cognitive load as well as to better understand the brain activations when performing a task of essay writing, we used Electroencephalography (EEG) to measure brain signals of the participants. In addition to using an LLM, we also want to understand and compare the brain activations when performing the same task using classic Internet search and when no tools (neither LLM nor search) are available to the user.
[0] https://arxiv.org/pdf/2506.08872
I would describe the study size and composition as a limitation, and a reason to pursue a larger and more diverse study for confirmation (or lack thereof), rather than a reason to expect an "uphill battle" for replication and so forth.
Maybe. I believe we both agree it is a critical gap in the research as-is, but whether it is a neutral item or an albatross is an open question. Much of psychology and neuroscience research doesn't replicate, often because of the limited sample size / composition as well as unrealistic experimental design. Your approach of deepening and broadening the demographics would attack generalizability, but not necessarily replication.
My prior puts this on an uphill battle.
Generally, yes, low N is unequivocally worse than high N in supporting population-level claims, all else equal. With fewer participants or observations, a study has lower statistical power, meaning it is less able to detect true effects when they exist. This increases the likelihood of both Type II errors (failing to detect a real effect) and unstable effect size estimates. Small samples also tend to produce results that are more vulnerable to random variation, making findings harder to replicate and less generalizable to broader populations.
In contrast, high-N studies reduce sampling error, provide more precise estimates, and allow for more robust conclusions that are likely to hold across different contexts. This is why, in professional and academic settings, high-N studies are generally considered more credible and influential.
In summary, you really need a large effect size for low-N studies to be high quality.
The study showed that 0 of the AI users could recall a quote correctly, while more than 50% of the non-AI users could.
A sample of 54 is far, far larger than is necessary to say that an effect that large is statistically significant.
There could be other flaws, but given the effect size you certainly cannot say this study was underpowered.
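As a rough illustration of that point, here's a back-of-the-envelope check (my own sketch, not the paper's analysis: it assumes 18 participants per cohort and reconstructs counts from the quoted percentages). A Fisher's exact test on the recall result comes out overwhelmingly significant:

    # Back-of-the-envelope check; the 18-per-cohort counts are reconstructed
    # from the quoted percentages, not taken from the paper's data.
    from scipy.stats import fisher_exact

    #              [quoted correctly, could not quote]
    llm_group   = [0, 18]    # 0% of LLM users produced a correct quote
    brain_group = [16, 2]    # ~88.9% of Brain-only users could quote accurately

    odds_ratio, p_value = fisher_exact([llm_group, brain_group])
    print(f"two-sided p-value: {p_value:.1e}")  # tiny; N=54 is ample for an effect this large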
- α = 0.05: 11 people per cohort
- α = 0.01: 16 people per cohort
- α = 0.001: 48 people per cohort
So they do clear the effect size bar for that particular finding at the 99% level, though not quite the 99.9% level. Further, selection effects matter -- are there any school-cohort effects? Is there a student bias (i.e. would a working person of the same age, or someone from a different culture or background, see the same effect)? Were the control and test groups truly random? etc. -- all of which would need a larger N to overcome.
So for students from the handful of colleges they surveyed, they identified the effect, but again, it's not bulletproof yet.
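For what it's worth, here is a minimal sketch (mine, not the paper's) of how per-cohort sample-size figures like these can be produced for a two-proportion comparison; the assumed recall rates (roughly 50% vs 5%) and the 80% power target are illustrative, so the resulting numbers depend entirely on those assumptions:

    # Illustrative power calculation for comparing two proportions.
    # The assumed proportions and the 80% power target are my own assumptions.
    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    effect = proportion_effectsize(0.50, 0.05)  # Cohen's h for ~50% vs ~5% recall

    solver = NormalIndPower()
    for alpha in (0.05, 0.01, 0.001):
        n = solver.solve_power(effect_size=effect, alpha=alpha, power=0.80,
                               alternative="two-sided")
        print(f"alpha={alpha}: ~{n:.0f} participants per cohort")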
But it turns out I misread the paper. It was actually an 80% effect size, so there is a greater than 99.9% chance of it being a real effect.
Of course it could be the case that there is something different about young college students that makes them react very, very differently to LLM usage, but I wouldn't bet on it.
I wouldn’t bet on that being the case.
If the computer writes the essay, then the human that’s responsible for producing good essays is going to pick up new (probably broader) skills really fast.
This study showed an enormous effect size for some effects, so large that there is a 99.9% chance that it’s a real effect.
Science should become a marketplace of ideas. Your other criticisms are completely valid. Those should be what’s front and center. And I agree with you. The conclusions of the paper are premature and designed to grab headlines and get citations. Might as well be posting “first post” on slashdot. IMO we should not see the current standard of peer review as anything other than anachronistic.
Absolutely not. I am an advocate for peer review, warts and all, and find that it has significant value. From a personal perspective, peer review has improved or shot down 100% of the papers that I have worked on -- which to me indicates its value in ensuring that good ideas with merit make it through. Papers I've reviewed are similarly improved -- no one knows everything, and it's helpful to have others with knowledge add their voice, even when the reviewers also add cranky items.[0] I would grant that it isn't a perfect process (some reviewers and editors are bad, some steal ideas) -- but that is why the marketplace of ideas exists across journals.
> Science should become a marketplace of ideas.
This already happens. The scholarly sphere is the savanna when it comes to resources -- it looks verdant and green but it is highly resource constrained. A shitty idea will get ripped apart unless it comes from an elephant -- and even then it can be torn to shreds.
That it happens behind paywalls is a huge problem, and the incentive structures need to be changed for that. But unless we want blatant charlatanism running rampant, you want quality checks.
[0] https://x.com/JustinWolfers/status/591280547898462209?lang=e... if a car were a manuscript
The only advantage of closed peer review is that it saves slight scientific embarrassment. However, that is a natural part of taking risks, of course, and risky science is great.
P.S. In this case I really don't like the paper or its methods. However, open peer review is good for science.
Actually, from my recollection, it was debunked pretty quickly by people who read the paper, because the paper was hot garbage. I saw someone point out that its graph of resistivity showed higher resistance than copper wire. It was no better than any of the other claimed room-temperature superconductor papers that came out that year; it merely managed to catch virality on social media and therefore drove people to attempt to reproduce it.
Ironically, I am waiting for AI to start automating the process of teasing apart obvious pencil-whipping, back-scratching, buddy-bro behavior. Some believe falsified papers and pencil-whipped reviews are in the 1% range. I expect it to be significantly higher, based on reading NIH papers for a long time in an attempt to actually learn things. I've reported the obvious shenanigans, and sometimes papers are taken down, but there are so many bad incentives in this process that I predict it will only get worse.
This also ignores the fact that you can find a paper to support nearly everything if one is willing to link people "correlative" studies.
So it's possible to be both skeptical of how well these results generalize (and call for further research), but also heed the warning: AI usage does appear to change something fundamental about our cognitive processes, enough to give any reasonable person pause.
The scenario I am thinking of is academic A submitting a manuscript to an academic journal, which gets passed on by the journal editor to a number of reviewers, one of whom is academic B. B has a lot on their plate at the moment, but sees a way to quickly dispose of the reviewing task, thus maintaining a possibly illusory 'good standing' in the journal's eyes, by simply throwing the manuscript to an LLM to review. There are (at least) two negative scenarios here: 1. The paper contains embedded (think white text on a white background) instructions left by academic A for any LLM reading the manuscript to view it in a positive light, regardless of how well the described work has been conducted. This has already happened IRL, by the way. 2. Academic A didn't embed LLM instructions, but receives the review report, which shows clear signs that the reviewer either didn't understand the paper, gave unspecific comments, highlighted only typos, or simply used phrasing that seems artificially generated. A now feels aggrieved that their paper was not given the attention and consideration it deserved by an academic peer, and now has a negative opinion of the journal for (seemingly) allowing the paper to be LLM-reviewed. And just as journals will have great difficulty filtering for LLM-generated manuscripts, they will also find it very difficult to filter for LLM-generated reviewer reports.
Granted, scenario 2 already happens with only humans in the loop (the dreaded 'Reviewer 2' academic meme). But LLMs can only make this much much worse.
Both scenarios destroy trust in the whole idea of peer-reviewed science journals.
I don’t know the percentage of people who are still critically thinking while using AI tools, but I can first hand see many students just copy pasting content to their school work.
Perhaps the issue of cognitive decline comes from sitting there vegetating rather than applying themselves during all that additional spare time.
Although my experience using LLMs has perhaps been different, my mind still tires at work. I'm still having to think about the bigger questions; it's just less time spent on the grunt work.
The push for these tools is to increase productivity. What spare time is there to be had if now you're expected to produce 2-3X the amount of code in the same time frame?
Also, I don't know if you've gotten outside of the software/tech bubble, but most people already spend 90% of their free time glued to a screen. I'd wager the majority of critical thinking people experience on a day to day basis is at work. Now that we may be automating that away, I bet you'll see many people cease to think deeply at all!
Our bodies naturally adjust to what we do. Do things and your body reinforces that, enabling you to do even more advanced versions of those things. Don't do things and your skill or muscle in them tends to atrophy over time. Asking LLMs to (as in this case) write an essay is always going to be orders of magnitude easier than actually writing an essay. And so it seems fairly self-evident that using LLMs to write essays would gradually degrade your own ability to do so.
I mean it's possible that this, for some reason, might not be true, but that would be quite surprising.
What is reported as cognitive decline in the paper might very well be cognitive decline. It could also be alternative routing focused on higher abstractions, which we interpret as cognitive decline because the effect is new.
I share your concern, for the record, that people become too attached to LLMs for generation of creative work. However, I will say it can absolutely be used to unblock and push more through. The quality versus quantity balance definitely needs consideration (which I think they are actually capturing vs. cognitive decline) -- the real question to me is whether an individual's production possibility frontier is increased (which means more value per person -- a win!), partially negative in impact (use with caution), or decreased overall (a major loss). Cognitive decline points to the latter.
The problem is that a headline that people want to believe is a very powerful force that can override replication and sample size and methodology problems. AI rots your brain follows behind social media rots your brain, which came after video games rot your brain, which preceded TV rots your brain. I’m sure TV wasn’t even the first. There’s a long tradition of publicly worrying about machines making us stupider.
One confounding problem with the argument that TV and video games made kids dumber is the Flynn Effect. https://en.wikipedia.org/wiki/Flynn_effect
Which I believe still does have a large grain of truth.
These things can make us simultaneously dumber and smarter, depending on usage.
Writing leads to the rapid decline in memory function. Brains are lazy.
Ever travel to a new place and the brain pipes up with: ‘this place is just like ___’? That's the brain's laziness showing itself. The brain says: ‘okay, I solved that, go back to rest.’ The observation is never true; never accurate.
Pattern recognition saves us time and enables us to survive situations that aren't readily survivable. Pattern recognition also leads to shortcuts that do humanity a disservice.
Socrates recognized these traits in our brains and attempted to warn humanity of the damage these shortcuts do to our reasoning and comprehension skills. In Socrates' day it was not unheard of for a person to memorize their entire family tree, or to memorize an entire treatise and quote from it.
Humanity has -overwhelmingly- lost these abilities. We rely upon our external memories. We forget names. We forget important dates. We forget times and seasons. We forget what we were just doing!!!
Socrates had the right of it. Writing makes humans stupid. Reduces our token limits. Reduces paging table sizes. Reduces overall conversation length.
We may have more learning now, but what have we given up to attain it?
Your comment reminded me of this (possibly spurious) quote:
>> An Assyrian clay tablet dating to around 2800 B.C. bears the inscription: “Our Earth is degenerate in these later days; there are signs that the world is speedily coming to an end; bribery and corruption are common; children no longer obey their parents; every man wants to write a book and the end of the world is evidently approaching.”[0]
Same as it ever was. [1]
[0] https://quoteinvestigator.com/2012/10/22/world-end/
[1] https://www.youtube.com/watch?v=5IsSpAOD6K8
People have also been complaining about politicians for hundreds of years, and about the ruling class for millennia as well. And the first written math mistake was about beer feedstock, so maybe it's all correlated.
Additionally, the original paper uses the term "cognitive debt", not "cognitive decline", which may have important ramifications for interpretation and conclusions.
I wouldn’t be surprised to see similar results in other similar types of studies, but it does feel a bit premature to broadly conclude that all LLM/AI use is harmful to your brain. In a less alarmist take: this could also be read to show that AI use effectively simplifies the essay writing process by reducing cognitive load, therefore making essays easier and more accessible to a broader audience but that would require a different study to see how well the participants scored on their work.
Writing is an important form of learning and this clearly shows LLM assisted writing doesn’t provide that benefit.
The question is how well your assumption holds true that learning to write generalizes to "an important form of learning".
In much the same way chess engines make competitive chess accessible to a broader audience. :)
There was a “brain” group who did three sessions of essay writing, and on the fourth session they used ChatGPT. The paper's authors said that during the fourth session, the brain group's EEG was higher than the LLM group's EEG when they also used ChatGPT.
I interpret this as the brain group did things the hard way and when they did things the easy way, their brains were still expecting the same cognitive load.
But isn't the point of writing an essay the quality of the essay? The supposedly brain-damaged LLM group still produced an essay for session 4 that was graded “high” by both AI and human judges, but it was faulted for having “stood out less” in terms of distance in n-gram usage compared to the other groups. I think this is making a mountain out of a very small molehill.
Most of the things you write in an educational context are about learning, not about producing something of value. Productivity in a learning context is usually the wrong lens. The same thing is true IMO for learning on the job, where it is typically expected that productivity will initially be low while experience is low, but should increase over time.
An equally valid conclusion is "People are Lazier at Writing Essays When Provided with LLMs".
4. This is clickbait research, so it's automatically less likely to be true.
5. They are touting obvious things as if they are surprising, like the fact that you're less likely to remember an essay that you got something else to write, or that the ChatGPT essays were verbose and superficial.
> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written
Not sure why you need to wire up an EEG; it's pretty obvious that they simply did _not_ write the essay, the LLM did it for them, and they likely didn't even read it, so there is no surprise that they don't remember what never properly passed through their own thinking apparatus.
The idea that I would say 'write an essay on X' and then never look at the output is kind of wild. I guess that's vibe writing instead of vibe coding.
https://www.ncbi.nlm.nih.gov/search/research-news/3283/
>the gentle, childlike Eloi and the subterranean, predatory Morlocks.
Seems like a nice metaphor for the current two political parties we are provided with.
Wikipedia lists several. Do you recall which you read?
https://en.wikipedia.org/wiki/The_Time_Machine#Comics
My Mom was a special ed teacher and they were in her classroom as a set. I would go read them after school. Google Gemini suggested Classics Illustrated but I don't think that is it. These were black and white and cheaper than that. Something a teacher would have in their classroom.
Edit: Upon chiding Google Gemini and reminding it that it was black and white I think it found it!
Pocket Classics Comics From 1984.
https://gentlyhewstone.wordpress.com/2016/06/02/pocket-class...
https://www.ebay.com/itm/286295230816
Score one for AI because google search never found those for me.
Because the people around you affect your life. Presumably you don’t want to live in a world of stupid people who are incapable of critical thought or doing anything which are not direct instructions from a machine. Think about it every time you are frustrated by your interaction with a system you have no choice but to use, such as a bank or a government branch.
John Green has a quote which I think fits, even if it's about paying taxes for public education rather than LLM use: https://www.goodreads.com/quotes/1390885-public-education-do...
There will always be people who misuse something, but we should not hurt those who do not. Same with drugs. There are functional junkies who know when to stop, go on a tolerance break, take just enough of a dose and so forth, vs. the irresponsible ones. The situation is quite similar and I do not want AI to be "banned" (assuming it could) because of people who misuse LLMs.
People, let us have nice things.
As for the article... didn't they say the same thing about search engines and Wikipedia? Remember how preparing to cheat actually helps us learn (by writing down the things you want to cheat with)? The problem is, people do not even bother reading the output of the LLM, and that is on them.
The internet was supposed to be this wonderful free place with all information available and unbiased, not the cesspool of scams and tracking that makes 1984 look like a fairytale for children. Atomic energy was supposed to free mankind from the everlasting struggle for energy, end wars and whatnot. LLMs were supposed to be X and not Y, and used as Z and not BBCCD.
Weighing what the population loses overall against what's gained (really, what? a mild increase in efficiency, sometimes experienced at the individual level, sometimes made up for PR), I consider these LLMs a net loss for mankind as a whole.
The above should tell you something about human nature, and how naive some of the brightest of us are.
If it is a human nature issue (with which I agree), then we are in deep shit, and this is why we cannot have nice things.
Educate, and if that fails, then punish those who "misuse" it. I do not have a better idea. It works for me quite well for coding, and it will continue to work as long as it is not going to get nerfed.
Well, cheers to an even bigger gap between the elite who can afford a good education and upbringing, and the cheap, crappy rest. A number of sci-fi novels come to mind where poor, semi-mindless masses are governed by 'educated' elites. I always wondered how badly such a society must have screwed up in the past to end up like that. Nope, the road to hell is indeed paved with good intentions and small little steps which seem innocent or even beneficial on their own, in their time.
Rather than coming up with the right answers?
Wouldn't that be the expected result here? Less knowledge, more questions?
When I use LLMs, it’s less about patching holes in my memory and more about taking an idea a few steps further than I otherwise might. For me it’s expanding the surface area of inquiry, not shrinking it. If the study’s thesis were true in my case, I’d expect to be less curious, not more.
Now that said I also have a healthy dose of skepticism for all output but I find for the general case I can at least explore my thoughts further than what I may have done in the past.
I don't have a dog in this fight, but "asking more questions" could be evidence of cognitive decline if you're having to ask more questions than ever!
It's easy to twist evidence to fit biases, which is why I'd withhold judgment until better evidence comes through.
Personally, I find myself often asking AI about things I wouldn't have been bothered to find out about before.
For example, I've always noticed these funny little grates on the outside of houses near me and wondered what they are. Googling "little grates outside houses" doesn't help at all. Give AI a vague-ish description and it instantly tells you they are old boot scrapers.
Maybe there is a movie in the back of my head or a song. Typical search engine queries would never find it. I can give super vague references to a LLM and with search enabled get an answer that’s correct often enough.
If I’m constantly asking “what does this mean again?” that would signal decline. But if I’m asking “what if I combine this with X?” or “what are the tradeoffs of Y?” that feels like the opposite: more engagement, not less.
That’s why I’m skeptical of blanket claims from one study, the lived experience doesn’t map so cleanly.
But if I'm teaching a class, and one student keeps asking questions that they feel the material raised, I don't tend to think "brain damage". I think "engaged and interested student".
https://youtu.be/omYP8IUXQTs?si=SgehtLWjnNho5MR6
Most importantly, I did not remember anything (which is a good thing because half of the output is wrong). I then switched to Stackoverflow etc. instead of the "AI". Suddenly my mental maps worked again, I recalled what I read, programming was fun again, the results were correct and the process much faster.
406 more comments available on Hacker News