Enrollment at Trade Schools Is Expected to Grow
Key topics
As AI continues to advance, junior devs are wondering if they've made a mistake by pursuing a career in software engineering, but commenters reassure them that the world still needs skilled coders and that AI's limitations will keep them relevant. The discussion pivots to the bigger question: will AI eventually replace not just software engineers, but all labor, including trade jobs? While some predict a robotics-driven apocalypse, others argue that new industries and opportunities will emerge as people learn to harness AI, rendering the job market more dynamic than fixed. The debate highlights the uncertainty surrounding AI's growth rate, with some pointing to logarithmic rather than exponential progress.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 48m after posting
Peak period: 75 comments (0-12h)
Avg / period: 13.8
Based on 83 loaded comments
Key moments
- Story posted: Aug 30, 2025 at 6:47 PM EDT (4 months ago)
- First comment: Aug 30, 2025 at 7:35 PM EDT (48m after posting)
- Peak activity: 75 comments in 0-12h, the hottest window of the conversation
- Latest activity: Sep 5, 2025 at 4:15 AM EDT (4 months ago)
In the meantime, keep learning and practicing CS fundamentals, ignore the hype, and build something interesting.
I don't really agree with the reasoning [1], and I don't think we can expect this same rate of progress indefinitely, but I do understand the concern.
[1] https://en.wikipedia.org/wiki/Jevons_paradox
If software falls, everything falls.
But as we've seen, these models can't do the job themselves. They're best thought of as an exoskeleton that requires a pilot. They make mistakes, and those mistakes multiply into a mess if a human isn't around. They don't get the big picture, and it's not clear they ever will with the current models and techniques.
The only field that has truly been disrupted is graphic design and art. The image and video models are sublime and truly deliver 10,000x speed, cost, and talent reductions.
This is probably for three reasons:
1. There's so much straightforward training data
2. The laws of optics and structure seem correspondingly easier than the rules governing intelligence. Simple animals evolved vision hundreds of millions of years ago, and we already have all the math and algorithmic implementations. Not so for intelligence.
3. Mistakes don't multiply. You can touch up the canvas easily, and the deliverable is a far smaller artifact than, say, a 100k LOC program with failure modes.
I don’t think that follows at all. Robotics is notably much, much, much harder than AI/ML. You can replace programmers without robotics. You can’t replace trades without them.
Are you so sure?
Almost every animal has solved locomotion, some even with incredibly primitive brains. Evolution knocked this out of the park hundreds of millions of years ago.
Drosophila can do it, and we've mapped their brains.
Only a few animals have solved reasoning.
I'm sure the robotics videos I've seen lately have been cherry picked, but the results are nothing short of astounding. And there are now hundreds of billions of dollars being poured into solving it.
I'd wager humans stumble across something evolution had a cakewalk with before they stumble across the thing that's only happened once in the known universe.
https://en.m.wikipedia.org/wiki/Moravec%27s_paradox
https://harimus.github.io/2024/05/31/motortask.html
Edit: just to specifically address your argument, doing something evolution has optimized for hundreds of millions of years is much harder than something evolution “came up with” very recently (abstract thought).
You've got this backwards.
If evolution stumbled upon locomotion early (and several times independently, through convergent evolution), that means it's an easy problem, relatively speaking.
We've come up with math and heuristics for robotics (just like vision and optics). We're turning up completely empty for intelligence.
> Only a few animals have solved reasoning.
The assumption here seems to be that reasoning will be able to do what evolution did hundreds of millions of years ago (with billions of years of work put into that doing) much more easily than evolution did, for some reason that is never actually expressed.
Logically, I should also note that given the premises laid out in the first quoted paragraph, the second shouldn't be "only a few animals have solved reasoning"; it should be "evolution has only solved reasoning a few times."
We also should end the exploitative nature of globalization. Outsourced work should be held to the same standards as labor in developed countries (preferably EU, rather than American, standards).
ETA:
You updated your post and I think I agree with most of what you said after you updated.
All relevant and recent evidence points to logarithmic improvement, not the exponential improvement we were promised in the beginning.
We're likely waiting at this point for another breakthrough on the level of the attention paper. That could be next year, it could be 5-10 years from now, it could be 50 years from now. There's no point in prediction.
People like to assume that progress is this steady upward line, but I think it's more like a staircase. Someone comes up with something cool, there's a lot of amazing progress in the short-to-mid term, and then things kind of level out. I mean, hell, this isn't even the first time that this has happened with AI [1].
The newer AI models are pretty cool but I think we're getting into the "leveling out" phase of it.
[1] https://en.wikipedia.org/wiki/AI_winter
Your exponential problems have exponential problems. Scaling this system is factorially hard.
Relative to time. Not relative to capital investment. There it's nearly perfectly linear.
Any citations for this pretty strong assertion? And please don't reply with "oh you can just tell by feel".
Inflation, the end of ZIRP, and IRS Section 174 kicked this off back in 2022, before AI coding was even a thing.
Junior devs won't lose jobs to AI. They'll lose jobs to the global market.
American software developers have lost the stranglehold on the job market.
If ZIRP and low interest rates were still in effect, companies would have just borrowed to cover the amortization that Section 174 introduced. But unfortunately money doesn't grow on trees anymore.
Anyone who tells you they know what the future looks like five years from now is lying.
> Five years from now AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.
Do all these seem like logically consistent possibilities to you?
https://en.m.wikipedia.org/wiki/Consistency
> AI might still break down at even a small bit of complexity, or it might be installing air conditioners, or it might be colonizing Mercury and putting humans in zoos.
That each of these things, being logically consistent, has an equal chance of being the case five years from now?
> There's a significant difference between predicting what it will specifically look like, and predicting sets of possibilities it won't look like
which I took to mean there are probability distributions over what will happen. It seemed to be your assertion that there aren't: that a number of things, only one of which seemed especially probable, were all equally probable. I'm glad to learn you don't think this, as it seems totally crazy, especially for someone praising LLMs, which after all spend their time making millions of little choices based on probability.
On a codebase of 10,000 lines, any action will cost 100,000,000 AI units. On one with 1,000,000 lines, it will cost 1,000,000,000,000 AI units.
I work on these things for a living, and no one else ever seems to think two steps ahead about what the mathematical limitations of the transformer architecture mean for transformer-based applications.
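As a quick sanity check of the arithmetic behind those numbers (assuming the commenter's informal "AI unit" maps to one pairwise token interaction in vanilla quadratic self-attention):

    # Vanilla self-attention compares every token with every other token,
    # so cost grows as n^2 with context length n.
    for n in (10_000, 1_000_000):
        print(f"{n:,} tokens -> {n ** 2:,} pairwise interactions")
    # 10,000 tokens -> 100,000,000 pairwise interactions
    # 1,000,000 tokens -> 1,000,000,000,000 pairwise interactions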
Humans also keep struggling with context, so while large contexts may limit AI performance, they won't necessarily prevent AI from being strongly superhuman.
OK, I will bite.
So "Sparsely-gated MoE" isn’t some new intelligence, it's a sharding trick. You trade parameter count for FLOPs/latency with a router. And MoE predates transformers anyway.
RLHF is packaging: supervised finetuning on instructions, learn a reward model, then nudge the policy. That's a training-objective swap plus preference data. It's useful, but not a breakthrough.
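The reward-model piece of that packaging fits in a few lines; this is a generic Bradley-Terry preference loss as a sketch, not any lab's actual code:

    import math

    def preference_loss(r_chosen, r_rejected):
        # -log sigmoid(r_chosen - r_rejected): push the reward model to score
        # the human-preferred completion above the rejected one
        return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

    print(preference_loss(1.2, 0.3))  # ~0.34: small loss when "chosen" already scores higher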
CoT is a prompting hack to force the same model to externalize intermediate tokens. The capability was there, you’re just sampling a longer trajectory. It’s UX for sampling.
Scaling laws are an empirical fit telling you to "buy more compute and data." That's a budgeting guideline, not new math or architecture. https://www.reddit.com/r/ProgrammerHumor/comments/8c1i45/sta...
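To underline the "empirical fit" point: fitting a power law is one polyfit call in log-log space. The numbers below are synthetic stand-ins, not real measurements:

    import numpy as np

    compute = np.array([1e18, 1e19, 1e20, 1e21, 1e22])  # pretend compute budgets (FLOPs)
    loss = 5.0 * compute ** -0.05                       # pretend losses on a true power law
    slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
    print(f"fitted exponent: {slope:.3f} (true: -0.05), coefficient: {np.exp(intercept):.2f}")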
LoRA is linear algebra 101: low-rank adapters that cut training cost and avoid touching the full weights. The base capability still comes from the giant pretrained transformer.
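And the "linear algebra 101" is literal; a minimal sketch with made-up dimensions:

    import numpy as np

    rng = np.random.default_rng(0)
    d, r = 512, 8                            # hidden dim, adapter rank (r << d)
    W = rng.standard_normal((d, d))          # frozen pretrained weight
    A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
    B = np.zeros((d, r))                     # trainable up-projection, zero-init so the update starts at 0

    def lora_forward(x):
        # effective weight is W + B @ A, but only A and B are trained
        return x @ W.T + (x @ A.T) @ B.T

    print(f"trainable: {A.size + B.size:,} params vs {W.size:,} for a full finetune")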
AlphaFold 2’s magic is mostly attention + A LOT of domain data/priors (MSAs, structures, evolutionary signal). Again attention core + data engineering.
"DeepSeek’s cost breakthrough" is systems engineering.
Agentic software dev/MCP is orchestration; that's middleware and protocols. It helps use the model, it doesn't make the model smarter.
Video generation? Diffusion with temporal conditioning and better consistency losses. It’s DALL-E style tech stretched across time with tons of data curation and filtering.
Most headline "wins" are compiler and kernel wins: FlashAttention, paged KV-cache, speculative decoding, distillation, quantization (8/4 bit), ZeRO/FSDP/TP/PP... These only move the cost curve, not the intelligence.
The biggest single driver over the last few years has been the data: dedup, document quality scores, aggressive filtering, mixture balancing (web/code/math), synthetic bootstrapping, eval-driven rewrites, etc. You can swap half a dozen training "tricks" and get similar results if your data mix and scale are right.
For me a real post-attention "breakthrough" would be something like: training that learns abstractions with sample efficiency far beyond scaling laws, reliable formal reasoning, or causal/world-model learning that transfers out of distribution. None of the things you listed do that.
Almost everything since attention is optimization, ops, and data curation. I mean, give me the exact pretrain mix, filtering heuristics, and finetuning datasets for Claude/GPT-5, and without peeking at the secret-sauce architecture I can get close just by matching tokens, quality filters, and training schedule. The "breakthroughs" are mostly better ways to spend compute and clean data, not new ways to think.
Not necessarily a bad approach, but it feels like something is missing for it to be "intelligent."
Should really be called “artificial knowledge” instead.
It's not that it knows grammar; it was just trained on a dataset that applied proper capitalization.
Humans learn from seeing patterns. I suspect AI only repeats them, more like a parrot.
However, the kind of first-order Markov-chain model you're describing does not form "somewhat reasonable" sentences, except very rarely. A second-order Markov model seems to do much better at first, but that's because it's copying big chunks out of its training set. When the training set gets big enough, it stops doing that.
Here's what the first-order model you described looks like trained on Moby Dick and the King James Bible:
> The Project Gutenberg-tm License terms of God, into a prince of the last days, and what things that he had an altar of the camp of the rest content, I laughed us into the sand which were upon the hope and made a snare shall continually flitting through their pains ye offer sacrifice. 50:6 And he saw that the forest of the threshingfloors. 23:2 Son of the fire, and came to anger of Israel. 48:20 And they cannot survive without money. 5:20 Despise not on fire. 1:5 And when he healed of legerdemain in the pitiless jaw; spilled it a chain work, and blood upon by letters. 10:10 And the chief of the Antothite. 12:4 And he hath given them he sprinkled her that the LORD, and hold of Bigthana and hill in the patterns of man, is your God. 3:4 And at the sword. 8:25 And the priest: and cast lots. 23:35 Hezrai the other rings of them competent to, these coffin-canoes were departed from the lower jaw; you to a vast bodies have I will not of which sat still in controversy and my people go. 16:7 And the pledge again with sea-water; which is to the glory, and make ye say, and carried to his disciples said unto the Philistines,
The only grammatical sentence in the bunch is "And they cannot survive without money."
Trained on my reproducible corpus of 49 megs of RFCs, I get:
> RFC 1213, defines a prototype algorithm, since the remote networks to have keys as during a standard is used by the 16-bit words, our first fragment. While designing X.500 and a CR NUL ::= "Community: " {<time> "-" (also known to that databases are not contain the causes the Internet community name and especially since in the data element of decrease in a few protocols are specified in their experimentation. 6. REFERENCES [1] is given in which point several kinds of the destination file is a request arrives, the abstract information present state of cfdpcln and error 38 0[0] Message Indication Not supported. If the textual reference shall be of 10.2.0.0 path down the interface's Area Networks Graphics - Size: The meanings for publication date information Agreements on the SNMPv2 entity and eases the header in datagram shortest-path trees are the Digest Authentication Value ----- ----------- ---------- Receive a list of this command to construct a sequence number of an error in the end-system to unblock the basic monitoring map well, and found and root dispersion <$Ephi tau>, where that would be sent by the array of
This contains zero grammatical sentences.
Here's the source code, so you can see if I fucked up the algorithm:
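(The commenter's actual code isn't preserved in this snapshot; as a stand-in, a first-order, word-level Markov generator of the kind described might look like the following hypothetical reconstruction.)

    import random
    from collections import defaultdict

    # Hypothetical reconstruction, not the commenter's original 11-line version:
    # a first-order, word-level Markov chain.
    def train(text):
        chain = defaultdict(list)
        words = text.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)  # duplicates in the list encode observed frequency
        return chain

    def generate(chain, n=200):
        word = random.choice(list(chain))
        out = [word]
        for _ in range(n - 1):
            word = random.choice(chain.get(word) or list(chain))  # restart at dead ends
            out.append(word)
        return " ".join(out)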
(11 lines of code is indeed "less than 100," but maybe you had in mind a more memory-efficient implementation.)

I also suspect that current AI only repeats patterns without understanding them, as you evidently did in your comment about Markov-chain models, and as people evidently do almost all of the time.
But I know I don't know, as your earlier comments claim to.
Moreover, I think it's obviously foolish to extrapolate from my current experience with current LLMs to the ultimate limits of the Transformer architecture, much less whatever DeepSeek and Anthropic come up with three years from now. I haven't even implemented a Transformer! Have you?
I may seem a little harsh in this comment, but I think it's important that you stop claiming to know things you don't actually know.
"They talk by flapping their meat at each other!"
It’s like asking a college student 4th grade math questions and then being impressed they knew the answer.
I've used Copilot a lot. Faster than Google, gives great results.
Today I asked it for the name of a French restaurant that closed in my area a few years ago. The first answer was a Chinese fusion place… all the others were off too.
Sure, keep questions confined to something it was heavily trained on, answers will be great.
But yeah, AI is going to get rid of a lot of low-skilled labor.
What's the point of this anecdote? That it's not omniscient? Nobody should be thinking that it is.
I can ask it how many coins I have in my pocket and I bet you it won't know that either.
No, it's more like asking a 4th-grader college math questions, and then desperately looking for ways to not be impressed when they get it right.
> Today I asked it for the name of a French restaurant that closed in my area a few years ago. The first answer was a Chinese fusion place… all the others were off too.
What would have been impressive is if the model had replied, "WTF, do I look like Google? Look it up there, dumbass."
AND don't be afraid to start small!
Making a Discord or Twitch chat bot, a Quake mod, or a silly soundboard of all your favorite Groot quotes.
For reasons somewhat unclear to me, trade schools seem to have been stigmatized as somehow "lesser" than university. I don't completely understand why; the world needs welders and AC technicians and Practical Nurses much more than we need more software engineers working at a Silicon Valley startup.
The world needs software engineers too. Silicon Valley isn't the world. Not to mention, you know... it's not just programmers that come out of universities.
Anyway, trades are "looked down" on like that because they're a lot of very hard, very physical work. I would certainly encourage my children to go to university if it's going to lead to a much more comfortable life.
That said, I think universities aren't a good fit for a lot of people. A lot of people (and I include my brother-in-law in this group) would not be happy with a desk job, and while I think he's pretty smart I don't know that he would do well having to attend four years of a university. I think trade schools are excellent for these kinds of people.
I don't have children, but I would like to think that if I did I would try and help them get a career they would be happy with, and "comfortable" doesn't necessarily imply that.
I prefer to have a desk job; I like writing software (it's why I spend too much time on HN). But I think a lot of people would benefit from a trade school, and I don't think trade schools should be stigmatized.
However, the biggest thing they have that we lack, which I think the HN crowd might appreciate, is an easy path to freedom through self-employment: if you want self-employment as a programmer, you need the fortune of a novel idea, an improvement, or something new in some sense. You might also need to chase the VC dragon.
You want to start a plumbing business? Work hard for 5-10 years, get out on your own with a van and tools, and you have a turnkey business. Provide good service at a proper rate. End of story.
All it took was an internet connection and a decent laptop.
If you're a tradesman, you're never going to compete with Eastern Europe, LATAM, India, or anywhere global.
Cold-calling to get work would definitely be harder, because you are competing with much cheaper labor.
See the many software and other computing people who successfully run under a consultant/contractor model. You can absolutely be self-employed. Good service at a proper rate (and a pretty high one, usually). Self-employed and high-percentage remote if you want it.
Whereas a tradesman is more naturally limited to local competition, and the work is far more obvious and standard: service work, new-build installations. It's almost as if the trade itself conveys a kind of franchise-like quality.
This strikes me as underselling the hurdles here. Ignoring the whole "just start a self-sufficient business" thing, what happens when you get sick? What about medical costs as you age? Retirement plans?
There is more mobility and more flexibility in future jobs with a university degree like mathematics than with a trade.
If you are a skilled tradesman, your skills are sought after globally.
Typically you are sponsored by your employer.
It takes forever to get into even an apprenticeship at the existing places. Sure, you have lots of people retiring out but as far as new jobs go?
On the other hand, they want to date unemployed or underemployed men in menial service jobs even less, so there’s that.
I see people hem and haw about trades and how "great" they are all the time, but as someone who has worked in trades for the last 15 years, not nearly as many people can handle it or like it as they think. Many people have heard about a cousin or friend who broke $100K doing trade work, but what they fail to mention is that they did it by working 90 hours a week, every week, all year. They don't mention how they kept working through injury and will now feel it for the rest of their lives. And they don't mention, or haven't been working long enough to feel, the bust between the booms, when they are either taking jobs that earn only $5 an hour or just don't have enough work for full hours.
Yes, we need trade workers, but there has never been a lack of trade workers, only a lack of pay. I know far more people who have left the trades than have joined. Many liked the physical work but couldn't justify the health costs on top of the poor or unstable pay.
The median outcome for someone who is baseline reliable and skilled but doesn't have the inclination to run a business is OK, but usually not great.
Nearly every skilled trade charges at least $100 an hour for labor, often far more. Yes, even in West Virginia or Mississippi. Stop getting scammed by working for someone else and work for yourself.
Trades are so lucky too in that it’s hard for normies to evaluate the quality of their work - so you can make tons of money while still being really shitty at it.
No one ever rushes them, plenty of work, and not stuck in a cubicle.
Making every student college-bound is a crime. Not everyone needs to forget calculus...
Their source for the future estimate is apparently Google Trends tallies of searches for trade schools [1].
Actual growth over the last few years is 3.2% per year, from 2019 to 2024.
US population growth is about 0.5% per year, so deduct that.
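A back-of-the-envelope compounding of those two figures, taking both at face value:

    nominal, population, years = 0.032, 0.005, 5  # figures quoted above, 2019-2024
    per_capita = (1 + nominal) / (1 + population) - 1
    print(f"per-capita growth: {per_capita:.2%}/yr; "
          f"cumulative over {years} years: {(1 + per_capita) ** years - 1:.1%}")
    # per-capita growth: 2.69%/yr; cumulative over 5 years: 14.2%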
Always look for the actuals.
[1] https://validatedinsightstradeschools2.carrd.co/