Canaries in the Coal Mine? Recent Employment Effects of AI [pdf]
Key topics
A recent Stanford study explores the employment effects of AI, sparking a lively debate about whether large language models (LLMs) augment or automate labor. Commenters weighed in on the issue, with some, like yakshaving_jgt, reporting significant productivity boosts from using LLMs for coding tasks, while others, such as ath3nd, cited a study showing decreased productivity. The discussion highlights the complexity of AI's impact on work, with some pointing out that the technology's effects will take time to fully manifest, much like the adjustments made during the offshoring trend. As commenters like eru and trhway noted, the economy's response to AI will likely be shaped by factors like the ability to efficiently utilize new tools and the distribution of tasks between human workers and machines.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2h after posting
Peak period: 91 comments in the 0-12h window
Average per period: 27.5
Based on 110 loaded comments
Key moments
- Story posted: Aug 27, 2025 at 10:28 PM EDT (4 months ago)
- First comment: Aug 28, 2025 at 12:18 AM EDT (2h after posting)
- Peak activity: 91 comments in the 0-12h window, the hottest stretch of the conversation
- Latest activity: Sep 2, 2025 at 8:21 AM EDT (4 months ago)
Some nits I'd pick along those lines:
>For instance, according to the most recent AI Index Report, AI systems could solve just 4.4% of coding problems on SWE-Bench, a widely used benchmark for software engineering, in 2023, but performance increased to 71.7% in 2024 (Maslej et al., 2025).
Something like this should have the context that SWE-Bench didn't exist before November 2023.
Pre-2023 systems were flying blind with regard to what they were going to be tested with. Post-2023 systems have been created in a world where this test exists. Hard to generalize from before/after performance.
> The patterns we observe in the data appear most acutely starting in late 2022, around the time of rapid proliferation of generative AI tools.
This is quite early for "replacement" of software development jobs: by their own prior statement/citation, the tools even a year later, when SWE-Bench was introduced, were only hitting that 4.4% task success rate.
Its timing lines up more neatly with the post-COVID-bubble tech industry slowdown, or with the start of hype about AI productivity vs. actual replaced-employee productivity.
But with progress continuing in the models, too, it's an even more complicated affair.
However it wasn't just noticing the difference in wages. That had been known since forever and didn't take a genius. Figuring out how to produce efficiently in the cheaper places and get the goods to rich markets took more smarts and experimentation.
Container shipping played a big role in that, and so did modern communication and cheaper flights.
That's an opinion many disagree with. As a matter of fact, the only (admittedly limited) study to date showed that LLM usage decreases productivity for experienced developers by roughly 19%. Let's reserve opinions and link studies.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
My anecdotal experience, for example, is that LLMs are such a negative drain on both time and quality that one has to be really early in their career to benefit from their usage.
The quality of the Haskell code is about as good as I would have written myself, though I think it falls for primitive obsession more than I would. Still, I can add those abstractions myself after the fact.
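For anyone who hasn't run into the term, "primitive obsession" here means generated code that passes bare Strings and Ints around where a dedicated type would be safer. Below is a minimal, hypothetical Haskell sketch of the kind of after-the-fact abstraction being described; the names are invented for illustration, not taken from the commenter's code.

```haskell
-- Minimal sketch (invented names): "primitive obsession" vs. the
-- newtype abstraction added after the fact.

-- What generated code often defaults to: bare primitives everywhere,
-- so nothing stops a caller from swapping the arguments.
greetPrimitive :: String -> Int -> String
greetPrimitive name age = name ++ " is " ++ show age

-- The after-the-fact abstraction: newtypes cost nothing at runtime,
-- and GHC now rejects any call that mixes the arguments up.
newtype Name = Name String
newtype Age  = Age Int

greet :: Name -> Age -> String
greet (Name n) (Age a) = n ++ " is " ++ show a

main :: IO ()
main = putStrLn (greet (Name "Ada") (Age 36))
```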
Maybe one of the reasons I'm getting good results is because the LLM effectively has to argue with GHC, and GHC always wins here.
I've found that it's a superpower also for finding logic bugs that I've missed, and for writing SQL queries (which I was never that good at).
Claude code is nice because it is just a separate cli tool that doesn't force you to change editor etc. It can also research things for you, make plans that you can iterate before letting it loose, etc.
Claude is also better than chatgpt at writing haskell in my experience.
Though I use claude code. The setup is mostly stock, though I do have a hook that feeds the output of `ghciwatch` back into claude directly after editing. I think this helps.
- I find the code quality to be so-so. It is much more into if-then-else than I would write, and the style is a bit too yolo for my liking.
- I don't rely on it for making architectural decisions. We do discuss when I'm unsure, though.
- I do not use it for critical things such as data migrations. I find that the errors it makes are easy to miss, but not the kind I would make myself.
- I let it build "leaves" that are not so sensitive more freely.
- If you define the tasks well with types, then it works fairly well.
- Claude is very prone to writing tests that test nothing. Last week it wrote a test that put 3 tuples with strings in a list and checked the length of the list and that none of the strings were empty (a rough sketch of that kind of test follows below). A slight overfit on untyped languages :)
- In my experience, the uplift from Opus vs Sonnet is much larger when doing Haskell than JS/Python.
- It matters a lot if the project is well structured.
- I think there is plenty of room to improve with better setup, even without the models changing.
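To make the vacuous-test complaint above concrete, here is a minimal Haskell sketch (invented data and names, not the actual test from that project): every assertion is true by construction, and the code under test is never called.

```haskell
import Control.Monad (unless)

-- Hypothetical reconstruction of a "test that tests nothing": it only
-- inspects the fixture it just built and never calls the code under test.
vacuousTest :: Bool
vacuousTest =
  let rows = [("a", "x", "1"), ("b", "y", "2"), ("c", "z", "3")]
      nonEmpty (p, q, r) = not (null p || null q || null r)
  in  length rows == 3 && all nonEmpty rows  -- always True by construction

main :: IO ()
main = unless vacuousTest (error "unreachable: the literal fixture changed")
```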
It's no surprise to me that devs who are accustomed to working on one thing at a time due to fast feedback loops have not learned to adapt to parallelizing their work (something that has been demonized at agile-style organizations) and instead sit and wait on agents and start watching YouTube, as the study found (productivity hits were due to the participants looking at fun non-work stuff instead of attempting to parallelize any work).
The study reflects usage of emergent tools without training, and with regressive training on previous generation sequential processes, so I would expect these results. If there is any merit in coordinating multiple agents on slower feedback work, this study would not find it.
If the study showed that experienced developers suffered a negative performance impact while using an LLM, maybe where LLMs shine are with junior developers?
Until a new study that shows otherwise comes out, it seems the scientific conclusion is that junior developers, the ones with the skill issues, benefit from using LLMs, while more experienced developers are impacted negatively.
I look forward to any new studies that disprove that, but for now it seems settled. So you were right, might indeed be a skills issue if LLMs help a developer and if they do, it might be the dev is early in their career. Do LLMs help you, out of curiosity?
Then imagine someone did a two-week study on the productivity difference between Notepad, vim, emacs, and VSCode. And it turns out that there was lower observed productivity for all of the latter 3, with the smallest reduction seen in VSCode.
Would you conclude that Notepad was the best editor, followed by VSCode and then vim and emacs being the worst editors for programming?
That’s the flaw I see in the methodology of that study. I’m glad they did it, but the amount of “Haha, I knew it all along and if you claim AI helps you at all, it’s just because you sucked all along…” citing of that study is astonishing.
I would like to see your study, one that's not sponsored by OpenAI or github, that shows LLMs actually improved anything for experienced developers. Crickets.
So, to summarize:
1. An actual study shows that experienced developers' productivity declines 19% when using an LLM.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
2. The recent MIT study showing that 95% of GenAI projects fail to produce any tangible results in enterprises:
https://fortune.com/2025/08/18/mit-report-95-percent-generat...
And your source is: 'Trust me bro'. I swear the new LLM fanbase is the same as good ol' scrum: a bunch of fanatic gaslighters.
It's always a "skill issue", "not doing it right", "not the proper llm/scrum flavor", or a "flawed study".
When I see the studies, then I might actually listen to the LLM booster crowd, but for now I got studies, what you got? Vibes? Figures.
Your argument seems to project significantly more certainty and spittle.
The burden of proof is on the ones saying a new concept/tool (LLMs/NFT) is revolutionary or useful. I provided studies showing not only the new concept is not revolutionary, but that it is a step back in terms of productivity. Where are the studies and evidence proving that LLMs are a revolution?
NFT boosters tried for years to make us believe something that wasn't there. I will take the LLM crowd more seriously when I actually see the impact and usefulness of LLMs. For now, it's simply not there.
https://fortune.com/2025/08/18/mit-report-95-percent-generat...
> Your argument seems to project significantly more certainty and spittle.
I am not surprised that a bunch of folks outsourcing their critical thinking to a fancy autocomplete don't have any arguments nor studies though, to refute a pretty simple argument with some receipts behind it. Spittle? Please, at least there is an argument and links.
From the LLM cult crowd there is usually nothing, just crickets. Show me the studies, show me the links, show me the proof that LLMs are the revolution you so desperately want it to be.
Until then, I got the receipts that, if anything, LLMs are just another tool but hardly a revolution worth paying attention to.
You submitted one study and claimed it's the only one in existence (it's not)
“I got the receipts”
You have one receipt, and you misrepresent it by saying it scientifically settles things that the paper itself explicitly says it does not claim
Oh no, please no. I can't take it one more time. Is it just me, or are devs the absolute worst profession when it comes to self-inflicted dogmas?
"We conduct a randomized controlled trial (RCT) to understand how AI tools at the February-June 2025 frontier affect the productivity of experienced open-source developers. 16 developers with moderate AI experience complete 246 tasks in mature projects on which they have an average of 5 years of prior experience."
So the question is what other kinds of software development tasks this result applies to. Moderate AI experience is fine. This applies to many other situations. But 5 years of experience with a single code base is an outlier.
That said, they used relatively large repositories (1.1 million LOC) and the tasks were randomly assigned. So developers couldn't pick and choose tasks in areas of the codebase they already knew extremely well.
I think the study does generalise to some degree, but I've seen conclusions drawn from it that the methodology doesn't support. In my view, it doesn't generalise over all or even most software development tasks.
Personally, I'm a bit sceptical (but not hostile) about LLMs for coding (and some other thinking tasks), because the difference in quality between requests for which there are many examples and tasks for which there are only few examples is so extreme.
Reasoning capabilities of LLMs still seem minimal.
There is this paper that surveys results of 37 studies and reaches a different conclusion: https://arxiv.org/abs/2507.03156
> Our analysis reveals that LLM-assistants offer both considerable benefits and critical risks. Commonly reported gains include minimized code search, accelerated development, and the automation of trivial and repetitive tasks. However, studies also highlight concerns around cognitive offloading, reduced team collaboration, and inconsistent effects on code quality.
Why are you ignoring the existence of these 37 other studies and pretending the one study you keep sharing is the only one in existence and thus authoritatively conclusive?
Furthermore from the study you keep sharing, they state:
> We do not provide evidence that: AI systems do not currently speed up many or most software developers. Clarification: We do not claim that our developers or repositories represent a majority or plurality of software development work
Why do YOU claim that this study provides evidence, conclusively and as settled science, that AI systems do not speed up many or most developers? You are unscientifically misrepresenting the study you are so eager to share. You are a complete “hype man” for this study beyond what it evidences because of your eagerness for a way to shut down discourse and dismiss any progress since the study’s focus on Sonnet 3.5. The study you share even says that there has been a lot of progress in the last five years and future progress as well as different techniques in using the tools may produce productive results and that the study doesn’t evidence otherwise! You are unserious.
They are not great if your tasks are not well defined. Sometimes they surprise you with great solutions; sometimes they produce a mess that just wastes your time and deviates from your mission.
To me, LLMs have been great accelerants when you know what you want and can define it well. Otherwise, they can waste your time by creating a lot of code slop that you will have to rewrite anyway.
One huge positive side effect: when you create a component (i.e. UI, feature, etc.), you often need a setup to test it (view controllers, data, and so on), which is very boring, annoying and time-wasting to deal with. An LLM can do that for you within seconds (even creating mock data), and since this is mostly test code, it doesn't matter if the code quality is not great; it just matters to get something on the screen to test the real functionality. AI/LLMs have been a huge time saver for this part.
When it's a problem lots of people banged their head against and wrote posts about similar solutions, that makes for good document-prediction. But maybe we should've just... removed the pain-point.
If LLM boosters were not so preachy about it, I'd let them off the hook more easily. But at the current moment:
- The only study to date shows experienced developers are 19% less productive when using LLMs https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
- There are studies showing that using LLMs regularly makes you dumber https://www.media.mit.edu/publications/your-brain-on-chatgpt...
- The fresh study from MIT shows 95% of AI pilots fail https://fortune.com/2025/08/18/mit-report-95-percent-generat...
- The companies developing LLMs haven't made that part of their business profitable, nor do they have any path to profitability. You can see it in Anthropic's constantly changing token limits and plans, and in Microsoft and OpenAI not being able to reach a deal https://www.ft.com/content/b81d5fb6-26e9-417a-a0cc-6b6689b70...
- Hell, Sam Altman himself admitted that the current AI market is just a bubble https://www.cnbc.com/2025/08/18/openai-sam-altman-warns-ai-m...
When the LLM cultists wake up during the bubble pop, I wonder what they are gonna jump on next. The world is running out of hype bandwagons to jump on. Maybe... LLM NFTs?
As it is, I have no problem with your naysaying, I'm getting results, your disbelief doesn't change that, in fact I find it more amusing than anything.
Mystical practitioners discount the studies done in the open as not fair or not being done right, or the people participating not believing in it hard enough.
My man, 95% of AI pilots failed.
https://fortune.com/2025/08/18/mit-report-95-percent-generat...
> As it is, I have no problem with your naysaying, I'm getting results, your disbelief doesn't change that
Studies and evidence mean nothing to cults and religious believers, so yeah, I am happy that you feel and believe like you have your personal connection to the higher being that is the LLM. Keep the faith!
You just described getting into a car and letting it "run" the sub 4 minute mile for you. Great success!
I've seen plenty around me who claim to be faster with the help of AI. Some are! But most seem to be faster at producing lines of code, not faster at completing the goal. I see a lot more slop and frequently that slop just results in work being outsourced to others. Which, to be fair, does mean they're "faster". But their speed is not on the intended metric.
Maybe I'm the one hallucinating. But maybe you are too. All I know is that when I use AI tools I feel faster, but I've also found I get a lot less done.
But we're all just talking to "some random dude on the internet" and that context isn't shared. I'm certain you see both people where AI is helping as well as people where it isn't. Maybe in different proportions than others. But if you're upset that I don't know you, well... that's a bit hard to do in forums like this.
That's a very different situation from having an intensive conversation with an AI to generate a formalized CUE spec with correctness guards, E2E testing specifications, etc, then decomposing that spec into lanes and dispatching a swarm of agents to build it, review work, implement e2e tests/QA, etc. They're both AI, but one is vibe coding and one is autonomous engineering.
History is full of people making wrong predictions in both directions about new technology.
As the most obvious parallel, pets.com went bust in the first dot-com bust, and so did Webvan. Today Chewy is successfully replicating pets.com, and ordering groceries online for delivery is common.
We might see 1,000 different AI companies go bankrupt in the next few years, but still have AI be a huge chunk of the economy throughout the 2030s.
My point is: are there more false positives or false negatives?
You're also cherry picking. Even with AI there are pretty high profile people saying it's a fad as well as pretty high profile people saying it's going to kill us all.
Look at crypto. WAS it hype? Clearly. Don't tell me the price of bitcoin, tell me how the technology actually changed the world. Tell me how it did even a tenth of what the crypto bros promised.
Look at VR. Don't tell me how much you like the latest Quest, tell me how many people are in the metaverse. Tell me how many people even own a VR system. Tell me how the tech achieved a hundredth of what was promised.
Look at Segway. When was the last time you even saw one? Have you ever even used one? How many people even know what they are?
It doesn't matter if your prediction is right if it is 1 in 1000. The Simpsons have a better batting record than that and they aren't even trying. What matters is consistent predictions. Even if you believe this time is different, I don't know how you can not understand why people are skeptical. In the last decade we watched people become billionaires off of VR and crypto.
Even if AI is different, people are being glamorized for their experience in crypto and VR as reasons why they'll be successful in AI. If you believe in AI, then why wouldn't you see this as a fox in the chicken coop?
Those people didn't make their billions through technology, they made their billions through hype.
You can believe AI is a bubble and full of hype even if you believe the technology has a lot of uses. It's a lot easier to build hype around a grain of truth than a complete fabrication.
Writing code is a bit crazy; maybe writing tedious test case variations.
But asking an LLM questions about a well-established domain you're not expert in is a fantastic use case. And very relevant for making software. In practice, most software requires you to understand the domain you're aiming to serve.
I use LLMs every day. They are useful to me (quite useful), but I don’t really use them for coding. I use them as a “fast reference” source, an editor, or as a teacher.
I’ve been at this since 1983, so I’ve weathered a couple of sea changes.
They're super helpful in these contexts. But these are also contexts where I don't need to rely on accuracy.
That’s a massive overstatement of what the study found. One big caveat is this: “our developers typically only use Cursor for a few dozen hours before and during the study.” In other words, the 19% slowdown could simply be a learning curve effect.
> one has to be really early in their career to benefit from their usage.
I have decades of experience, and find them very beneficial. But as with any tool, it helps to understand what they are and aren't good at, and how to use them effectively. That knowledge comes with experience.
Be careful of dismissing a new tool just because you haven’t figured out how to use it effectively.
Seems about right when trying to tell an LLM what to code. But flipping the script, letting the LLM tell you what to code, productivity gains seem much greater. Like most programmers will tell you: Writing code isn't the part of software development that is the bottleneck.
LLMs are just a simple tool; if people misuse them, it's on them.
1. Converting exported data into a suitable import format based on a known schema
2. Creating syntax highlighting rules for a language not natively supported in a Typst report
Neither situation had an existing solution, and while the outputs were not exactly correct, they only needed minor adjustments (a rough sketch of the first kind of task follows below).
Any other situation, I'd generally prefer to learn how to do the thing, since understanding how to do something can sometimes be as important as the result.
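To make the first use case concrete, here is a minimal sketch under invented assumptions (a semicolon-separated export and a comma-separated import order that are not the commenter's actual schemas); the point is that this kind of mechanical, schema-driven reshaping is exactly the sort of thing an LLM can draft and a human can then fix with minor adjustments.

```haskell
import Data.List (intercalate)

-- Hypothetical schemas: the export is "name;email;signup_date",
-- while the importer expects "signup_date,name,email".
splitOn :: Char -> String -> [String]
splitOn c s = case break (== c) s of
  (chunk, [])       -> [chunk]
  (chunk, _ : rest) -> chunk : splitOn c rest

convertRow :: String -> Maybe String
convertRow row = case splitOn ';' row of
  [name, email, signup] -> Just (intercalate "," [signup, name, email])
  _                     -> Nothing  -- drop rows that don't match the schema

main :: IO ()
main = mapM_ putStrLn
  [ out | Just out <- map convertRow
            [ "Ada Lovelace;ada@example.com;1843-01-01"
            , "malformed row with no separators" ] ]
```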
Comparing salaries between countries with vastly different approaches to taxes, health insurance, living costs, hidden side costs etc. is hard and can easily be hugely misleading.
Not even counting the subtle things, just what "before and after taxes" means can differ. E.g. where I live, "after taxes" for most people deducts not just taxes but also the base health insurance cost and some other things (but to make it more fun, only for most people, not all, so what "after tax" means can differ between neighbors).
And then there are so many hidden costs which effectively work like taxes. E.g. in some areas you practically have to have a car, so in effect the cost of a car isn't that different from a fixed-sum tax; if you consider social standards it even scales with income up to a certain point, like most taxes do (and costs 10x+ if you can't drive a car for health reasons). On the other hand, if you live in an area with decent public transportation, a car is a luxury good, but someone has to pay for the public transportation, and if your area isn't overrun by well-paying tourists that likely means you pay more tax (though as public transportation tends to scale better, you still likely save money, especially if you aren't wealthy).
Anyway, comparing after-tax salaries across vastly different countries is IMHO a folly. Even before-tax is tricky, but you have to choose something. I guess another option, "what frugally living people with reasonable health insurance and rent have left over at the end of the month", is theoretically the better statistic, but it's just not practical.
You're probably working at a domestic company, which usually pays less than offshored jobs at a large transnational (and domestic companies in, say, Russia were paying significantly less than the offshored ones). I don't think many companies do significant offshoring into Western Europe though.
> The contest between the capitalist and the wage-labourer dates back to the very origin of capital. It raged on throughout the whole manufacturing period. [112] But only since the introduction of machinery has the workman fought against the instrument of labour itself, the material embodiment of capital. He revolts against this particular form of the means of production, as being the material basis of the capitalist mode of production.
> [...]
> The instrument of labour, when it takes the form of a machine, immediately becomes a competitor of the workman himself. [116] The self-expansion of capital by means of machinery is thenceforward directly proportional to the number of the workpeople, whose means of livelihood have been destroyed by that machinery. The whole system of capitalist production is based on the fact that the workman sells his labour-power as a commodity. Division of labour specialises this labour-power, by reducing it to skill in handling a particular tool. So soon as the handling of this tool becomes the work of a machine, then, with the use-value, the exchange-value too, of the workman’s labour-power vanishes; the workman becomes unsaleable, like paper money thrown out of currency by legal enactment.
Karl Marx, Capital, Volume I, Chapter 15.
https://www.marxists.org/archive/marx/works/1867-c1/ch15.htm
Given the absurdly common malpractice(1) of training LLMs on/for tests, i.e. what you could describe as training on the test set, any widely used, industry-standard test for evaluating LLMs is not really worth half of what it claims to be.
(1): Which is at least half intentional, but also to some degree accidental, since web scraping, model cross-training, etc. have a high chance of sneaking in test data.
In the end you have to have your own tests to evaluate agent/LLM performance, and worse, you have to keep them private for fear of scientific malpractice rendering them worthless. Tbh, that is a pretty shitty situation.
While this is true, there are ways to test (open models) on tasks created after the model was released. We see good numbers there as well, so something is generalising there.
For example, everyone now writes emails with perfect grammar in a fraction of the time. So now the expectation for emails is that they will have perfect grammar.
Or one can build an interactive dashboard to visualize their spreadsheet and make it pleasing. Again the expectation just changed. The bar is higher.
So far I have not seen productivity increase in dimensions with direct sight to revenue. (Of course there is the niche of customer service, translation services etc that already were in the process of being automated)
I had a conversation with my manager about the implications of everyone using AI to write/summarise everything. The end result will most likely be staff getting Copilot to generate a report, then their manager using Copilot to summarise the report and generate a new report for their manager, ad infinitum.
Eventually all context is lost, busywork is amplified, and nobody gains anything.
why not fire everyone in between the top-most manager and the actual "worker" doing the work, as the report could be generated with the correct level of summary?
https://www.cnbc.com/2025/08/27/google-executive-says-compan...
Middle management can sometimes be good at this, because they may actually have the time to step back and take a holistic look at things. It’s not always easy to do that when you’re deep in the weeds with clients, managers, colleagues, or direct reports bugging you about misc things.
Overall I think (or hope) the more useless reporting will die a slow death, but I also think there’ll be a loooooong period of AI slop before we reach the point where everyone says “why are we actually doing this?”
And absolutely bloody _hideous_ style, if they are using our friends the magic robots to do this.
You do not need to build a spreadsheet visualiser tool; there are plenty of options that exist and are free and open source.
I'm not against advances, I'm just really failing to see what problem was in need of solving here.
The only use I can get behind is the translation, which admittedly works relatively well with LLMs in general due to the nature of the work.
https://www.fool.com/investing/2024/11/29/this-magnificent-s...
Think like a forestry investor, not a cash crop next season.
(This isn’t unique to IT; this cyclical underinvest-shortage-panic pattern happens in a lot of industries.)
I'd say some of it may be the economy, but I also think some of it isn't.
All to say we could have quite a bit more resilience as an economy, but we decided to sacrifice our leadership in these areas.
They are just round-tripping the cash that was sitting in their accounts through investments that make their way back through advertising channels or compute channels.
Once you see the bigger picture, you'll realise it's all just a fugazi post-COVID.
Traditionally, you wouldn't look at the release of a productivity tool coinciding with a hiring slowdown and assume that it's automation causing the hiring slowdown, your first instinct would be that the sector is not doing well.
Outside of tech, my eulogy writer friend got fired and replaced by ChatGPT. So when gramma dies, someone will now read a page of slop at her funeral instead of something that a person with empathy wrote.
If you can find a workable way to put the family in the improvements loop, the AI eulogy could be far better at expressing the family’s sentiment about grandma. (I’m not going to want to go 3 rounds of edits over 2-3 days with a human to get it just right, but going 8 rounds of tweaking/perfecting with an AI in a 20-30 minute sitting is appealing and would give a better result in a lot of cases.)
Under those conditions, how much more am I willing to pay for a human-written eulogy? $0 at most, and probably a negative amount.
My conclusion was that senior engineers were better because they were used to managing developers and taking on more managerial tasks, building 'LLM soft skills', and also, frankly, fixing mistakes; the junior developers were pressured for speed and relied on their managers to correct them.
Within 12 months, despite extensive attempts, only the mid level team members remained.
You just used "AI" to get rid of people you didn't want anyway, but since it was your order it had to be a "success story".
When you set impossible constraints that neglect requirements for sustainability, and tell people to do the impossible, you pigeonhole and sieve only the people you are actually looking for (the ones that can meet that sieve).
The lying and deceit naturally occur after the fact (which blind people don't notice, often making them evil). The danger of most deceit and lying lies in saying something truthful while omitting something important that they know, and then later contradicting yourself in the outcome. These are called lies of omission. Deceivers and vipers take full advantage of these actions while pretending it's not them, it's the circumstance, but a circumstance they control.
Of course the middle team would be the only ones left: after all, you tortured your senior team by having them babysit an LLM, driving them to burnout, and the junior team lacked the knowledge that makes the difference between junior and mid-level needed to be productive. You set a filter that only your mid-level team could meet, and any conclusions you draw will conform to the criteria you set, which amount to not wanting to hire young people or old people. You want them just right, and there are laws against age discrimination. People have tried to get around those laws for years, and have never had more luck evading them than now, with the advent of black-box AI, which can obscure these kinds of decisions by hiding them in the weights. Short term there will of course be more profit; long term the lawsuits will get you, and the good character you thought you had will not look so good.
Additionally, you are left with your perfect team that is wholly dependent on the LLM and that won't be able to solve the rare but inevitable problems the senior people would have solved (before they occur).
As a professor of software engineering, I'm feeling an existential crisis coming on. Are we preparing students in vain? Last term was the first time I had senior project students who didn't have a job lined up in the Fall of their final year. Maybe it's time to retire.
Since you are a professor, they might listen to you.
Plagiarism is training on generously licensed open source software and creating a derivative work without attribution.
Not all hackers were in favor of piracy, the majority of open source hackers have always been pretty protective of their licenses, which were written before the existence of the laundromats.
Taking all IP and using it against its creators is entirely new and does not match the piracy issues.
The series in figure 6, though, I think suggests that we may just be seeing a time-delayed effect and eventually everyone's going to be impacted.
Not just early career, also independent devs who are any from early-career to late-career.
I did a thing for a client a few weeks ago (embedded, with prototype board + code for industrial sensor containing multiple sensor types, using wifi to both configure and monitor the device, and to retrieve sensor values).
They balked at a two-week bill; their argument was that this should not have been more than a 2-day bill (literally, 2 days for everything from soldering up the prototype board to writing the code).
Their frame-of-reference was that their in-house dev (or similar, not sure now) could do that in two days with an arduino or ESP32 devkit.
My suspicion is that because their in-house dev took only two days to get claude code to write a ping program for a esp32 dev board (no soldering, sensors, etc. Purely WiFi comms and nothing else), their expectations were that this should have been the same.
(I eventually accepted the payment for only 2 days worth of dev, of which at least half was expenses for me - driving, meetings with them, purchases, etc.)
Hysteresis is a bitch.
This study showed that early-career workers, of which they only focused on the 22-25 range, had a ~20% drop in employment between 2022 and now. If you include the 26-30 range, which covers most of the early-career cohort, that's roughly ~30% fewer jobs, from the payment processor's perspective.
The study doesn't seek to cover other factors such as higher costs on the labor pool and interference in employment matching, which are also of great concern.
30% after shock normalization is well beyond statistical significance. This is happening, people said it would happen, and no one acted to stop it because they listened to evil people seeking short-term profit; blind to all else.
Sad and dark times are ahead. There are things that can be reasonably predicted ahead-of-time, but the moment you give preferential treatment to liars is the moment you lock in losses. Sure the data proving the prediction will come, but not in time to take corrective action; such is the structured cascading failures involving hysteresis.