OpenAI Eats Jobs, Then Offers to Help You Find a New One at Walmart
Posted 4 months ago · Active 4 months ago
theregister.com · Tech story · High profile
Heated · Negative
Debate: 85/100
Key topics
AI
Job Market
Automation
OpenAI is launching a jobs platform to help people find new employment after being potentially displaced by AI, sparking controversy and debate about the impact of AI on the job market.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 26m after posting
Peak period: 141 comments in 0-6h
Average per period: 20
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- 01 Story posted: Sep 5, 2025 at 8:17 AM EDT (4 months ago)
- 02 First comment: Sep 5, 2025 at 8:43 AM EDT, 26m after posting
- 03 Peak activity: 141 comments in the 0-6h window, the hottest stretch of the conversation
- 04 Latest activity: Sep 8, 2025 at 10:09 AM EDT (4 months ago)
I feel like we need a new word for money going to datacenters instead of paychecks. 'AI taking jobs' implies AI is doing the work, which is not the case.
it sounds like you don't believe datacenters are "useful", either. that's a pretty hot take IMO.
Let's reduce headcount and spin it as AI disruption! That way we don't have to acknowledge we overhired during COVID, AND our stock price will go to the moon, as they say.
Crazy how these CEOs are so brazenly and openly committing fraud. The market and investors are playing along because the stock price is going up. The board doesn't give a fuck.
USA is one giant casino right now.
The most valuable thing AI can do right now is write code, and it couldn't do that without thousands of StackOverflow volunteers.
If you have an LLM that was trained on (say) everything on the internet except for programming, and then trained it on the Python.org online documentation, would that be enough for it to start programming Python? I've not tried, but I get the impression that it needs to see lots and lots of examples first.
Though, like non-GAAP earnings & adjusted EBITDA, very few care. Those that do are often the old, technical, conservative & silent type of investors, not podcasters or CNBC guests. RIP Charlie M.
In addition, Salesforce grew its headcount in 2025 AFAIK, and 4,000 jobs are only around ~5% for them, which is too small to be a meaningful metric if you don't fully trust what their press department puts out (and you shouldn't).
Still, I see people using modern AI for small productivity boosts all over the place, including in private life (and often with a vastly underestimated risk assessment). So in the best case it's only good enough to let people work through more of the backlog (which would otherwise be discarded due to time pressure but isn't worthless), and in the worst case it will lead to, I don't know, 1/3 of people in many areas losing their jobs. And that is _without_ a major breakthrough in AI, just from better applying what AI can already do now :/ (and this excludes mostly-physical jobs, though it's worse for some other jobs, like low-skill graphic-design positions).
And as software developers, it would be silly if we didn't think that businesses would love to find a way to replace us, as the software we have created did for other roles for the past 60 years.
Alternatively, though, if the market is bad and they're not launching as many new products or appealing to as many new customers, customer support may be a cost center you’d force to have “AI efficiencies”.
Companies like IBM & Klarna have made news for reducing positions like these & then re-hiring them.
AI, like most tech, will increase productivity & reduce headcount but it's not there yet. Remember, the days are long & the years are short.
Customer support is the "big" thing for AI replacement but it's also the worst role to replace with an AI, since customer frustration can (and usually does) lead to spurned customers switching to competitors.
Salesforce fired 4000 humans and replaced them with a dumb chatbot that couldn't answer many questions beyond the very basic. Customers absolutely hated the chatbot and started rethinking their Salesforce spend, resulting in a number of customers reducing their spend or switching to another CRM provider entirely.
Unless all the competitors are also going the AI route for support, because they're all equally greedy.
That's basically what happened with the path from humans answering the phone to "press 1 to..." to "say 1 to...", or website chat support being replaced by non-AI bots.
I really can't see an argument against the notion that customer service has been steadily worsening as a result of companies preferring to invest less on labor.
Their function is around reconciling utilization and bills from multiple related suppliers with different internal stakeholders. They do a bunch of analysis and work with the internal stakeholders to optimize or migrate spend. It is high ROI for us, and the issue is both finding people with the right analytical and presentation skills and managing the toil of the “heavy” work.
Basically, we’re able to train interns to do 80% of the processing work with LLM tooling. So we’re going to promote two of the existing staff, fill 2 of the 6 vacancies with entry-level new grads, and use the unit to recruit talent and cycle them through.
In terms of order of magnitude, we’ll save about $500k in payroll, spend $50k in services, and get same or better outcomes.
Another example is we gave an L1 service desk manager Gemini and made him watch a YouTube video about statistics. He’s using it to analyze call statistics and understand how his business works without a lot of math knowledge. For example, he looked at the times when the desk was at 95th-percentile call volume and identified a few ways to time-shift certain things to avoid the need for more agents or reduce overall wait times. All stuff that would require expensive talent and some sort of data-analysis software… which frankly probably wouldn’t have been purchased.
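For the curious, a minimal sketch of that kind of peak analysis, assuming a hypothetical calls.csv with one timestamp per call (the file name and column are illustrative, not from the original comment):

```python
# A minimal sketch of the 95th-percentile call-volume analysis described
# above. Assumes a hypothetical calls.csv with a "timestamp" column.
import pandas as pd

calls = pd.read_csv("calls.csv", parse_dates=["timestamp"])
hourly = calls.set_index("timestamp").resample("1h").size()  # calls per hour

p95 = hourly.quantile(0.95)    # 95th-percentile hourly volume
peaks = hourly[hourly >= p95]  # the hours that hit that level

# Which hours of the day do the peaks cluster in? Those are the candidates
# for time-shifting deferrable work away from.
print(peaks.groupby(peaks.index.hour).size().sort_values(ascending=False))
```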
That’s the real AI story. Stupid business people are just firing people. The real magic is using the tools to make smart people smarter. If you work for a big company, giving Gemini or ChatGPT to motivated contracts and procurement teams would literally print money for you, because of the stuff your folks are missing.
This is to say that we know from looking at outcomes over the long term that the kinds of concrete gains you're describing are offset by subtler kinds of losses which most likely you would struggle to describe as decimal numbers but which are equally real in their impact on your business.
Public LLMs have been around for 3 years, and adoption is still nascent. We don't have any long term data, and the longest term data we have involves a bunch of outdated models. Most people are still awful at using LLMs, and probably the most skilled users are the college kids who are graduating right now (the youth is always god-tier with new tech).
I cannot think of anything more foolish right now than not trying to leverage SOTA models to save you money, especially because you heard rumors of shadow losses that can only be found in the bottom line.
Gemini "analysis" looks superficially detailed, but if you actually do any deep dive (or even a shallow dive) you're going to start finding all the things it's wrong about, because it's not actually analyzing anything; it's just connecting most-likely words together. Crucially, because of the way LLMs work, they can't generate any conclusion that isn't already in the data set. But it's exactly those sorts of non-obvious conclusions that are the reason you hire people to do Finance.
Will AI eventually take over these kinds of jobs? Sure, in 10 to 20 years when actual AI exists. But LLMs aren't AI. They're just brute-force black box machine learning algorithms.
AI is an entropy machine for everything it touches. A company that runs off of AI is a zombie company. Without people who understand an industry, what does your company add? Without people to see new revenue streams, new directions, and most importantly NEW RISKS to your business model, you are a dead zombie company living off what the previous living, growing company built. But does that matter to owners when they get $500,000 more a year in their pockets?
Except it seems like the opposite is happening. CS grads have high unemployment. Companies laying off staff.
The rhetoric doesn't seem to add up to the reality.
Do you use tech to grow your business or increase dividends?
Also, reducing staff via attrition shows far better management skill than layoffs, which IMO says more about the CEO & upper management.
I worked on both - my skillset went from coding pretty bar charts in SVG + Javascript to configuring Grafana, Dockerfiles and Terraform templates.
There's very little overlap between the two, other than general geekiness, but thanks I'm still doing OK.
When he retired a few years ago, most of that was gone. The attorneys and paralegals were still required, there was a single receptionist for the whole office (who also did accounting) instead of about one for each attorney, and they'd added an IT person... but between Outlook and Microsoft Word and LexisNexis and the fileserver, all of those jobs working with paper were basically gone. They managed their own schedules (in digital Outlook calendars, of course), answered their own (cellular) phones, searched for documents with the computers, digitally typeset their own documents, and so on.
I'm an engineer working in industrial automation, and see the same thing: the expensive part of the cell isn't the $250k CNC or the $50k 6-axis robots or the $1M custom integration, those can be amortized and depreciated over a couple years, it's the ongoing costs of salaries and benefits for the dozen humans who are working in that zone. If you can build a bowl screw feeder and torque driver so that instead of operating an impact driver to put each individual screw in each individual part, you simply dump a box of screws in the hopper once an hour, and do that for most of the tasks... you can turn a 12-person work area into a machine that a single person can start, tune, load, unload, and clean.
The same sort of thing is going to happen - in our lifetimes - to all kinds of jobs.
Condensing the workforce as you describe risks destroying redundancy and sustainability.
It may work in tests with high performers over short durations, but it may fall apart over longer terms, with average performers, or with even a small amount of attrition.
Having cog number 37 pick up the slack for 39 doesn't work with no excess capacity.
I recall the mine water pump had a boy run up and down a ladder opening and closing steam valves to make the piston go up and down. The boy eventually rigged up a stick to use the motion of the piston to automatically open and close the valves. Then he went to sleep while the stick did his job.
Hence the invention of the steam engine.
Complete aside, just because you brought up this thought and I like the concept of it:
My mom trained professionally as a secretary in the 1970s and worked in a law office in the 1980s; at that point, if you were taking dictation, you were generally doing longhand stenography to capture dictation, and then you'd type it up later. A stenotype would've been a rarity in a pre-computer office because of the cost of the machine; after all, if you need a secretary for all these other tasks, it's cheaper to give them a $2 notebook than it is a $1,500+ machine.
My observation so far has been that executive leadership believes things that are not true about AI and starts doing the cost-cutting measures now, without any of the productivity gains expected/promised, which is actually leading to a net productivity loss from AI expectations based on hype rather than AI realities. When you lose out on team size, can't hire people for necessary roles (some exec teams now won't hire unless the role is AI related), and don't backfill attrition, you end up with an organization that can't get things done as quickly, and productivity suffers, because the miracle of AI has yet to manifest meaningfully anywhere.
AI alone can't do that: even if you make the weakest link in the chain stronger, there are probably more weak links. In a complex system, the speed is controlled by the weakest, most inefficient link in the chain. To make an organization more efficient, they need to do much more than use AI.
Maybe AI exposes other inefficiencies.
To be fair, these positions never made that much sense, as they tend to cause more trouble than they help in the long run, but they exist anyway.
And companies should know better than to throw away "junior, not yet skilled, but learning" positions (but then, many small startups aren't used to teaching juniors, or don't have the resources to, which is a huge issue in the industry IMHO).
But I imagine for many of the huge "we mainly hire from universities"/FAANG companies it will turn into "we only need senior engineers, and we hire juniors only to grow our own senior engineers". That means the moment your growth takes too long or stagnates by whatever _arbitrary_ metric, you get kicked out fast. And as with their hiring process, they have the resources, scale, and number of people who want to work for them to be able to get away with such arbitrary, imperfect, effectively discriminatory metrics.
Another aspect is that a lot of the day-to-day work of software engineering is really dumb, simple churn, and AI has the potential to massively cut down the time a developer needs for it, so fewer developers are needed, especially in mid-to-low-skill positions.
Now the only luck devs have is that there is basically always more work that was cut due to time pressure but often isn't even super low priority, so getting things done faster might, luckily, not map one-to-one to fewer jobs being available.
You get HR's glossy `Exit Packet' with a cover of a pristine chartered white catamaran on the translucent aquamarine Caribbean in a palm-treed cove, bikini babes lounging on deck; five minutes to fill your cardboard box; then you're manhandled by two wide-shouldered security bulls squeezing your arms to your sides, gripping you under your forearms, marching you down the aisle past rubber-necking wide-eyed heads as you repeatedly slip and fall forward, the experienced bulls lowering your arms so your shoes regain purchase on the carpet; hurriedly packed into the elevator amid silent ignominious stares and hushed whispers of those packed in around you; then past hot Tanya at Reception in front of every Tom, Dick, Irene, and Harry; out the main revolving glass-door entrance into the overcast dishwater-grey, miserable, wind-blown wet that in truth is Seattle; to the sidewalk, left at the curb, as far from the proper and successful glitterati as possible.
All double-time haste, signalling to everyone: get this despicable loser/criminal POS off the property, fast.
It's over, you're done. Sooner than you thought possible, you're freezing in a tent, wasting days into months: tailing-off food-bank boxes, the TIDE(tm) pee bottle, those el-cheapo dirty gloves with the tips cut off, dirty layered thrift-store fleeces. It's under filthy roaring I-5 for you, pal, with your ilk of dangerous, insane, addict-crazed zombies screaming outside your hideous blue-tarped tent, and where's the knife.
The answer to the fundamental question of your entire existence and net worth has been reduced to simply asking: "How do I best empty my bowels?"
ICEstapo hunter-killers gunning for you, to flush you offshore, unseen and forgotten forever. You better run, you better run your ass off. Why didn't you study harder all those wasted years?!
Welcome to Sam Altman's Club.
- OCR ate a good chunk of data-entry jobs,
- Automated translation ate a number of translation jobs,
- LLMs have eaten quite a few tier-I support roles.
I don't have numbers tho, maybe people are still doing data entry or hiring translators on mechanical turk.
Initially machine translation was way worse (by professional standards) than people assumed, essentially useless, you had to rewrite everything.
As time went on, and translation got better, the workflow shifted from doing it yourself to doing a machine pass, and rewriting it to be good enough. (Machine translation today is still just 'okay', not professional quality)
On the initially set rates 15 years ago you could eke out a decent-ish salary (good even if you worked lots of hours and were fast). Today if you tried to do the work by hand, you'd starve to death.
"okay" is an overstatement. It's "readable" if you put in triple the effort needed to read proper native language. But you're doing a lot of work re-translating machine languae in your head to understand it.
I guess for businesses that's "good enough". Very few products ever truly get lambasted for bad localization.
While they help with programming, I feel like the scope of my tasks has increased over time as well. This is happening to me: I'm insanely more productive, and my tech stack has grown hugely over the past two years.
But I don't make significantly more money, or get a ton more recognition, it's just accepted.
The question is no longer whether AI will put people out of work, but how many and how quickly.
So jobs being killed by AI are basically being killed the same way that office number-crunching technology killed administrative-assistant positions and put those tasks onto other people.
Take for example a purchasing department at a big company. Some project needs widgets. Someone crosses the specs against what their suppliers make, takes the result, and makes a judgement call. AI replaces that initial step, so a team of N can now do the work that formerly took a team of N + Y. Bespoke software could have replaced that step too, but it would have been more expensive, less flexible, etc., since there's all this work required to make human-facing content machine-parsable, including the user's input, and the juice simply wasn't worth the squeeze. With AI doing all that drudgery on an as-needed basis, the juice now is worth the squeeze in some applications.
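A hedged sketch of what that initial cross-referencing step might look like with an LLM in the loop. The spec, catalog lines, and model name are all illustrative assumptions, and a human still makes the final call:

```python
# A sketch only: an LLM shortlists catalog lines against a spec, and a
# buyer reviews the output. Spec, catalog, and model are made-up examples.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

spec = "M6 x 20 mm stainless socket-head cap screw, A2-70, qty 1000"
catalog = [
    "SKU 1842: M6x20 SHCS, A2 stainless, box of 500",
    "SKU 2211: M8x20 SHCS, zinc-plated steel, box of 100",
    "SKU 3307: M6x20 button-head, A4 stainless, box of 1000",
]

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{
        "role": "user",
        "content": (
            "Which catalog lines plausibly match this spec? "
            f"Spec: {spec}\nCatalog:\n" + "\n".join(catalog)
            + "\nList matching SKUs with a one-line reason each."
        ),
    }],
)
print(resp.choices[0].message.content)  # the human judgement call happens here
```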
And the sick thing is that the company that tries to be smart longer term won't be able to compete with the short term companies that cut as much as possible using AI to maximize short term benefits. Long term these 'cut everything' AI leaning zombie companies won't last, but they will last long enough to undercut and take out the longer term thinking companies with them.
AI is an entropy machine. It sucks all momentum from everything it touches.
Every task the AI can juggle for you is one you don't have to. If your department goes from 4 to 3, great; if it goes from 2 to 1 or 1 to 0, that's fine too. Companies already exist at those numbers. You can think about it in terms of "a company can now be bigger before they need a dedicated person/team for job X" if that helps take the emotion out.
Companies that could benefit from these sorts of purchasing use cases don't exist at 0. And at 1, this is already a minimal need, so it's not useful. Past 2, you need 2 staff anyway for the reasons listed and more, so AI doesn't give a cost benefit, but it does introduce all kinds of risk (there's a lot more to it than comparing SKUs) and weakens relationships with suppliers, which can kill a company. It just doesn't make sense all around.
To think the same isn't happening all over the place and will only continue is ignoring just how powerful this tech is.
There is so much else that people do. So many details that are just being ignored because of short term 'gains' that justify ignoring so many details.
If this person plus a junior represented "1.3 engineering knots," he's saying... "actually, I'm still 1.3 engineering knots without him."
When this person leaves, they go find someone else who is 1.3 engineering knots. The junior represented .3, without the 1., it doesn't matter that much. Headcount strategy shifts.
So the company you talk about is already in the entropy vortex. It has no momentum. It has no future. It just hopes it can keep doing what it is doing now.
- Translation. See: Gizmodo firing its Spanish translation team and switching exclusively to LLMs.
- Editors. See Microsoft replacing their news editors at MSN with LLMs.
- Customer service. Various examples around the world.
- Article graphics for publications. See: The San Francisco Standard (which used it for various articles for a period), Bleepingcomputer.com, Hackaday (selectively, before some pushback).
- Voice acting. The Finals game used synthetic voices for the announcer voices.
This sounds so weird to me, and I feel I am missing something.
You might as well ask why google built an ad company or email or video, or a browser or a phone OS etc, when they should have spent more money on their core search engine.
The series was really good. Too bad they cancelled it for being too expensive.
The book, however, is excellent- definitely recommend.
Hyperstition is a real thing... if, that is, you're William Gibson.
But the reality is that there will be new high-skilled jobs from AI, thanks to Jevons' paradox: the more companies use AI, the greater the demand for highly skilled people who can use AI in more ways than we do today.
Not so much about being replaced, but there will be new jobs for people to do.
I guess for those people being 'replaced' it is a 'skill issue'
For me it's funny that the first time most programmers ever think about the ethics of automating away jobs is when they themselves become automated.
I contend it was when Dodge won the court case deciding that shareholders were more important than employees. It’s been a slow burn ever since.
A lot of their capability was due to them being better at automation. See: NUMMI
In fact, I fail to see any connection between those two facts, other than that both are decisions by OpenAI to allow or not allow something to happen.
Imagine if ChatGPT gave "do a luigi" as a solution to walmart tracking your face, gait, device fingerprints, location, and payment details, then offering that data to local police forces for the grand panopticon to use for parallel reconstruction.
It would be unimaginable. That's because the only way for someone to be in the position to determine what is censored in the chat window, would be for them to be completely on the side of the data panopticon.
There is no world where technology can empower the average user more than those who came in with means.
It is funny, in the worst way possible of course, that even our chairs are not as stable as we thought they were. Even automation can somehow be automated away.
Remember all those posts stating how software engineering is harder, more unique, somehow more special than other engineering, or other types of jobs generally? Seems like it's time for some re-evaluation of those big-ego statements... but maybe it's just me.
0. The people who got into it just as a job
1. The people who thought they could do it as art
And #1 is getting thrashed and thrown out the window by the advent of AI coding tools, and the revelation that companies didn’t give a darn about their art. Same with AI art tools and real artists. It even raises the question of whether programming should ever have been viewed as an art form.
On that note, programmers collectively have never minded writing code that oppresses other people, whether with constant distractions in Windows 11, unnecessarily deadly weapons built at Northrop Grumman, or automating away the livelihoods of millions in “inferior” jobs. That was even a trend: “disrupting” traditional industries (with no regard for what happens to those employed in said traditional industry). Nice to see the shoe just a little on the other foot.
For many of you here, keep in mind your big salary came from disrupting and destroying other people’s salaries. Sleep well tonight, and don’t complain when it’s your turn.
Northrop Grumman only builds what Congress asks of them, which is usually boring shit like toilet seats and SLEPs. You can argue that they design unnecessarily deadly weapons, but if they've built it then it is precisely as deadly as required by law. Every time Northrop grows a conscience, BAE wins a contract.
That's a lame "I was just following orders" excuse. Doesn't matter who gets the contract, if you work for a weapons manufacturer or a large corporation that exploits user data you have no moral high ground. Simple as that.
The thing is, there's no innovation in the "track everything that breaths and sell the data to advertisers and cops" market.
They might get better at the data collection and introspection, but we as a society have gotten nothing but streamlined spyware and mental illness from these markets.
"unnecessarily deadly"?
I had no idea that it was possible to measure degrees of dead: she's dead, they're dead, we're all dead, etc. - I thought it was the same "dead" for everyone.
Also, interesting but ambiguous sentence structure.
Is this an offshoot of LLMs that I've overlooked?
Recent grads are having serious trouble getting work right now: https://www.understandingai.org/p/new-evidence-strongly-sugg...
I'm less talking about automation and more about the underpinnings of the automation and the consequences in greater society. Not just the effects it has on poor ole software engineers.
It is quite ironic to see the automation hit engineers, who in the past generally did not care about the consequences of their work, particularly in data spaces. We have all collectively found ourselves in a local minimum of optimization, where the most profitable thing we can do is collect as much data on people as possible and continually trade it back and forth between parties who have proven they have no business holding said data.
> It would be unimaginable.
By "do a luigi" you're referring to the person who executed a health insurance CEO in cold blood on the street?
Are you really suggesting that training LLMs to not suggest committing murder is evil censorship? If LLMs started suggesting literal murder as a solution to problems that people typed in, do you really think that would be a good idea?
They are pursuing profits. Their ethical focus is essentially a form of theater.
Automation and technology has been replacing jobs for well over a century, almost always to better outcomes for society. If it were an ethical issue, then it would be unethical not to do it.
In any case, which jobs have been replaced by LLMs? Most of the actual ones I know were BS jobs to begin with - jobs I wish had not existed to begin with. The rest of the ones are where CEOs are simply using AI as an excuse to execute layoffs (i.e. the work isn't actually being done by an LLM).
It turns out that standard of living requires more than just access to cheap goods and services
Which is why despite everything getting cheaper, standard of living is not getting better in equivalent measure
Why would LLMs be incapable of these new jobs?
> Meanwhile the workers who were previously employed in menial clerical tasks will simply switch to supervising the AI's that perform those same tasks for them.
Put this to numbers, right now: if we remove all the workers and leave only the managers in those fields, how many people are still employed?
That you specifically wish for those jobs to never have existed is your own internal problem, and actually a pretty horrible thing to say, all things considered.
People had/have decent livelihoods from those jobs; I know a few. If they could easily get better jobs, they would go for them.
The egos here are sometimes quite a thing to see. Maybe it's good that the chop is coming for this privileged group too; a bit of humility never hurts.
If so, where do we stop? Do we stop at knowledge work, or do we go back to shovels and ban heavy equipment, or shall we go all the way back to labor-intensive farming methods?
>The egos here are sometimes quite a thing to see. Maybe it's good that the chop is coming for this privileged group too; a bit of humility never hurts.
This doesn't appear to be so. AI is invoked as a pretext for layoffs: more fashion than function.
Which artists have lost their jobs?
But I am willing to grant you that. From a big picture society perspective, if it means that ordinary people like me who cannot afford to pay an artist can now create art sufficiently good for my needs, then this is a net win. I just made an AI song a week ago that got mildly popular, and just got a request to use it at a conference. No one is losing their job because of me. I wouldn't have had the money to pay an artist to create it, and nor would the conference organizers. Yet, society is clearly benefiting.
The same goes for translators (I'm not actually aware that they're losing jobs in a significant way, but I'll accept the premise). Even before LLMs, the fact that I could use Babelfish to translate was fantastic - LLMs are merely an incremental improvement over it.
To me, arguing we shouldn't have AI translators is not really different from arguing we shouldn't have Babelfish/Google Translate. Likely 99% of the people who will benefit from it couldn't afford a professional translator.
(I have, BTW, used a professional translator to get some document translated - his work isn't going away, because organizations need a certified translator).
> People at various bureaucratic positions doing more menial white collar work?
"Menial white collar work" sounds like a good thing to eliminate. Do you want to go back to the days where word processors were not a thing and you had to pay someone to type things up?
> People had/have decent livelihoods from those jobs; I know a few. If they could easily get better jobs, they would go for them.
I'll admit I spoke somewhat insensitively - yes, even I know people who had good careers with some of them, but again: Look to the past and think of how many technologies have replaced people, and do you wish those technologies did not replace people?
Do you want to deal with switchboard operators every time you make a call?
Do you want to have to deal with a stock broker every time you want to buy/sell?
Do you want to pay a professional every time you want to print a simple thing?
Do you want to go back to snail mail?
Do you want to do all your shopping in person or via a physical catalog?
The list goes on. All of these involved replacing jobs where people earned honest money.
Everything I've listed above has been a bigger disruption than LLMs (so far - things may change in a few years).
> The egos here are sometimes quite a thing to see. Maybe it's good that the chop is coming for this privileged group too; a bit of humility never hurts.
Actually, I would expect the SW industry to be among the most impacted, given a recent report showing which industries actually use LLMs the most (I think usage in SW was greater than in all other industries combined).
As both an engineer and a programmer, who makes a living via programming, I am not opposed to LLMs, even if my job is at risk. And no, I'm not sitting on a pile of $$$ that I can retire on any time soon.
I would like to make one small point about job replacement: the better outcomes for society are arguably inconclusive at this point. You've been indoctrinated to think that all progress and disruption is good because of capitalism.
We're still in the post-industrialization arc of history and we're on a course of overconsumption and ecological destruction.
Yes, we've seen QoL improvements over the course of recent generations. Do you really think it's sustainable?
When a factory decides to shut down, and the company offers to pay for 2 years of vocational training for any employee that wants it, is it hypocrisy? One of my physical therapists, who took such an offer, definitely doesn't see it that way. The entity responsible for her losing her job actually ended up setting up a whole new career for her.
> I would like to make one small point about job replacement, the better outcomes for society are arguably inconclusive at this point. You've been indoctrinated to think that all progress and disruption is good because of capitalism.
That's overstating my stance. I can accept that it's too early to say whether LLMs have been a net positive (or will be a net positive), but my inclination is strongly in that direction. For me, it definitely has been a net positive. Because of health issues, LLMs allow me to do things I simply couldn't do before.
> Yes, we've seen QoL improvements over the course of recent generations. Do you really think it's sustainable?
This is an age old question and nothing new with LLMs. We've been arguing it since the dawn of the Industrial Revolution (and for some, since the dawn of farming). What I do know is that it resulted in a lot of great things for society (e.g. medicine), and I don't have much faith that we would have achieved them otherwise.
So lay people off to reduce costs, say that they have been replaced by AI now, and the stockholders love you even more!
Indeed, a model that should cascade through American businesses quickly.
I cannot edit my original comment, so I'll address this here:
Yes, I admit some legitimate jobs may have been lost (and if not yet, likely will be). When I spoke of BS jobs, I was referring to things like people being paid to ghostwrite rich college students' essays. That's really the only significant market I know to have been impacted. And good riddance.
This time the purported capabilities of "AI" are a direct attack on thinking. Outsourcing thinking is creepy and turns humans into biorobots. It is different from robotic welding in an assembly line.
Even if new bullshit jobs are created, the work will just be that of a human photocopier.
[All this is written under the assumption that "AI" works, which it does not but which is the premise assumed by the persons quoted in the Register article.]
I also fail to see how LLMs can turn humans into "biorobots". You can still do all the things you could do before LLMs came along. The economic value of those things has just decreased enormously.
They have to do this because the industry has basically been kicking the aging-workforce can down the road for a few decades since off-shoring and automation outpaced increasing demand, and now they don’t have nearly enough people that even know how to program CNC machines when CAM software falls short.
I have a feeling a lot of displaced software people will go that route, and have a big change in compensation and working conditions in the process.
I've watched my cousin weld on a horse trailer overhead in 105F Texas heat, would be interesting to see the typical SWE step away from an Xbox and do that.
I’ve seen devs say they’d pick up a trade like being a plumber or electrician because their master-electrician cousin gets paid a ton of money, probably, they imagine, for wiring up new residential buildings and changing out light sockets… how long did it take that cousin to get there? In any trade, there are quite a number of years of low pay, manual labor, and cramming into tight spaces in hot attics or through bug-infested crawl spaces, factory basements, etc., that most apprentices complete in their early twenties. Nobody gives a shit what you did as a developer, and nobody gives a shit how good you are at googling things in most blue-collar work environments. Getting experienced enough to have your own business making good money, in a job where you need many thousands of work hours just to take a test to get licensed, isn’t a lateral move from being a JS-toolchain whiz. Even in less structured jobs, like working as a bartender, it takes years of barbacking, serving, or bartending in the least desirable jobs (events, corporate spaces) before you get something you can pay rent with.
The fact that you like programming more than welding is nice to know, but there are probably also a lot of people who like welding more than programming.
All the good devs I know aren't worried about losing their jobs; they're happy there is a shortcut through boilerplate and documentation. They are also equally unhappy about having to talk management, who know very little about the world of dev, off the ledge as they get ready to jump off with AI wings that will fail.
Finally, the original point was about censorship and controlling of information, not automating jobs.
While many of them are mistaken, the much bigger problem is for all the early-career developers, many of whom will never work in the field. These people were assured by everyone from professors to industry leaders to tech writers that the bounty of problems available for humanity to solve would outpace the rate at which automation reduced the demand for developers. I thought it was pretty obviously a fairy tale, created by people who believed in infinite growth to soothe themselves and other industry denizens who suspected the tech industry hadn't unlocked the secret to the infinite free lunch, and who are in reality closer to the business end of an ouroboros than they realize.
Just as the manufacturing sector let its Tool and Die knowledge atrophy, perhaps irreversibly, the software business will do the same with development. Off-shoring meant the sector had a glut of tool and die knowledge so there was no immediate financial incentive to hire apprentices. There’s a bunch of near-retirees with all of that knowledge and nobody to take over for them, and now that advanced manufacturing is picking up steam in the US again, many have no choice but to outsource that to China, too.
Dispensing with the pretenses of being computer scientists or engineers, software development is a trade, not an academic discipline, and education can’t instill professional competence. After a decade or two of never having to hire a junior because the existing pool of developers can serve all of the industry’s needs, suddenly we’ll have run out of people to replace the retirees with and that’s that for the incredible US software industry.
'Good' is doing heavy lifting here. E.g., AI/automation could plausibly eliminate 90% of IT jobs and cause all kinds of socio-economic issues in society, all while good developers remain in great demand.
What rational worker would want to take part in this?
The argument usually centers around the fact that LLMs aren't AGI, which is obviously true but also kinda missing the point
It's not like there is an organic, bottom-up movement driving this usage. It's always top-down, mandated by executives with little regard for how it impacts workers' lives.
We've also seen how these tools have made certain jobs worse, not better, like translating:
https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-i...
I don't think LLMs will be able to pick up on what's being done in an evolving and growing codebase, with previous projects also included. More likely they will draw from older stuff, combine it with other people's more conventional stuff, and end up with an incoherent mess that won't compile. Not all work is 'come up with the correct answer to the problem, and then everybody uses it forever'.
We already see it happening in the US too, with the Nashville data centers causing immense medical issues.
It's more about a logical outcome. Automating scripts means existing employees can do other or more work.
AI doesn't feel like that at all. It wants to automate labor itself. And no country has the structure ready for that sort of "post-work" world.
As a software developer, when I automate someone's job, say of a cashier, I do not start to get paid their salary - my salary stays the same.
This is different for capital investors and shareholders. They keep the cashier's salary (not directly but ultimately). This results in an increasing concentration of wealth, creates lots of suffering and destabilises the world. That is where it is unethical.
71 more comments available on Hacker News