Greatest Irony of the AI Age: Humans Hired to Clean AI Slop
Key topics
The article discusses how humans are being hired to clean up the output of AI systems, highlighting the irony that AI is not replacing human labor as expected. The discussion revolves around the implications of AI for the labor market and the quality of AI-generated content.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 1h after posting
- Peak period: 104 comments (0-12h)
- Avg / period: 23.4 comments
Based on 117 loaded comments
Key moments
- Story posted: Sep 24, 2025 at 12:15 AM EDT (3 months ago)
- First comment: Sep 24, 2025 at 1:24 AM EDT (1h after posting)
- Peak activity: 104 comments in 0-12h (hottest window of the conversation)
- Latest activity: Sep 30, 2025 at 10:23 AM EDT (3 months ago)
Read the primary article or dive into the live Hacker News thread when you're ready.
I assume you missed the whole part where 75% of people use LLMs for porn, AI bf/gf roleplaying, spam, ads, scams, &c.
It's like YouTube: in theory, unlimited free educational content to raise the bar worldwide; in practice, three quarters of the videos are braindead garbage, and probably 99% of views are concentrated on them.
In about two years we've gone from "AI just generates rubbish where the text should be" to "AI spells things pretty wrong." This is largely down to generating a whole image with a textual element in one go. Using a model like SDXL with a tool like Fooocus to do inpainting on an input image that has a very rough approximation of the right text (added via MS Paint), you can get a pretty much perfect result. Give it another couple of years and the text generation will be spot on.
So yes, right now we need a human to either use the AI well, or to fix it afterwards. That's how technology always goes - something is invented, it's not perfect, humans need to fix the outputs, but eventually the human input diminishes to nothing.
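A minimal sketch of the MS Paint plus inpainting workflow described above, using the diffusers library rather than the Fooocus UI; the model id, file names, strength value, and prompt are illustrative assumptions rather than anything taken from the thread:

    # Sketch: fix garbled text in a generated image by inpainting over it.
    # Assumes you already painted a rough approximation of the desired text onto
    # the image (e.g. in MS Paint) and drew a white mask over that region.
    import torch
    from diffusers import AutoPipelineForInpainting
    from diffusers.utils import load_image

    pipe = AutoPipelineForInpainting.from_pretrained(
        "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",  # illustrative model id
        torch_dtype=torch.float16,
    ).to("cuda")

    image = load_image("poster_with_rough_text.png")  # hypothetical input image
    mask = load_image("mask_over_text_region.png")    # white where text should be redrawn

    result = pipe(
        prompt='a storefront sign that reads "OPEN 24 HOURS", clean typography',
        image=image,
        mask_image=mask,
        strength=0.6,  # keep the rough text as guidance instead of repainting from scratch
        num_inference_steps=25,
    ).images[0]
    result.save("poster_fixed_text.png")

The low-ish strength is the point of the trick: the crude hand-placed text constrains the model, so the letters come out right instead of being hallucinated.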
This is not how AI has ever gone. Every approach so far has either been a total dead end, or the underlying concept got pivoted into a simplified, not-AI tech.
This new approach of machine learning content generation will either keep developing, or it will join everything else in the history of AI by hitting a point of diminishing to zero returns.
I agree we probably won't magically scale current techniques to AGI, but I also think the local maxima for creative output is going to be high enough that it changes how we approach it the way computers changed how we approach knowledge work.
That's why I focus on it at least.
You're talking about the progress of technology. I'm talking about how humans use technology in its early stages. They're not mutually exclusive.
And most SOTA models (Imagen, Qwen 20b, etc.) at this point can actually already handle a fair amount of text in a single T2I generation. Flux Dev can do it as well, provided you're willing to roll a couple of gens.
[1] https://github.com/lllyasviel/Fooocus
AI (at least this form of AI) is not going to take our jobs away and leave us all idle and poor, just as the milling machine or the plough didn't take people's jobs away and make everyone poor. It will enable us to do even greater things.
The plough didn't make everyone poor, but people working in agriculture these days are a tiny percentage of the population compared to the majority 150 years ago.
(I don't think LLMs are like that, tho).
Touching on this topic, I cannot recommend enough "The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger" which (among other things) illustrates the story of dockworkers: there were entire towns dedicated to loading ships.
But the people employed in that area have declined by 90% in the last 60 years, while shipping has grown by orders of magnitude. New port cities arose, and old ones died. One needs to accept inevitable change sometimes.
[0] https://en.wikipedia.org/wiki/The_Box_(Levinson_book)
Sometimes demand scales, maybe food is less elastic. Programming has been automating itself with each new language and library for 70 years and here we are, so many software devs. Demand scaled up as a result of automation.
Just as gun powder enabled greater things. I agree with you just humans have shown, time after time, an ability to first use innovation to make lives miserable for their fellow humans.
In reality most devs can cleanup after themselves.
In my country, we also have a class (they have achieved the status of social class IMO) that expects to keep their jobs and get ever increasing privileges while not having to upgrade their competences or even having to learn anything new during all their life: our public workers.
Except I still doubt whether AI is the new Spinning Jenny. Because the quality is so bad, and because it can't replace humans in most things or even necessarily speed up production in a significant way, we might just be facing another IT bubble and financial meltdown, since the US seems to have put all of its eggs in one basket.
It doesn't do this.
This is highly dependent on which model is being used and what hardware it's running on. In particular, some older article claimed that the energy used to generate an image was equivalent to charging a mobile phone, but the actual energy required for a single image generation (SDXL, 25 steps) is about 35 seconds of running an 80W GPU.
Am I missing something? Does the CPU/GPU/APU doing this calculation on servers/PCs run the same wattage as mobile devices?
The proper unit is watt hours.
You should be using kWh.
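For scale, a quick conversion of the figure above into watt-hours; the 80 W and 35 s numbers come from the comment above, while the ~15 Wh phone battery is an assumed typical capacity:

    # Back-of-the-envelope: energy for one SDXL image vs. charging a phone.
    gpu_power_w = 80        # W, figure quoted above
    generation_time_s = 35  # s, 25-step SDXL generation
    phone_battery_wh = 15   # Wh, assumed typical smartphone battery

    image_energy_wh = gpu_power_w * generation_time_s / 3600
    print(f"One image: {image_energy_wh:.2f} Wh")  # ~0.78 Wh
    print(f"Fraction of a phone charge: {image_energy_wh / phone_battery_wh:.1%}")  # ~5%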
I’m not sure it will be as high as a full charge of a phone, but it’s incomplete without the resources needed for collecting data and training the model.
If you compare datacenter energy usage to the rest, it amounts to 5%. Making great economies on LLMs won't save the planet.
This can't be correct, I'd like to see how this was measured.
Running a GPU at full throttle for one hour uses less power than serving data for one hour?
I'm very sceptical.
[1] https://www.iea.org/commentaries/the-carbon-footprint-of-str...
[2] https://epoch.ai/gradient-updates/how-much-energy-does-chatg...
The Netflix consumption takes into account everything[1], the numbers for AI are only the GPU power consumption, not including the user's phone/laptop.
IOW, you are comparing the power cost of using a datacenter + global network + 55" TV to the cost of a single 1shot query (i.e. a tiny prompt) on the GPU only
Once again, I am going to say that the power cost of serving up a stored chunk of data is going to be less than the power cost of first running a GPU and then serving up that chunk.
==================
[1] Which (in addition to the consumption by netflix data centers) includes the network equipment in between, the computer/TV on the user's end. Consider that the user is watching netflix on a TV (min 100w, but more for a 60" large screen).
The data center + network usage will be the main cost factor for streaming. For an LLM, you are not sending or receiving nearly as much data, so while I wouldn't know the numbers, it should be nominal.
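A rough sense of why the network side should indeed be nominal for an LLM; the ~3 GB/hour figure for HD streaming is a commonly cited estimate, and the chat-exchange size and frequency are deliberately generous assumptions:

    # Rough comparison of data moved over the network per hour of use.
    streaming_gb_per_hour = 3.0  # ~3 GB/h for HD video, commonly cited estimate
    chat_exchange_kb = 50        # assumed generous size for one prompt + response
    exchanges_per_hour = 30      # assumed heavy chat usage

    llm_gb_per_hour = chat_exchange_kb * exchanges_per_hour / 1e6
    print(f"Streaming: {streaming_gb_per_hour} GB/h, LLM chat: {llm_gb_per_hour:.4f} GB/h")
    # Streaming moves on the order of a thousand times more data than chatting with an LLM.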
We're not talking about a human occasionally chatting with ChatGPT, that's not who the article and earlier comments are about.
People creating this sort of AI slop are running agents that provide huge contexts and apply multiple layers of brute-force, like "reasoning" and dozens/hundreds of iterations until the desired output is achieved. They end up using hundreds (or even thousands) of dollars worth of inference per month on their $200 plans, currently sponsored by the AI bubble.
And just how many people manage to 1shot the image?
There are maybe 5 to 20 images generated before the user is happy.
Compared to what?
> How many people leave the lights on at home?
What does that have to do with this?
Their napkin math went like, human artists take $50 or so per art, which is let's say $200/hr skill, which means each art cannot take longer than 10 minutes, therefore the values used for AI must add up to less than 10 workstation minutes, or something like that.
And that math is equally broken on both sides: SDXL users easily spend hours rolling the dice a hundred times without a usable image, and likewise, artists can easily spend a day or two on an interesting request that may or may not come with free chocolates.
So those estimates are not only biased, but basically entirely useless.
They'll hire those people back at half their total compensation, with no stock and far fewer benefits, to clean up AI slop. And/or just contract it overseas at ~1/3 the former total cost.
Another ten years from now the AI systems will have improved drastically, reducing the slop factor. There's no scenario where it goes back to how it was, that era is over. And the cost will decline substantially versus the peak for US developers.
Because I think it won't just be a linear relationship. If you let 1 vibe coder replace a team of 10, you'll need a lot more than 10 people to clean it up and maintain it going forward when they hit the wall.
Personally I'm looking forward to the news stories about major companies collapsing under the weight of their LLM-induced tech debt.
Why does that fact stop being true when the code is created by AI?
When the big lawsuits hit, they'll roll back.
There are really two observations here: 1. AI hasn't commoditized skilled labor. 2. AI is diluting/degrading media culture.
For the first, I'm waiting for more data, e.g. from the BLS. For the second, I think a new category of media has emerged. It lands somewhere near chiptune and deep-fried memes.
The problem is, actually skilled labor - think of translators, designers, copywriters - still is obviously needed, but at an intermediate/senior level. These people won't be replaced for a few years to come, and thus won't show up in labor board statistics.
What is getting replaced (or rather, positions not refilled as the existing people move up the career ladder) is the bottom of the barrel: interns and juniors, because that level of workmanship can actually be done by AI in quite a few cases despite it also being skilled work. But this kind of replacement doesn't show up in any kind of statistics, maybe the number of open positions - but a change in that number can also credibly be attributed to economic uncertainty thanks to tariffs, the Russian invasion, people holding their money together and foregoing spending, yadda yadda.
Obviously this is going to completely wreck the entire media/creative economy in a few years: when the entry side of the career funnel has dried up "thanks" to AI... suddenly there will not be any interns that evolve into juniors, no juniors that evolve into intermediates, no intermediates that evolve into seniors... and all that will be left for many an ad/media agency are a bunch of ghouls in suits that last touched Photoshop one and a half decades ago and sales teams.
Part of learning is doing. You can read about fixing a car, but until you do it, you won't know how it's actually done. For most things, doing is what turns "reading a bunch of stuff" into "skill".
Yet how far will this go? I see a neuralink, or I see smartglasses, where people just ask "how do I do this" and follow along as some kind of monkey. Not even understanding anything, at all, about whatever they do.
Who will advance our capabilities? Or debug issues not yet seen? Certainly AI is nowhere near either of those. Taking existing data and correlating it and discovering new relationships in that data isn't an advancement of capability.
What? Are you arguing that anyone in the world who isn't themselves running empirical research is not advancing any capabilities?
Also, on a related note, there's absolutely nothing stopping AIs from designing, running and analyzing their own experiments. As just one example, I'll mention the impressive OpenDrop microfluidics device [0] and a recent Steve Mould video about it [1] - it allows a computer to precisely control the mixing of liquids in almost arbitrary ways.
[0] https://gaudishop.ch/index.php/product/opendrop-v4-digital-m...
[1] https://www.youtube.com/watch?v=rf-efIZI_Dg
... which requires these individuals to have skills surpassing AI, and as AI is only getting better... in the end, for large corporations the decision between AI and humans will always come down to price.
AI killing the ad industry sounds great and I fully support it.
The only difference between the two is in the delivery of the end product.
"Instead of treated, we get tricked"; as the old broadway show goes.
It's the hard knock life for [some].
But in the context of highly skilled work, I don't think anyone hires juniors or interns to actually do productive work. You typically hire juniors in the hope of retaining future intermediate talent.
This is even more true for construction workers and cooks. The actually, actually skilled, I suppose.
Also, an AI still can't come close to replacing either interns or juniors—but, I suppose we're just supposed to act like shouldering more work cleaning up after an AI that can't learn rather than hiring someone up to the task is progress.
> This has happened before in other industries. Lots of things that are now automated or no longer exist due to computerisation (e.g. manually processing cheques, fixing many types of bookkeeping errors) were part of the training of juniors and how they gained their first real world experience. There is still a funnel in those careers. A narrower one, but still sufficient.
Now the automation of that arbitrariness, trained not for specificity, definitely degrades that leading edge and puts specificity further out of reach, particularly when humans mistake the output as valid or specific.
It's an act of insanity made tech-normal.
I wouldn't hold your breath for accurate numbers, given the way Trump has treated that bureau since they gave a jobs report he didn't like.
LLMs are not AI. Machine learning is more useful. Perhaps they will evolve or perhaps they will prove a dead end.
LLMs are a particular application of machine learning, and as such LLMs both benefit by and contribute to general machine learning techniques.
I agree that LLMs are not the AI we all imagine, but the fact that it broke a huge milestone is a big deal - natural language used to be one of the metrics of AGI!
I believe it is only a matter of time until we get to multi-sensory, self-modifying large models which can both understand and learn from all five human senses, and maybe even some of the senses we have no access to.
What if we have chosen the wrong metric there?
But they do close a big gap - they're capable of "understanding" fuzzy ill-defined sentences and "infer" the context, insofar as they can help formalize it into a format parsable by another system.
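A minimal sketch of that "formalize fuzzy language into something another system can parse" pattern, using the OpenAI Python client purely as an example; the model name, schema keys, and prompt are illustrative assumptions:

    # Sketch: turn a fuzzy request into structured JSON another system can parse.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    fuzzy_request = "book me something cheap to Berlin next weekend, ideally not too early"

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": (
                "Extract the travel request into JSON with keys: "
                "destination, date_range, budget, time_of_day_preference."
            )},
            {"role": "user", "content": fuzzy_request},
        ],
    )
    print(json.loads(resp.choices[0].message.content))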
There's no reason to assume that models trained to predict a plausible next sequence of tokens wouldn't eventually develop "understanding" if it was the most efficient way to predict them.
But that's it. Nothing here has justified the huge amounts of money that are still being invested. It's nowhere near as useful as mainframe computing or as attractive as mobile phones.
What changed is how we measure progress. This is common in the tech world - sometimes your KPIs become their own goal, and you must design new KPIs.
Obviously NLP was not a good enough predictor of progress towards AGI and we must find a better metric.
What does AI do, at its heart? It is literally trained to make things that can pass for what's ordinary. What's the best way to do that, normally? Make a bland thing that is straight down the middle of the road. Boring music, boring pictures, boring writing.
Now there are still some issues with common sense, due to the models lacking certain qualities that I'm sure experts are working on. Things like people with 8 fingers, lack of a model of physics, and so on. But we're already at a place where you could easily not spot a fake, especially while not paying attention.
So where does that leave us? AI is great at producing scaffolding. Lorem Ipsum, but for everything.
Humans come in to add a bit of agency. You have to take some risk when you're producing something, decisions have to be made. Taste, one might call it. And someone needs to be responsible for the decisions. Part of that is cleaning up obvious errors, but also part of it is customizing the skeleton so that it does what you want.
(from Ocean's Eleven)
I’ve only seen the movie when it came out, so I didn’t remember this scene and thought you might’ve been doing the ellipsis yourself. So I checked it out. For anyone else curious, the character was interrupted.
https://www.imdb.com/title/tt0240772/characters/nm0000093?it...
(Of course, you have to actually be using these signals, and not just cargo culting throwing LLM outputs everywhere.)
Conclusion: we are not in the age of AI.
We still call it the "industrial revolution".
My jury is still out as to whether the current models are proto-AI. Obviously an incredible innovation. I'm just not certain they have the potential to go the whole way.
/layman disclaimer
Perhaps at some point we will see a self-propelling technological singularity with the AI developing its own successor autonomously, but that's clearly not the current situation.
Dunno but I see plenty of people making exactly this claim every day, even on this site
Because that's what we've been promised, not once but many times by many different companies.
So sure, there's a marginal improvement like refactoring tools that do a lot of otherwise manual labor.
In the AI case, you’re not making the same thing over and over, so it’s more difficult to spot problems and when they happen you have to manually find and fix them, likely throwing everything away and starting from scratch. So in the end all the time and effort put into the machine was wasted and you would’ve been better going with the artisan (which you still need) in the first place.
I can understand how you might have that misunderstanding, but just think for a moment about what kinds of minor changes can result in catastrophic failures.
Producing physical objects to spec and doing quality assurance for that spec is way harder than you think.
Some errors are easy to spot for sure, but that's literally the same for AI generated slop
You are performing manual validation of outputs multiple times before manufacturing runs, and performing manual checks every 0.5-2 hours throughout the run. QA then performs their own checks every two hours, including validation that line operators have been performing their checks as required. This is in addition to line staff who have their eyes on the product to catch obvious issues as they process them.
Any defect that is found marks all product palleted since the last successful check as suspect. Suspect product is then subjected to distributed sampling to gauge the potential scope of the defect. If the defect appears to be present in that palleted product AND distributed throughout, it all gets marked for rework.
This is all done when making a single SKU.
In the case of AI, let's say AI programming, not only are we not performing this level of oversight and validation on that output, but the output isn't even the same SKU! It's making a new one-of-a-kind SKU every time, without the pre and post quality checks common in manufacturing.
AI proponents follow a methodology of not checking at all (i.e. spec-driven development) or only sampling every tenth, twentieth, or hundredth SKU rolling off the analogous assembly line.
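For contrast, even the weaker "sample every Nth unit" discipline from the manufacturing analogy would look roughly like this if applied to AI-generated changes; the sampling interval and function names are hypothetical:

    # Sketch: manufacturing-style sampling QA applied to a stream of AI-generated changes.
    # A failed sample marks everything produced since the last passing check as suspect.
    SAMPLE_EVERY = 10  # assumed policy: a human reviews every 10th generated change

    def sampling_qa(changes, human_review):
        """Yield the changes that must go back for rework; everything else is accepted."""
        suspect_batch = []
        for i, change in enumerate(changes, start=1):
            suspect_batch.append(change)
            if i % SAMPLE_EVERY == 0:
                if human_review(change):
                    suspect_batch.clear()     # a passing check clears the batch
                else:
                    yield from suspect_batch  # the whole batch is suspect, rework it
                    suspect_batch.clear()
        yield from suspect_batch              # the unchecked tail stays suspect

Most AI-assisted workflows today don't even do this much: every one-of-a-kind "SKU" ships unchecked unless someone happens to look.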
But people won't care until a major correction happens. My guess is that we'll see a string of AI-enabled script kiddies piecing together massive hacks that leak embarrassing or incriminating information (think Celebgate-scale incidents). The attack surface is just so massive - there's never been a better time to be a hacker.
Either way, the point would stand. You wouldn’t have that factory issue then say “alright boys, dismantle everything, we need to get an artisan to rebuild every single item by hand”.
But according to this Indian service provider's website, the workers (Indians?) are hired to "clean up", not "fish out", the "faulty items".
Imagine a factory where the majority of items produced are faulty and are easily "fished out". But instead of discarding them [1], workers have to fix each one.
[1] The energy costs of production are substantial.
... about when Tesla delivers full self driving.
I like the idea that mediocre sci-fi show Upload came up with: maybe they can get self-driving to the point where it doesn't require a human in the loop and a squirrel will work.
You mean, full self driving only on pre-mapped roads, then?
I doubt even that will happen, but it's just a subset anyway.
GP specifically said Tesla delivering FSD.
https://en.wikipedia.org/wiki/Sales_Pitch_(short_story)
I got shudders just re-reading it when I came across:
'“It’s too late to vid your wife,” the fasrad said. “There are three emergency-rockets in the stern; if you want, I’ll fire them off in the hope of attracting a passing military transport.”'
...which sounds exactly like ChatGPT5.
In the history of AI systems you basically had people inputting Prolog rules into "smart" systems, or programmers hardcoding rules in programs like ELIZA or the General Problem Solver.
Stopped reading there. The author is very biased and out of touch.
There are a lot of LLM byproducts that leave a bad taste (I hate all of the LLM slop on the internet), but I don't think this is one of them.