Youtube Made AI Enhancements to Videos Without Warning or Permission
Posted 4 months ago · Active 4 months ago
bbc.com · Tech · Story · High profile
Heated · Negative
Debate: 80/100
Key topics
Youtube
AI
Video Processing
User Consent
YouTube is using AI to enhance videos without user consent, sparking concerns about the impact on video quality and the potential for AI-generated content to be indistinguishable from real videos.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2h after posting
Peak period: 79 comments (24-36h)
Avg per period: 22.9
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
- 01 Story posted
Aug 24, 2025 at 6:37 AM EDT
4 months ago
- 02 First comment
Aug 24, 2025 at 8:39 AM EDT
2h after posting
- 03 Peak activity
79 comments in 24-36h
Hottest window of the conversation
- 04 Latest activity
Aug 28, 2025 at 5:58 PM EDT
4 months ago
ID: 45003073 · Type: story · Last synced: 11/20/2025, 8:23:06 PM
This is especially bad in animation, where the art gets visibly distorted.
And a new generation that has been trained on constantly enabled face filters and 'AI'-upscaled slop is already here.
So, to make edible stuff out of shit.
Maybe Google has done the math and realized it's cheaper to upscale in realtime than store videos at high resolution forever. Wouldn't surprise me considering the number of shorts is probably growing exponentially.
Also shorts seem to be increasing exponentially... but Youtube viewership is not. So compute wouldn't need to increase as fast as storage.
I obviously don't know the numbers. Just saying that it could be a good reason why Youtube is doing this AI upscaling. I really don't see why otherwise. There's no improvement in image quality, quite the contrary.
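Rough, purely illustrative arithmetic for that storage-vs-compute trade-off (every number below is a made-up placeholder, not anything Google has published); the break-even point depends almost entirely on how many views a video actually gets:

```python
# Back-of-envelope sketch of the storage-vs-compute trade-off described above.
# Every number here is a hypothetical placeholder, not a real YouTube figure.

SECONDS = 60                       # length of a typical Short
HIGH_RES_BYTES_PER_SEC = 8e6 / 8   # ~8 Mbps high-res original
LOW_RES_BYTES_PER_SEC = 1.5e6 / 8  # ~1.5 Mbps low-res copy kept instead
STORAGE_PER_GB_MONTH = 0.01        # $ per GB-month (hypothetical)
UPSCALE_PER_VIEW = 0.00002         # $ of GPU time per upscaled view (hypothetical)
MONTHS = 120                       # keep the video around for ten years

def storage_cost(bytes_per_sec: float) -> float:
    gigabytes = bytes_per_sec * SECONDS / 1e9
    return gigabytes * STORAGE_PER_GB_MONTH * MONTHS

def keep_high_res() -> float:
    return storage_cost(HIGH_RES_BYTES_PER_SEC)

def keep_low_res_and_upscale(views: int) -> float:
    return storage_cost(LOW_RES_BYTES_PER_SEC) + views * UPSCALE_PER_VIEW

for views in (10, 1_000, 100_000):
    print(f"{views:>7} views: store high-res ${keep_high_res():.4f}, "
          f"store low-res + upscale ${keep_low_res_and_upscale(views):.4f}")
```

With numbers like these, a long-tail Short with a handful of views is cheaper to keep small and upscale on demand, while a popular one is cheaper to store at full quality, which is the shape of the argument above.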
It's 100% a push to remove human creators from the equation entirely.
For now it's a kind of autoencoding, regenerating the same input video with minimal changes. They will refine the pipeline until the end video is indistinguishable from the original. Then, once that is perfected, they will offer famous content creators the chance to sell their "image" to other creators, so less popular underpaid creators can record videos and change their appearance to those of famous ones, making each content creator a brand to be sold. Eventually humans will get out of the pipeline and everything will be autogenerated, of course.
I'm frightened by how realistic this sounds.
1. See that AI upscaling works kinda well on certain illustrations.
2. Start a project to see if you can do the same with video.
3. Develop 15 different quality metrics, trying to capture what it means when "it looks a bit fake"
4. Project's results aren't very good, but it's embarrassing to admit failure.
5. Choose a metric which went up, declare victory, put it live in production.
My despondent brain auto-translated that to: "My livelihood depends on Youtube"
[1] https://qntm.org/perso
But serious discussion demands the truth: It is fiction, in the style of a twitter thread.
That's why I think it's funny that they claim they will now be "using AI" to determine if someone is an adult and able to watch certain youtube videos. Google already knows how old you are. It doesn't need a new technique to figure out that you're 11 years old or 39 years old. They're literally just pretending to not know this information.
Lots of very hateful, negative content too. It didn’t take me long to find the video “why this new artist sucks.” Another find, what I assume is an overblown small quibble turned into clickbait videos, was “this record label is trying to SILENCE me.” Maybe, somehow, these two things are related.
That's about AI, not very polarizing at the level it's currently at.
> Another find, what I assume is an overblown small quibble turned into clickbait videos, was “this record label is trying to SILENCE me.”
That might be overblown, but it doesn't sound polarizing at all. OP was saying he always has the most polarizing opinions.
If that last one is the vid I'm thinking of, the same record company has sent him hundreds of copyright strikes and he has to have a lawyer constantly fighting them for fair use. He does some stuff verging on listen-along reaction videos, but the strikes he talks about there are when he is interviewing the artists who made the songs and they play short snippets of them for reference while talking about the history of making them, the thought process behind the songwriting, etc.
I think it's not just automated Content ID stuff where it claims the monetization, but the same firm for that label going after him over and over, where three strikes removes his channel. The title or thumbnail might be overblown; probably the firm just earns a commission, and he's dealing with a corporate machine that is scattershotting claims against big videos with lots of views that contain any of their sound rather than targeting him to silence something they don't want to get out. But I don't think the video was very polarizing.
If you're referring to his video I'm Sorry...This New Artist Completely Sucks[1], then it's a video about a fully AI generated "artist" he made using various AI tools.
So it's not hateful against anyone. Though the title is a bit clickbait-y, I'll give you that.
[1]: https://www.youtube.com/watch?v=eKxNGFjyRv0
It's almost as if there's a mindless robot submitting the claims to YouTube. Perish the thought! (-:
Touching up videos is bad but it is hardly material to break out the pitchforks compared to some of the political manoeuvres YouTube has been involved in.
Say what you want about Microsoft, but if I have a problem with something I've pretty much always ended up getting support for that problem. I think Google's lack of response adds to their "mystique".
But it also creates superstitions since creators don't really understand the firm rules to follow.
Regardless, it is one of the most dystopian things about modern society - the lack of accountability for their decisions.
It's worth stating, though, that the vast majority of YouTube's problems are the fault of copyright law and massive media publishers. Google couldn't care less if you wanted to upload full camrips of 2025's biggest blockbusters, but the powers-that-be demand Google be able to take them down immediately. This is why 15 seconds of a song playing in the background gets your video demonetized.
Even if you produce interesting videos, you still must MB to get the likes, to stay relevant to the algorithm, to capture a bigger share of the limited resource that is human attention.
The creators are fighting each other for land, our eyeballs are the crops, meanwhile the landlord takes most of the profits.
As a viewer I certainly hate that crap and wish Google didn't intentionally make it this way.
It's most glaringly obvious in TV shows. Scenes from The Big Bang Theory look like someone clumsily tried to paint over them with oil paint. It's as if the actors are wearing an inch-thick layer of poorly applied makeup.
It's far less glaring in Rick Beato's videos, but it's there if you pay attention. Jill Bearup wanted to see how bad it could get and reuploaded the "enhanced" videos a hundred times over until it became a horrifying mess of artifacts.
The question remains why YouTube would do this, and the only answers I can come up with are "because they can" and "they want to brainwash us into accepting uncanny valley AI slop as real".
This might be the uploaders' doing, to avoid copyright strikes.
It's true though that aggressive denoising gives things an artificially generated look since both processes use denoising heavily.
Perhaps this was done to optimize video encoding, since the less noise/surface detail there is the easier it is to compress.
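A toy sketch of that last point: noise is essentially incompressible, so stripping it shrinks the encoded size. This uses zlib on a synthetic grayscale frame rather than a real video codec, and the box blur is only a stand-in for a real denoiser:

```python
# Toy illustration: noise is incompressible, so denoising shrinks the encoded size.
# zlib on a synthetic frame is a crude proxy; real video codecs behave analogously
# but are far more sophisticated.
import zlib
import numpy as np

rng = np.random.default_rng(0)

# A smooth synthetic frame (a gradient), and the same frame with sensor-like noise.
clean = np.tile(np.linspace(0, 255, 640, dtype=np.uint8), (360, 1))
noisy = np.clip(clean.astype(int) + rng.normal(0, 10, clean.shape), 0, 255).astype(np.uint8)

def box_blur(img):
    # Crude 3x3 box-blur "denoiser"; real pipelines use much better filters.
    padded = np.pad(img.astype(float), 1, mode="edge")
    acc = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3))
    return (acc / 9).astype(np.uint8)

denoised = box_blur(noisy)

for name, frame in [("clean", clean), ("noisy", noisy), ("denoised", denoised)]:
    print(name, len(zlib.compress(frame.tobytes(), level=6)), "bytes")
```

The noisy frame compresses noticeably worse than either the clean or the denoised one, which is the whole incentive for denoising before encoding.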
The controversy is that YouTube is making strange changes to the videos of users, that make the videos look fake.
YouTube creators put hours upon hours into writing, shooting and editing their videos. And those that do it full time often depend on YouTube and their audience for income.
If YouTube messes up the videos of creators and makes the videos look like they are fake, of course the creators are gonna be upset!
Article 8 rights can be restricted for public safety, the prevention of disorder or crime, and the protection of the rights of other people, but also for the protection of health and morals.
Given the problems with attention spans in systems like TikTok and shorts, they definitely could ban it even given article 8.
Sorry to burst your bubble.
The Venn diagram of AI voice users and good content creators is pretty close to two separate circles. I don't really care about the minority in the intersection.
As a French-speaking person, I now find myself seeing French YouTubers seemingly posting videos with English titles and robotic voices, before realizing that it's YouTube being stupid again.
What's more infuriating is that it's legitimately at heart a cool feature, just executed in the most brain-dead way possible, by making it opt-out and without the ability to specify known languages.
I mostly don't watch them. But they literally spam every single search. (While we're at it, Youtube also isn't very good at honoring keywords in searches either)
- auto-dubbing
- auto-translation
- shorts (they're fine in a separate space, just not in the timeline)
- member only streams (if I'm not a member, which is 100% of them)
The only viable interface for that is the web and plenty of browser extensions.
There are ways to get this same experience on Android. Use https://github.com/ReVanced/ and make your phone work for you instead of working for someone else.
Also, if you have an Android TV, I'd suggest SmartTube, it's way better than the original app and it has the same benefits of ReVanced: https://github.com/yuliskov/SmartTube
No they're not. Nothing that mandates vertical video has ever been fine nor ever will be. Tiktok, Reels, Shorts, all bad and should be destroyed.
Unless the action is primarily vertical, which is rarely ever the case, it's always been and always will be wrong.
Yes I will die on this hill. Videos that are worse to watch on everything but a phone and have bad framing for most content are objectively bad.
There is nothing wrong with the concept of short videos of course, but this "built for phones, sucks for everything else" trash needs to go away.
Vertical videos, if they're focused on a human, work fine for the same reason.
I suspect that a still image is also different from video because, without motion, there's no feeling that the person might move a few inches to one side and go out of frame.
If so, it's really just another kind of lossy compression. No different in principle from encoding a video to the AV1 format.
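For comparison, this is what a plain, conventional lossy re-encode looks like: an AV1 pass driven through ffmpeg. File names are placeholders, it assumes an ffmpeg build with libaom-av1, and nothing here is claimed to resemble YouTube's actual pipeline.

```python
# Conventional lossy re-encode, for comparison: a plain AV1 pass via ffmpeg.
# Assumes ffmpeg is installed with libaom-av1 support; file names are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mp4",        # original upload (placeholder name)
        "-c:v", "libaom-av1",     # AV1 encoder
        "-crf", "32",             # constant-quality mode; higher = more loss
        "-b:v", "0",              # let CRF drive the bitrate
        "-c:a", "copy",           # leave the audio untouched
        "output_av1.mkv",
    ],
    check=True,
)
```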
I haven't noticed it outside copyrighted material, so it's probably intentional.
---
By the way, this reminds me also of another stupid Google thing related to languages:
Say your Chrome is set to English. When encountering a page in another language, Chrome will (since a decade ago or so) helpfully offer to auto-translate it by default. When you click the "Never translate <language>" button, it adds that language to the list that is sent out with every HTTP request the browser makes via the `Accept-Language` header (it's not obvious this happens unless you're the kind of person who lives in DevTools and inspects outgoing traffic).
Fast-forward N years, and the Chrome privacy team realizes this increases the fingerprinting surface, making every user more unique, so they propose this: "Reduce fingerprinting in Accept-Language header information" (https://chromestatus.com/feature/5188040623390720)
So basically they compensate for one "feature" with another, instead of not doing the first thing in the first place.
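If you want to see what your browser actually sends without digging through DevTools, a minimal local endpoint that echoes the header back is enough (a sketch; the port and wording are arbitrary):

```python
# Minimal way to see what Accept-Language your browser sends:
# run this, open http://localhost:8000, and watch the console.
from http.server import BaseHTTPRequestHandler, HTTPServer

class ShowAcceptLanguage(BaseHTTPRequestHandler):
    def do_GET(self):
        value = self.headers.get("Accept-Language", "<not sent>")
        print("Accept-Language:", value)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(f"Your Accept-Language: {value}\n".encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), ShowAcceptLanguage).serve_forever()
```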
Sometimes it feels like Google keeps anyone with any kind of executive power hermetically sealed in some house borrowed from a reality TV show, where they're not allowed any contact with the outside world.
Sometimes it is better when some things are left out. /s
Were you asleep for the last 10 years? /s They have names for it: accessibility, User eXperience. Or, as some other people put it: enshittification.
Who in his right mind thought this was a good idea??
I have a Firefox extension which tries to suppress the translations, but it only works for the main view, not for videos in the sidebar. It's better than nothing.
Says everything. Hey PM at YouTube: How about you think stuff through before even starting to waste time on stuff like this?
What makes you think they don't think it through? This effect is an experiment that they are running. It seems to be useless, unwanted from our perspective, but what if they find that it increases engagement?
I'm basing it on the many stupid decisions YouTube has made over the years, the latest being the horrendous auto-translation of titles/descriptions/audio that can't be turned off. It can only be explained by morons making decisions, people who can't imagine that anyone could speak more than one language.
I don't think it's stupidity, or shortsightedness, or ignorance, or anything like that. They just have different priorities. And they are not having enough negative feedback to reconsider these decisions.
There is no practical difference between an idiot on the internet and a smart troll role-playing an idiot 100% of the time.
Since youtube has been making a lot of objectively stupid decisions, it does not matter if they are actually stupid or this is some kind of meta commentary on the power elites running their way and ordinary folks not able to do anything about it. It's all the same in practice.
As long as YouTube continues to be the Jupiter sized gorilla in the room, they're not going to care very much about what the plebes think.
The level of post-processing matters. There is a difference between color grading an image and removing wrinkles from a face.
The line is not cut clear but these companies are pushing the boundaries so we get used to fake imagery. That is not good.
You're implying the latter doesn't happen normally but denoising (which basically every smartphone camera does) often has the effect of removing details like wrinkles. The effect is especially pronounced in low light settings, where noise is the highest.
I do not get the argument of "if nothing happens when an ant bites you, then I can shoot you with a cannon because it is the same thing, just larger". The impact on society matters to me, and it is very different. Justifying AI at any cost is a marketing strategy that will hurt us long-term.
This argument fails because, at least in the examples provided, it's closer to non-generative-AI denoising/upscaling algorithms than to telling ChatGPT to upscale an image.
>The impact on society matters for me, and it is very different. Justifying AI at any cost is a marketing strategy that will hurt us long-term.
There's no indication it's AI, except some vague references to "machine learning".
Maybe you’re thinking of TikTok and samsung facial smoothing filters? Those are a lot more subtle and can be turned off.
Just a couple of days ago I got an ad with Ned Flanders singing about the causes of erectile dysfunction (!), a huge cocktail of copyright infringement, dangerous medical advice and AI-generated slop. YouTube answered the report telling me they'd reviewed it and found nothing wrong.
The constant low-quality, extremely intertwined ads are starting to remind me of those on shady forums and porn pages of the nineties. I'm expecting them to start advertising heroin now that they've decided short-term profits trump everything else.
In other words, their Google Ads account is fully paid up. Copyright infringement only matters if you're a lowly uploader.
https://www.reddit.com/r/youtube/comments/1lllnse/youtube_sh...
I skimmed the videos as well, and there is much more talk about this thing than actual examples of it. As this is an experiment, I guess all this noise serves as feedback to YouTube.
Basically YouTube is applying a sharpening filter to "Shorts" videos.
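For reference, a conventional, non-AI sharpening pass looks something like the ffmpeg `unsharp` filter below. It is shown purely for comparison, not as a claim about what YouTube actually runs; file names are placeholders and it assumes ffmpeg is installed.

```python
# Classic unsharp-mask sharpening applied to a clip with ffmpeg, for comparison only.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "short_original.mp4",         # placeholder input
        "-vf", "unsharp=5:5:1.0:5:5:0.0",   # 5x5 unsharp mask on luma only, no chroma sharpening
        "-c:a", "copy",                     # keep the audio as-is
        "short_sharpened.mp4",
    ],
    check=True,
)
```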
Do these videos that YT creates to backfill their lack of Shorts get credited back to the original creator as far as monetization from ads?
This really has the feel of the delivery apps that built websites for restaurants that didn't previously have one, without the restaurant knowing anything about it, while setting higher prices on the menu items and keeping the extra money instead of passing it on to the restaurants.
Although, I probably wouldn't want any automatic filtering applied to my video either, AI modifications or not.
Aside: The mention of Technorati tags (and even Flickr) in the linked blog post hit me right in the Web 2.0 nostalgia feels.
[0] https://colorspretty.blogspot.com/2007/01/flickrs-dirty-litt...
Have to say, I am not a fan of the AI sharpening filter at all. Would much prefer the low res videos.
Now imagine the near future of the Internet, when all people have to adapt to that in order to not be dismissed as AI.
However, polishing to the point where we humans start to lose our unique tone is exactly what style guides that go into the minutiae of comma placement try to do. And I'm currently reading a book I'm 100% sure has been edited by an expert human editor who did quite the job of taking away all the uniqueness of the work. So we can't just blame the LLMs for making things more gray when we have historically paid other people to do it.
It’s like saying you wouldn’t hire an engineer because you suspect they’d use computers rather than pencil and paper.
Even if the text is a simple article, a personal touch / style will go a long way to make it more pleasant to read.
LLMs are just making everything equally average, minus their own imperfections. Moving forward, they will inbreed while everything becomes progressively worse.
That's death to our culture.
This is what tech bros in SV built and they all love it.
This is a contradiction in terms.
At this point getting involved with youtube is just the usual naive behaviour that somehow you are the exception and bad things won't happen to you.
118 more comments available on Hacker News