YouTube Removes Windows 11 Bypass Tutorials, Claims 'Risk of Physical Harm'
Posted 2 months ago · Active about 2 months ago
Source: news.itsfoss.com · Tech story · High profile
Sentiment: heated, negative · Debate · Score: 85/100
Key topics
Windows 11
Youtube Censorship
Microsoft
YouTube removed tutorials on bypassing Windows 11's hardware requirements, citing 'risk of physical harm', sparking controversy and criticism towards Microsoft and Google's censorship practices.
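For context, the removed tutorials typically cover the widely documented `LabConfig` registry bypass: during Windows 11 Setup you open a command prompt with Shift+F10 and add values that tell Setup to skip the hardware checks. A minimal sketch of that commonly documented approach (not taken from the removed videos themselves; apply at your own risk):

```shell
:: Run from the Shift+F10 command prompt during Windows 11 Setup.
:: Creates the LabConfig key that Setup consults before enforcing
:: the TPM 2.0, Secure Boot, and minimum-RAM requirements.
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassTPMCheck /t REG_DWORD /d 1 /f
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassSecureBootCheck /t REG_DWORD /d 1 /f
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassRAMCheck /t REG_DWORD /d 1 /f
```

After adding the keys, closing regedit/cmd and clicking back then forward in Setup lets installation proceed on unsupported hardware.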
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 8m
Peak period: 85 comments in 0-6h · Avg per period: 17.8
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Nov 7, 2025 at 3:50 PM EST (2 months ago)
2. First comment: Nov 7, 2025 at 3:58 PM EST (8m after posting)
3. Peak activity: 85 comments in 0-6h (hottest window of the conversation)
4. Latest activity: Nov 10, 2025 at 10:56 AM EST (about 2 months ago)
ID: 45850963 · Type: story · Last synced: 11/23/2025, 1:00:33 AM
Anyway, I doubt YouTube did this intentionally, but it does show how vulnerable their system is to false reports.
> The platform claimed its "initial actions" (could be either the first takedown or appeal denial, or both) were not the result of automation.
Didn't know YouTube could improve their review time from 45 minutes to 5 minutes without automation. I bet it's pure magic.
/s, in case that wasn't blatantly obvious...
If they censor something like this, how can we trust platforms with the actually important subjects?
Most Americans literally can’t imagine news as anything other than entertainment.
The only real competing video platform that promises no censorship is Rumble ( https://rumble.com ), but it has a very right-wing slant due to conservatives flocking to it during all the Covid-era social media censorship.
There's also this annoying pattern where 98% of the complaints about censorship are from people who are mad that the objectively stupid and dangerous stuff they were trying to profit from got censored, so it becomes a "boy who cried wolf" situation where any complaint about internet censorship is ignored on the assumption it's one of those. (What if there really is a Nigerian prince who needs my help, and I don't read his email?)
This time, though... Society is not being destroyed by people pirating Windows 11. That is entirely different from censoring things that destroy society, and they don't have a good excuse.
That one time in Germany was actually an 80-year ongoing event in central Europe. Hitler didn't wake up one day with a novel idea about the Jews and the place of the German people; these were foundational ideas in the culture at least as far back as Wagner.
If anything, this pro-censorship argument is self-defeating, because the "disinformation" peddlers who were silenced in the Second Reich were generally those of the liberal, Anglo, and Francophilic variety, those who would seek to decenter the goal of a collective German destiny.
Censorship is only ever a good if you find yourself a part of the group that would be doing the censoring.
Take freedom of speech, for instance: half the things you can say in the USA would be deemed hate speech in Europe.
Rumble isn't going to save the internet.
We call those "free speech" platforms nowadays, because apparently the only free speech is Nazi speech.
https://slatestarcodex.com/2017/05/01/neutral-vs-conservativ...
> The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.
Framing it in terms of trust is already problematic.
We don't trust the NYTimes or Washington Post; they are a source of information that needs to be taken with shovels of salt and requires additional research to get to anything trustworthy. And we always understood that was their role.
We don't trust supermarkets or retailers to give us important pricing information, we do the research to get anything actionable.
Why is trust involved for YouTube ?
And that is why total freedom of speech on a platform does not mean we can trust it; maybe even the opposite, because people who tell a lie are more motivated (money or whatever).
I am not justifying the Win 11 video removal; I'm just saying that thinking YouTube is trustworthy because it's open to everybody is a mistake.
More or less the charitable and responsible approach to being ultra-rich, which has disappeared in this century.
I see the people in charge of these big corporations as lizards, given every decision they take seems to be anti-Humanity. We should cherish non-profits, small businesses, having a good and boring life, doing normal things. Instead we idolise being successful, rich, or famous. What a stupid system…
The answer is no, we can't.
Why is Microsoft allowed to operate in such a user hostile way?
Why aren't people up in arms: massively tanking their stock value, boycotting, harming their reputation in every legal way possible, en masse?
Like are people just careless and distracted 24/7?
Like surely this should just not be a thing?
I just don't understand how such inhumane, hostile behavior is so rampant and allowed to exist in our society.
1789.
Then what have I been using and supporting it for?
It's because that's the default. Do you see any other facet of human organization which doesn't have constant hostile behavior? If it's large enough, or going on for enough time, there is abuse happening in it.
>Like are people just careless and distracted 24/7?
People just want to live their lives, on which a removed Win 11 bypass video has zero effect.
Nuked my Windows 10 install and put Pop OS on it + a MacBook separately.
I had Windows 11 (kept it around for gaming), I binned it a few weeks ago.
Don't game enough to justify it any more (haven't even tried gaming on linux yet).
Juice was no longer worth the squeeze.
Proton is an impressive piece of software.
Actually, I would trade visuals for better games. Most games nowadays are better enjoyed as movies than games.
Well - it is time that the rest of the world censors these two corporation. I don't want them to restrict information.
People will find workarounds, by the way. This is now a Streisand effect: as people see that Google and Microsoft try to hide information from them, they will look at it much more closely than before, with more attention.
(Having said that, my bypass strategy is to not use Windows 11 at all. I don't depend on it, having used Linux for 21 years now, but the machine to my left is actually running Win10, for various reasons, such as being able to fix problems for elderly relatives still using Windows. But I won't ever use Win11 with its Recall spyware. I also don't care that it can be disabled: any corporation that tries to spy on me is evil and must be banned.)
Edit: OK, so the video was restored. That's good, but we still need an alternative here. Google holds WAY too much power via YouTube.
Nowadays they censor by putting pressure (by denying payment capabilities) on sites that offer content they don't agree with.
This comment section is wild.
The videos are up. Microsoft and Google weren't meeting in secret backrooms to censor this one channel. The most likely explanation is that a competing channel was trying to move their own videos up in the rankings by mass-reporting other videos on the topic.
It's a growing problem on social media platforms: Cutthroat channels or influencers will use alt accounts or even paid services to report their competition. They know that with enough reports in a short period of time they can get the content removed for a while, which creates a window for their own content to get more views.
The clue is the "risk of physical harm". People who abuse the report function know that the report options involving physical harm, violence, or suicide are the quickest way to get content taken down.
The only frequent, obvious problem I see is YouTube not telling people why their videos get hidden, taken down, or down-ranked. Long-time creators get left in the dark about random big changes to the platform that could be explained with an email.
We have companies with billions of customers but smaller customer service than a mid-sized retailer from the 90s. Something is not right.
IME it's especially bad with AdMob. They've purposely kept their email contact option broken for years, and the only "help" you can access is their forum, which is the absolute worst and never provides meaningful resolutions. It's awful.
People posting on these sites as content creators aren’t customers.
The root problem is twofold: the inability to reliably automate distinguishing "good actor" and "bad actor", and a lack of will to throw serious resources at solving the problem via manual, high precision moderation.
This can be accomplished with bogus DMCA notices too. Since Google gets such a high volume of notices, the default action is just to shoot first and ask questions later. Alarmingly, there are zero consequences (financial or legal) for sending bogus DMCA notices.
I think it's high time Google stopped acting as judge, jury, and executioner in the court of copyright enforcement.
[https://copyrightalliance.org/education/copyright-law-explai...]
Not saying Google is good or anything, but this is well trod ground at this point.
https://techhq.com/news/dmca-takedown-notices-case-in-califo...
That's not the argument IMO. They don't have to be intentionally malicious in each action. A drunk driver doesn't want to kill a little girl in the road. Their prior choices shape the outcome of their later options. A drunk driver decides to get behind the wheel after drinking. A large company makes a decision to make more profit knowing there are repercussions and calculating the risk.
Complain to Congress, they’re the ones who set this up to work this way.
Who lobbied for it to work that way? I'm assuming google aren't entirely innocent here.
It's also not clear how an informational YouTube video would be either a circumvention tool or an act of circumvention if nothing in the video itself is infringing.
Also, it doesn't even need to be collusion between Microsoft and Google, but to pretend like that's never a thing is to be ignorant of history.
Stop defending these big companies for these things. Even if your version of the story is true, the fact they allow their platform to be abused this way is incredibly damaging to content creators trying to spread awareness of issues.
But also, do you seriously think there is a massive amount of competition at the scale of a 330k subscriber channel for people to bother pulling off this kind of attack for two videos on bypassing Windows 11 account and hardware requirements?
Regardless of what happened here, Google is to blame at least for the tools they have made.
As for Microsoft, I don't think there's anything disagreeable with saying that they've tried hard to get people to switch to hardware with their TPM implementation and lying about the reasons. Likewise for forcing Microsoft accounts on people. I am not certain they were involved in this case, but they created the need for this kind of video to exist, so they are also implicated here.
Enough to cause this behavior. I don't know if there's a mathematical or organizational law for it, but it seems like there's always a way to abuse review mechanisms for large communities/sites.
There's never enough manpower to review each case, or reviews take a long time.
Manpower at a given salary cost.
All content platforms could throw more money at this problem, hire more / more skilled reviewers, and create better outcomes. Or spend less and get worse.
It's a choice under their control, not an inevitability.
If they don't react quickly and decisively to reports of "possible physical harm", even if the reports seem unfounded, they'll eventually get the NY Times to say that somebody who committed suicide "previously watched a video which has been reported to Youtube multiple times, but no action was taken by Google."
If that's too expensive, your platform is broken. You need to be able to process user reports. If you can't, rethink what you're doing.
The central ill of centralized web platforms is that the US never mandated customer/content SLAs in regulation, even as their size necessitated that as a social good. (I.e. when they became 'too big for alternatives to be alternatives')
It wouldn't be complicated:
Google, Meta, Apple, Steam, Amazon, etc. could all be better, more effective platforms if they spent more time and money on resolution. As-is, they invest what current law requires, and we get the current situation.
I really wish someone could tell me that either
1) Yes, we can make a system that enables functional and effective customer support (because that's what this case is about) no matter the language, or
2) No, we can't, because it's fundamentally a manpower problem: only humans can match the context against actual harm.
Whatever I suspect, having any definitive answer to this decides how these problems need to eventually be solved. Which in turn tells us what we should ask and hope for.
I'm not saying that it's humans, but it's humans.
Augmented by technology, but the only currently viable arbitrator of human-generated edge cases is another human.
If a platform can't afford to hire moderation resources to do the job effectively (read: skilled resources in enough quantity to make effective decisions), then it's not a viable business.
But, it is viable. Many profitable businesses exist that don't pay for this.
One may instead mean that such businesses should be made non-viable, in which case we should critically consider which business models we currently like the other consequences of might also become non-viable. For example, will users suddenly need to pay per post? If so, is that worth the trade-off?
Imho, we should do what we can to make sure they're required to pay for those externalities.
Then, they either figure out a way to do that profitably (great! innovation!) or they go under.
But we shouldn't allow them to continue to profit by causing external ills.
They do figure out how. That's the problem. This stuff is all trade offs.
If you say they have to remove the videos or they're in trouble then they remove the videos even if they shouldn't.
You can come up with some other rule but you can't eliminate the trade off so the choice you're making is how you want to pay. Do you want more illegitimate takedowns or less censorship of whatever you were trying to censor?
If you tried to mandate total perfection then they wouldn't be able to do it and neither would anybody else, and then you don't have video hosting. Which nobody is going to accept.
And that requirement can be created by more robust, outcome-defined regulation.
> The platform claimed its "initial actions" (could be either the first takedown or appeal denial, or both) were not the result of automation.
They'll silently fix the edge case in the OP and never admit it was any kind of algorithmic (OR human) failure.
People are so quick to assume conspiracy because it is mentally convenient
Does Microsoft unfairly benefit from Google's takedown tirefire? I do not know.
But if I were designing a voting system for takedowns it would be:
1. One non-DMCA takedown vote per user per year.
2. No takedown votes for accounts less than 1 year old.
3. Take down all equivalent content when a video is voted down.
4. Verification of DMCA ownership before taking down DMCA-protected content.
Why? Because they were all paying people to DDoS each other. Kinda silly, but good for business.
I'll just have to remember to never visit Spain, lest I get arrested for drug trafficking because of my phone.
They need to do what? A browser, Zoom, an email client. They are never going to install anything.
All of these have great options on linux, and they work just as well.
Just put them on Debian stable and be done with it.
That being said, it's also pretty easy to get a full linux shell and even install gui apps via flatpak or whatever.
And if they do care they will find workarounds as you said.
Nothing will change. The frog has been sitting in boiling water for more than a generation now, and the new blood never experienced the computational freedom you hold dear; they will happily use whatever corporate surveillance technology is forced upon them, and even defend it tooth and nail if you try to take it away.
286 more comments available on Hacker News