Microsoft Only Lets You Opt Out of AI Photo Scanning 3x a Year
Posted 3 months ago · Active 3 months ago
hardware.slashdot.org · Tech · story · High profile
heated · negative
Debate
85/100
Key topics
Microsoft
Artificial Intelligence
Privacy
OneDrive
Microsoft is testing an AI-powered photo scanning feature in OneDrive that users can only opt out of three times a year, sparking concerns about privacy and data usage.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 28m after posting
Peak period: 50 comments (0-6h)
Avg / period: 17.8
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
1. Story posted: Oct 11, 2025 at 2:36 PM EDT (3 months ago)
2. First comment: Oct 11, 2025 at 3:05 PM EDT (28m after posting)
3. Peak activity: 50 comments in the 0-6h window (the hottest stretch of the conversation)
4. Latest activity: Oct 14, 2025 at 4:59 AM EDT (3 months ago)
ID: 45551504 · Type: story · Last synced: 11/22/2025, 11:17:55 PM
Heaven forfend!
Users: save files "on their PC" (they think)
Microsoft: Rolls out an AI photo-scanning feature to unsuspecting users, intending to learn something.
Users: WTF? And there are rules on turning it on and off?
Microsoft: We have nothing more to share at this time.
Favorite quote from the article:
> [Microsoft's publicist chose not to answer this question.]
https://www.microsoft.com/en-us/servicesagreement#15_binding...
Astonishing. They clearly feel their users have no choice but to accept this onerous and ridiculous requirement. As if users wouldn't understand that someone had to go well out of their way to write the code that enforces this outcome. All for a feature of dubious benefit to me. I know who the people in my photographs are. Why is Microsoft so eager to also be able to know this?
Privacy legislation is clearly lacking. This type of action should bring the hammer down swiftly and soundly upon these gross and inappropriate corporate decision makers. Microsoft has needed that hammer blow for quite some time now. This should make that obvious. I guess I'll hold my breath while I see how Congress responds.
Presumably it can be used for filtering as well - find me all pictures of me with my dad, etc.
Depending on the algorithm and parameters, you can easily get a scary amount of false positives, especially using algorithms that shrink images during hashing, which is a lot of them.
I'd imagine outside of egregious abuse and truly unique images, you could squint at a legal image and say it looks very much like another illegal image, and get a false positive.
From what I'm reading about PhotoDNA, it's your standard phashing system from 15 years ago, which is terrifying.
But yes, you can add heuristics, but you will still get false positives.
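For a sense of why that happens, here's a toy difference hash in Python (this is not PhotoDNA, and the filenames are just placeholders): the image is shrunk to a tiny grid before hashing, so wildly different photos can land within a loose matching threshold.

    import numpy as np
    from PIL import Image

    def dhash(path, size=8):
        # Difference hash: shrink hard, grayscale, compare adjacent pixels.
        # The aggressive shrink is exactly what invites collisions.
        img = Image.open(path).convert("L").resize((size + 1, size), Image.LANCZOS)
        px = np.asarray(img, dtype=np.int16)
        return (px[:, 1:] > px[:, :-1]).flatten()

    def hamming(a, b):
        return int(np.count_nonzero(a != b))

    h1 = dhash("cat_on_sofa.jpg")              # placeholder filenames
    h2 = dhash("totally_unrelated_scene.jpg")
    # a loose threshold catches near-duplicates -- and unrelated look-alikes
    print("match" if hamming(h1, h2) <= 10 else "no match")

Tightening the threshold cuts false positives but starts missing recompressed or cropped copies, which is the whole tension.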
Folks did read. They guessed that known hashes would be stored on devices and images would be scanned against that. Was this a wrong guess?
> the conversation was dominated by uninformed outrage about things that weren’t happening.
The thing that wasn't happening yet was mission creep beyond the original targets. Because expanding beyond originally stated parameters is a thing that happens with far-reaching monitoring systems. Because it happens with the type of regularity that is typically limited to physics.
There were secondary concerns about how false positives would be handled. There were concerns about what the procedures were for any positive. Given governments' propensity to ruin lives now and ignore that harm (or craft a justification) later, the concerns seem valid.
That's what I recall the concerned voices were on about. To me, they didn't seem outraged.
Yes. Completely wrong. Not even close.
Why don’t you just go and read about it instead of guessing? Seriously, the point of my comment was that discussion with people who are just guessing is worthless.
I’m not making people guess. I explained directly what I wanted people to know very, very plainly.
You are replying now as if the discussion we are having is whether it’s a good system or not. That is not the discussion we are having.
This is the point I was making:
> instead of reading about how it actually worked, huge amounts of people just guessed incorrectly about how it worked and the conversation was dominated by uninformed outrage about things that weren’t happening.
The discussion is about the ignorance, not about the system itself. If you knew how it worked and disagreed with it, then I would completely support that. I’m not 100% convinced myself! But you don’t know how it works, you just assumed – and you got it very wrong. So did a lot of other people. And collectively, that drowned out any discussion of how it actually worked, because you were all mad about something imaginary.
You are perfectly capable of reading how it worked. You do not need me to waste a lot of time re-writing Apple’s materials on a complex system in this small text box on Hacker News so you can then post a one sentence shallow dismissal. There is no value in doing that at all, it just places an asymmetric burden on me to continue the conversation.
> Yes. Completely wrong. Not even close.
Per Apple:
Recapping here. In your estimation: Is not even close to . And folks who read the latter and thought the former were, in your view, "Completely wrong". Well, okay then.
https://web.archive.org/web/20250905063000/https://www.apple...
That said, I think this is mostly immaterial to the problem? As the comment you’re responding to says, the main problem they have with the system is mission creep, that governments will expand the system to cover more types of photos, etc. since the software is already present to scan through people’s photos on device. Which could happen regardless of how fancy the matching algorithm was.
Also, once the system is created it’s easy to envision governments putting whatever images they want to know people have into the phone or changing the specificity of the filter so it starts sending many more images to the cloud. Especially since the filter ran on locally stored images and not things that were already in the cloud.
Their nudity filter on iMessages was fine though (I don’t think it ever sends anything to the internet? Just contacts your parents if you’re a minor with Family Sharing enabled?)
A key point is that the system was designed to make sure the database was strongly cryptographically private against review -- that's actually where 95% of the technical complexity in the proposal came from: to make absolutely sure the public could never discover exactly what government organizations were or weren't scanning for.
Just as an example, part of my response here was to develop and publish a second-preimage attack on their hash function -- simply to make the point concrete that various bad scenarios would be facilitated by the existence of one.
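To make the same point with a toy example (this is not NeuralHash and not the published attack; it targets a trivial average hash, and the photo path is a placeholder), forging a second preimage against a weak perceptual hash can be as simple as painting bright and dark blocks in the right pattern:

    import numpy as np
    from PIL import Image

    def average_hash(img, size=8):
        # shrink, grayscale, threshold at the mean
        small = np.asarray(img.convert("L").resize((size, size)), dtype=np.float64)
        return (small > small.mean()).flatten()

    def second_preimage(target_bits, size=8):
        # bright pixels for 1-bits, dark pixels for 0-bits; the mean falls
        # in between, so thresholding reproduces the target hash exactly
        grid = np.where(target_bits.reshape(size, size), 200, 50).astype(np.uint8)
        return Image.fromarray(grid, mode="L")

    target = average_hash(Image.open("holiday_photo.jpg"))   # placeholder path
    forged = second_preimage(target)                          # unrelated block pattern
    assert (average_hash(forged) == target).all()

Real systems are far harder to attack than this, but the concern stands: once second preimages are practical, an adversary can craft innocuous-looking images that trip the matcher, and nobody outside can audit what the opaque database is actually matching.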
I would not care if it worked 100% accurately. My outrage is informed by people like you who think it is OK in any form whatever.
I think the premise for either system is flawed and both are too error prone for critical applications.
Phew, not AI then… ?
The biggest worry would always be that the tools would be stultifying and shit but executives would use them to drive layoffs on an epic scale anyway.
And hey now here we are: the tools are stultifying and shit, the projects have largely failed, and the only way to fix the losses is: layoffs.
Manager: hey let's go all in on this fancy new toy! We'll all be billionaires!
Employee: oh yeah I will work nights and weekends with no pay for this! I wanna be a billionaire!
Manager: actually it failed, we ran out of money, you no longer have a job... But at least we didn't build skynet, right?
Odd choice and poor optics (just limit the number of times you can enable and add a warning screen) but I wouldn't assume this was intentionally evil bad faith.
I’ve seen reports in the past that people found that syncing to the cloud was turned back on automatically after installing Windows updates.
I would not be surprised if Microsoft accidentally flip the setting back on for people who opted out of AI photo scanning.
And so if you can only turn it back off three times a year, it only takes Microsoft messing up and opting you back in three times in a year against your will and then you are stuck opted in to AI scanning for the rest of the year.
Like you said, they should be limiting the number of times it can be turned back on, not the number of times it can be turned off.
Not trying to say that you could have prevented this; I would not be surprised if Windows 10 enterprise decided to "helpfully" turn on auto updates and updated itself with its fun new "features" on next computer restart.
And even so, let's say they didn't use Windows — I'd still expect the same rigor for any operating system update.
Then you are hopelessly naive.
This was exactly my thought as well.
Right now it doesn't say if these are supposed to be three different "seasons" of the year that you are able to opt-out, or three different "windows of opportunity".
Or maybe it means your allocation is limited to three non-surveillance requests per year. Which should be enough for average users. People aren't so big on privacy any more anyway.
Now would these be on a calendar year basis, or maybe one year after first implementation?
And what about rolling over from one year to another?
Or is it use it or lose it?
Enquiring minds want to know ;)
This was pre-AI hype, perhaps 15 years ago. It seems Microsoft feels it is normalised. More and more, you are their product. It strikes me as great insecurity.
My assumption is that when this feature is on and you turn it off, they end up deleting the tags (since you've revoked permission for them to tag them). If it gets turned back on again, I assume that means they need to rescan them. So in effect, it sounded to me like a limit on how many times you can toggle this feature to prevent wasted processing.
Their disclaimer already suggests they don't train on your photos.
worst possible reading of any given feature must be assumed to the detriment of the user and benefit of the company
Honestly, these days, I do not expect much of Microsoft. In fact, I recently thought to myself, there is no way they can still disappoint. But what do they do? They find a way damn it.
So if you enable the feature, it sends your photos to MS to scan... If you turn it off, they delete that data, meaning if you turn it on again, they have to process the photos again. Every time you enable it, you are using server resources.
However, this should mean that they don't let you re-enable it after you turn it off 3 times, not that you can't turn it off if you have enabled it 3 times.
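To spell out the difference between the two policies (purely hypothetical code, since nobody outside Microsoft knows how the limit is implemented): a compute-cost argument would justify capping re-enables, while the wording as shipped caps opt-outs.

    from dataclasses import dataclass

    @dataclass
    class FaceGroupingSetting:
        enabled: bool = True      # feature ships opted in
        enables_left: int = 3
        disables_left: int = 3

        def turn_on_cost_capped(self):
            # the policy a cost argument would justify:
            # each re-enable triggers an expensive full re-scan, so cap those
            if self.enables_left == 0:
                raise PermissionError("re-enable limit reached; feature stays OFF")
            self.enables_left -= 1
            self.enabled = True

        def turn_off_as_worded(self):
            # the policy Microsoft's wording describes:
            # after three opt-outs, the user can no longer turn it off
            if self.disables_left == 0:
                raise PermissionError("opt-out limit reached; feature stays ON")
            self.disables_left -= 1
            self.enabled = False

The first variant costs a toggle-churning user a convenience feature; the second costs them the ability to say no.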
all facial grouping data will be permanently removed within 30 days
I feel like you're way too emotionally invested in whatever this is to assess it without bias. I don't care what the emotions are around it, that's a marketing issue. I only care about the technical details in this case and there isn't anything about it in particular that concerns me.
It's probably opt-out, because most users don't want to wait 24 hours for their photos to get analyzed when they just want to search for that dog photo from 15 years ago using their phone, because their dog just died and they want to share old photos with the family.
This doesn't apply to your encrypted vault files. Throw your files in there if you don't want to toggle off any given processing option they might add 3 years from now.
Clearly, you personally can't think of a reason yourself based on that 'probably' alone.
<< I feel like you're way too emotionally invested
I think. You feel. I am not invested at all. I have... limited encounters with Windows these days. But it would be silly to simply dismiss it. Why? For the children, man. Think of the poor children who were not raised free from this silliness.
<< I only care about the technical details in this case and there isn't anything about it in particular that concerns me.
I can respect that. What are those technical details? MS was a little light on the details.
"Microsoft collects, uses, and stores facial scans and biometric information from your photos through the OneDrive app for facial grouping technologies. This helps you quickly and easily organize photos of friends and family. Only you can see your face groupings. If you share a photo or album with another individual, face groupings will not be shared.
Microsoft does not use any of your facial scans and biometric information to train or improve the AI model overall. Any data you provide is only used to help triage and improve the results of your account, no one else's.
While the feature is on, Microsoft uses this data to group faces in your photos. You can turn this feature off at any time through Settings. When you turn off this feature in your OneDrive settings, all facial grouping data will be permanently removed within 30 days. Microsoft will further protect you by deleting your data after a period of inactivity. See the Microsoft account activity policy for more information."
You can also see here some of the ways they're trying to expose these features to users, who can use Co-Pilot etc. https://techcommunity.microsoft.com/blog/onedriveblog/copilo...
I turn all Co-Pilot things off and I've got all those AI/tagging settings off in OneDrive, but I'm not worried about the settings being disingenuous currently.
There's always a worry that some day, a company will change and then you're screwed, because they have all your data and they aren't who you thought they were anymore. That's always a risk. Just right now, I'm less worried about Microsoft in that way than I am with other companies.
In a way, being anti-government is GOOD, because overly relying on government is dangerous. The same applies to all these mega-platforms. At the same time, I know a lot of people who have lost a lot of data, because they never had it backed up anywhere, and people who have the data but can't find anything, because there's so much of it and none of it is organized. These are just actual, real-world problems, and Microsoft legitimately sees that the technology is there now to solve them.
That's what I see.
Did this line ever win an argument for you or you just use it to annoy who you're talking to?
After all, sometimes an emotional reaction comes from a logical basis, but the emotion can avalanche and then the logical underpinnings get swept away so they don't get re-evaluated the way they should.
Then proceeds to appeal to emotion with dog photo statement.
It's super common for people to take a cynical interpretation of something and just run with it, because negativity bias goes zoom.
Be less deterministic than that, prove you have free will and think for yourself.
So you can opt out of them taking all of your most private moments and putting them into a data set that will be leaked, but you can only opt out 3 times. What are the odds a "bug" (feature) turns it on 4 times? Anything less than 100% is an underestimate.
And what does a disclaimer mean, legally speaking? They won't face any consequences when they use it for training purposes. They'll simply deny that they do it. When it's revealed that they did it, they'll say sorry, that wasn't intentional. When it's revealed to be intentional, they'll say it's good for you so be quiet.
This is how my parents get Binged a few times per year
So to me it looks like MS is trying to keep users from hammering MS's infrastructure with repeated, expensive full scans of their library. I would have worded it differently and said "you can only turn ON this setting 4 times a year". But maybe they do want to leave the door open to "accidentally" pushing a wrong setting to users.
Nobody really believes the fiction about processing being heavy and that's why they limit opt outs.
Analyzing and tagging photos is not free. Many people don't mind their photos actually being tagged, but they are a little more sensitive about facial recognition being used.
That's probably why they separate these out, so you can get normal tagging if you want without facial recognition grouping.
https://support.microsoft.com/en-us/office/group-photos-by-p...
If you have a large list of scenarios where Microsoft didn't respect privacy settings or toggles, I would be interested in seeing them.
I know there have been cases where software made automated changes to Windows settings that were intended to be changed only by the user. Default browsers were one issue, because malicious software could replace your default browser even with lower permissions.
Are you talking about things like that, or something else?
Then why are they doing it? Maybe because the CIA/NSA and advertisers pay good money.
Most moms and old folks aren't going to fuss over or understand privacy and technical considerations; they just want to search for things like "greenhouse" and find that old photo of the greenhouse they set up in the backyard 13 years ago.
It's one thing if all of your photos are local and you run a model to process your entire collection locally, then you upload your own pre-tagged photos. Many people now only have their photos on their phones and the processing doesn't generally happen on the phone for battery reasons. You CAN use smaller object detection/tagging models on phones, but a cloud model will be much smarter at it.
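For comparison, here's a rough sketch of the fully local route, assuming torchvision's pretrained ImageNet classifier (a cloud model would be far larger and smarter, and the filename is a placeholder):

    import torch
    from PIL import Image
    from torchvision.models import resnet18, ResNet18_Weights

    weights = ResNet18_Weights.DEFAULT
    model = resnet18(weights=weights).eval()
    preprocess = weights.transforms()
    labels = weights.meta["categories"]        # ImageNet-1k class names

    def tag_locally(path, top_k=3):
        # tag on-device so only labels, never pixels, would need to leave the machine
        batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = model(batch).softmax(dim=1)[0]
        top = probs.topk(top_k)
        return [(labels[int(i)], round(float(p), 3)) for p, i in zip(top.values, top.indices)]

    print(tag_locally("greenhouse.jpg"))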
They understand some of this is a touchy subject, which is why they have these privacy options and have limitations on how they'll process or use the data.
In a really sad way.
Nobody. Absolutely nobody. Believes it's to save poor little Microsoft from having their very limited resources wasted by cackling super villain power users who'll force Microsoft to scan their massive 1.5 GB meme image collections several times.
If it was about privacy as you claim in another comment, it would be opt in. Microsoft clearly doesn't care about user privacy, as they've repeatedly demonstrated. And making it opt out, and only three times, proves it. Repeating the same thing parent comments said is a weird strategy. Nobody is believing it.
Maybe in your social bubble. I don't know anyone with a OneDrive subscription.
Aren't these 2 different topics? MS and big-tech in general make things opt-out so they can touch the data before users get the chance to disable this. I expect they would impose a limit to how many times you go through the scanning process. I've run into this with various other services where there were limits on how many times I can toggle such settings.
But I'm also having a hard time giving MS the benefit of the doubt, given their history. They could have said, as GP suggested, that you can't turn it "on", not "off".
> As stated many times elsewhere here .... Nobody really believes the fiction
Not really fair though, wisdom of the crowd is not evidence. I tend to agree on the general MS sentiment. But you stating it with confidence without any extra facts isn't contributing to the conversation.
The good news is that the power of this effect is lost when significant attention is placed on it as it is in this case.
Someone show me any case where big tech has successfully removed such data from an already trained model, or, when unable to do that with the black boxes they create, removed the whole black box because a few people complained about their data being in it. No one can, because this has not happened. Just as ML models are used as laundering devices, they are also used as responsibility shields for big tech, who rake in the big money.
This is M$'s real intention here. Let's not fool ourselves.
They should limit the number of times you turn it on, not off. Some PM probably overthought it and insisted you need to tell people about the limit before turning it off and ended up with this awkward language.
Then you can guess Microsoft hopes to make even more money than it costs them running this feature.
The language used is deceptive and comes with "not now" or "later" options, never a permanent "no". Any disagreement is followed by some form of "we'll ask you again later" message.
Companies are deliberately taking away users' control over software through dark patterns to achieve their own goals.
An advanced user may not want their data scanned, for whatever reason, and with this setting they cannot control the software, because the vendor decided it's just 3 times and after that the setting stays permanently "on".
And considering all the AI push within Windows and Microsoft products, it is rather impossible to assume that MS will not be interested in training their algorithms on their customers'/users' data.
---
And I really don't know how else you can interpret this whole exchange with an unnamed Microsoft publicist when:
> Microsoft's publicist chose not to answer this question
and
> We have nothing more to share at this time
but as hostile behavior. Of course they won't admit they want your data, but they want it and will have it.
Telling that these companies have some real creeps high up.
We know all major GenAI companies trained extensively on illegally acquired material, and they were hiding this fact. Even the engineers felt this wasn't right, but there were no whistleblowers. I don't believe for a second it would be different with Microsoft. Maybe they'd introduce the plan internally as a kind of CSAM scanning, but, as opposed to Apple, they wouldn't inform users. The history of their attitude towards users is very consistent.
There is that initial phase of potential fair use within reason, but the illegal acquisition is still a crime. Eventually after they've distilled things enough, it can become more firmly fair use.
So they just take the legal risk and do it, because after enough training the legal challenges should be within an acceptable range.
That makes sense for publicly released images, books and data. There exists some plausible deniability in sweeping up influences that have already been released into the world. Private data can contain unique things which the world has not seen yet, which becomes a bigger problem.
Meta/Facebook? I would not and will never trust them. Microsoft? I still trust them a lot more than many other companies. The fact many people are even bothered by this, is because they actually use OneDrive. Why not Dropbox or Google Drive? I certainly trust OneDrive more than I trust Dropbox or Google Drive. That trust is not infinite, but it's there.
If Microsoft abuses that trust in a truly critical way that resonates beyond the technically literate, that would not just hurt their end-user personal business, but it would hurt their B2B as well.
That would be a limit on how many times you can enable the setting, not preventing you from turning it off.
I don't know what they're seeing from their side, but I'm sure they have some customers that have truly massive photo collections. It wouldn't surprise me if they have multiple customers with over 40TB of photos in OneDrive.
They can put a giant warning in front of the last "off" click or whatever, not that it needs to be there at all.
This reeks of MS thinking very MS-centric and hoping they fortunately retain some users.
I bet you "have nothing to hide".
We work with computers. Everything that gets in the way of working wastes time and nerves.
But really, I know nothing about the process. I was going to make an allegory about how it would be the same as Adobe deleting all your drawings after you let your Photoshop subscription lapse. But I realized that this is exactly the computing future these sorts of companies want, and my allegory is far from the proof by absurdity I wanted it to be. Sigh, now I am depressed.
Did you read it all? They also suggest that they care about your privacy. /s
If that was the case, the message should be about a limit on re-enabling the feature n times, not about turning it off.
Also, if they are concerned about processing costs, the default for this should be off, NOT on. The default for any feature like this that uses customers' personal data should be OFF for any company that respects its customers' privacy.
> You are trying to reach really far out to find a plausible
This behavior tallies with other things MS has been trying to do recently to gather as much personal data as possible from users to feed their AI efforts.
Their spokesperson also avoided answering why they are doing this.
Your comment, on the other hand, seems to be reaching really far to portray this as normal behavior.
If you don't trust Microsoft but need to use Onedrive, there are encrypted volume tools (e.g. Cryptomator) specifically designed for use with Onedrive.
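The underlying idea is just client-side encryption before sync: only ciphertext ever lands in the OneDrive folder, so there is nothing meaningful for any scanner to look at. A minimal sketch with Python's cryptography package (this is not how Cryptomator itself is built, and the paths are placeholders):

    from pathlib import Path
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # keep this key OUTSIDE the synced folder
    fernet = Fernet(key)

    def encrypt_into_vault(src: Path, vault: Path) -> Path:
        # encrypt locally; OneDrive only ever syncs the .enc ciphertext
        vault.mkdir(parents=True, exist_ok=True)
        out = vault / (src.name + ".enc")
        out.write_bytes(fernet.encrypt(src.read_bytes()))
        return out

    def decrypt_from_vault(enc: Path) -> bytes:
        return fernet.decrypt(enc.read_bytes())

    encrypt_into_vault(Path("IMG_0001.jpg"), Path("OneDrive/vault"))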
A database of pretty much all Western citizen's faces? That's a massive sales opportunity for all oppressive and wanna-be oppressive governments. Also, ads.
I look forward to getting a check from Microsoft for violating my privacy.
I live in a state with better-than-average online privacy laws, and scanning my face without my permission is a violation. I expect the class action lawyers are salivating at Microsoft's hubris.
I got $400 out of Facebook because it tagged me in the background of someone else's photo. Your turn, MS.
Presumably, it's somewhat expensive to run face recognition on all of your photos. When you turn it off, they have to throw away the index (they'd better be doing this for privacy reasons), and then rebuild it from scratch when you turn the feature on again.
My wife has a phone with a button on the side that opens the microphone to ask Google questions. I guess 90% of the audio clips they get are "How the /&%/&#"% do I close this )(&(/&(%)?????!?!??"
In fact, if you follow the linked page, you'll find a screenshot showing it was originally worded differently, "You can only change this setting 3 times a year" dating all the way back to 2023. So at some point someone made a conscious decision to change the wording to restrict the number of times you can turn it _off_
For example, people who don't use their encrypted vault on OneDrive, so they upload photos that should otherwise be encrypted to their normal OneDrive which gets scanned and tagged. It could be a photo of their driver's license, social security card, or something illicit.
So these users toggle the tagging feature on and off during this time.
Maybe the idea is to push these people's use case to the vault where it probably belongs?
The issue is that this is a feature that, in any sane world, should 100% be opt-in - not opt-out.
Microsoft privacy settings are a case of - “It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying 'Beware of the Leopard.”
If they had taste, someone opinionated over there would knock heads before shipping another version of windows that requires restarts or mutates user settings.
And unlike most things, both prompts require you to explicitly click some sort of "no", not just click away to dismiss. The backup one is particularly obnoxious because you have to flip a shitty little slider as the only button is "continue". Fuck. Off.
But that's not necessarily true for everyone. And it doesn't need to be this way, either.
For starters, I think it'd help if we understood why they do this. I'm sure there's a cost to the compute MS spends on AI'ing all your photos; turning it off under privacy rules means they need to throw away that compute. And turning it back on creates an additional cost for MS, on top of what they've already spent for nothing. Limiting that makes sense.
What doesn't make sense is that I'd expect virtually nobody to turn it on and off over and over again, beyond 3 times, to the point that cost increases by more than a rounding error... like what type of user would do that, and why would that type of user not be exceedingly rare?
And even in that case, it'd make more sense to do it the other way around: you can turn on the feature 3 times per year, and off anytime. i.e. if you abuse it, you lose out on the feature, not your privacy.
So I think it is an issue that could and should be quickly solved.
Very likely true, but we shouldn't have to presume. If that's their motivation, they should state it clearly up front and make it opt-out by default. They can put a (?) callout on the UI for design decisions that have external constraints.
If the user leaves it off for a year, then delete the encrypted index from the server...
This is probably the case. But Redmond being Redmond, they put their foot in their mouth by saying "you can only turn *off* this setting 3 times a year" (emphasis mine).
We all know why.
That means that all Microsoft has to do to get your consent to scan photos is turn the setting on every quarter.
Even my 10(?) year old iPhone X can do facial recognition and memory extraction on device while charging.
My Sony A7-III can detect faces in real time, and discriminate it from 5 registered faces to do focus prioritization the moment I half-press the shutter.
That thing will take mere minutes on Azure when batched and fed through GPUs.
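Face grouping itself isn't exotic either. A rough sketch of the usual pipeline (face embeddings plus clustering, assuming the face_recognition and scikit-learn packages, with a placeholder folder name) fits in a few lines, which is part of why the "too expensive to re-run" framing is hard to take at face value:

    from pathlib import Path
    import numpy as np
    import face_recognition                  # dlib-based 128-d face embeddings
    from sklearn.cluster import DBSCAN

    embeddings, sources = [], []
    for path in Path("photos").glob("*.jpg"):
        image = face_recognition.load_image_file(path)
        for enc in face_recognition.face_encodings(image):
            embeddings.append(enc)
            sources.append(path.name)

    if embeddings:
        # faces close together in embedding space get the same person label;
        # -1 means "no group", i.e. a face seen only once
        labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(np.array(embeddings))
        for label, photo in zip(labels, sources):
            print(f"person #{label}: {photo}")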
If my hunch is right, the option will have a "disable AI use for x months" slider and will turn itself on without letting you know. So you can't opt out of it completely, ever.
Just stop using Microsoft shit. It's a lot easier than untangling yourself from Google.
But Microsoft is pretty easy to avoid after their decade of floundering.
Not in a million years. See you in court. As often, just because a press statement says something, it's not necessarily true and maybe only used to defuse public perception.
171 more comments available on Hacker News