Meta Will Listen Into AI Conversations to Personalize Ads
Key topics
Meta plans to use AI conversation data to personalize ads, sparking concerns about privacy and the exploitation of AI for advertising. Commenters express disappointment and skepticism about the company's intentions.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 12m after posting
- Peak period: 35 comments in 0-6h
- Average per period: 7 comments
- Based on 63 loaded comments
Key moments
1. Story posted: Oct 2, 2025 at 8:36 AM EDT (3 months ago)
2. First comment: Oct 2, 2025 at 8:48 AM EDT (12m after posting)
3. Peak activity: 35 comments in 0-6h, the hottest window of the conversation
4. Latest activity: Oct 6, 2025 at 1:00 PM EDT (3 months ago)
It's really not. Each individual step down that staircase is considered and intentional.
The slippery slope argument is a logical fallacy.
Yes, they will consider and intentionally take steps to make more money.
Just dismissing any argument about a slippery slope as a fallacy is lazy (as is citing named logical fallacies in any argument).
Show me a slippery slope that is not a series of deliberate decisions, then.
I've never seen one. Every single step down that slope required someone who wanted to go further down the slope and took the action(s) required to go further.
There is no involuntary sliding down slopes here.
Proton has made an AI chatbot with an extreme focus on privacy: https://lumo.proton.me
I personally do not trust Proton.
It's annoying, but it's a bog-standard technique for avoiding spam or fraud.
A self-professed privacy-focused company that won't let privacy-conscious customers sign up does look a little sus.
They do accept Bitcoin payments, but not Monero. I wonder if that's because Monero is illegal for them, or just because they didn't bother.
Easier to track people via Bitcoin than Monero?
AFAIK it's perfectly possible to sign up for a free Proton account without revealing your identity.
Even then, you still want additional advertising, so that people believe the manipulated responses are genuine.
User mentions they didn't sleep well. Model delivers jarring information right before the user's bedtime. Model subtly suggests other sleep-disruptive activities; user receives coupons for free coffee. User converts on an ad for sleeping medication.
(This is already happening, intentionally or not)
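A purely hypothetical sketch of the kind of pipeline this scenario implies: mining chat messages for vulnerability signals and mapping them to ad categories. Every trigger rule and category name below is invented for illustration; nothing here reflects Meta's actual systems.

```python
# Hypothetical illustration only: a toy "signal extractor" that maps chat
# phrases to ad categories, as in the sleep-deprivation scenario above.
# The trigger table and category names are invented for this sketch.
import re

AD_TRIGGERS = {
    r"(didn't|couldn't|can't) sleep": "sleep_medication",
    r"stressed|anxious": "wellness_products",
    r"tired|exhausted": "coffee_coupons",
}

def extract_ad_signals(message: str) -> list[str]:
    """Return ad categories whose trigger patterns appear in a chat message."""
    return [category for pattern, category in AD_TRIGGERS.items()
            if re.search(pattern, message, re.IGNORECASE)]

print(extract_ad_signals("I didn't sleep well and I'm exhausted"))
# -> ['sleep_medication', 'coffee_coupons']
```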
Notably, the open-source models OpenAI released right before GPT-5 are likely good enough to be substitutes for 95% of typical ChatGPT use cases.
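For anyone wanting to act on that, here is a minimal sketch of swapping a cloud chatbot for a locally hosted open-weight model, assuming Ollama is running with one of OpenAI's gpt-oss models pulled (the gpt-oss:20b tag and default port 11434 are assumptions; adjust to your setup):

```python
# A minimal sketch: chatting with a locally hosted open-weight model so that
# conversation data never leaves your machine. Assumes Ollama is running
# locally with OpenAI's gpt-oss-20b pulled (the model tag is an assumption).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="unused",                      # local servers ignore the key
)

response = client.chat.completions.create(
    model="gpt-oss:20b",
    messages=[{"role": "user",
               "content": "What are the privacy trade-offs of cloud chatbots?"}],
)
print(response.choices[0].message.content)
```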
Convincing people to be dissatisfied with everything outside their control and sabotage everything within their control.
That sounds like a recipe for hell
There are so many paths towards this type of outcome:
- eliciting negative emotions is one of the most effective ways to get and keep people's attention
- foreign states buy platforms to sabotage the populations of rival states
- costs of chatbots drop by orders of magnitude, making profiting off them less important
Those three points alone cover a wide area of potential negative outcomes...
I'm not anti-AI, but I'm trying to stay eyes wide open. AI can drive a lot of good, but to me the biggest risk is a population of people sleepwalking into being subjected to whatever the AI wants to make them think.
I try to focus my efforts where I can, to influence an outcome where AI increases our freedom and autonomy and abilities rather than undermining them. It's just as important to push things where we want them to go as it is to be aware of where we don't want them to go.
My original comment didn't precisely capture what I meant, which is more that the majority of focus is on being upset about things outside the individual's control.
A victim mindset is built on the victim feeling wronged (not getting what they are owed) based on an agreement they made with another party which the other party didn't consent to.
So the owner/controller of a chatbot could direct the dissatisfaction at whatever is in their interest. A political party could direct it at another political party. A foreign state could direct it at the whole system (or reinforce division between parties, aka divide and conquer), or a specific political actor could direct it at a specific group of people as a scapegoat. As a whole, the result could be instilling dissatisfaction in just about everything, but to each individual user/group it may be a few specific things.
In the past we fought wars with tanks and guns, and to an extent we still do, but most wars fought today are fought in the realm of values, and AI is the nuclear warhead of values manipulation.
No matter the underlying strategy or nefarious intent, the combination of 1) what is best at getting people's attention, 2) people's susceptibility to being upset about what is outside their control and 3) the opportunity AI affords powerful people to manipulate the masses spells the most tangible (not most dire, most tangible) dangers that I see AI representing.
Note I was wary of responding to your last comment that was skeptical about chatbots biasing people this way, because it's hard to articulate these concerns precisely. In my view, the comment I am responding to now only reinforces the point I was trying to make.
Be extremely wary of chatbots that propagate victim mindsets in people who are susceptible to them.
> A victim mindset is built on [...] an agreement they made with another party which the other party didn't consent to.
I don't see what agreements have to do with it. If I stab you with a knife, it doesn't matter whether I've previously agreed not to stab you—I've victimized you regardless. Perhaps you can say I've implicitly agreed to abide by the laws of my country, but then you'd have to concede that the German Jews were not victimized by Nazis, because Nazis had edited the law such that their own actions were all legal. You could say there's some underlying natural law or social contract which all humans have implicitly agreed to, but at that point we're really stretching the idea of "agreement," aren't we? Certainly no type of lawyer ever sat me down to sign the social contract.
At the end of the day, identifying with victimhood can be prosocial or antisocial, and the only way to distinguish between those categories is based on the specifics of the situation: It's prosocial when they're responding to genuine wrongdoing in pursuit of a real solution, and it's antisocial when they're responding to imagined wrongdoing or bolstering a harmful non-solution. It all depends on whether the wrongdoing in question is legitimate or not, and I don't think you can dance around that question (bypassing the entire field of ethics) with a few remarks about agreement and consent.
Thank you for making this actual point.
I find the vast, vast majority of people who use the term "victim mindset" tend to be promoting views that involve not changing the status quo or making legitimate complaints and so on.
What I'm not for is people not doing those things and instead putting all their energy into a circle jerk of complaints that accomplishes nothing other than distracting people from actually making things better.
Is your issue with the way I framed victim mindset or my point that a major risk of AI (and social media) is propagating victim mentality and biasing people to have a victim mindset?
Are you advocating that there are cases when having a victim mindset is a good thing?
Have you looked up the definition of victim mindset?
The former, I suppose, but the latter is downstream of that.
> It feels like you keep taking a less than respectful interpretation of my comments
Well that's certainly not my intent. But I think there's a lot implicit in the idea that a "victim mindset" is too common in society, and I want to unpack it.
> Have you looked up the definition of victim mindset? Are you advocating that there are cases when having a victim mindset is a good thing?
When I searched it, I got directed to the wiki page on victim mentality, which is mostly about the psychological implications of perceiving yourself as a victim. And yes, I do think it's sometimes good, both individually and for society, to perceive yourself as a victim, for reasons I outlined in my post above.
Attention is far too important for us to give away so easily.
As you'd expect from LLM output, that bit was stolen from humans:
https://www.youtube.com/watch?v=IAM1rSObk4c
I think more people are treating an LLM like a friend than you might expect - I was certainly surprised.
You may be overestimating how many people have friends to talk to in the first place.
But, on the other hand, I think Meta is one of the nicest of the big tech companies in terms of open source. They have genuinely valuable projects that are technically good, released under very permissive open-source licenses in the spirit of "here is a gift, we don't care what you do with it."
Even Llama is a little bit in this spirit, even if the license is not that "free" in theory. And think how much self-hosted and tinkerer AI users owe to Meta, whose models bootstrapped the field and are still fueling it.
On that front, I would be quite sad to see them go downhill.
So, in the end, I'm of a split mind: I enjoy their contributions while avoiding using their products or giving them my data, and I'm thankful to the poor clueless users who sacrifice themselves by using them.
I agree, many of the big tech corps - even Microsoft - have technically excellent and actually useful projects with open-source licenses. But I wouldn't call any of these companies "nice", since their only purpose is to make a profit, usually by exploiting their workers and users. Companies are convenient fictions; one can go up in flames and another will take its place. (Though of course "tres comas" unicorns are one in a million.)
But all that money sure attracts great talent, with some doing great open-source work. It's those individuals who should be valued for contributing to the good of humanity - in spite of the overall system within which they work.
Yes, and watch your naked photos, and watch your porn.
Remember that Android/iOS are "secure" OSes where you can choose to allow an app access to all your files? And when you don't allow it, they find other ways to spy on you (see the recent discovery that Meta's process has interesting access).
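If the "interesting access" refers to the localhost-listening technique reported in 2025 (an assumption on my part), the mechanism is roughly this: a native app binds a loopback port, and tracking scripts on ordinary web pages hand it identifiers, bridging web browsing to the logged-in app identity. A minimal sketch of the pattern, with the port and payload invented for illustration:

```python
# Hypothetical sketch of the localhost-listener pattern: a native app binds a
# loopback-only port that scripts in any local browser tab can reach, letting
# web identifiers be linked to the app's logged-in account. Port and payload
# are invented; this is not Meta's actual implementation.
import http.server

class IdentityBridge(http.server.BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        web_identifier = self.rfile.read(length).decode()
        # A real app would join this browser-side ID with the account it is
        # logged into, deanonymizing the user's web browsing.
        print("received web identifier:", web_identifier)
        self.send_response(204)
        self.end_headers()

# Binding to 127.0.0.1 makes the port invisible to the network but reachable
# by any page open in a local browser - no file or network permission prompt.
http.server.HTTPServer(("127.0.0.1", 12387), IdentityBridge).serve_forever()
```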
But for me this is also a sign that their free chat products are deeply unsustainable.
14 more comments available on Hacker News