I Announced My Divorce on Instagram and Then AI Impersonated Me
Key topics
One woman's Instagram divorce announcement was hijacked by AI-generated slop, and the story sparked a lively debate about the perils of creating content on closed platforms. Some commenters, like jwr, urged caution, saying that using platforms known to "do terrible things" invites trouble, while others, such as gardenerik and kuschku, championed decentralized alternatives like Delta.Chat and Mastodon. The discussion revealed a surprising consensus: many users are fed up with the trade-off between convenience and privacy and are seeking more control over their online presence. With the rise of AI-generated content, the conversation feels especially timely, as people weigh the risks and benefits of their online lives.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 12m after posting
Peak period: 118 comments (Day 1)
Avg / period: 26.7 comments
Based on 160 loaded comments
Key moments
1. Story posted: Dec 22, 2025 at 2:13 AM EST (12 days ago)
2. First comment: Dec 22, 2025 at 2:25 AM EST (12m after posting)
3. Peak activity: 118 comments in Day 1 (hottest window of the conversation)
4. Latest activity: Dec 30, 2025 at 2:45 PM EST (3d ago)
I keep trying to convince people not to use Instagram, WhatsApp, Facebook, Twitter/X, but I'm not getting anywhere.
Write your own content and post it on your own terms using services that you either own or that can't be overtaken by corporate greed (like Mastodon).
The platforms sell the convenience that one "only" has to write the post, yet the internet needs so much metadata that the platform tried to autogenerate it instead of asking for it. People are already put off by having to write a bloody subject line for an email; imagine if they were shown what the "content" actually is.
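That auto-fill step is easy to picture. Here is a minimal, purely hypothetical sketch (not Instagram's actual pipeline) of a platform filling in a post's description metadata when the author supplies none; a real platform would presumably call a generative model where this stub just truncates:

```python
# Hypothetical sketch: how a platform might auto-fill post metadata
# when the author provides none. Not any real platform's pipeline.
from typing import Optional


def describe_post(body: str, author_description: Optional[str] = None) -> str:
    """Return the text used for a post's preview/description metadata."""
    if author_description:
        # The author asked to be taken at their word.
        return author_description
    # Fallback: the platform invents a description. A real platform might
    # call a generative model here; this stub just truncates the post body.
    words = body.split()
    return " ".join(words[:25]) + ("..." if len(words) > 25 else "")


post = "Today I announced my divorce. I chose every word deliberately."
print(describe_post(post))                      # platform-generated fallback
print(describe_post(post, "My own framing."))   # author-supplied text wins
```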
About convincing: get the few that matter on deltachat, so they don't need anything new or extra - it's just email on steroids.
As for Mastodon: it's still someone else's system; there's nothing stopping those nodes from adding AI metadata either.
Would this depend on threat model?
At least commenting from an unknown account on any random YouTube video won't immediately land you on a "Person of Interest" list, and your comments will be ignored like a drop of water in an ocean of comments.
And where can I find such a story from a trustworthy source? A quick Google search instead turned up this:
https://euvsdisinfo.eu/report/us-intelligences-services-cont...
(Debunking it as Russian information warfare)
In the absence of that blog post:
Start at the beginning: how Moxie left Twitter, where he was director of cyber (a company not at all focused on privacy at the time), to found the Whisper Foundation (if memory serves me the right name). His seed funding came from Radio Free Asia, which is a well-known CIA front for financing their operations. He's a surfing fan, so he decided to invite crypto experts to surf with him while brainstorming the next big privacy-minded messenger.
So he used his CIA money to pay for everyone's trip to surf in Hawaii, which by coincidence also happens to be the exact location of the headquarters of an NSA department responsible for breaking privacy-minded algorithms (notably, Snowden worked there and siphoned data from there for a while).
Anyway: those geeks somehow happily combined surfing with deep algorithm development in a short time and came up with what would later be known as "Signal" (by the way, "signal" is a well-known keyword in the intelligence community, again a coincidence). A small startup was founded, and shortly after that a giant called WhatsApp decided to apply this unknown startup's encryption to the billion-person audience of their app. Something that surely happens all the time, and surely without any of the backdoors that have been developed in Hawaii for decades before any outsiders discover them.
Signal kept being advertised over the years as "private" to the tune of 14 million USD in funding per year provided by the US government (CIA) until it ran out some two years ago: https://english.almayadeen.net/articles/analysis/signal-faci...
Only Tor and a few new tools remain funded; Signal was never really a "hit" because most of its (target) audience insists on using Telegram. WhatsApp, which uses the same algorithm as Signal, recently admitted (this year) that internal staff had access to the supposedly encrypted message contents, so there go any hopes for privacy from a company that makes its money selling user data.
I'd be interested in reading that blog post eventually.
Signal, on the other hand, is a closed "open source" ecosystem (you cannot run your own server or client), requires a phone number (still -_-), and the open source part of it does not have a great track record (I remember periods when, for example, the server was not updated in the public repo).
But yeah, if you want the more popular option, Signal is the one.
And other mastodon servers, just like other email servers, can of course still modify the data they receive how they'd like.
Which is why I think the only solution has to come at the governmental regulatory level. In "freedom" terms it could be framed as freedom from, as in freedom from exploitation, unlawful use of data, etc., but unfortunately freedom to seems to be the most corporate-friendly interpretation of freedom.
You'd be surprised how many people in your life can be introduced to secure messaging apps like Signal (which is still centralized, so not perfect, but a big step in the right direction compared to Whatsapp, Facebook, etc) by YOU refusing to use any other communication apps, and helping them learn how to install and use Signal.
Signal is the best messaging app, but not by the metrics people use to measure messaging apps, because not a ton of people use it. I use Signal, but I also still use SMS (gasp!) because ultimately sometimes I just need to send a message.
It sucks and it's stupid, what we need more than anything else, more than any app, is open and federated messaging protocols.
There is no feasible way for a normie like me to convince enough people to take any kind of action collectively that will be noticed by FAANG.
I think we like to pretend otherwise, like oh if enough people stop using Instagram, they will fail. This is only true in the most literal sense, because "enough" is an enormous number, totally unachievable by advocacy.
We need far better strategies than "vote with your wallet". I think it is at least time to get rid of "vote with your wallet" from our collective vocabularies, for the sake of actual democracy.
If something is bad, it's said that the free market will offer an alternative and the assumed loss of market share will rein in the bad thing. It ignores, as does most un-nuanced discourse about economy and society, that capitalism does not equate to a free market outside of a beginner's economics textbook, and democracy doesn't prevent incumbents from buying up the competition (FB/Instagram) or attempting to block competition outright (Tiktok).
I'm with you, but WhatsApp is tough. How do you keep in touch?
In the cases of special interest groups (think school/club/street/building groups), I just miss out, or ask for updates when I meet people. I am a bit out of the loop sometimes. No-one's died as a result of my leaving. When someone did actually die that I needed to know about, I got a phone call.
Honestly... just leave. Just leave. It's not worth your time worrying about these kind of "what ifs".
Telegram and Signal are, to me, about as trustworthy as WhatsApp. Well, actually, nobody really uses Signal, and Telegram is about the same as WhatsApp so who cares.
Waiting to meet my friends once every 1-2 years is not enough. I want to chat daily with them, because they are my close friends.
Daily telephone conversations with a group of them? Nope. Snail mail? It doesn't work for daily conversation.
So WhatsApp it is!
And what's the alternative?
At any point they might insert an advert, a bot, some AI slop, or change the UI or the share features, and you will have no recourse.
Just by using their platforms they’re able to update their models of you, your family, your friends. The timing of chats, the data they have on you through Insta or FB, all flesh out and refine their model of you. You are doing their work for them, helping them get richer, all whilst they oversee everything you do.
As for alternatives? I already listed several. You rejected most of them for whatever reasons you gave. Those were primarily your choices rather than firm barriers.
Here’s some more options: Discord, Matrix, blogs +RSS, your own mastodon instance, mailing lists, FaceTime, Zoom, WhereBy, MS Teams, irc, Slack, Mattermost, a custom chat server you wrote yourself.
Plus, what about videos? How is a non-tech-savvy creator supposed to host their content if it's best in video format?
https://web.archive.org/web/20251222092511/https://eiratanse...
Unlikely.
In this case she explicitly did NOT make any mention of the divorce on social media when her husband first sprung it on her, nor during the process. She wrote this piece after it had been finalized.
Apparently I'm a luddite now, because yes, this. Stop using social media to communicate with people you ostensibly care about.
The new SAM (Segment Anything) and SAM3D are actually impressive, and good on them for releasing them to the public. They still need to release an image model.
I honestly believe the weird pursuit of "safety" is what sabotaged them; it seems to lobotomize models. It's also the reason Stable Diffusion went from the hot thing to a joke. Stable Diffusion 3 was so safe you couldn't generate a woman lying down on some grass, because that's apparently dangerous for reasons unknown.
All models have had their “safety” and guardrails removed by the community and the world didn’t end.
Companies putting words in people's mouths on social media using "AI" is horrible and shouldn't be allowed.
But I completely fail to see what this has to do with misogyny. Did Instagram have their LLM analyze the post and then only post generated slop when it concluded the post came from a woman? Certainly not.
Major citation needed
https://www.unesco.org/en/articles/generative-ai-unesco-stud...
> Our analysis proves that bias in LLMs is not an unintended flaw but a systematic result of their rational processing, which tends to preserve and amplify existing societal biases encoded in training data. Drawing on existentialist theory, we argue that LLM-generated bias reflects entrenched societal structures and highlights the limitations of purely technical debiasing methods.
https://arxiv.org/html/2410.19775v1
> We find that the portrayals generated by GPT-3.5 and GPT-4 contain higher rates of racial stereotypes than human-written portrayals using the same prompts. The words distinguishing personas of marked (non-white, non-male) groups reflect patterns of othering and exoticizing these demographics. An intersectional lens further reveals tropes that dominate portrayals of marginalized groups, such as tropicalism and the hypersexualization of minoritized women. These representational harms have concerning implications for downstream applications like story generation.
https://aclanthology.org/2023.acl-long.84.pdf
For example, if you write a post about how you failed to get a job, some “extra spice” could be added, inferring that you lost to an immigrant.
I actually am sympathetic to your confusion. Perhaps this is semantics, but I agree with the author's (and your) assessment that this trivializes the human experience; I just don't read it as an attack on women's pain as such. I think the algorithm sensed that the essay would touch people and engender a response.
--
However, I am certain that Instagram knows the author is a woman, and that the LLM they deployed can do sentiment analysis (or just call the Instagram API and ask whether the post is by a woman). So I don't think we can somehow absolve them of cultural awareness. I wonder how this sort of thing influences its output (and wish we didn't have to puzzle over such things).
I guess it should have been marked clearly as such.
It’s not impersonating anyone. Sure, the description is garbage, it may not be obvious it’s not written by the user, but people need to understand what partaking in closed and proprietary social media actually means. You are not paying anything, you do not control the content, you are the product.
I’ll stick to this point only even if I feel that there are other things in the post that are terribly annoying.
Many apps, like Slack and LinkedIn, use that metadata to display a link card with a description.
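For context, here is a minimal sketch (not Slack's or LinkedIn's actual code) of how such a link unfurler typically works: fetch the page and read whatever Open Graph metadata it advertises. If a platform auto-generates the og:description on the author's behalf, that generated text is presumably what ends up on the card, presented as if the author wrote it.

```python
# Minimal link-unfurler sketch (illustrative; not any vendor's actual code):
# fetch a page and pull the Open Graph title/description used for link cards.
from html.parser import HTMLParser
from urllib.request import urlopen


class OpenGraphParser(HTMLParser):
    """Collects <meta property="og:..." content="..."> tags from a page."""

    def __init__(self):
        super().__init__()
        self.tags: dict[str, str] = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property") or ""
        content = attrs.get("content")
        if prop.startswith("og:") and content is not None:
            self.tags[prop] = content


def unfurl(url: str) -> dict[str, str]:
    """Return whatever og:* metadata the page advertises for preview cards."""
    html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
    parser = OpenGraphParser()
    parser.feed(html)
    return {
        "title": parser.tags.get("og:title", ""),
        "description": parser.tags.get("og:description", ""),
    }


# Pages without og:* tags simply yield empty fields.
print(unfurl("https://example.com/"))
```

The point for this thread is that the card's description comes from whatever the page's metadata says, not from anything the author necessarily wrote.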
The shareholders will be content, because they see value in that. The users might not, but not many of them are actual humans; nowadays they're mostly AI. Who has time to read and/or post on social media? Just ask your favorite AI what the hottest trends on social networks are; it should suffice to scratch the itch.
Do not try LinkedIn. Not even once.
And is it just me, or has LinkedIn Recruiter become all the more useless in the LLM age? At least we're not renewing that abomination next year, opting instead for more flesh-and-blood headhunters.
They track and log every reel viewed.
I suppose everyone does it but actually seeing it is another level of creepy.
You can choose the option to tell TikTok you are 'not interested' in videos like these, or block the account entirely. There are legitimate criticisms about social media algorithms, but I don't understand why you jump to the conclusion that you have to delete your account.
Not quite what you’re saying, but a couple of steps in that direction.
I am never, ever requesting that they delete the account.
If anyone using Palantir wants to draw incorrect conclusions based on unverified data, the impact on them is certainly going to be worse than it is on any of us normal citizens.
If your credit is impacted because someone made a mistake, that still fucks you over. It doesn't matter if it's real or not because the entire point of centralized data collection and analytics is that you don't need to care, the people doing the collecting and analyzing do it for you. So you just trust them with whatever. It's on YOU, the consumer, to catch these mistakes and spend a painstaking amount of time trying to fix them, and ultimately the consumer is the only one who will face any consequences. And when it comes to credit, these consequences are very material. It means maybe you can't get a car, or a home, or even a job these days. I know my job ran a credit check.
If we embed these new-age data collection and analysis companies like Palantir and Flock in our systems, a lot of people will suffer, and I don't think anyone cares.
Poison their data. If they have evidence against you, and you can prove their data is even partially bad, you have your reasonable doubt.
Juries are increasingly on the side of the citizen, which is better than nothing.
My credit example is actually giving the opponents too much credit here. The bureaus are kinda government. Even that is better!
I have a cellular hotspot with a phone number apparently recycled from someone who still has it tied to a fintech account (Venmo, or something similar). Every time this person makes a purchase, my hotspot screen lights up with an inbound text message notification.
This person makes dozens of purchases each day, but unlike my previous hotspots, this one does not have a web interface that allows me to log in and see the purchase confirmations. All I get to see is "Purchase made for $xx.xx at" on the tiny screen several dozen times a day.
Social media was a mistake.
Sure, maybe they still exist on some corporate servers from when the companies were sold for scraps. And I suppose it could resurface if I became famous and someone wanted to write an exposé about my youthful debauchery, but for all practical purposes all this stuff has disappeared. Or maybe not. How much do we know about the digital presence of someone like the guy who shot Trump, or the Las Vegas shooter? Or maybe it's known but hidden? I'm impressed that Amazon has my very first order from over 10 years ago, but that's just not par for the course.
Why would AI steal my identity and post as me? I'm not that interesting.
My data is just not that valuable, and I imagine that within the next 5-10 years AI will be trained almost entirely on synthetic data.
Even my damn personal website was in the top 5 Google results for my name, despite no attempt at SEO and no popularity.
Today those sites are all gone and it’s as if I no longer exist according to Google.
Instead a new breed of idiots with my name have their life chronicled. I even get a lot of their email because they can’t spell their name properly. One of them even claimed that they owned my domain name in a 3-way email squabble.
I almost no longer exist and it’s kinda nice.
Only PeopleFinder and such show otherwise.
"If you want to have a baby, you won't be able to conceive. If you want to stay childfree, the condom will break."
If you want to find old logs of your IRC and AIM buddies from 20 years ago, they're gone. If you say something stupid once, it's kept forever.
It seems nuts to me that shareholders would be happy about a bunch of fake users, at least ones that don't have any money.
Users are $$$. Nobody wants to talk about which are human and which aren’t. It’s all a game of hot potato.
Who in marketing doesn’t want to champion the success of “we got 25% more views this month!”
Before you ask: this quote was made with ChatGPT (GPT-5.2), unmodified, first attempt.
Just a heads up in case you didn't know, but generated comments are not allowed on HN: https://news.ycombinator.com/item?id=45077654
Can't speak for dang, obviously, but that's the rule I'd make in his shoes.
eg:
https://news.ycombinator.com/item?id=45077654
https://news.ycombinator.com/item?id=44704054
https://news.ycombinator.com/item?id=43979537
https://news.ycombinator.com/item?id=43085967
https://news.ycombinator.com/item?id=43085954
https://news.ycombinator.com/item?id=42976756
https://news.ycombinator.com/item?id=40600057
https://news.ycombinator.com/item?id=46102885
https://news.ycombinator.com/item?id=45572704
https://news.ycombinator.com/item?id=41237678
https://news.ycombinator.com/item?id=40569734
https://news.ycombinator.com/item?id=35210503
The comment here was a borderline case of the latter, but I think it was on the worthwhile side of the border, personally.
Once Hacker News becomes nothing but bots posting stories written by bots for other bots to comment on - which is the inevitable end point of a permissive attitude towards this stuff - what even is the fucking point to any of this? SEO juice?
We crawled the Internet, identified stores, found item listings, extracted prices and product details, consolidated results for the same item together, and made the whole thing searchable.
And this was the pre-LLM days, so that was all a lot of work, and not "hey magic oracle, please use an amount of compute previously reserved for cancer research to find these fields in this HTML and put them in this JSON format".
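To make that contrast concrete, here is a hypothetical sketch of the pre-LLM style of extraction described above; the store name, patterns, and field names are invented for illustration and are not taken from the actual crawler:

```python
# Hypothetical sketch of pre-LLM price extraction: hand-written rules per
# store, turning product-page HTML into a consolidated JSON record.
import json
import re

# Per-store extraction rules someone had to write and maintain by hand.
STORE_RULES = {
    "exampleshop.test": {
        "title": re.compile(r'<h1 class="product-title">(.*?)</h1>', re.S),
        "price": re.compile(r'<span class="price">\$([\d.]+)</span>'),
    },
}


def extract_listing(store: str, html: str) -> dict:
    """Apply one store's rules to a product page and emit a JSON-able record."""
    rules = STORE_RULES[store]
    record = {"store": store}
    for field, pattern in rules.items():
        match = pattern.search(html)
        record[field] = match.group(1).strip() if match else None
    if record.get("price") is not None:
        record["price"] = float(record["price"])
    return record


page = '<h1 class="product-title">Widget</h1><span class="price">$19.99</span>'
print(json.dumps(extract_listing("exampleshop.test", page)))
```

Every new store layout meant another rule set to write and another one to break, which is the "lot of work" being contrasted with just handing the HTML to a model.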
We never really found a user base, and neither did most of our competitors (one or two of them lasted longer, but I'm not sure any survived to this day). Users basically always just went to Google or Amazon and searched there instead.
However, shortly after we ran out of money and laid off most of the company, one of our engineers mastered the basics of SEO, and we discovered that users would click through Google to our site to an item listing, then through to make a purchase at a merchant site, and we became profitable.
I suppose we were providing some value in the exchange, since the users were visiting our item listings which displayed the prices from all the various stores selling the item, and not just a naked redirect to Amazon or whatever, but we never turned any significant number of these click-throughs into actual users, and boy howdy was that demoralizing as the person working on the search functionality.
Our shareholders had mostly written us off by that point, since comparison shopping had proven itself to not be the explosive growth area they'd hoped it was when investing, but they did get their money back through a modest sale a few years later.
As long as no one figures out it’s all fake, the line can keep going up and to the right and everyone is happy.
https://news.ycombinator.com/item?id=37723862
Via discounts, promo codes, gamification, whatever else they’re using today to get people to install their apps and sign over their privacy.
That's a bit dismissive of women; does she think that women aren't capable of designing and maintaining software too?
You see this later as well when she slyly glides over women who do what her husband did. When her husband decided to end their marriage, it was representative of men. When women do it, it's their choice to make.
But I am a pedantic person who prefers to focus on the literal statements in text rather than the perceived underlying emotional current. So I’ll pedantically plod through what she actually said.
She’s dealing with two dimensions of divorce: who initiated it (husband, wife, or collaborative), and whether it was surprising or unsurprising.
That gives six combinations on paper, but she lists three. What unifies them is that they are all written from the perspective of the abstract woman undergoing the experience.
1. Woman initiated, surprise unspecified.
2. Collaborative, so assume unsurprising.
3. Man initiated, surprising (her situation).
She doesn’t claim this covers all possibilities. The point of that bit is just to emphasize that divorces are different, and to object to treating them as a genre for wellness AI slop.
Here is the original text containing that part so others can easily form their own opinion.
“I also object to the flattening of the contours of my particular divorce. There are really important distinctions between the experiences of women who initiate their own divorces versus women who come to a mutual agreement with their spouses to end the marriage versus women, like me, who are completely blindsided by their husbands’ decisions to suddenly end the marriage. All divorces do involve self-discovery and rebuilding your life, but the ways in which you begin down that path often involve dramatically different circumstances.”
> We already know that in a patriarchal society, women’s pain is dismissed, belittled, and ignored. This kind of AI-generated language also depoliticizes patriarchal power dynamics.
A man does something bad: it's the fault of the patriarchy. A woman does something bad: it's also men's fault, because the patriarchy made her do it. Either way you cannot win with a person like that. I think I understand why the husband wanted a divorce.
As someone else said, the red flags of insufferability abound here, first and foremost announcing something as personal and momentous as this on public social media.
All that sweet, sweet innovation!
> My story is absolutely layered through with trauma, humiliation, and sudden financial insecurity and I truly resent that this AI-generated garbage erases the deliberately uncomfortable and provocative words I chose to include in my original framing.
I truly feel for her, and wish her luck. Also, I feel that, of any of the large megacorps, Meta is the one I would peg to do this. I’m not even sure they feel any shame over it. They may actually appreciate the publicity this generates.
56 more comments available on Hacker News