Robin Williams' Daughter Pleads for People to Stop Sending AI Videos of Her Dad
Mood: heated
Sentiment: negative
Category: other
Key topics: Robin Williams' daughter has pleaded with people to stop sending her AI-generated videos of her father, sparking a heated discussion about the ethics and impact of AI-generated content on personal grief and digital legacy.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 9m after posting
Peak period: 150 comments (Day 1)
Avg / period: 32
Based on 160 loaded comments
Key moments
- 01 Story posted: Oct 7, 2025 at 12:56 PM EDT (about 2 months ago)
- 02 First comment: Oct 7, 2025 at 1:05 PM EDT (9m after posting)
- 03 Peak activity: 150 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: Oct 19, 2025 at 1:49 PM EDT (about 1 month ago)
And even if this were a viable answer: legal process _where_? What's to stop these "creators" from simply doing their computation in a different jurisdiction?
We need systems that work without this one neat authoritarian trick. If your solution requires that you lean on the violence of the state, it's unlikely to be adopted by the internet.
Also, calling legal enforcement “leaning on the violence of the state” is hyperbolic and a false dichotomy. Every system of rights for and against companies (contracts, privacy, property, speech) comes down to enforceable legal policies.
Examples of cases that have shaped society: Brown v. Board of Education, pollution lawsuits against 3M and Dow Chemical, Massachusetts v. EPA (which affirmed the EPA's authority to regulate greenhouse gases under the Clean Air Act), the DMCA, FOSTA-SESTA, the EU Right to Be Forgotten, Reno v. ACLU (which outlined speech protections online), interracial marriage protected via Loving v. Virginia, and Carpenter v. US (the ruling that now requires police to get a warrant to access cell phone data) - and these are just a few!
> And even if this were a viable answer: legal process _where_? What's to stop these "creators" from simply doing their computation in a different jurisdiction?
Jurisdictional challenges don't mean a law is pointless. Yes, bad actors can operate from other jurisdictions, but this is true for all transnational issues, from hacking to human smuggling to money laundering. DMCA takedowns work globally, as does GDPR for non-EU companies.
Nobody’s arguing for blind criminalization or over-policing of AI. But perhaps there should be some legal frameworks to protect safe and humane use.
But that shouldn’t be the first step. Telling your fellow man “what you are doing is bothering me, please stop” is significantly simpler, faster, and cheaper than contacting lawyers and preparing for a possibly multi-year case where all the while you’ll have to be reminded and confronted with the very thing you don’t want to deal with.
If asking doesn’t work, then think of other solutions.
Like if nothing had sparked the World Wars (or, conversely, if Hitler had won). Or if Greece had harnessed steam or electricity to spur an industrial revolution 2,200 years ago. Or if Christianity etc. had never caught on.
It's not, because just telling people on the internet to stop doing something doesn't actually stop them from doing it. This is basic internet 101: the Streisand effect at full power.
The most these lawsuits could hope to do is generate publicity, which would likely just encourage more people to send her videos. This direct plea has that risk too, but I think "please don't do this" will feel a lot less adversarial and more genuine to most people than "it should be illegal for you to do this".
It's not fruitless, and it doesn't only generate publicity. Some states, like California and Indiana, recognize and protect the commercial value of a person's name, voice, image, and likeness for 70 years after death, which in this case would apply for Robin Williams' daughter.
Tupac's estate threatened legal action against Drake, who then removed the AI-generated Tupac voice from his Kendrick Lamar diss track.
There is going to be a deluge of copyright suits against OpenAI for their videos of branded and animated characters. Disney just sent a cease and desist to Character.ai last week for using copyrighted characters without authorization.
The problem she wants to solve is "people are sending me AI videos of my dad". She will not have any success solving this problem using lawsuits, even if the lawsuits themselves succeed in court.
1. Whether there is an effective legal framework that prevents AI companies from generating the likenesses of real people.
2. The shared cultural value that this is actually not cool, not respectful, and in fact somewhat ghoulish.
Establishing a cultural value is probably more important than any legal structures.
If AI somehow allowed me to create videos of the likeness of Isaac Newton or George Washington, that seems far less a concern because they are long dead and none of their grieving family is being hurt by the fakes.
So I don't think there was actually malicious intent and asking people to stop will probably work.
Her goal seems to be to reduce the role in her life played by AI slop portrayals of her dad. Taking the legal route seems like it would do the opposite.
Robin Williams' daughter is wise to avoid creating a "Williams Effect".
Since the rise of generative AI we have seen all sorts of pathetic usages: "reviving" assassinated people and making them speak to the alleged killer in court, training LLMs to mimic deceased loved ones, generative nudification, people who no longer use their own brains because they need to ask ChatGPT/Grok... Some of these are crimes, others not. Regardless, most of them should stop.
I can also 100% tell you that the farming folk of 100 years ago also felt like the farming machines took away their jobs. They saw zero positives. The ones who could (the young) went into industry; the others... well, at the same time we instituted pensions, which were of course paid for by the active population, so it kind of turned out OK in the end.
I do wonder, what will be the repercussions of this technology. It might turn into a dud or it might truly turn into a revolution.
this is always the answer that the hopeful give: "previous revolutions of this kind, i.e., the Industrial Revolution, created a host of new professions we didn't even know existed, so this one will as well." Except that it was obvious quite early on what the professions created by those revolutions were. Factories were springing up immediately and employing those who had been working in the fields. The pace of change was much slower too; it took about 150 years in the US for the transformation from an agrarian society to an industrial one to happen. That provides time for society to adjust.
I have yet to see anyone demonstrate even an idea of new professions -- that may employ millions of people -- that are likely to emerge. So far, the "hope" is a pipe dream.
(I'm just picking nits. I do agree that this "revolution" is not the same and will not necessarily produce the same benefits as the industrial revolution.)
"Technology is neutral" is a cop-out and should be seen as such. People should, at the very least, try to ask how people / society will make use of a technology and ask whether it should be developed/promoted.
We are all-too-often over-optimistic about how things will be used or position them being used in the best possible light rather than being realistic in how things will be used.
In a perfect world, people might only use AI responsibly and in ways that largely benefit mankind. We don't live in that perfect world, and it is/was predictable that AI would be used in the worst ways more than it's used in beneficial ones.
https://www.bbc.com/future/article/20250523-the-soviet-plan-...
Uncle Ted, Neil Postman, Jacques Ellul were right all along.
1) Rolling coal. It's hard for me to envision a publicly-available form of this technology that is virtuous. It sounds like it's mostly used to harass people and exert unmerited, abusive power over others. Hardly a morally-neutral technology.
2) Fentanyl. It surely has helpful uses, but maybe its misuse is so problematic that humanity might be significantly better off without the existence of this drug.
Maybe AI is morally neutral, but maybe it isn't.
I'm not implying those adjectives apply to AI, but merely presenting a worst-case scenario.
Dismissing the question of "does this benefit us?" with "it's just a tool" evokes Jurassic Park for me.
My point is that like any technology, it's how you use it.
I'm happy that nuclear weapons and AI have been invented, and I'm excited about the future.
However, if you ask me to, I can imagine using those weapons against meteors headed for Earth, or possibly aliens. We don't know.
Phew, I never thought that "it's better to know more than less" would be controversial on HN.
I mean, what if the nuclear bomb actually did burn up the atmosphere? What if AI does turn into a runaway entity that eventually functions to serve its own purposes and comes to see humans the same way we see ants: as a sort of indifferent presence that's in the way of its goals?
And the sort of people who sympathize with Winston and blame Felix Hoenikker, but still fail to see any parallels between "fiction" and life.
Nuclear weapons just destroy stuff at scale, by design.
That's a very easy yes.
LLM and stable diffusion though? Yep; agree.
Just yesterday someone posted a "photo" of a 1921 incident where a submarine lost power and built sails out of bedsheets to get home.
But the photo posted looked like a post-WWII submarine, rigged like a clipper ship, rather than the real-life janky 1920s bed-sheet rig, and with characters everywhere.
Actual incident (with actual photo): https://en.wikipedia.org/wiki/USS_R-14
I mean, thank you I guess, but anyone can do that with the littlest of effort; and anyone with an actual intention of understanding and answering the question would have recognized it as slop and stopped right there.
I see both of these as entirely valid use cases, but I'd be curious to know where you stand on them / why you might think recreating the actors here would be detrimental.
Maybe there needs to be a rule that, lacking express permission, no AI-generated characters can represent a real person who lived within the last 25 years, or something similar.
I know, Dune, and yeah, I get it - science fiction ain't real life - but I'm still into these vibes.
Anyone wanna start a club?
I wish we had machines that actually thought because they'd at least put an end to whatever this is. In the words of Schopenhauer, this is the worst of all possible worlds not because it couldn't be worse but because if it was a little bit worse it'd at least cease to exist. It's just bad enough so that we're stuck with the same dreck forever. This isn't the Dune future but the Wall-E future. The problem with the Terminator franchise and all those Eliezer Yudkowsky folks is that they are too optimistic.
Yeah, I'm in. Let me know when and where the meetings are held.
The machines didn't enslave anyone in this scenario. "Men with machines" did. I think of the techbro oligarchs who decide what a feed algorithm shows.
<spoiler>
I interpreted Thufir Hawat's massive misunderstanding of Lady Jessica's motivation (which was a huge plot point in the book but sadly didn't make it into the films) as evidence for the conclusion that humans are capable of the exact same undesirable patterns as machines.
Did I read that wrong?
</spoiler>
Dune's a wild ride man!
Taking the first book by itself, it doesn't speak much about the relationship between man and machine. The fundamental themes are rooted in man's relationship with ecology (both as the cause and effect).
If you take the first book alone you're left with only one facet of a much grander story. You're also left with the idea of a white savior story that says might makes right, which really isn't what was going on at all.
I think the first book is more nuanced than that. It's a demonstration of the Nietzschean perspective, but it doesn't make any assertions about morality.
The story shows us how humans are products of their environment: striving for peace or "morality" is futile, because peace makes men weak, which creates a power vacuum which ends peace. Similarly, being warlike is also futile, because even if you succeed, it guarantees that you will become complacent and weak. It's never said outright, but all of the political theory in the book is based on the idea that "hard times make strong men, strong men make good times, good times make weak men, weak men make bad times". It's like the thesis of "Guns, Germs, and Steel": Frank Herbert proposes that in the long term, no cultural or racial differences matter; that everything is just a product of environmental factors. In a way it's also the most liberal perspective you can have. But at the same time, it is also very illiberal, because in the short term race and culture do matter.
The "moral" of dune is that political leaders don't really have agency because they are bound by their relationships that define power in the first place, which are a product of the environment. Instead, the real power is held by the philosopher-kings outside of the throne because they have the ability to change the environment (like pardot kynes, who is the self-insert for frank herbert). The book asks us to choose individual agency and understanding over the futility of political games.
From the use of propaganda to control the city-dwellers at the beginning of the book to the change in Paul's attitudes towards the end, I think the transactional nature of the Atreides' goodwill is pretty plainly spelt out for us. I mean, we learn by the end that Paul is part Harkonnen by blood, and in the same way as the Harkonnen use of the "brutal" Rabban and the "angelic" Feyd, it's all public relations. Morality is a tool of control.
I think the reason you are uneasy about the idea of the "white savior" playing a role in the book is that you actually subscribe to this fake morality yourself, in real life. You are trying to pigeonhole the story like it's "Star Wars" or something. Dune is against the idea of "morality" itself. By bringing up the "white savior" concept, you are clearly thinking in terms of morality. Having some morality puts you at odds with the real point of the book, which is where the unease comes from. You want the dissonance to be resolved, but the real story of Dune is open-ended.
Saying that the first book alone doesn't make any assertions about morality is somewhat hilarious. The Baron is queer-coded, as is Feyd; the "good guys" are strong, manly men. Even the idea that "hard times make strong men, ..." is a morality in and of itself.
I never said I was uneasy about the idea of a white savior; you are reading far too much into my beliefs and ideals. I would also appreciate it if you did not project onto me any of your imaginings of my own beliefs. You do not know me.
That said, if you have only read the first book you truly are getting only one small facet of the story that Herbert was trying to tell. A lot of what is laid out in the first novel is inverted and overturned by the 3rd and 4th novels.
Finally, you have written a lot about one book out of a long series of books. I would suggest that, just as you are wont to project some sort of belief onto me, you, too, are projecting too much upon just the first entry of a much, much larger epic.
If there were a new kind of "machines that think"--and they aren't a dangerous predator--they could be a contrast to help us understand ourselves and be better.
The danger from these (dumber) machines is that they may be used for reflecting, laundering, and amplifying our own worst impulses and confusions.
???
Why does that matter? Isn't it all the same? No, because I'm human, and I can make special exceptions for humans.
Isn't that perfect hypocrisy? Yes, but I'm human, and it's okay because I get to decide what's okay and I say it's okay because I'm human.
See also: why can I eat a burger but Jeffrey Dahmer went to prison?
A thinking machine is a total unknown. If human intelligence and machine intelligence are not aligned, then what?
I will never bow before the machine god
You would not be the first, see: https://en.wikipedia.org/wiki/Luddite
Funny thing is that we still have handmade fabric today, and we're still employing a frightening number of people in the manufacturing of clothing. The issue is that we're making more lower-quality products rather than higher-quality items.
The problem isn't that the technology is new and exciting and people are getting carried away, the problem is the technology.
Hard disagree. Technology is a tool. Tools don’t do anything without a human in the loop, even if only to initially set an autonomous system running.
Even technology like guns and nukes are not inherently bad. Can we be trusted to behave ourselves with such tools? I would argue “no”.
Technologies may not seem inherently bad, but if they tend to be used in bad ways, the difference is minimal.
Deepfakes have practically no positive use and plenty of potential for abuse. They're not a neutral technology. They're a negative social outcome from what might be a neutral technology (ML).
The problem is that the potential for truly positive stuff is minimal and the potential for truly awful stuff is unlimited. That's even ignoring the fact that the massive energy and water costs of these technologies is going to be a massive ecological disaster if we don't get it under control - and we won't, because the billionaires running these systems donate millions to the politicians voting against these protections.
Ban deepfakes.
The fact that we cannot trust humanity to behave with such tools means that the tool (or maybe the tool being accessible) is the problem.
Take care of your kids, parents. I can't imagine growing up/parenting with this bullshit.
It's not really a story; this is an Instagram post about someone who can be tagged and forwarded items on Instagram by strangers, for those of you who aren't familiar.
This is not about any broader AI thing, and it's not news at all. A journalist made an article out of someone's Instagram post.
HN came through for me.
If media had one-shot generated actors, we could just appreciate whatever we consumed and then forget about everyone involved. Who cares what that generated character likes to eat for breakfast or who they are dating? They don't exist.
> I welcome generative AI killing it off.
It probably will, but that pushes us in the direction that Neal Stephenson describes in Fall - millions of people sitting alone consuming slop content generated precisely for them, perfect neuro-stimulation with bio-feedback, just consuming meaningless blinking lights all day and loving it. Is that better than what we have now? It's ad-absurdum, yes, but we live in a very absurd world now.
> Who cares what that generated character likes to eat for breakfast or who they are dating? They don't exist.
You never needed to know this about celebrities, and you still don't need to now. Yes, others are into it; let them have their preferences. No doubt you're into something they would decry.
In my ideal world generating content becomes so easy that it loses all value unless you have some relation to the creator. Who cares about the new Avengers movie, Rick from down the street also made an action movie that we are gonna check out. Local celebrities return. Global figures are generated because why would Coke pay 100m for Lebron to dunk a can of coke on camera or some dumb shit when his image has been absolutely trashed by floods of gen content.
- How do I know Rick down the street, anymore, if I'm inside consuming perfectly addicting slop all day?
- How do I ensure that I am also consuming content that has artistic merit that is outside my pure-slop comfort zone?
- Why would I notice or care about a local artist when they can't produce something half as perfect "for me" as a recommendation algorithm trained on all human artistic output?
> Global figures are generated
This I agree with. Ian McDonald wrote about "aeai" characters in his early-2000s novel River of Gods, where a subplot concerns people who watch TV shows starring AI characters (as actors who also have an outside-of-the-show life). The thing is, right now we see Lebron in the ad and it's an endorsement - how can an AI character of no provenance endorse anything?
Which sucks
https://www.theguardian.com/film/2015/mar/31/robin-williams-...
I don't know if it could be extended further, but I feel like there is merit for it to be considered in this case.
In Denmark people own the copyright to their own voice, imagery and likeness:
https://www.theguardian.com/technology/2025/jun/27/deepfakes...
I think that it is probably the right way to go.
However, there might be a way of doing it that doesn't create a new category of intellectual property. I think I read about a case that went a bit like this. Parliament created some kind of privacy right. A newspaper asked someone for permission to publish some pictures that could be an invasion of some person's privacy. The person asked for money. The newspaper went ahead and published without permission. The person sued. The newspaper argued in court that if the person was willing to let the pictures be published in return for money then they didn't really care about their privacy. The judge said: it does not seem that the purpose of the legislation was to create a new category of intellectual property so that argument might be valid... I've garbled the details and I'm not sure of the final result but that line of reasoning is interesting, I think.
Back in the 00s I remember friends being sent “slimming club” junk mail on paper made to look like a handwritten note from “a concerned friend”. It was addressed to them but sent at random.
Unfortunately it can be very distressing for those with body image issues.
We’re going to have to treat this slop like junk mail as it grows.
Anything I'm sent that's AI-generated doesn't get read. I realize this will get more difficult as models improve, but I'm standing by this as long as they have that recognizable verbose and agreeable signature.
If I did have sufficient training data for an AI Dad, I'd much rather just listen to the real recordings, rather than some AI slop generated from it. This is just another dumb application of AI that sounds alright on paper, but makes no sense in real life.
https://www.latimes.com/entertainment-arts/movies/story/2021...
It's not necessarily disgusting by itself, but sending clips to the guy's daughter is very weird.
48 more comments available on Hacker News