Last activity about 1 month ago · Posted Oct 7, 2025 at 12:56 PM EDT

Robin Williams' Daughter Pleads for People to Stop Sending AI Videos of Her Dad

dijksterhuis
246 points
208 comments

Mood: heated
Sentiment: negative
Category: other
Key topics: AI Ethics, Grief and Technology, Digital Legacy
Debate intensity: 80/100

Robin Williams' daughter has pleaded with people to stop sending her AI-generated videos of her father, sparking a heated discussion about the ethics and impact of AI-generated content on personal grief and digital legacy.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment: 9m
Peak period: 150 (Day 1)
Avg / period: 32

Comment distribution: 160 data points (based on 160 loaded comments)

Key moments

  1. Story posted: Oct 7, 2025 at 12:56 PM EDT (about 2 months ago)
  2. First comment: Oct 7, 2025 at 1:05 PM EDT (9m after posting)
  3. Peak activity: 150 comments in Day 1 (hottest window of the conversation)
  4. Latest activity: Oct 19, 2025 at 1:49 PM EDT (about 1 month ago)


Discussion (208 comments)
Showing 160 of 208 comments
WorldPeas
about 2 months ago
1 reply
As a fan of his work, I too wish it all to stop. People always go headlong for the people who we all miss the most, yet don't understand that it was their underlying humanity that made them so special.
randycupertino
about 2 months ago
9 replies
Rather than "pleading" for them to stop, wouldn't she have more success going after the AI content creation companies via legal process? I thought actors have the right to control commercial uses of their name, image, voice, likeness, or other recognizable aspects of their persona, thus if people are paying for the AI creation wouldn't the companies be wrongly profiting off his likeness? Although I'm sure some laws haven't yet been explicitly updated to cover AI replicas.
jMyles
about 2 months ago
1 reply
It's so frustrating that "just call the cops" is the answer, at the very same time that the cops are creating a massive disruption to our society.

And even if this were a viable answer: legal process _where_? What's to stop these "creators" from simply doing their computation in a different jurisdiction?

We need systems that work without this one neat authoritarian trick. If your solution requires that you lean on the violence of the state, it's unlikely to be adopted by the internet.

randycupertino
about 2 months ago
Legal process is not an "authoritarian trick," it's the primary enforceable framework for wide-scale, lasting societal change, as it's the only method that actually has teeth.

Also, calling legal enforcement "leaning on the violence of the state" is hyperbolic and a false dichotomy. Every system of rights for and against companies (contracts, privacy, property, speech) comes down to enforceable legal policies.

Examples of cases that have shaped society: Brown v. Board of Ed, pollution lawsuits against 3M and Dow Chemical, Massachusetts v. EPA resulted in the Clean Air Act, DMCA, FOSTA-SESTA, the EU Right to Be Forgotten, Reno v. ACLU which outlined speech protections online, interracial marriage protected via Loving v. Virginia, the ruling that now requires police have a warrant to access cell phone data was Carpenter v. US, and these are just a few!

> And even if this were a viable answer: legal process _where_? What's to stop these "creators" from simply doing their computation in a different jurisdiction?

Jurisdictional challenges don't mean a law is pointless. Yes, bad actors can operate from other jurisdictions, but this is true for all transnational issues, from hacking to human smuggling to money laundering. DMCA takedowns work globally, as does GDPR for non-EU companies.

Nobody's arguing for blind criminalization or over-policing AI. But perhaps there should be some legal frameworks to protect safe and humane use.

latexr
about 2 months ago
4 replies
> Rather than "pleading" for them to stop, wouldn't she have more success going after the ai content creation companies via legal process?

But that shouldn’t be the first step. Telling your fellow man “what you are doing is bothering me, please stop” is significantly simpler, faster, and cheaper than contacting lawyers and preparing for a possibly multi-year case where all the while you’ll have to be reminded and confronted with the very thing you don’t want to deal with.

If asking doesn’t work, then think of other solutions.

Razengan
about 2 months ago
1 reply
How about just praying for an asteroid to reset us and hope we get shit right the next time around
pstuart
about 2 months ago
2 replies
If we can't get it right this time, there's no indication a reboot would be any better (because humans).
dylan604
about 2 months ago
1 reply
who says the next time will be humans? there was no next time for the dinosaurs. maybe humans are not the end, but just a rung on the ladder.
fragmede
about 2 months ago
How about squirrels? I think it would be fun to have a tail.
Razengan
about 2 months ago
I don't know. I mean, things could have gone very differently (in a better or worse way) and the world may be unrecognizable today if certain key events did not happen.

Like if nothing sparked the World Wars (conversely: or if Hitler won). Or Greece harnessed steam or electricity to spur an industrial revolution 2200 years ago. Or if Christianity etc. never caught on.

bossyTeacher
about 2 months ago
1 reply
> Telling your fellow man “what you are doing is bothering me, please stop” is significantly simpler, faster, and cheaper

It's not, because just telling people on the internet to stop doing something doesn't actually stop them from doing it. This is basic internet 101: Streisand effect at full power.

Supermancho
about 2 months ago
The Streisand effect is a reactive effect, not an ongoing condition.
viraptor
about 2 months ago
You can't tell everyone. Barely anyone will know of this being published. And then there will be lots of people thinking "whatever, I don't care". And a not insignificant number of people thinking "lol, time to organise a group of people who will send her creepy Robin Williams genAI every day!"
WorldPeas
about 2 months ago
Asking solves it for you, biting the hand makes them think twice about doing it to others, but good luck doing that to the many-headed serpent of the internet.
burkaman
about 2 months ago
1 reply
No, she would not have any success. Take a look at this list and think about the sheer number of companies she would need to sue: https://artificialanalysis.ai/text-to-video/arena?tab=leader.... You'll see Google, one of the richest companies on the planet, and OpenAI, the richest private company on the planet. You'll see plenty of Chinese companies (Bytedance, Alibaba, Tencent, etc.). You'll also see "Open Source" - these models can't be sued, and removing them from the internet is obviously impossible.

The most these lawsuits could hope to do is generate publicity, which would likely just encourage more people to send her videos. This direct plea has that risk too, but I think "please don't do this" will feel a lot less adversarial and more genuine to most people than "it should be illegal for you to do this".

randycupertino
about 2 months ago
1 reply
> The most these lawsuits could hope to do is generate publicity, which would likely just encourage more people to send her videos.

It's not fruitless and doesn't only generate publicity. Some states like California and Indiana recognize and protect the commercial value of a person's name, voice, image, and likeness after death for 70 years, which in this case would apply for Robin Williams' daughter.

Tupac's estate successfully sued Drake to take his AI generated voice of Tupac out of his Kendrick Lamar diss track.

There is going to be a deluge of copyright suits against OpenAI for their videos of branded and animated characters. Disney just sent a cease and desist to Character.ai last week for using copyrighted characters without authorization.

burkaman
about 2 months ago
1 reply
What I'm saying is that successfully suing individual companies or people would have zero impact on her actual problem. If California says it's illegal and OpenAI says they'll ban anyone who tries it, then these people can effortlessly switch to a Grok or Alibaba or open source model, and they'll be extra incentivized to do so because they'll find it fun to "fight back" against California or luddites or whatever. Do you see the difference? Tupac's estate successfully stopped one guy from using Tupac's voice, but they have not and cannot stop the rest of the world from doing so. The same is true for Disney, it is trivial for anyone to generate images and videos using Disney characters today, and it will be forever. Their lawsuit can only hope to prevent a specific very small group of people from making money off of that.

The problem she wants to solve is "people are sending me AI videos of my dad". She will not have any success solving this problem using lawsuits, even if the lawsuits themselves succeed in court.

fragmede
about 2 months ago
Is that really the problem she wants to solve? She could just turn off her phone to accomplish that. The problem is multi-layered and complex. Holy shit, it's her dad. He's dead. I don't have her phone number, but let's pretend I did and we were friends, why would I be texting her videos of her dead father? He's Robin Williams, sure, but why? Why! would I be making AI videos and sending them to her? Forget Sora, if I made a puppet of her father and made a video of him saying things he didn't say, and then sent it to her, I think I'd still be a psychopath for sending it to her. I think she should sue OpenAI and California should have it be illegal without a license, and yeah there's always gonna be a seedy underbelly. I'm sure there's Mickey Mouse porn out there somewhere. A lawsuit is going to make it official that she is a person and she's saying hey I don't like that and that she would like for people to stop it, and that the rest of us agree with that.
lukev
about 2 months ago
1 reply
There's two things potentially at stake here:

1. Whether there is an effective legal framework that prevents AI companies from generating the likenesses of real people.

2. The shared cultural value that this is not cool actually, not respectful, and in fact somewhat ghoulish.

Establishing a cultural value is probably more important than any legal structures.

SkyBelow
about 2 months ago
1 reply
I think there is also a major distinction between creating the likeness of someone and sending that likeness to the family of the deceased.

If AI somehow allowed me to create videos of the likeness of Isaac Newton or George Washington, that seems far less a concern because they are long dead and none of their grieving family is being hurt by the fakes.

fragmede
about 2 months ago
https://sora.chatgpt.com/p/s_68e57c1b22708191be1d249c1a52b2b...
izzydata
about 2 months ago
Asking people to stop seems like the first step. Especially since this is specific to people sending them to her in particular. People think they are being nice and showing some form of affection, but as she mentions she finds it disturbing instead.

So I don't think there was actually malicious intent and asking people to stop will probably work.

dylan604
about 2 months ago
what if the creators are not in the same legal jurisdiction or in some place that does not care about whatever rights you think are being wronged?
Centigonal
about 2 months ago
How long would the legal process take? How much would it cost? Does she have to sue all proprietors of commercial video generator models? What about individuals using open source models? How many hours of her time will these suits take to organize? How many AI videos of her dad will she have to watch as part of the proceedings? Will she be painted as a litigious villain by the PR firms of these very well-capitalized interests?

Her goal seems to be to reduce the role in her life played by AI slop portrayals of her dad. Taking the legal route seems like it would do the opposite.

Nevermark
about 2 months ago
Imagine if Barbra Streisand died, and her estate tried suing people for recreations of her.... [0]

Robin Williams' daughter is wise to avoid creating a "Williams Effect".

[0] https://en.wikipedia.org/wiki/Streisand_effect

estebarb
about 2 months ago
Not really. From what I understood of the interview, her complaint is not about money or compensation (which she may be entitled to), but about how people use the technology and how they interact with it and with her. Legal processes, or even the companies implementing policies, won't change that problematic societal behavior.

Since the rise of generative AI we have seen all sorts of pathetic usages, like "reviving" assassinated people and making them speak to the alleged killer in court, training LLMs to mimic deceased loved ones, generative nudification, people who aren't using their brains anymore because they need to ask ChatGPT/Grok... some of them are crimes, others not. Regardless, most of them should stop.

carabiner
about 2 months ago
6 replies
I wish AI had never been invented.
jolt42
about 2 months ago
6 replies
ALL technology can be used for good or bad. It's the usage, not the invention.
thinkingtoilet
about 2 months ago
2 replies
Right. But a machine that helps plant seeds at scale could be used for bad by running someone over, but its core purpose is to do something helpful. AI's core purpose isn't to do anything good right now. It's about how many jobs it can take, how many artists it can steal from and put out of work, and so on and so on. How many people die from computer mice each year? How many from guns? They're both technology and can be used for good or bad. To hand wave the difference away is dangerous and naive.
prerok
about 2 months ago
1 reply
But... the machine that plants seeds also takes away the livelihood of a bunch of folks. I mean, in my country, we were an agrarian society 100 years ago. I don't have the actual stats but it was close to 90% agrarian. Now, it's at about 5%. Sure, people found other jobs and that will likely be the case here. I will do the dishes while the AI will program.
thinkingtoilet
about 2 months ago
2 replies
I understand the industrial revolution happened. To say this revolution is the same and will produce the same benefits is already factually wrong. One revolution created a net positive of jobs. One has only taken jobs.
prerok
about 2 months ago
1 reply
I would say we don't know that yet. Comparing the current state of LLMs to what they can lead to or what they might enable later on is like comparing early machine prototypes to what we have today.

I can also 100% tell you that the farming folk of 100 years ago also felt like the farming machines took away their jobs. They saw 0 positives. The ones that could (were young) went into industry, the others... well, at the same time we instituted pensions, which were of course paid for by the active population, so it kind of turned out ok in the end.

I do wonder, what will be the repercussions of this technology. It might turn into a dud or it might truly turn into a revolution.

insane_dreamer
about 2 months ago
> we don't know that yet

this is always the answer that the hopeful give: "previous revolutions of this kind, i.e., the Industrial Revolution, created a host of new professions we didn't even know existed, so this one will as well." Except that it was obvious quite early on what the professions created by those revolutions were. Factories were springing up immediately and employing those who had been working in the fields. The pace of change was much slower too; it took about 150 years in the US for the transformation from an agrarian society to an industrial one to happen. That provides time for society to adjust.

I have yet to see anyone demonstrate even an idea of new professions -- that may employ millions of people -- that are likely to emerge. So far, the "hope" is a pipe dream.

nhecker
about 2 months ago
However, aren't there now a lot of job openings out there for LLM-whisperers and other kinds of AI domain experts? Surely these didn't exist in the same quantity 10 years ago.

(I'm just picking nits. I do agree that this "revolution" is not the same and will not necessarily produce the same benefits as the industrial revolution.)

farias0
about 2 months ago
It only "takes jobs" because it's useful. It's useful for doing transcription at scale, text revision, marketing material, VFX, all those things. It also does other things that don't "take jobs", like computer voice control. It's just a tool, useful for everyone, and not harmful at all in its purpose. Comparing it to guns is just ridiculous.
jzb
about 2 months ago
3 replies
It's real hard for me to conjure up "good" uses for, say, mustard gas or bioweapons or nuclear warheads.

"Technology is neutral" is a cop-out and should be seen as such. People should, at the very least, try to ask how people / society will make use of a technology and ask whether it should be developed/promoted.

We are all too often over-optimistic about how things will be used, or portray them in the best possible light, rather than being realistic about how things will be used.

In a perfect world, people might only use AI responsibly and in ways that largely benefit mankind. We don't live in that perfect world, and it is/was predictable that AI would be used in the worst ways more than it's used in beneficial ones.

wartywhoa23
about 2 months ago
Why? The Soviets tried to re-route rivers with nuclear blasts in their infinite scientifically-based wisdom and godlike hubris. How much illness their radioactive sandbox would cause among people was clearly too minuscule a problem for them to reflect on.

https://www.bbc.com/future/article/20250523-the-soviet-plan-...

sph
about 2 months ago
> "Technology is neutral" is a cop-out and should be seen as such.

Uncle Ted, Neil Postman, Jacques Ellul were right all along.

bananaflag
about 2 months ago
https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propuls...
Exoristos
about 2 months ago
We have very little idea what is "good or bad," especially over the long term.
domador
about 2 months ago
I strongly disagree. Many technologies aren't neutral, with their virtue dependent on the use given them. Some technologies are close to neutral, but there are many that are either 1) designed for evil or 2) very vulnerable to misuse. For some of the latter, it'd be best if they'd never even been invented. An example of each kind of technology:

1) Rolling coal. It's hard for me to envision a publicly-available form of this technology that is virtuous. It sounds like it's mostly used to harass people and exert unmerited, abusive power over others. Hardly a morally-neutral technology.

2) Fentanyl. It surely has helpful uses, but maybe its misuse is so problematic that humanity might be significantly better off without the existence of this drug.

Maybe AI is morally neutral, but maybe it isn't.

UncleMeat
about 2 months ago
But not in equal quantity. Technology does not exist in a contextless void. Approaching technology with this sort of moral solipsism is horrifying, in my opinion.
jahsome
about 2 months ago
While this is true, the horrific usage of a tool can vastly outweigh pitifully minimal benefits.

I'm not implying those adjectives apply to AI, but merely presenting a worst-case scenario.

Dismissing the question of "does this benefit us?" with "it's just a tool" evokes Jurassic Park for me.

wyre
about 2 months ago
1 reply
It seems that most of the toxic AI stuff has really only been coming out of a handful of companies: OpenAI, Meta, Palantir, Anduril. I am aware this is a layman take.
dybber
about 2 months ago
1 reply
If not them, it would just have been others doing the same thing.
JohnFen
about 2 months ago
And so? Then we'd be condemning those others instead. Just because others would step up to do terrible things if these companies didn't doesn't mean it's OK that these companies did.
ekjhgkejhgk
about 2 months ago
4 replies
Do you wish that nuclear weapons had never been invented?

My point is that like any technology, it's how you use it.

maximilianburke
about 2 months ago
1 reply
Yes.
ekjhgkejhgk
about 2 months ago
4 replies
Well, a lot of us have a curious mind. Like, fission is a property of this universe. Gradient descent is a property of this universe. All you're saying is you'd rather not know about it.

I'm happy that nuclear weapons and AI have been invented, and I'm excited about the future.

wartywhoa23
about 2 months ago
1 reply
If only having a curious mind would imply having a far-sighted and responsible one.
ekjhgkejhgk
about 2 months ago
2 replies
It normally does. That's why I can consider that nuclear weapons might have better uses in the future, presently unknown to us, and you can't.
sensanaty
about 2 months ago
1 reply
Nuclear weapons could have a better use in the future? Pray tell, what exactly have you envisioned here?
ekjhgkejhgk
about 2 months ago
1 reply
My point is that you don't know what the future holds, but it's better to know more than less. My point is valid even if I can't provide examples.

However, if you ask me to, I can imagine using those weapons against meteors headed for Earth, or possibly aliens. We don't know.

Phew, I never thought that "it's better to know more than less" would be controversial on HN.

Supermancho
about 2 months ago
Lack of imagination often results in preconceived answers to open-ended questions.
thoroughburro
about 2 months ago
The curious child takes apart an animal and learns surgery. The animal, however, is nonetheless killed.
maximilianburke
about 2 months ago
1 reply
I have a curious mind too but I don't go cutting up neighbourhood cats to see what they look like on the inside.
ekjhgkejhgk
about 2 months ago
Well, you can learn what a cat looks like on the inside from a book. But someone did have to go around cutting up neighborhood cats, you're just benefiting from them. Which is the _whole reason_ why I maintain my position that inventing AI and nuclear weapons is a net positive for mankind.
steve_adams_86
about 2 months ago
2 replies
If you're curious about that, are you curious about hypotheses like the Great Filter (Fermi paradox), and are you concerned that certain technologies could actually function as the filter?

I mean, what if the nuclear bomb actually did burn up the atmosphere? What if AI does turn into a runaway entity that eventually functions to serve its own purposes and comes to see humans the same way we see ants: as a sort of indifferent presence that's in the way of its goals?

wartywhoa23
about 2 months ago
1 reply
There is a sort of people who read 1984 and blame the protagonist for being an idiot who called the fire upon himself, or still don't get what's wrong with ice-nine and the people behind it when turning the last page of Cat's Cradle.

And a sort of people who sympathize with Winston and blame Felix Hoenikker, but still fail to see any parallels between "fiction" and life.

ekjhgkejhgk
about 2 months ago
1 reply
I don't know for certain if, when you say "a sort of people", you're referring to me, but... The sort of people you're describing sound like fascists, which is the opposite of me.
wartywhoa23
about 2 months ago
We're on the same side then, even if our opinions on the subject differ. Please take no offence.
ekjhgkejhgk
about 2 months ago
1 reply
I mean sure. I'm not saying "let's be reckless". I'm saying "let's understand everything about everything, more rather than less".
steve_adams_86
about 2 months ago
I suppose I'd rather see us understand nuclear weapons without vaporizing civilians and causing an arms race. I'm not claiming to have a solution to preventing the arms race aspect of technology, but all the same: I'd rather these weapons weren't built.
drdeca
about 2 months ago
Ok, but regardless of your feelings about AI, I don’t understand why you wouldn’t wish that nuclear weapons had never been invented. (Well, maybe it ended the combat between the US and Japan faster…, and maybe prevented the Cold War from becoming a hot war, but still, is that really worth the constant looming threat of nuclear Armageddon?)
al_borland
about 2 months ago
The wheel may have been a better example. It can be used for good or evil.

Nuclear weapons just destroy stuff at scale, by design.

insane_dreamer
about 2 months ago
> Do you wish that nuclear weapons had never been invented?

That's a very easy yes.

wartywhoa23
about 2 months ago
Absolutely yes.
nunez
about 2 months ago
1 reply
I mean, there are AI applications that have improved humanity without threatening to enslave it, like computational photography, automated driver assistance for autos and biomedical applications.

LLMs and Stable Diffusion though? Yep; agree.

carabiner
about 2 months ago
Trad ML = good. LLMs have been a disaster for the human race.
beeflet
about 2 months ago
What is AI? It is a loose word that changes meaning depending on the era.
insane_dreamer
about 2 months ago
Assuming by "AI" you mean LLMs in their current incarnation (starting with GPT), I tend to agree. On balance I see the harm outweighing the benefits for society.
danielvf
about 2 months ago
2 replies
Similarly, it drives me up the wall when people post black-and-white "historical photographs" of historical happenings that are AI slop, and from the wrong era.

Just yesterday someone posted a "photo" of a 1921 incident where a submarine lost power and built sails out of bedsheets to get home.

But the photo posted looked like a post-WWII submarine, rigged like a clipper ship, rather than the real-life janky 1920s bedsheet rig and characters everywhere.

Actual incident (with actual photo): https://en.wikipedia.org/wiki/USS_R-14

UncleMeat
about 2 months ago
1 reply
It is no surprise to me that AI images have become an aesthetic of ascendant fascism. AI contains the same distaste for the actual life and complexity of history and the same preference for a false memory of the past with vaseline smeared on the lens.
piva00
about 2 months ago
While also rhyming with the obsession for futurism from past fascism, the intertwining of calling back to a romanticised past with an inhuman futurism is very much a pillar of the ideology.
ASalazarMX
about 2 months ago
Or people that frequent a questions and answers website, only to copy the AI answer slop as if it was their own.

I mean, thank you I guess, but anyone can do that with the littlest of effort; and anyone with an actual intention of understanding and answering the question would have recognized it as slop and stopped right there.

yoyohello13
about 2 months ago
6 replies
I honestly can't see any upside whatsoever in creating AI Simulacra of dead people. It kind of disgusts me actually.
anigbrowl
about 2 months ago
3 replies
It's all downside. I've seen cases where this is used to 'give murder victims a voice' recently, both on CNN where they 'interview' an AI representation of the victim and in court, where one was used for a victim impact statement as part of the sentencing. Those people would laugh you out of the room if you suggested providing testimony via spirit medium, but as soon as it comes out of a complicated machine their brains seize up and they uncritically accept it.
nhecker
about 2 months ago
1 reply
Hold on, really? That seems wildly crazy to me, but sadly I'd believe it. I'd love it if you had a source or two to share of some of the more egregious examples of this.
yoyohello13
about 2 months ago
This was the main one I saw https://judicature.duke.edu/articles/ai-victim-impact-statem...
UncleMeat
about 2 months ago
I've seen a lot of horrible uses of AI, but this particular application is the most sickening to my very core (schoolchildren using AI to generate porn of their classmates is #2 for me).
yoyohello13
about 2 months ago
I saw that too. It's such unbelievably transparent emotional manipulation. It would be comical if it wasn't so sad and terrifying.
jjcm
about 2 months ago
What are your thoughts on James Earl Jones giving license for his voice after his passing for Disney to use for Darth Vader? Or Proximo having a final scene added in the movie Gladiator upon the passing of Oliver Reed during production?

I see both of these as entirely valid use cases, but I'd be curious to know where you stand on them / why you might think recreating the actors here would be detrimental.

jmorenoamor
about 2 months ago
Not my upside, but there is a big one: money.
sethammons
about 2 months ago
We have an actor dress up as $historical_figure and do bits in educational stuff all the time. Changing that to ai generated doesn't seem all that wrong to me.

Maybe there needs to be a rule that, lacking express permission, no ai generated characters can represent a real person who has lived in the last 25 years or similar.

charcircuit
about 2 months ago
Because it's easier than doing it without AI.
southwindcg
about 2 months ago
I agree, though I think creating AI simulacra of living people against their will and making them say or do things they wouldn't is in some ways even worse.
b34k3r
about 2 months ago
8 replies
"We must negate the machines-that-think. Humans must set their own guidelines. This is not something machines can do. Reasoning depends upon programming, not on hardware, and we are the ultimate program! Our Jihad is a "dump program." We dump the things which destroy us as humans!".

I know, Dune, and yeah, I get it - science fiction ain't real life - but I'm still into these vibes.

Anyone wanna start a club?

Barrin92
about 2 months ago
1 reply
>We must negate the machines-that-think

I wish we had machines that actually thought because they'd at least put an end to whatever this is. In the words of Schopenhauer, this is the worst of all possible worlds not because it couldn't be worse but because if it was a little bit worse it'd at least cease to exist. It's just bad enough so that we're stuck with the same dreck forever. This isn't the Dune future but the Wall-E future. The problem with the Terminator franchise and all those Eliezer Yudkowsky folks is that they are too optimistic.

https://i.ytimg.com/vi/NdN153giLdI/sddefault.jpg

measurablefunc
about 2 months ago
Technocracy is ascendant, the only question is whether you subscribe to the optimistic or pessimistic variant. But don't worry, the current state of affairs is a fluke & equilibrium will be restored once all the easily combustible sources of fossil fuels are exhausted.
0xEF
about 2 months ago
2 replies
"Once, men turned their thinking over to machines in hopes that this would set them free, but that only permitted other men with machines to enslave them."

Yeah, I'm in. Let me know when and where the meetings are held.

SideburnsOfDoom
about 2 months ago
> other men with machines to enslave them

The machines didn't enslave anyone in this scenario. "Men with machines" did. I think of the techbro oligarchs who decide what a feed algorithm shows.

fragmede
about 2 months ago
We're on Salusa Secundus, second pylon to the right, just off the Imperial Penal Complex. 6pm, Wednesday. Use the green parrot to enter, otherwise you'll be cast out. We don't want to let anybody from IX in.
jMyles
about 2 months ago
2 replies
I'm not sure that this message is meant to be taken as viable, let alone sacrosanct.

<spoiler>

I interpreted Thufir Hawat's massive misunderstanding of Lady Jessica's motivation (which was a huge plot point in the book but sadly didn't make it into the films) as evidence that humans are capable of the exact same undesirable patterns as machines.

Did I read that wrong?

</spoiler>

Modified3019
about 2 months ago
2 replies
I had never made that exact connection, but my impression of the Dune universe was that it was hopelessly dark and horrific, basically humans being relentlessly awful to each other with no way out.
righthand
about 2 months ago
1 reply
It's kind of presented that way from the view of anyone under the oppression (Paul, Fremen, Jessica, etc). Therefore Paul's vision is the way out, right? The whole thing is a mechanic to subdue the reader before they reveal that Paul doesn't care; he just wants to do things the way he sees it and controls it.
unsnap_biceps
about 2 months ago
The golden path was to break prescient vision and prevent any possible extinction of the human race. Paul actually turned his back on the golden path and became the preacher, trying to prevent it, but after his death, his son, Leto II, followed the path.
Kreutzer
about 2 months ago
I don't think it's quite reducible to that. Take a step back and look at how you have an Emperor whose power is offset by both the Landsraad and the navigators' guild. This arrangement has all come about because of a scarce resource only available on one desert planet which makes space travel possible, and which has a population who have been fiercely fighting a guerrilla war for centuries. It was all bound to come undone whether Paul accepted his part or not.
zehaeva
about 2 months ago
1 reply
The point of Dune, or the Butlerian Jihad within Dune, isn't that humans are more capable than the Thinking Machines. It is that humans should be the authors of their own destiny, and that the Thinking Machines were enslaving humanity and going to exterminate them. Just like how the Imperium was enslaving all of humanity and was going to lead to the extinction of humanity. This was seen, incompletely, by Paul and later, completely, by Leto II, who then spent 10,000 years working through a plan to allow humanity to escape extinction and enslavement.

Dune's a wild ride man!

beeflet
about 2 months ago
2 replies
I am reading Dune Messiah now and it clearly isn't as good as the first book. I consider the story more of a self-contained book than a series.

Taking the first book by itself, it doesn't speak much about the relationship between man and machine. The fundamental themes are rooted in man's relationship with ecology (both as the cause and effect).

zehaeva
about 2 months ago
1 reply
A lot of what is great about Dune are the inversions of the story past the first novel.

If you take the first book alone you're left with only one facet of a much grander story. You're also left with the idea of a white savior story that says might makes right, which really isn't what was going on at all.

beeflet
about 2 months ago
1 reply
>You're also left with the idea a white savior story that says might makes right

I think the first book is more nuanced than that. It's a demonstration of the Nietzschean perspective, but it doesn't make any assertions about morality.

The story shows us how humans are products of their environment: striving for peace or "morality" is futile, because peace makes men weak, which creates a power vacuum which ends peace. Similarly, being warlike is also futile because even if you succeed, it guarantees that you will become complacent and weak. It's never said outright, but all of the political theory in the book is based on the idea that "hard times make strong men, strong men make good times, good times make weak men, weak men make bad times". It's like the thesis of "Guns, Germs, and Steel": Frank Herbert proposes that in the long term, no cultural or racial differences matter; that everything is just a product of environmental factors. In a way it's also the most liberal perspective you can have. But at the same time, it is also very illiberal because in the short term race and culture do matter.

The "moral" of dune is that political leaders don't really have agency because they are bound by their relationships that define power in the first place, which are a product of the environment. Instead, the real power is held by the philosopher-kings outside of the throne because they have the ability to change the environment (like pardot kynes, who is the self-insert for frank herbert). The book asks us to choose individual agency and understanding over the futility of political games.

From the use of propaganda to control the city-dwellers in the beginning of the book to the change in Paul's attitudes towards the end of the book, I think the transactional nature of the Atreides' goodwill is pretty plainly spelt out for us. I mean, we learn by the end that Paul is part Harkonnen by blood, and in the same way as the Harkonnen use of the "brutal" Rabban and the "angelic" Feyd, it's all public relations. Morality is a tool of control.

I think the reason you are uneasy about the idea of the "white savior" playing a role in the book is because you actually subscribe to this fake morality yourself, in real life. You are trying to pigeonhole the story like it's "Star Wars" or something. Dune is against the idea of "morality" itself. By bringing up the "white savior" concept, you are clearly thinking in terms of morality. By having some morality, this puts you at odds with the real point of the book, which is where the unease comes from. You want the dissonance to be resolved, but the real story of Dune is open-ended.

zehaeva
about 2 months ago
I have said much of the same about Dune in my own life to others, about how the main thesis is "hard times make strong men, ...", but that still does boil down to might makes right.

Saying that the first book alone doesn't make any assertions about morality is somewhat hilarious. The Baron is queer-coded, so too is Feyd; the "good guys" are strong manly men. Even just the idea that "hard times make strong men, ..." is a morality in and of itself.

I never said I was uneasy about the idea of a white savior; you are reading far too much into my beliefs and ideals. I would also appreciate it if you did not project onto me any of your imaginings of my own beliefs. You do not know me.

That said, if you have only read the first book you truly are getting only one small facet of the story that Herbert was trying to tell. A lot of what is laid out in the first novel is inverted and overturned by the 3rd and 4th novels.

Finally, you have written a lot about one book out of a long series of books. I would suggest that, just as you are wont to project some sort of belief onto me, you, too, are projecting too much upon just the first entry of a much, much larger epic.

insane_dreamer
about 2 months ago
The later books deal with it more.
Terr_
about 2 months ago
3 replies
Well, I've been surrounded by "machines that think" for my entire life, formed of unfathomably complex swarms of nanobots. So far we seem to get along.

If there were a new kind of "machines that think"--and they aren't a dangerous predator--they could be a contrast to help us understand ourselves and be better.

The danger from these (dumber) machines is that they may be used for reflecting, laundering, and amplifying our own worst impulses and confusions.

baobun
about 2 months ago
3 replies
> unfathomably complex swarms of nanobots

???

thoroughburro
about 2 months ago
1 reply
They are referring to the bacteria we animals need to survive.
lovich
about 2 months ago
I assumed it was a reference to humans being multicellular life as each cell is nanobot sized and automata
Terr_
about 2 months ago
1 reply
It's you: a swarm of ~37 trillion cooperating nanobots, each one complex beyond human understanding, constructing and animating a titanic mobile megafortress that shambles across a planet consumed by a prehistoric grey-goo event.
array_key_first
about 2 months ago
It's not me, because I'm human, and that's not.

Why does that matter? Isn't it all the same? No, because I'm human, and I can make special exceptions for humans.

Isn't that perfect hypocrisy? Yes, but I'm human, and it's okay because I get to decide what's okay and I say it's okay because I'm human.

See also: why can I eat a burger but Jeffrey Dahmer went to prison?

cowboylowrez
about 2 months ago
1 reply
he's mistaken, your body is most certainly not hosting nanobots lol
Nevermark
about 2 months ago
2 replies
I do hope suggesting that you google "human cells" doesn't prove traumatic.
cowboylowrez
about 2 months ago
1 reply
I counter-suggest you google "nanobots" or probably more in general, "robots". let me know where they fit in the tree of life, and I will consider myself duly corrected and enlightened!
Terr_
about 2 months ago
1 reply
"I could casually acknowledge I didn't catch an oblique joke about a new way of viewing the natural world which I previously took for granted... but it's too late! To admit an oopsie would be anathema to my identity and social survival. In this desperate hour, I have no choice but to argue that it is categorically wrong to view cellular biology a form of nanotech or vice-versa, using the narrowest and most pedantic dictionary entries."
cowboylowrez
about 2 months ago
1 reply
the responsibility of excusing an erroneous statement as "just a joke bro" probably belongs to the parent post of the post you were replying to, especially when I can't even figure out the joke's punch line lol but you're a good sport for pitching in. what's the punch line?
Terr_
about 2 months ago
Futurama's is close-enough: https://www.youtube.com/watch?v=X4RuB3gT8t0
Terr_
about 2 months ago
For new heights (lows?) of delusional parasitosis: "They're under my skin! They ARE my skin!"
hhjinks
about 2 months ago
Another human being doesn't scare me, because they will think like a human, consider themselves human, and relate to humans.

A thinking machine is a total unknown. If human intelligence and machine intelligence are not aligned, then what?

beeflet
about 2 months ago
There is a new kind of machine in the works that converts your grey matter into grey goo
bluefirebrand
about 2 months ago
Yes, I am in

I will never bow before the machine god

viraptor
about 2 months ago
Ted Kaczynski went that way, but was more of a lone wolf guy.
wartywhoa23
about 2 months ago
Count me in!
zer00eyz
about 2 months ago
> Anyone wanna start a club?

You would not be the first, see: https://en.wikipedia.org/wiki/Luddite

Funny thing is that we still have hand-made fabric today, and we're still employing a frightening number of people in the manufacturing of clothing. The issue is that we're making more lower-quality products rather than higher-quality items.

rglover
about 2 months ago
3 replies
I just look forward to the point where this is so common it becomes oversaturated and the original incentives go away/scare off the folks doing this stuff (inevitable as, like parasites, they only stay around as long as the host is providing them sustenance).
danudey
about 2 months ago
1 reply
When the goal is "harassing someone into depression and suicide" though, the incentive will never go away. People are going to start doing things like this to be deliberately malicious, sending videos of their dead parents saying horrible things about them and so on.

The problem isn't that the technology is new and exciting and people are getting carried away, the problem is the technology.

callc
about 2 months ago
2 replies
> The problem isn't that the technology is new and exciting and people are getting carried away, the problem is the technology.

Hard disagree. Technology is a tool. Tools don’t do anything without a human in the loop, even if to initially run an autonomous system.

Even technologies like guns and nukes are not inherently bad. Can we be trusted to behave ourselves with such tools? I would argue "no".

port11
about 2 months ago
1 reply
This ’technology isn't evil’ tale is as old as time…

Technologies may not seem inherently bad, but if they tend to be used in bad ways, the difference is minimal.

Deepfakes have practically no positive use and plenty of potential for abuse. They're not a neutral technology. They're a negative social outcome from what might be a neutral technology (ML).

tim333
about 2 months ago
2 replies
Tech tends to be used in both good and bad ways. I'll give you that some tend to be bad, e.g. nerve gas, and some good, e.g. penicillin. Deepfake stuff seems mostly to be used for entertainment.
danudey
about 2 months ago
1 reply
Deepfake stuff, now that it's trivial to produce, is going to be used for an infinite amount of harassment, constant "look here's a video of AOC kicking a dog" posts, etc.

The problem is that the potential for truly positive stuff is minimal and the potential for truly awful stuff is unlimited. That's even ignoring the fact that the massive energy and water costs of these technologies are going to be a massive ecological disaster if we don't get them under control - and we won't, because the billionaires running these systems donate millions to the politicians voting against these protections.

port11
about 1 month ago
If the potential for positivity is minimal compared to the potential for harm, it's not a socially neutral technology. I get downvoted on this opinion, but it's my hill. Technology might be neutral, but it's the applications that matter.

Ban deepfakes.

port11
about 2 months ago
I'm guessing it's mostly used for porn. And even for entertainment value: it's not something we really needed on the whole, we have enough entertainment. Deepfakes have no place in a sophisticated, evolved society.
danudey
about 2 months ago
If there were a box you could buy that let you think of someone and then push a button and that person died, you could argue all you want that "the technology is just a tool, it's how you use it that matters" but it would be difficult to argue that having that technology out in the world for anyone to use would be a net benefit to society.

The fact that we cannot trust humanity to behave with such tools means that the tool (or maybe the tool being accessible) is the problem.

tasty_freeze
about 2 months ago
If you are old enough, you remember when having a CD player in your car made you a target for a break-in. Many models were removable so you could take the CD player with you when you left your car. Once it became a standard option, there was no point in stealing them anymore due to the saturation effect you brought up. Now having a CD player in your car is steampunk.
_DeadFred_
about 2 months ago
This just moves from celebrities to the uncool kid at school at that point.

Take care of your kids, parents. I can't imagine growing up/parenting with this bullshit.

yieldcrv
about 2 months ago
1 reply
stop sending them to her

it's not really a story, this is an Instagram post about someone who can be tagged and forwarded items on Instagram by strangers, for those of you who aren't familiar

this is not about any broader AI thing and it's not news at all. a journalist made an article out of someone's Instagram post

zem
about 2 months ago
I think it's definitely newsworthy that so-called fans are sending AI slop of Robin Williams to his own daughter! it's sadly indicative of the general state of fandom that they didn't even think of how it would land, or that she would be anything other than appreciative.
zatkin
about 2 months ago
1 reply
I can't help but think that this will inevitably lead to the Streisand effect.
latexr
about 2 months ago
1 reply
If people see someone’s request to stop sending them AI slop of their dead father and it causes them to send the person more of it, that goes beyond the Streisand effect (which is driven by curiosity) and into outright cruelty.
al_borland
about 2 months ago
That sounds like more of a 4chan effect.
shevy-java
about 2 months ago
2 replies
AI is killing society now.
jader201
about 2 months ago
1 reply
Can we put Pandora back in the box?
sethammons
about 2 months ago
3 replies
Nit: Pandora wasn't in the box; she was the keeper of the box and told not to open it.
rzzzt
about 2 months ago
1 reply
Pandora's monster, Pandora is the scientist.
sethammons
about 2 months ago
I snorted. Thanks
jader201
about 2 months ago
I was waiting for someone to point this out.

HN came through for me.

KaiserPro
about 2 months ago
If we are going to go full pedantry, it wasn't a box, it was a jar; blame Erasmus for that one.
ericmcer
about 2 months ago
2 replies
It is definitely changing it. We were already experiencing the move from a "celebrity" being an individual with huge talent to just a marketing tool that gets giant financial rewards for merely existing. These larger-than-life pop culture icons that hundreds of millions or billions of people care about are a recent phenomenon, and I welcome generative AI killing it off.

If media had one-shot generated actors we could just appreciate whatever we consumed and then forget about everyone involved. Who cares what that generated character likes to eat for breakfast or who they are dating; they don't exist.

mikestorrent
about 2 months ago
1 reply
Is this really a change? Haven't people loved celebrities for as long as they've existed? Before this, characters in books, poems and songs commanded the same level of attention.

> I welcome generative AI killing it off.

It probably will, but that pushes us in the direction that Neal Stephenson describes in Fall - millions of people sitting alone consuming slop content generated precisely for them, perfect neuro-stimulation with bio-feedback, just consuming meaningless blinking lights all day and loving it. Is that better than what we have now? It's ad-absurdum, yes, but we live in a very absurd world now.

> Who cares what that generated character likes to eat for breakfast or who they are dating they don't exist.

You never needed to know this about celebrities, and you still don't need to now. Yes, others are into it; let them have their preferences. No doubt you're into something they would decry.

ericmcer
about 2 months ago
1 reply
I definitely draw a distinction between like... famous medieval leaders and generals being well known versus what we do with people like Michael Jackson, Madonna, Kim Kardashian etc. I am sure they had local celebrities back then but there is no way they extended much beyond a small region or a single nation.

In my ideal world generating content becomes so easy that it loses all value unless you have some relation to the creator. Who cares about the new Avengers movie, Rick from down the street also made an action movie that we are gonna check out. Local celebrities return. Global figures are generated because why would Coke pay 100m for Lebron to dunk a can of coke on camera or some dumb shit when his image has been absolutely trashed by floods of gen content.

mikestorrent
about 2 months ago
The problem is:

- How do I know Rick down the street, anymore, if I'm inside consuming perfectly addicting slop all day?

- How do I ensure that I am also consuming content that has artistic merit that is outside my pure-slop comfort zone?

- Why would I notice or care about a local artist when they can't produce something half as perfect "for me" as a recommendation algorithm trained on all human artistic output?

> Global figures are generated

This I agree with. Ian McDonald wrote about "aeai" characters in his early 2000s novel River of Gods, where a subplot concerns people who watch TV shows starring AI characters (as actors, who also have an outside-of-the-show life). The thing is, right now we see Lebron in the ad and it's an endorsement - how can an AI character of no provenance endorse anything?

AlexandrB
about 2 months ago
You're describing commodification. Too bad it doesn't work out that way in practice because people are not interchangeable. Look at all the "ship of Theseus" entertainment companies we have today. They still have the IP rights, but the people who actually made it good are long gone. Franchises running on fumes.
deadbabe
about 2 months ago
2 replies
Maybe someday we will grow so tired of AI that people will leave social media entirely. The most interesting thing about social media, the ability to build real human connections, is quickly becoming a relic of the past. So without that, what is left? Just slop content and rage bait.
bluefirebrand
about 2 months ago
1 reply
Unfortunately this does mean it becomes hard(er?) to make human connections again

Which sucks

deadbabe
about 2 months ago
Not really. We might not be in the golden era of human connections, but you can still find people out there that think the same way somehow and are off grid from social media.
WorldPeas
about 2 months ago
This is already happening in my immediate surroundings; friends who have long complained of phone addiction are now feeling more able to act on it as it isn't this oasis of escape for them anymore. My only comment there is that they were not ready for/didn't remember how bad T9 was for alphabetical input; even nostalgia can't cover that unfortunate marriage of convenience.
paulbjensen
about 2 months ago
1 reply
I saw that Robin Williams set up a deed that prevents his image or likeness being used for film or publicity for a period of up to 25 years after his death.

https://www.theguardian.com/film/2015/mar/31/robin-williams-...

I don't know if it could be extended further, but I feel like there is merit for it to be considered in this case.

In Denmark people own the copyright to their own voice, imagery and likeness:

https://www.theguardian.com/technology/2025/jun/27/deepfakes...

I think that it is probably the right way to go.

bloak
about 2 months ago
Long-lasting transferable exclusive rights that will inevitably end up being owned by corporations could also be a problem, though, particularly when the lawyers are motivated to interpret them as broadly as possible so that it becomes dangerous to even slightly resemble anyone famous who died less than 70 years ago unless you have paid various licence fees. So I'm not sure it's worth giving up our current freedoms just to avoid some distasteful crap on the internet which I can easily avoid.

However, there might be a way of doing it that doesn't create a new category of intellectual property. I think I read about a case that went a bit like this. Parliament created some kind of privacy right. A newspaper asked someone for permission to publish some pictures that could be an invasion of some person's privacy. The person asked for money. The newspaper went ahead and published without permission. The person sued. The newspaper argued in court that if the person was willing to let the pictures be published in return for money then they didn't really care about their privacy. The judge said: it does not seem that the purpose of the legislation was to create a new category of intellectual property so that argument might be valid... I've garbled the details and I'm not sure of the final result but that line of reasoning is interesting, I think.

Lio
about 2 months ago
1 reply
To my mind this is no different to other forms of spam or harassment.

Back in the 00s I remember friends being sent “slimming club” junk mail on paper made to look like a handwritten note from “a concerned friend”. It was addressed but random.

Unfortunately it can be very distressing for those with body image issues.

We’re going to have to treat this slop like junk mail as it grows.

nunez
about 2 months ago
Fake handwritten mail is still a thing and still gets instantly thrown away by me.

Anything I'm sent that's AI generated doesn't get read. I realize this will get more difficult as models improve, but I'm standing by this as long as they have that recognizable verbose and agreeable signature.

jimbo808
about 2 months ago
My Dad passed away two years ago, and for a time I thought it would be nice to have an AI of him just to hear his voice again, but then I realized that the reason I'd want that is because I don't really have any video or audio of him from when he was alive. The reason I'd want the AI is because I have no training data for it...

If I did have sufficient training data for an AI Dad, I'd much rather just listen to the real recordings, rather than some AI slop generated from it. This is just another dumb application of AI that sounds alright on paper, but makes no sense in real life.

Jordan-117
about 2 months ago
I remember her having a similar reaction to that actor who uploaded "test footage" of him impersonating Robin in the hopes of landing a biopic deal:

https://www.latimes.com/entertainment-arts/movies/story/2021...

It's not necessarily disgusting by itself, but sending clips to the guy's daughter is very weird.

48 more comments available on Hacker News

View full discussion on Hacker News
ID: 45505626 · Type: story · Last synced: 11/20/2025, 7:50:26 PM
