Should an AI Copy of You Help Decide If You Live or Die?
Posted 3 months ago · Active 3 months ago
arstechnica.com · Research · story
Tone: calm / mixed · Debate: 20/100
Key topics
AI Ethics
Digital Legacy
End-of-Life Decisions
The article explores the ethical implications of using AI copies of individuals to make life-or-death decisions, sparking debate about the boundaries of artificial intelligence and personal autonomy.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
- First comment: 9m after posting
- Peak period: 1 comment in the 0-1h window
- Avg per period: 1
Key moments
1. Story posted: Oct 20, 2025 at 10:52 AM EDT (3 months ago)
2. First comment: Oct 20, 2025 at 11:00 AM EDT (9m after posting)
3. Peak activity: 1 comment in the 0-1h window, the hottest period of the conversation
4. Latest activity: Oct 20, 2025 at 7:26 PM EDT (3 months ago)
ID: 45644609 · Type: story · Last synced: 11/17/2025, 9:06:54 AM
Want the full context?
Read the primary article or dive into the live Hacker News thread when you're ready.
An "AI copy" of me shouldn't exist in the first place, but even if it did, it should have no role in any decision about my life or death. Decisions that important should be made by me or, if I'm incapable of deciding, by the people who love me, not by a machine.
cat >> ~/should-i-live.txt; chmod u+x ~/should-i-live.txt
These copies can answer in ways that would fool someone who doesn't know the person, but they cannot faithfully reconstruct the person's thought process.