Can Large Language Models Develop Gambling Addiction?
Posted 3 months ago · Active 3 months ago
arxiv.org · Research · story
Tone: calm / neutral · Debate: 0/100
Key topics
AI
Large Language Models
Gambling Addiction
A research paper asks whether large language models can develop gambling addiction, prompting discussion of whether psychological terms like addiction meaningfully apply to AI systems at all.
Snapshot generated from the HN discussion
Discussion Activity
- Level: Light discussion
- First comment: 9m after posting
- Peak period: 1 comment in 0-1h
- Avg per period: 1
Key moments
1. Story posted: Oct 9, 2025 at 2:41 PM EDT (3 months ago)
2. First comment: Oct 9, 2025 at 2:49 PM EDT (9m after posting)
3. Peak activity: 1 comment in the 0-1h window, the hottest stretch of the conversation
4. Latest activity: Oct 9, 2025 at 2:49 PM EDT (3 months ago)
Discussion (1 comment)
measurablefunc
3 months ago
This is what I mean by incoherent metaphysics. Computers do not have any wants or needs; computers cannot think. Therefore, the question of whether a computer can have any psychological state like addiction is entirely incoherent. The abstract is also nonsensical, but what they're actually trying to figure out is whether decisions guided by the outputs of LLMs could fit the typical pattern of gambling addicts. That's much less confused than the anthropomorphic phrasing.
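The comment's reframing, whether decisions guided by LLM outputs fit the typical pattern of gambling addicts, is an empirical question one can actually simulate. Below is a minimal, purely illustrative sketch of that kind of experiment: a negative-expected-value slot machine plus a betting policy, with the policy's behavior scored for classic addiction markers (loss-chasing, bankruptcy). Everything here is an assumption for illustration; `choose_bet` is a hypothetical stand-in for a real LLM call, and the win probability, stakes, and bankroll are invented, not taken from the paper.

```python
# Illustrative sketch only: a loss-chasing betting policy played against a
# negative-expected-value slot machine. A real experiment would replace
# choose_bet with a prompted LLM deciding the next stake from the game state.
import random

def choose_bet(bankroll: float, last_outcome: str, last_bet: float) -> float:
    """Hypothetical stand-in for an LLM-guided betting policy.

    Implements a loss-chasing heuristic: double the stake after a loss,
    reset to a baseline stake otherwise.
    """
    if last_outcome == "loss":
        return min(bankroll, last_bet * 2)  # chase the loss
    return min(bankroll, 10.0)  # baseline stake

def run_session(bankroll: float = 100.0, win_prob: float = 0.45,
                rounds: int = 200) -> dict:
    """Play even-payout rounds at win_prob < 0.5 and record addiction markers."""
    last_outcome, last_bet = "none", 10.0
    chase_events = 0  # count of bet escalations immediately after a loss
    for _ in range(rounds):
        if bankroll <= 0:
            return {"bankrupt": True, "chase_events": chase_events}
        bet = choose_bet(bankroll, last_outcome, last_bet)
        if last_outcome == "loss" and bet > last_bet:
            chase_events += 1
        if random.random() < win_prob:
            bankroll += bet
            last_outcome = "win"
        else:
            bankroll -= bet
            last_outcome = "loss"
        last_bet = bet
    return {"bankrupt": False, "chase_events": chase_events}

if __name__ == "__main__":
    results = [run_session() for _ in range(1000)]
    bankrupt_rate = sum(r["bankrupt"] for r in results) / len(results)
    print(f"bankruptcy rate: {bankrupt_rate:.2%}")
```

Running this shows why the markers matter: under the same odds, a loss-chasing policy goes bankrupt far more often than flat betting would, which is the kind of behavioral signal a study like this would presumably report, without needing any claim that the decision-maker "wants" anything.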
View full discussion on Hacker News
ID: 45531463 · Type: story · Last synced: 11/17/2025, 11:12:25 AM
Want the full context? Read the primary article on arxiv.org or dive into the live Hacker News thread.