Modeling Others' Minds as Code
Posted 2 months ago · Active 2 months ago
arxiv.org · Research · story
calm · positive
Debate: 40/100
Key topics
Artificial Intelligence
Cognitive Modeling
Social Interactions
Researchers propose modeling human behavior in social interactions as 'behavioral programs' in code, sparking discussion on its applications and implications for understanding human cognition.
Snapshot generated from the HN discussion
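For readers wondering what "behavior as code" could even look like, here is a minimal, hypothetical sketch of a social routine written as a condition-action program. It is not taken from the paper; the representation, class, and action names are all illustrative assumptions.

```python
# Hypothetical sketch only: one way a social "behavioral program" could be
# written as code. The representation and names are assumptions, not the
# paper's actual formalism.
from dataclasses import dataclass

@dataclass
class Context:
    is_greeted: bool = False
    other_is_stranger: bool = True
    conversation_stalled: bool = False

def greeting_program(ctx: Context) -> str:
    """Toy script for the opening moves of a casual interaction."""
    if ctx.is_greeted:
        return "return_greeting"
    if ctx.other_is_stranger:
        return "introduce_self"
    if ctx.conversation_stalled:
        return "ask_open_question"
    return "wait"

print(greeting_program(Context(is_greeted=True)))  # -> "return_greeting"
```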
Discussion Activity
Active discussion
First comment: 48m after posting
Peak period: 20 comments in 2-4h
Avg / period: 6.2
Comment distribution: 37 data points (based on 37 loaded comments)
Key moments
- 01 Story posted: Oct 20, 2025 at 9:54 AM EDT (2 months ago)
- 02 First comment: Oct 20, 2025 at 10:42 AM EDT (48m after posting)
- 03 Peak activity: 20 comments in 2-4h (the hottest window of the conversation)
- 04 Latest activity: Oct 21, 2025 at 6:13 AM EDT (2 months ago)
ID: 45643976 · Type: story · Last synced: 11/20/2025, 5:45:28 PM
If you want to get a bit meaner, you could profitably replace some people with the empty Python script.
"People who think they can be replaced by AI will be replaced."
In other words, the cheerleaders are so dumb that they probably could be replaced.
https://www.amazon.com/Remember-Me-to-God/dp/B000LQ2SHG
talks about how people in low social positions (say, a bank teller) have no opportunities to distinguish themselves but plenty of opportunities to make mistakes they'll be held accountable for. Whereas if you are in a high social position, you get to grade your own paper, get credit for your successes, and "fail up" when you screw up.
Given that neural networks get it wrong some of the time, they might be better suited to fill the high-status positions (making up crazy stuff to say for Satya Nadella and Eric Schmidt, for instance).
In reality you will find pockets of utter incompetence in nearly every organization of considerable size. And I don't mean people who sometimes have bad days (who hasn't?) or struggle with particularly hard tasks (who doesn't?).
I mean long-time employees who lack the ability to wield the core tools and lack the core skills needed in their job. Imagine a blacksmith who doesn't know how to use a hammer; they can talk very entertainingly and deeply about metals, yet they certainly seem to fail at doing anything with them.
Now you may think I am exaggerating. I am not. Anyone in this thread who has worked in first-level IT support will probably agree. I am an educator, with a strong belief that nobody (aside from those affected by certain medical conditions) is beyond learning and becoming better. I am known for my extreme patience and have won my province's teaching prize. Take this into account as I continue.
We are talking about secretaries whose main tool (as a fraction of their workday) is the email client and its calendar functionality, yet they fail to grasp the fundamental "IT for seniors" concepts of even the most basic version of the software they interact with more than 6 hours a day. In fact it is worse: they know they are bad and still file repeated advice into the mental equivalent of a paper shredder. I know of a person who has been doing this for 10 years now. Don't get me wrong, they somehow manage to keep it from falling apart, but it is exhausting to watch even from afar.
What would you think of a truck driver that after years on the job repeatedly asked you how to start the ignition?
> They could be replaced by a 12-line Python script
Predictably HN-misanthropic is more like it.
The kicker is it's often not the guy you think it is.
https://web.archive.org/web/20081204045017/http://www.thinkg...
Whoever bought that shirt could probably use some social skills coaching. It's not a good idea to wear a shirt that indiscriminately broadcasts contempt in all directions. I get that the purchasers probably confused it for humor, but there's an important difference between humor that works on a TV show you're watching and humor embedded in an interaction between you and another real person.
I had this thought recently at Walmart, after seeing the third such shirt (a visual pun meaning "fuck you"). Geeks often have the same attitude problems.
Ideally, but that still doesn't really solve the problem. It's not really practical to counter an indiscriminate broadcast of contempt with point to point interactions. People who don't know you or don't know you well will always see your shirt, if you wear it out.
You want to do the opposite: indiscriminately broadcast a kind personality, then deploy the sarcasm in point to point contexts "that never [leave] any doubt that it is a joke".
This made me realize just now that the functional* couples I know would never wear something so blunt.
This is all purely anecdotal but just sharing my personal observations :)
*I need a better word that just describes the objective truth with no baggage please help lol
Aren't there already materials (made for people with autism) that catalog these scripts and make them explicit?
Edit: e.g. https://suelarkey.com.au/promoting-social-understanding-soci...
> Pond party later. Too many ducks, too many voices, too many rules. None written down.
Still a good duck, always a good duck!
I really really want this other part of my unconscious behavior modeled well. Would be very useful.
By examining the common attacks on distracted people you can build a simple rule set that accounts for a large part of unconscious behavior (a toy version is sketched after this comment). The attack I love to hate is the “subscribe now” popup. It inserts itself into your OODA loop at exactly the moment when your mind is engaged with important or interesting concepts. It is designed to compromise your decision making. I would use that as the foundation of my model, because it sets out not only the behavior but the conditions under which the behavior is active.
Another set of rules can be inferred from the ways phishing tricks people (activating urgency, fear, irritation, authority, avarice).
A third source of rules might be inferred from the practices of illusionists and cup-and-ball scams: attention is finite, so if I’ve got it over here it’s not available in the important direction.
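As a rough illustration of the rule set this comment describes, here is a toy Python sketch assuming a simple condition-behavior representation; the rule names, conditions, and the attention threshold are invented for the example, not drawn from the paper or the comment.

```python
# Toy rule set inspired by the comment above: each rule pairs a condition
# (state of attention) with the unconscious behavior an attacker exploits.
# Names and thresholds are illustrative assumptions, not measured values.
from typing import Callable

State = dict  # e.g. {"engaged": True, "urgency_cue": False, "attention_left": 0.2}

RULES: list[tuple[str, Callable[[State], bool], str]] = [
    ("popup_interrupt",  lambda s: s.get("engaged", False),            "dismiss_without_reading"),
    ("phishing_urgency", lambda s: s.get("urgency_cue", False),        "comply_before_verifying"),
    ("misdirection",     lambda s: s.get("attention_left", 1.0) < 0.3, "miss_the_important_event"),
]

def predicted_behaviors(state: State) -> list[str]:
    """Return the unconscious behaviors whose activating conditions hold."""
    return [behavior for name, cond, behavior in RULES if cond(state)]

print(predicted_behaviors({"engaged": True, "attention_left": 0.1}))
# -> ['dismiss_without_reading', 'miss_the_important_event']
```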
Unless you're asserting that AI models and brains do share some things in common?
Also, you can think of it as the subject (or subjects) programming the modeling agent: if the modeled mind is able to recognize it's being modeled, it's reasonable to consider the possibility that it can influence how those inferred scripts are created just by shaping its own behavior (see the toy sketch below).
It can't be like an EEG, right? "Please be quiet and predictable while I model you, sir".
Of course, what humans think of as predictability might not hold water. One could think he's behaving in a random way while in reality following well-known patterns (unknown to him).
It's an interesting problem. Absolutely terrifying stuff.
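A toy illustration of the point about the subject shaping the model: here the observer's "script" is assumed to be nothing more than the subject's most frequent action, and a subject that knows it is being watched deliberately varies its behavior. Everything in this sketch (the actions, probabilities, and inference rule) is invented for the example.

```python
# Hypothetical sketch: an observer infers a simple "script" from observed
# actions, while a subject that knows it is being modeled mixes up its habits.
import random
from collections import Counter

random.seed(0)

def subject_action(being_modeled: bool) -> str:
    """Habitually picks 'coffee'; when aware of the observer, varies on purpose."""
    if being_modeled:
        return random.choice(["coffee", "tea", "walk"])
    return "coffee" if random.random() < 0.9 else "tea"

def infer_script(observations: list[str]) -> str:
    """Observer's 'script' is just the most frequent observed action."""
    return Counter(observations).most_common(1)[0][0]

naive = [subject_action(being_modeled=False) for _ in range(100)]
aware = [subject_action(being_modeled=True) for _ in range(100)]
print(infer_script(naive))  # stable habit -> 'coffee'
print(infer_script(aware))  # the inferred script now reflects the subject's strategy
```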
I don't know if anyone is working on modelling human existence (via LMMs) in this way... It feels like a Frankensteinian and eccentric project idea, but certainly a fun one!
5 more comments available on Hacker News