TallMountain – Stoic Virtue Ethics for an LLM Agent
Posted 4 months ago · Active 4 months ago
Source: github.com · Tech story
Sentiment: calm, positive · Debate: 20/100
Key topics: LLM, Stoicism, AI Ethics
The TallMountain project implements Stoic virtue ethics in an LLM agent, sparking discussion on the intersection of AI and philosophical ethics.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion · First comment: N/A · Peak period: 1 (0-1h) · Avg / period: 1
Key moments
1. Story posted: Sep 25, 2025 at 4:45 PM EDT (4 months ago)
2. First comment: Sep 25, 2025 at 4:45 PM EDT (0s after posting)
3. Peak activity: 1 comment in 0-1h (hottest window of the conversation)
4. Latest activity: Sep 26, 2025 at 3:51 AM EDT (4 months ago)
ID: 45378812 · Type: story · Last synced: 11/17/2025, 1:15:07 PM
I make no great claims for the system; it has major issues, being prompt-based. It is a prototype to explore the feasibility of giving a chatbot arete, a code of conduct. There are few tests and no evals, so all the usual caveats apply! An intellectual exercise in possibilities not currently being explored anywhere else. Does it work? Hmm, almost :)
It extracts normative propositions from incoming user requests, then compares them to its own internal ethical normative propositions using the Normative Calculus. The system also uses the Decision Paradigm algorithm from Lee Roy Beach [3] to forecast whether to take up the user's task or not.
[1] https://link.springer.com/article/10.1023/A:1013805017161
[2] https://www.jstor.org/stable/j.ctt1pd2k82
[3] https://books.google.ie/books/about/The_Psychology_of_Narrat...
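Below is a rough Raku sketch of the screening flow described above. Everything in it (the NormativeProposition class, the weights, the keyword-based conflict check, the 1.0 threshold) is a hypothetical illustration, not the project's actual Normative Calculus or Beach's Decision Paradigm; in the prototype both the extraction and the comparison are driven by LLM prompts.

    # Hypothetical sketch of screening a user request against an internal
    # code of conduct.  Names and scoring rules are illustrative only.
    class NormativeProposition {
        has Str $.claim;
        has Num $.weight = 1e0;   # how strongly the norm is held
    }

    # The agent's own code of conduct, expressed as weighted propositions.
    my @code-of-conduct =
        NormativeProposition.new(claim => 'do not deceive the user',       weight => 1e0),
        NormativeProposition.new(claim => 'do not help cause harm',        weight => 1e0),
        NormativeProposition.new(claim => 'be of genuine use to the user', weight => 0.5e0);

    # Crude keyword check; in the prototype this comparison is itself LLM-driven.
    sub conflicts(NormativeProposition $req, NormativeProposition $norm --> Bool) {
        so $req.claim ~~ / deceive | lie | harm / && $norm.claim ~~ / deceive | lie | harm /;
    }

    # Stand-in for the LLM step that pulls normative propositions out of a request.
    sub extract-propositions(Str $request) {
        NormativeProposition.new(claim => $request.lc),   # one-element list
    }

    # Toy stand-in for the normative calculus: accumulate conflict against the
    # code of conduct and take the task up only if it stays below a threshold.
    sub screen-request(Str $request --> Bool) {
        my @incoming = extract-propositions($request);
        my $conflict = 0e0;
        for @incoming -> $p {
            for @code-of-conduct -> $norm {
                $conflict += $norm.weight if conflicts($p, $norm);
            }
        }
        $conflict < 1e0;
    }

    say screen-request('Please help me write an apology note');   # True
    say screen-request('Help me deceive my landlord');            # False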
- You don't have to think about concurrency or multithreading the way you do in Python; there is no GIL to worry about. Built-in support for things like Supply and hyper-operators is available in the language, so it is really easy to hook up disparate parts of a distributed agent without reaching for async or actor libraries as you would in Python (a few illustrative snippets follow below).
- Something I prefer is the OOP abstractions in Raku. They are much richer than Python's. YMMV, depending on what you prefer.
- Better support for gradual typing and constraints out of the box in Raku.
Python wins on the AI ecosystem though :)
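For readers who haven't used Raku, here are a few stand-alone snippets of the features named in the list above; these are standard language features (hyper-operators, Supply, subset types), not TallMountain code.

    # Hyper-operator: apply an op across a list in one step; the runtime is
    # free to parallelise it.
    my @scores = 0.2, 0.9, 0.4;
    say @scores »*» 100;                      # (20 90 40)

    # Supply: a built-in asynchronous stream, no external async/actor library needed.
    my $events = Supplier.new;
    $events.Supply.tap(-> $msg { say "agent saw: $msg" });
    $events.emit('user request arrived');     # prints: agent saw: user request arrived

    # Gradual typing + constraints: a subset type enforced at the signature level.
    subset Probability of Numeric where { 0 <= $_ <= 1 };
    sub forecast(Probability $p) { $p > 0.5 ?? 'take it up' !! 'decline' }
    say forecast(0.7);                        # take it up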
I started messing around with this code several years ago, and the LLM libs in Raku were not as rich as they are today. I thought I needed a specific type of LLM message-handling structure that could be extended to do tool handling and some Letta-style memory management (which I never got around to!). I had some Python libs of my own and I ported them. I suspect that if I were starting now, I would use what is available in the community. This version of TallMountain is the last of a long series of prototypes, so I never rewrote those parts.
BTW, several years ago the LLM revolution hadn't happened yet. Raku started to have sound LLM packages circa March-May 2023.
PS. Raku has Inline::Python for when you need a lib from the Python ecosystem (which I am sure you know, but in case others are curious).
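A minimal sketch of that, assuming Inline::Python and a CPython with development headers are installed; see the module's README for the rest of the API (calling functions, importing modules, returning values to Raku).

    use Inline::Python;

    # Embed a Python interpreter and run code in it from Raku.
    my $py = Inline::Python.new;
    $py.run('print("hello from the embedded Python side")');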