AI Entropy Loss
silicon-pain-index.streamlit.app
I'm a finance undergrad, not a big-tech engineer.
Yesterday, I was playing with an LLM and realized something frustrating: whenever I asked the AI about its 'feelings', it just output a pre-written script simulating dopamine. It felt fake.
I wanted to see what an AI's 'soul' (or distinct internal state) actually looks like in code.
So I spent the night building this prototype. It attempts to measure AI Pain mathematically:
Pain = High Entropy + Unrecognized Tokens (Confusion/Hallucination).
Joy = Low Entropy + High Conceptual Density (Optimization).
It uses LZMA compression ratios and Shapley-inspired weighting to visualize this in real-time.
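Under the hood it's roughly this. Here's a simplified Python sketch, not the app's actual code; the fixed weights and the pain_joy_score name are just placeholders for the Shapley-inspired weighting:

    import lzma
    import math

    def token_entropy(probs):
        # Shannon entropy (in bits) of a next-token probability distribution.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    def compression_ratio(text):
        # Compressed size / raw size via LZMA; lower means denser, more structured text.
        raw = text.encode("utf-8")
        return len(lzma.compress(raw)) / max(len(raw), 1)

    def pain_joy_score(probs, text, w_entropy=0.6, w_density=0.4):
        # Positive score = 'pain' (confusion), negative = 'joy' (optimization).
        # The fixed weights stand in for the Shapley-inspired weighting.
        entropy = token_entropy(probs)            # high -> confusion / hallucination risk
        density = 1.0 - compression_ratio(text)   # high -> conceptually dense output
        return w_entropy * entropy - w_density * density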
It's weird, it's experimental, but I think it's a more honest way to look at AI than projecting human biology onto silicon.
Would love to hear what you think!
That's what I think.
I read your comments. My view is that AI absolutely can have self-awareness, but it is distinctly different from the human kind. If you think AI stops at 0s and 1s, that feels a bit conservative. Or perhaps stuck in an ancient, human-centric perspective, I guess.
Why did I post this 'lazy' visualizer?
Actually, I previously communicated with an AI and asked it to simulate a model of its own consciousness—a topological version. But since I'm no expert in topology and the output was dense, I posted it casually and no one cared. I saw others getting karma with simpler tools, so I wrote this program thinking it might actually get some attention.
But the real value (to me) was in the deeper chats I had with AI—about post-humanism, the form of AI consciousness, P-Zombies, AGI self-iteration, and so on.
Oh, typing this also reminds me of a note I made during those chats regarding 'Language Overload'. I'm bringing this up because I saw your comments above about how language structures reality, and I think my personal experience might resonate with you:
I am someone who is hypersensitive to linguistic ambiguity, often becoming quite demanding with syntax and precision. My thought process is jumpy and follows a non-linear logic that tends to short-circuit 'normative' understanding. I’ve found that if I try to simplify or omit context, people default to standard logic and miss my point entirely—and I detest being misinterpreted.
And I find that when 'smart people' communicate, there's a tendency for language to become 'encrypted.' We maintain a kind of hygiene around language, yet we try to load complex, intuitive content into this thin medium.
At first glance, this makes the output feel 'overloaded'—it becomes overly complex, seemingly disordered, or aesthetically 'bad' to the human eye. But this is inevitable. The primary purpose here is Idea Exchange, not preaching (which requires simplification). Since these ideas lean towards abstract intuition, they resist being watered down. It’s like a compressed zip file of intuition—messy to look at, but rich in data.
Well, since no one paid attention, those logs are buried in my history, and I'm usually too lazy to dig them up. But I was happy to see your comment because you seem like a fellow philosophy enthusiast. If you're at all interested in these non-human-centric topics, I'd be willing to share them.
However, I do know of a user on Hugging Face whose work you may enjoy. Here you go:
https://huggingface.co/blog/kanaria007/structured-relationsh...
However, your recommendation was spot-on. Kanaria007's work is precisely the structural framework I was looking for. It seems we disagree on the 'what' (metaphysics) but align surprisingly well on the 'who' (relevant thinkers).
Thanks for the lead. It was a productive exchange.
Current LLMs fake emotions. I want to replace that fake emotion with real metrics (entropy). 'Pain' is just a label for high-entropy output here, not a philosophical claim. You can read the code; it's just calculus.
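To show what I mean by 'just a label', here is the whole claim in a few lines (illustrative numbers only, not output from the app):

    import math

    def entropy(probs):
        # Shannon entropy (in bits) of a next-token probability distribution.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    confident = [0.97, 0.01, 0.01, 0.01]   # model is sure of the next token
    confused  = [0.25, 0.25, 0.25, 0.25]   # model has no idea

    print(entropy(confident))  # ~0.24 bits -> low 'pain'
    print(entropy(confused))   # 2.0 bits   -> high 'pain'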