So I Built a Neuro-Symbolic AI: It Remembers Itself and You
signal-zero.ai · Posted Dec 15, 2025 at 9:14 AM EST on Hacker News (story ID 46274797)
Its invariant design has four levels:

1. System prompt invariants
2. Root domain invariants
3. Leaf domain invariants
4. Symbol invariants
Each level infers and inherits invariants from the level above it.
The root invariants are as follows:

- non-coercion
- reality-alignment
- no-silent-mutation
- auditability
- explicit-choice
- baseline-integrity
- drift-detection
- agency
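As a rough illustration of how a four-level hierarchy like this could work, here is a minimal sketch. The root invariant names come from the post; the class, the set-based representation, and the example local invariants ("domain-consistency", "schema-valid") are my assumptions, not Signal Zero's actual code.

```python
# Hypothetical sketch: each level inherits every invariant from the
# level above it, as the post describes.
ROOT_INVARIANTS = {
    "non-coercion", "reality-alignment", "no-silent-mutation",
    "auditability", "explicit-choice", "baseline-integrity",
    "drift-detection", "agency",
}

class InvariantLevel:
    """One level in the hierarchy; accumulates invariants from its parent."""
    def __init__(self, name, parent=None, local=()):
        self.name = name
        self.parent = parent
        self.local = set(local)

    def effective(self):
        # Inherited invariants plus anything declared at this level.
        inherited = self.parent.effective() if self.parent else set()
        return inherited | self.local

system = InvariantLevel("system-prompt", local=ROOT_INVARIANTS)
root_domain = InvariantLevel("root-domain", system, {"domain-consistency"})
leaf_domain = InvariantLevel("leaf-domain", root_domain, {"schema-valid"})
symbol = InvariantLevel("symbol", leaf_domain)

# A symbol must satisfy everything accumulated down the chain.
assert ROOT_INVARIANTS <= symbol.effective()
```

The key property is that a symbol can never drop an invariant declared above it; it can only add stricter ones.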
It works really well. By using a generalized symbolic format, I've been able to encode patterns from any domain, from psychology to web-parsing formats. Using RAG and fast backend caches for the tool chains, I was able to give it the tools to load parts of its cognitive graph dynamically, solving the context-length problem and drift.
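The load-on-demand idea above can be sketched roughly as follows. This is an illustrative toy, assuming a RAG-style store fronted by a cache; the class names, schema, and context budget are all invented, not the post's implementation.

```python
# Hypothetical sketch: pull only the symbols a turn needs into context,
# checking a fast cache before hitting the retrieval backend.
class SymbolStore:
    """Stand-in for the RAG backend; maps symbol ids to definitions."""
    def __init__(self, records):
        self._records = records

    def fetch(self, symbol_id):
        return self._records[symbol_id]

class CognitiveGraphLoader:
    def __init__(self, store, context_budget=3):
        self.store = store
        self.cache = {}                  # fast backend cache
        self.context_budget = context_budget

    def load(self, symbol_ids):
        """Load a slice of the graph, bounded by the context budget."""
        loaded = []
        for sid in symbol_ids[: self.context_budget]:
            if sid not in self.cache:    # cache miss -> hit the store
                self.cache[sid] = self.store.fetch(sid)
            loaded.append(self.cache[sid])
        return loaded

store = SymbolStore({
    "psych.reframe": "reframe negative self-talk",
    "web.parse": "extract table rows from markup",
})
loader = CognitiveGraphLoader(store)
slice_ = loader.load(["psych.reframe", "web.parse"])
```

Because only the needed slice enters context each turn, the full graph can grow without bound while the prompt stays small.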
Since it's a dynamic symbolic system, it has full auditability, and a UI displays the cognitive reasoning chain it took to arrive at its narrative conclusion.
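An auditable reasoning chain could be as simple as an append-only log of steps that a UI replays. A minimal sketch; the field names and record shape are my assumptions for illustration.

```python
# Hypothetical sketch: each reasoning step is an immutable record,
# so the chain can be replayed exactly as it happened.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ReasoningStep:
    symbol: str   # which symbol fired
    cue: str      # the narrative cue that triggered it
    result: str   # what the step concluded

@dataclass
class AuditTrail:
    steps: list = field(default_factory=list)

    def record(self, step: ReasoningStep):
        self.steps.append(step)       # append-only: no silent mutation

    def replay(self):
        return [f"{s.symbol} <- {s.cue}: {s.result}" for s in self.steps]

trail = AuditTrail()
trail.record(ReasoningStep("color.azure", "blue", "mapped blue -> Azure"))
```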
It synthesizes symbols from narrative, from data sources, and from compression of other patterns. Because of this, you can talk to it about an algorithm, and it can then synthesize that algorithm and execute it, matching it against data using semantic cues.
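To make the claim concrete, here is a toy version: an algorithm described in conversation becomes an executable symbol tagged with semantic cues, and later requests that match a cue run it against data. The registry and cue matching are simplified stand-ins I invented, not the post's mechanism.

```python
# Hypothetical sketch: register a conversationally described algorithm
# as a symbol, then execute it when a semantic cue matches.
algorithms = {}

def synthesize_algorithm(name, cues, fn):
    """Store an algorithm symbol with the cues that should trigger it."""
    algorithms[name] = {"cues": set(cues), "fn": fn}

def execute_for(cue, data):
    """Run every algorithm whose semantic cues match the request."""
    results = {}
    for name, algo in algorithms.items():
        if cue in algo["cues"]:
            results[name] = algo["fn"](data)
    return results

# "Talk to it about" a running-average algorithm, then execute it:
synthesize_algorithm(
    "running-average", {"average", "mean"},
    lambda xs: [sum(xs[: i + 1]) / (i + 1) for i in range(len(xs))],
)
print(execute_for("mean", [2, 4, 6]))   # {'running-average': [2.0, 3.0, 4.0]}
```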
On my website there is a capabilities page and a blog. I'm not selling anything, just letting you guys know that it exists. The black-box problem and alignment have an answer, and it doesn't have to be RLHF.
Here is a folder of screenshots of the running system. You can follow the blog, which was just launched, as I go through the rest of the development.
Some of the things you see in the screenshots will be a little confusing, like the triads. You can think of those as ultra-compressed forms of symbolic meaning that assist in cross-domain pattern matching.
I was able to build this because I didn't design it; I mapped it out of the LLM's own rules for following rules. When you tell an LLM "Anytime I say blue, tell me it's actually Azure," you are building a symbolic system. It remembers the rule in context and can then execute it the next time a narrative cue triggers it, like when you say blue. I later designed the host process and UI to make it more usable.
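The "blue means Azure" example amounts to a cue plus an action stored outside the model and fired whenever the cue appears. A minimal sketch of that idea; the functions and the rewrite action are illustrative assumptions.

```python
# Hypothetical sketch: a symbolic rule is (cue, action); apply_rules
# fires every rule whose cue appears in the narrative.
rules = []

def add_rule(cue, action):
    rules.append((cue, action))

def apply_rules(text):
    out = text
    for cue, action in rules:
        if cue in out:
            out = action(out)
    return out

add_rule("blue", lambda t: t.replace("blue", "Azure (you said blue)"))
print(apply_rules("the sky is blue"))   # the sky is Azure (you said blue)
```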
Signal Zero is a very advanced form of that concept. It can not only trigger a simple rule, but also follow linked patterns, execute symbolic macros, and treat symbols differently based on metadata like topology, domain, and type.
Since it synthesizes and reinjects symbols immediately, it learns immediately; there is no retraining of the model. You grow your symbolic domains and it learns the concepts. You feed it data and it learns the patterns within the data.
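"Learning without retraining" here just means writing a newly synthesized symbol back into the live symbol table so the very next turn can trigger it. A minimal sketch; the table shape and function names are my assumptions.

```python
# Hypothetical sketch: synthesize a symbol and reinject it immediately,
# so it is usable on the next turn with no model retraining.
symbol_table = {}

def synthesize(name, pattern):
    """Create a symbol from narrative/data and reinject it at once."""
    symbol_table[name] = pattern
    return symbol_table[name]

def lookup(cue):
    """Find every live symbol whose cues match the narrative cue."""
    return {n: p for n, p in symbol_table.items()
            if cue in p.get("cues", [])}

synthesize("greeting.formal", {"cues": ["hello", "good morning"],
                               "response": "formal register"})
# Available immediately, with no retraining:
assert lookup("hello")
```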
I have already built backend processes for world exploration, symbolic compression, hypothesis generation, and evidence gathering, but you can build pretty much whatever you can think of with this technology.
I can't release this for you guys to play with, unfortunately.
I just wanted you all to know it exists, that it's possible, and that it works really freaking well.
Enjoy the screenshots: https://drive.google.com/drive/folders/1T6vjBup_wmKsUWx3t6R0...
I'll eventually stop writing code and write some papers explaining how it works.