LLMs Forget. Redprint Remembers
Key topics
Instead of chasing bigger models and longer context windows, Redprint shifts the focus to compression, memory, and reasoning. It is designed to reduce the need for massive GPUs or infinitely scaling architectures.
The goal is to move past chatbots and tool-using agents toward something more persistent, auditable, and self-refining, all without relying on billions of parameters.
Top-level features of Redprint:
* Symbolic action-outcome chains (with execution feedback)
* Modular plug-ins for curiosity, compression, and evaluation
* Symbolic + vector memory fusion
* Tests that track agent learning over time
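For illustration, here is a minimal sketch of what one action-outcome chain entry could look like; the field names and structure are assumptions for this example, not Redprint's actual schema.

```python
# Hypothetical schema for illustration only; not taken from Redprint's codebase.
from dataclasses import dataclass, field


@dataclass
class ActionOutcome:
    action: str               # what the agent attempted, e.g. "parse_config"
    outcome: str              # "success" | "failure" | "partial"
    feedback: str             # execution feedback: stderr, test output, etc.
    score_delta: float = 0.0  # contribution to the running learning score


@dataclass
class Chain:
    goal: str
    steps: list[ActionOutcome] = field(default_factory=list)

    def success_rate(self) -> float:
        """Fraction of successful steps; a test harness could track this over time."""
        if not self.steps:
            return 0.0
        return sum(s.outcome == "success" for s in self.steps) / len(self.steps)
```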
We’re not claiming AGI. But we are taking a real step toward agents that can reason about their actions and improve themselves.
Still early, but we wanted to surface the work now to find others exploring similar paths and avoid building in a vacuum.
Whitepaper coming soon. Select early access likely.
The authors present Redprint, a modular framework that adds symbolic memory and reasoning capabilities to local LLMs, drawing community interest in its potential to create more persistent, self-refining AI agents.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion

- First comment: 2h after posting
- Peak period: 2 comments in Day 3
- Avg / period: 1.3 comments

Key moments
- Story posted: Oct 27, 2025 at 11:41 AM EDT
- First comment: Oct 27, 2025 at 1:38 PM EDT (2h after posting)
- Peak activity: 2 comments in Day 3 (the hottest window of the conversation)
- Latest activity: Nov 11, 2025 at 10:58 AM EST
We’ve been experimenting with a symbolic reasoning layer that sits beneath local LLMs, aimed at compressing their context demands by shifting some of the burden into structured memory and logic. The core idea: instead of infinitely scaling GPUs and tokens, we scaffold a learning loop that binds action to outcome, reflection to memory, and memory to improvement.
The architecture includes:
* A lightweight Prolog or Datalog engine to track symbolic execution paths
* Vector and symbolic memory fusion to keep both nuance and structure
* Outcome tests that update a learning score over time, enabling the agent to refine future actions
* Curiosity modules that bias the agent toward resolving ambiguity or closing feedback loops
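As a rough sketch of how those pieces might fit together (an assumption on my part, not the authors' implementation, and every name below is hypothetical): symbolic execution paths stored as Datalog-style facts, queried alongside a vector store so recall returns both structure and nuance.

```python
# Minimal, hypothetical sketch; not Redprint's implementation. A tiny in-memory
# stand-in for a Prolog/Datalog engine, fused with stubbed vector-memory hits.

class FactStore:
    """Facts are plain tuples, e.g. ("outcome", "fetch_page", "timeout")."""

    def __init__(self):
        self.facts: set[tuple] = set()

    def assert_fact(self, *fact) -> None:
        self.facts.add(fact)

    def query(self, *pattern):
        """Match facts against a pattern; None acts as a wildcard."""
        for fact in self.facts:
            if len(fact) == len(pattern) and all(
                p is None or p == f for p, f in zip(pattern, fact)
            ):
                yield fact


def fused_recall(facts: FactStore, vector_hits: list[str], action: str) -> dict:
    """Combine structured outcomes for an action with fuzzy vector-memory snippets."""
    return {
        "symbolic": list(facts.query("outcome", action, None)),
        "vector": vector_hits,  # e.g. nearest-neighbour notes from an embedding index
    }


# Example: log two execution outcomes, then recall both structure and nuance.
kb = FactStore()
kb.assert_fact("outcome", "fetch_page", "timeout")
kb.assert_fact("outcome", "fetch_page", "success")
print(fused_recall(kb, ["last run stalled behind a slow proxy"], "fetch_page"))
```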
One example: we had an agent running a simple multi-step task loop with an objective scoring function. First run: 0 percent success. But as outcome chains were logged and the reasoning engine updated its knowledge base, the same model climbed into the 70 percent range over 60 runs. No retraining, no fine-tuning, just structured feedback and symbolic state retention.
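To make that loop concrete, here is a toy skeleton of that kind of run loop. It is purely illustrative: the pitfall set, probabilities, and scoring are made up and do not reproduce the reported 0-to-70-percent result; they only show how symbolic state retained across runs can lift a success rate without retraining.

```python
import random

# Toy illustration only; names and numbers are invented, not the authors' results.

def run_task(knowledge: set[str]) -> tuple[bool, str]:
    """Attempt a multi-step task; runs succeed more often as known pitfalls accumulate."""
    pitfalls = ["bad_parse", "stale_cache", "wrong_tool"]
    hit = random.choice(pitfalls)
    if hit in knowledge:
        return True, hit                 # pitfall already recorded: route around it
    return random.random() < 0.1, hit    # otherwise mostly fail, but report the cause

def learning_loop(runs: int = 60) -> None:
    knowledge: set[str] = set()          # symbolic state retained across runs
    wins = 0
    for i in range(1, runs + 1):
        ok, pitfall = run_task(knowledge)
        if ok:
            wins += 1
        else:
            knowledge.add(pitfall)       # outcome chain feeds the knowledge base
        if i % 20 == 0:
            print(f"run {i}: success rate so far {wins / i:.0%}")

learning_loop()
```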
Still early, but the goal is not just to build a better chatbot or prompt wrapper. We’re aiming for something more like a persistent local intelligence that reasons through problems, remembers its missteps, and adapts without external retraining.
Please chime in if you have any interest in the subject. I'm not asking for money or hype or anything like that; ideally I'd like to find someone who's ready to challenge what I have set forth and perhaps participate in the building.