Ask HN: How do you handle long-term memory with AI tools like Cursor and Claude?
It's part of a larger process for working with LLMs that I call "Plans on Plans." I wrote about it on Medium.[0]
[0] https://levelup.gitconnected.com/you-are-bugs-improving-your...
Generic summaries or RAG results often feel useless because models optimize for "what would be useful to explain to anyone" rather than "what was significant in this specific context."
What worked for me: separate the semantic context (the "why are we here" layer) from structured tracking (decisions, blockers, dependencies). The semantic layer captures salience, i.e. what mattered emotionally or strategically; the tracking layer handles hard facts, or even snapshots of the latest state of the process.
The model does not have to guess what was important. The memory architecture can encode it.
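A minimal sketch of that two-layer split in Python. All the names here (SemanticNote, TrackedFact, MemoryStore) are hypothetical, just to show the shape of the idea, not any real tool's API:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticNote:
    text: str      # the "why are we here" layer: salience, not just facts
    salience: str  # e.g. "strategic", "emotional", "background"

@dataclass
class TrackedFact:
    kind: str      # "decision" | "blocker" | "dependency" | "snapshot"
    detail: str

@dataclass
class MemoryStore:
    semantic: list = field(default_factory=list)  # SemanticNote entries
    tracking: list = field(default_factory=list)  # TrackedFact entries

    def render_context(self) -> str:
        """Assemble a prompt preamble: salient context first, then state."""
        lines = ["## Context"]
        lines += [f"- ({n.salience}) {n.text}" for n in self.semantic]
        lines.append("## Current state")
        lines += [f"- [{f.kind}] {f.detail}" for f in self.tracking]
        return "\n".join(lines)

store = MemoryStore()
store.semantic.append(SemanticNote("Migrating auth before the Q3 audit", "strategic"))
store.tracking.append(TrackedFact("decision", "use short-lived tokens"))
store.tracking.append(TrackedFact("blocker", "staging IdP is down"))
print(store.render_context())
```

The point of the split is that render_context puts the salience layer ahead of the state layer, so the model reads "why this matters" before it reads "what is true right now" instead of having to infer significance from a flat dump.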