Show HN: Butter, a muscle memory cache for LLMs
docs.butter.dev
Interested to see where this goes!
Computer-use agents (as an RPA alternative) are the easiest example to reach for: UIs change, but not often, so the "trajectory" of click and key-entry tool calls is mostly fixed over time and worth feeding back to the agent as a canned trajectory. I discuss the flaws of computer use and RPA in the blog above.
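To make the canned-trajectory idea concrete, here's a rough sketch of what I mean. The names (TrajectoryCache, the ui_fingerprint key, the stubbed agent) are my own for illustration and not Butter's actual API; it just shows replaying cached tool calls on a hit and falling back to the model on a miss.

```python
# Hypothetical sketch of a trajectory cache for computer-use tool calls.
# Not Butter's real API -- just the replay-or-fallback idea.
import hashlib
from typing import Callable


class TrajectoryCache:
    def __init__(self) -> None:
        self._store: dict[str, list[dict]] = {}

    def _key(self, task: str, ui_fingerprint: str) -> str:
        # Key on the task plus a coarse fingerprint of the UI, so a changed
        # UI misses the cache instead of replaying stale clicks.
        return hashlib.sha256(f"{task}|{ui_fingerprint}".encode()).hexdigest()

    def run(self, task: str, ui_fingerprint: str,
            agent: Callable[[str], list[dict]]) -> list[dict]:
        key = self._key(task, ui_fingerprint)
        if key in self._store:
            return self._store[key]   # cache hit: replay the canned trajectory
        trajectory = agent(task)      # cache miss: let the LLM drive
        self._store[key] = trajectory # record the tool calls for next time
        return trajectory


# Usage with a stubbed-out agent standing in for the LLM:
cache = TrajectoryCache()
fake_agent = lambda task: [{"tool": "click", "x": 120, "y": 48},
                           {"tool": "type", "text": "hello"}]
print(cache.run("log in", ui_fingerprint="login-v1", agent=fake_agent))
```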
A counterexample is coding agents: it's a deeply user-interactive workflow reading from a codebase that's constantly evolving, so the set of things the model is inferencing on is always different and trajectories are never repeated.
Hope this helps
Also:
After my time building computer-use agents, I’m convinced that the hybrid approach of Muscle Memory is the only viable way to offer 100% coverage on an RPA workload.
100% coverage of what? I guess it'd be great if you could clarify the value proposition; many folks will be even less patient than I am.
Best of luck!
It's good to see that others have run into this problem and are working to make things more efficient. I'm excited to see where this goes.