Dograh
github.com
1) It would be great to provide different voice personas like Vapi does; maybe it's there already, but I couldn't find the config. 2) My agent reported some lag in getting responses during the call; perhaps that's just a resource issue?
Either way, you're off to a great start and I look forward to watching this project grow. Starred the repo on GH; I think I was the 100th one :).
1. A voice persona selector like Vapi's is in our pipeline. 2. The lag can come either from system resource constraints or from inference lag at the LLM providers. We are constantly trying to squeeze out every millisecond to combat latency.
Thank you again for your kind words.
I hope you find product-market fit and are able to do what you desire with this product. In the meantime, I am grateful that you are helping us advance toward the Star Trek Voice Computer being defictionalized!
Among many other useful and fun things, yes, the dream of having a Star Trek Voice Computer or the good HAL is not very far away. :)
We are happy to share some technical details for anyone interested. A lot of Dograh’s internal work went into extending the pipeline with custom Frames and Processors, building a ReactFlow-based visual agent builder, and creating an Engine that parses the resulting agent JSON and runs conversational LLM loops with function calling. We also added easier access to extracted variables, call transcripts, and recordings: things needed in any production deployment.
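We can't speak to Dograh's exact schema, but the "engine that parses agent JSON" idea can be sketched roughly. The sketch below assumes a hypothetical ReactFlow-style document of nodes and edges; the node types, field names, and the stubbed user-input handling are all illustrative, not Dograh's real format (a production engine would run an STT turn plus an LLM function call where the stub reads input):

```python
import json

# Hypothetical agent JSON in a ReactFlow-like shape (nodes + edges).
# Field names here are assumptions for illustration only.
AGENT_JSON = """
{
  "nodes": [
    {"id": "greet",   "type": "say",     "text": "Hi! What's your order ID?"},
    {"id": "collect", "type": "extract", "variable": "order_id"},
    {"id": "bye",     "type": "say",     "text": "Thanks, goodbye!"}
  ],
  "edges": [
    {"source": "greet",   "target": "collect"},
    {"source": "collect", "target": "bye"}
  ]
}
"""

def run_agent(agent_json: str, user_inputs):
    """Walk the node graph, emitting bot lines and capturing variables."""
    agent = json.loads(agent_json)
    nodes = {n["id"]: n for n in agent["nodes"]}
    next_of = {e["source"]: e["target"] for e in agent["edges"]}
    inputs = iter(user_inputs)
    variables, transcript = {}, []
    node_id = agent["nodes"][0]["id"]          # start at the first node
    while node_id:
        node = nodes[node_id]
        if node["type"] == "say":
            transcript.append(("bot", node["text"]))
        elif node["type"] == "extract":
            reply = next(inputs)                # production: STT + LLM turn
            transcript.append(("user", reply))
            variables[node["variable"]] = reply  # production: LLM function call
        node_id = next_of.get(node_id)          # follow the edge, stop at a leaf
    return variables, transcript
```

The appeal of this shape is that the visual builder and the runtime share one artifact: whatever the canvas serializes is exactly what the engine walks, so workflow edits don't require a redeploy.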
One thing we are still trying to understand better: how teams handle long-running conversations while keeping context tight and cheap. Would love to hear how others have approached that.
But when we switched to OSS stacks (Pipecat, LiveKit), we realized that even with great OSS, the plumbing was still painful and necessary: no standard way to extract variables from conversations (name/date/order ID), no straightforward tracing of LLM calls, no way to run AI-to-AI test loops, and no fast workflow iteration; every change meant another redeploy.
The infrastructure glue kept ballooning, and each time it felt like rebuilding the same system from scratch.
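To make the variable-extraction pain concrete: the usual DIY approach is to hand the model a function-calling tool schema and validate whatever JSON it emits. The tool name, fields, and validator below are hypothetical, shown only to illustrate the glue every team ends up rewriting:

```python
import json

# Hypothetical tool schema in the OpenAI-style function-calling shape;
# the model is asked to call `record_variables` with these fields.
EXTRACTION_TOOL = {
    "type": "function",
    "function": {
        "name": "record_variables",
        "description": "Record structured fields mentioned in the conversation.",
        "parameters": {
            "type": "object",
            "properties": {
                "name":     {"type": "string"},
                "date":     {"type": "string", "format": "date"},
                "order_id": {"type": "string"},
            },
            "required": ["order_id"],
        },
    },
}

def parse_tool_call(tool_call_arguments: str) -> dict:
    """Validate the model's JSON arguments against the required fields."""
    args = json.loads(tool_call_arguments)
    required = EXTRACTION_TOOL["function"]["parameters"]["required"]
    missing = [k for k in required if k not in args]
    if missing:
        raise ValueError(f"model omitted required fields: {missing}")
    return args
```

Each project ends up owning this schema, the retry-on-bad-JSON logic, and the storage of the extracted fields, which is exactly the kind of repeated glue being described.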
Dograh came out of that combination of cost pain and integration pain. Happy to dig deeper into anything.