Building a WebRTC benchmark for voice AI agents (Pipecat vs. LiveKit)
Unlike your average LLM benchmark, this one focuses on location and time as variables, since those are the biggest factors for networking systems (I built networking tools in a past life). The idea is to run benchmarks from multiple geographic locations over time and see how each platform performs under different conditions.
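To make the location-and-time idea concrete, here is a hypothetical run matrix; the region names, interval, and BenchmarkRun structure are illustrative, not taken from the repo. Each platform gets probed from every region on a fixed schedule so results can later be sliced by both dimensions.

    from dataclasses import dataclass
    from itertools import product

    # Hypothetical run matrix: every platform is probed from every region
    # on a fixed schedule, so latency can later be sliced by location and time.
    REGIONS = ["us-east", "eu-west", "ap-southeast"]
    PLATFORMS = ["pipecat-daily", "livekit"]
    INTERVAL_MINUTES = 30  # how often each (region, platform) pair is benchmarked

    @dataclass
    class BenchmarkRun:
        region: str
        platform: str
        interval_minutes: int = INTERVAL_MINUTES

    runs = [BenchmarkRun(r, p) for r, p in product(REGIONS, PLATFORMS)]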
Basic setup: echo agent servers create and join temporary rooms and echo back every message they receive. Since the Pipecat (Daily) and LiveKit Python SDKs can't coexist in the same process, each agent runs as a separate process on its own port. Benchmark runner clients send pings over WebRTC data channels and measure the RTT of each message. Raw measurements are stored in InfluxDB, and the dashboard computes aggregate stats (P50/P95/P99, jitter, packet loss) and visualizes everything with filters and side-by-side comparisons.
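Roughly, the runner side looks like the sketch below. This is a minimal illustration, not the repo's actual code: EchoTransport, run_pings, aggregate, and write_rtts are names I made up, the per-SDK wiring for Pipecat (Daily) and LiveKit is hidden behind the transport interface, and the InfluxDB write assumes the standard influxdb-client package.

    import asyncio
    import statistics
    import time
    from typing import Optional, Protocol

    from influxdb_client import InfluxDBClient, Point
    from influxdb_client.client.write_api import SYNCHRONOUS

    class EchoTransport(Protocol):
        """Abstract data-channel transport; the real runner would wrap each
        SDK behind something like this, one per agent process."""
        async def send(self, payload: bytes) -> None: ...
        async def receive(self, timeout: float) -> Optional[bytes]: ...

    async def run_pings(transport: EchoTransport, count: int = 100,
                        timeout: float = 2.0) -> tuple[list[float], int]:
        """Send `count` pings over the data channel; return (RTTs in ms, lost count)."""
        rtts: list[float] = []
        lost = 0
        for seq in range(count):
            sent_at = time.perf_counter()
            await transport.send(f"ping:{seq}".encode())
            reply = await transport.receive(timeout=timeout)
            if reply is None:
                lost += 1  # no echo within the timeout -> counted as loss
                continue
            rtts.append((time.perf_counter() - sent_at) * 1000.0)
            await asyncio.sleep(0.05)  # pace pings so they don't queue behind each other
        return rtts, lost

    def aggregate(rtts: list[float], lost: int) -> dict:
        """Aggregate stats the dashboard shows: percentiles, jitter, loss rate."""
        if len(rtts) < 2:
            raise ValueError("need at least two RTT samples to aggregate")
        qs = statistics.quantiles(rtts, n=100)  # qs[i] is the (i+1)-th percentile
        jitter = statistics.mean(abs(a - b) for a, b in zip(rtts, rtts[1:]))
        return {
            "p50_ms": qs[49],
            "p95_ms": qs[94],
            "p99_ms": qs[98],
            "jitter_ms": jitter,
            "packet_loss": lost / (len(rtts) + lost),
        }

    def write_rtts(rtts: list[float], *, region: str, platform: str,
                   url: str, token: str, org: str, bucket: str) -> None:
        """Store raw RTT samples in InfluxDB, tagged by region and platform."""
        with InfluxDBClient(url=url, token=token, org=org) as client:
            write_api = client.write_api(write_options=SYNCHRONOUS)
            points = [
                Point("rtt").tag("region", region).tag("platform", platform)
                            .field("rtt_ms", value)
                for value in rtts
            ]
            write_api.write(bucket=bucket, record=points)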
I struggled to create a fair comparison since each platform exposes different APIs. I ended up using data channels (not audio) for consistency, though this only measures data-message transport, not the full audio pipeline (codecs, jitter buffers, etc.).
Latency is hard to measure precisely, so I'm estimating it from server processing time, which is admittedly imperfect. Coverage is also limited: only data channels are tested (not the full audio path), and only Pipecat (Daily) and LiveKit are supported for now; I'd like to add Agora and others.
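One way to read "estimating from server processing time" is: subtract the echo agent's reported processing time from the measured RTT and assume a symmetric path. If that's roughly the approach, a minimal sketch (function name is mine) would be:

    def estimate_one_way_ms(rtt_ms: float, server_processing_ms: float) -> float:
        """Rough one-way transport latency: strip the echo agent's own processing
        time out of the RTT, then split the remainder evenly between the two legs.
        Ignores clock skew and path asymmetry, so treat it as an estimate only."""
        return max(rtt_ms - server_processing_ms, 0.0) / 2.0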
The README screenshot shows synthetic data resembling early results. Not posting raw results yet since I'm still working out some measurement inaccuracies and need more data points across locations over time to draw solid conclusions.
This is functional but rough around the edges. Happy to keep building it out if people find it useful. Any ideas on better methodology for fair comparisons or improving measurements? What platforms would you want to see added?
Stack: Python, TypeScript (React), InfluxDB