Orchestrating 5000 Workers Without Distributed Locks: Rediscovering TDMA
But why do these processes need to talk to each other at all?
THE INSIGHT
What if orchestrators never run simultaneously?
Runner-0 executes at T=0s, 10s, 20s...
Runner-1 executes at T=2s, 12s, 22s...
Runner-2 executes at T=4s, 14s, 24s...
Runner-3 executes at T=6s, 16s, 26s...
Runner-4 executes at T=8s, 18s, 28s...
Time-Division Multiple Access (TDMA). The same pattern GSM uses to share one radio channel among many phones.
GO IMPLEMENTATION
    type Runner struct {
        ID, TotalRunners int
        CycleTime        time.Duration
    }

    func (r Runner) Start() {
        // Each runner owns one fixed slot per cycle, offset by its ID.
        slot := r.CycleTime / time.Duration(r.TotalRunners)
        offset := time.Duration(r.ID) * slot
        for {
            time.Sleep(time.Until(computeNextSlot(offset))) // helper sketched below
            r.reconcile() // check workers, start if needed
        }
    }

Each runner gets 2s in a 10s cycle. No overlap = zero coordination.
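The post doesn't show computeNextSlot, so here is one way it could work: a minimal sketch that assumes all runners share a wall clock and a cycle anchored at the Unix epoch (the cycleTime constant is hypothetical, standing in for Runner.CycleTime):

    const cycleTime = 10 * time.Second // assumed cycle, matching the example

    // computeNextSlot returns the next instant that lands on this runner's
    // offset within the shared cycle.
    func computeNextSlot(offset time.Duration) time.Time {
        now := time.Now()
        elapsed := time.Duration(now.UnixNano()) % cycleTime
        wait := (offset - elapsed + cycleTime) % cycleTime
        if wait == 0 {
            wait = cycleTime // already on the boundary: take the next one
        }
        return now.Add(wait)
    }

Because every runner derives its wake-up time from the epoch rather than from its own start-up time, slots stay aligned as long as clocks stay roughly in sync.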
SQLITE CONFIG
    PRAGMA journal_mode=WAL;

    dbWrite.SetMaxOpenConns(1)  // one writer
    dbRead.SetMaxOpenConns(10)  // concurrent reads
With TDMA, writers never overlap, so busy_timeout never triggers.
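A minimal sketch of the two-handle setup, assuming database/sql with the github.com/mattn/go-sqlite3 driver (the driver choice and the openStateDB name are assumptions, not from the post):

    package main

    import (
        "database/sql"

        _ "github.com/mattn/go-sqlite3" // assumed driver; registers "sqlite3"
    )

    // openStateDB opens a single-connection writer and a pooled reader
    // over the same WAL-mode database file.
    func openStateDB(path string) (dbWrite, dbRead *sql.DB, err error) {
        dbWrite, err = sql.Open("sqlite3", path)
        if err != nil {
            return nil, nil, err
        }
        if _, err = dbWrite.Exec("PRAGMA journal_mode=WAL;"); err != nil {
            return nil, nil, err
        }
        dbWrite.SetMaxOpenConns(1) // the one writer

        dbRead, err = sql.Open("sqlite3", path)
        if err != nil {
            return nil, nil, err
        }
        dbRead.SetMaxOpenConns(10) // WAL lets readers run alongside the writer
        return dbWrite, dbRead, nil
    }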
THE MATH
Capacity = SlotDuration / TimePerWorker = 2000ms / 10ms = 200 workers per runner
Cycle = Runners × SlotDuration, so:
5 runners = 1,000 workers (10s cycle)
25 runners = 5,000 workers (50s cycle, 25s avg detection latency)
For batch jobs that run for hours, a detection latency of tens of seconds is irrelevant.
BENCHMARKS (real data from docs/papers)
System    | Writes/s | Latency | Nodes | Use Case
etcd      | 10,000   | 25ms    | 3-5   | Config
ZooKeeper | 8,000    | 50ms    | 5     | Election
Temporal  | 2,000    | 100ms   | 15-20 | Workflows
Airflow   | 300      | 2s      | 2-3   | Batch
TDMA-SPI  | 40       | 5s avg  | 1-5   | Batch
WHAT YOU GAIN:
- Zero consensus protocols (no Raft/Paxos)
- Single-node deployment possible
- Deterministic behavior
- Radical simplicity
WHAT YOU SACRIFICE:
- Real-time response (<1s)
- High frequency (>1,000 ops/sec)
- Arbitrary scale (limit ~5,000 workers)
UNIVERSAL PATTERN
Wireless sensor networks: DD-TDMA (IEEE, 2007) uses the same pattern
Kubernetes controllers: reconcile loops every 5-10s (implicit TDMA)
Build systems: time-sliced job claims instead of SELECT FOR UPDATE (sketch below)
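To make the build-system point concrete, a hedged sketch of a time-sliced claim (the jobs table and the claimJobs helper are hypothetical, reusing dbWrite from the SQLite sketch above): because only one runner writes during any slot, a plain UPDATE claims work with no row locks.

    // claimJobs marks up to limit unclaimed jobs as owned by this runner.
    // Under TDMA no other writer is active, so no SELECT FOR UPDATE and no
    // row locking is needed; the UPDATE cannot race.
    func claimJobs(dbWrite *sql.DB, runnerID, limit int) (int64, error) {
        res, err := dbWrite.Exec(`
            UPDATE jobs
               SET claimed_by = ?
             WHERE id IN (SELECT id FROM jobs
                           WHERE claimed_by IS NULL
                           LIMIT ?)`, runnerID, limit)
        if err != nil {
            return 0, err
        }
        return res.RowsAffected()
    }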
WHY ISN'T THIS COMMON?
1. Cultural bias: the industry teaches "add a consensus layer" as the default
2. TDMA sounds old: it dates from 1980s telecoms (but old ≠ bad)
3. SQLite is underestimated: it actually handles 50K-100K writes/sec on NVMe
4. Most examples optimize for microservices (1000s of ops/sec), not batch
WHEN NOT TO USE:
- Microservices (<100ms latency needed)
- Real-time systems (trading, gaming)
- Anything requiring >10,000 operations/sec
GOOD FOR:
- Batch processing
- ML training orchestration
- ETL pipelines (hourly/daily)
- Video/image processing
- Anything where task duration >> detection latency
THE REAL LESSON
Modern distributed-systems thinking:
1. Assume coordination is needed
2. Pick a consensus protocol
3. Deal with the complexity
Alternative:
1. Can processes avoid each other? (temporal isolation)
2. Can data be partitioned? (spatial isolation; see the sketch after this list)
3. Is eventual consistency OK?
If yes to all three: you might not need coordination at all.
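For the spatial-isolation question, a minimal sketch under assumed names (a workers table keyed by integer ID, with dbRead from the SQLite sketch above): ownership is a pure function of the worker ID, so runners never touch each other's rows.

    // ownedWorkers returns only the rows this runner owns. Worker w belongs
    // to runner w % TotalRunners; no registry, lease, or lock is involved.
    func (r Runner) ownedWorkers(dbRead *sql.DB) (*sql.Rows, error) {
        return dbRead.Query(
            `SELECT id, state FROM workers WHERE id % ? = ?`,
            r.TotalRunners, r.ID)
    }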
CONCLUSION
I built a simple orchestrator for batch workers and rediscovered a 40-year-old telecom pattern that eliminates distributed coordination entirely.
The pattern: TDMA + spatial partitioning + SQLite.
The application to workflow orchestration seems novel.
If Kubernetes feels like overkill, maybe time-slicing is enough.
Sometimes the best distributed system is one that doesn't need to be distributed.
---
Full writeup: [blog link]
Code: [github link]
Discussion: Is anyone else using time-based scheduling for coordination-free systems? How do you handle networks with high clock skew?