Show HN: Tusk Drift – Open-source tool for automating API tests (github.com)
Also, how do you normalize non-determinism (like time/IDs, etc.), expire/refresh recordings, and classify diffs as "intentional change" vs. "regression"?
1. With our Cloud offering, Tusk Drift detects schema changes, then automatically re-records traces from new live traffic to replace the stale traces in the test suite. If you're using Drift purely locally, though, you'd need to manually re-record traces for affected endpoints by hitting them in record mode to capture the updated behavior.
2. Our CLI tool includes built-in dynamic field rules that handle common non-deterministic values (standard UUID, timestamp, and date formats) during response comparison. You can also configure custom matching rules in your `.tusk/config.yaml` to handle application-specific non-deterministic data. There's a rough sketch of the idea after this list.
3. Our classification workflow correlates deviations with your actual code changes in the PR/MR (including context from your PR/MR title and body). Classification is "fine-tuned" over time for each service based on past feedback on test results.
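To make the dynamic-field matching in item 2 concrete, here's a minimal sketch of the idea in Python. It's purely illustrative: the regexes and the `normalize`/`responses_match` helpers are mine, not Drift's actual rule engine or config schema.

```python
import re

# Illustrative patterns for common non-deterministic values.
UUID_RE = re.compile(
    r"\b[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}\b", re.I
)
ISO_TS_RE = re.compile(
    r"\b\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(?:\.\d+)?(?:Z|[+-]\d{2}:\d{2})?\b"
)

def normalize(body: str) -> str:
    """Mask dynamic fields so recorded and replayed responses diff cleanly."""
    body = UUID_RE.sub("<uuid>", body)
    body = ISO_TS_RE.sub("<timestamp>", body)
    return body

def responses_match(recorded: str, replayed: str) -> bool:
    # Two responses "match" if they're identical after masking dynamic fields.
    return normalize(recorded) == normalize(replayed)
```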
Looks like a nice tool, will check it out later when I get a chance.
We capture the actual DB queries, Redis cache hits, and JWT generation, not just the HTTP calls (as you would see with mitmproxy), which lets us replay the full request chain without needing a live database or cache. This way each test runs deterministically.
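Conceptually, replay works by intercepting each dependency call and serving back the recorded result instead of hitting the real backend. A toy sketch of that interception pattern (illustrative only, not our SDK's actual API):

```python
# Toy record/replay interceptor, keyed by the call's signature.
# Illustrative only; the real SDK instruments each client library.
recordings = {}
MODE = "record"  # switch to "replay" when running the test suite

def intercept(call_key, live_call):
    if MODE == "record":
        result = live_call()           # hit the real DB/cache once
        recordings[call_key] = result  # persist alongside the trace
        return result
    return recordings[call_key]        # replay: no live dependencies needed

# e.g. intercept(("db", "SELECT * FROM users WHERE id = $1", 42),
#                lambda: run_real_query())
```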
Vcrpy is closer to an automock: you write tests that hit external services, it records those calls, and it replays them on subsequent runs. You write the tests.
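For comparison, the canonical vcrpy pattern looks roughly like this (adapted from vcrpy's docs; the test itself is still hand-written):

```python
import urllib.request
import vcr

# vcrpy records the HTTP exchange on the first run, then replays the
# cassette on subsequent runs so the test no longer hits the network.
@vcr.use_cassette("fixtures/reserved_domains.yaml")
def test_reserved_domains():
    body = urllib.request.urlopen("http://www.iana.org/domains/reserved").read()
    assert b"Example domains" in body
```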
Here you don't write tests at all, just use the app. The tests are automatically created.
Similar ideas, but at a different layer.
Another useful thing would be if I could create the tests from saved requests exported from my browser's network tab. In this case your tool would work regardless of the backend language.
Currently, Drift is language-specific: you'd need the SDK installed in your backend while recording tests. This is because Drift captures not just the HTTP request/response pairs but also all underlying dependency calls (DB queries, Redis operations, etc.) so it can properly mock them during replay.
A use case we do support is refactors within the same language. You'd record traces in your current implementation, refactor your code, then replay those traces to catch regressions.
For cross-language rewrites or browser-exported requests, you might want to look at tools that focus purely on HTTP-level recording/replay, like Postman Collections. Hope this helps!