Latency Profiling in Python: From Code Bottlenecks to Observability
Key topics
The debate around latency profiling in Python heats up as commenters weigh the feasibility of full function tracing versus sampling profilers. Veserv sparks the discussion by suggesting that efficient instrumentation can enable full function traces in production with minimal overhead, prompting hansvm to counter that the overhead can be substantial for certain programs. As the conversation unfolds, it becomes clear that the choice of profiling tool and language can significantly affect performance: some argue that Python is ill-suited for low-latency applications like algorithmic trading, while others point out that optimized Python can hold its own against other languages. The thread also questions the provenance of the original blog post, with some commenters suspecting it was AI-generated.
Snapshot generated from the HN discussion
Discussion Activity
- Engagement: moderate
- First comment: 7d after posting
- Peak period: 10 comments in the 156-168h window
- Average per period: 5.5
- Based on 11 loaded comments
Key moments
- Story posted: Dec 2, 2025 at 7:55 AM EST (about 1 month ago)
- First comment: Dec 8, 2025 at 9:34 PM EST (7d after posting)
- Peak activity: 10 comments in 156-168h, the hottest window of the conversation
- Latest activity: Dec 9, 2025 at 2:24 PM EST (about 1 month ago)
[1] https://functiontrace.com/
"This license allows you to use and share this software for noncommercial purposes for free and to try this software for commercial purposes for thirty days."
This is not an open source license. "Open Source" is a term with a specific definition (the OSI's Open Source Definition) that excludes restrictions of this kind; it is not a generic term meaning "source available".
You can also just use perf, but it requires an extra package from the Python build (which uv frustratingly doesn't supply).
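perf works by sampling the process's stacks from outside rather than instrumenting every call. As a rough illustration of the sampling approach (not of perf itself, which operates at the OS level), here is a toy in-process sampler that periodically snapshots the main thread's Python stack via `sys._current_frames()`; all names here are invented for the sketch:

```python
import collections
import sys
import threading
import time

def sample_stacks(target_tid, counts, stop, interval=0.001):
    """Toy sampling profiler: periodically snapshot one thread's stack."""
    while not stop.is_set():
        frame = sys._current_frames().get(target_tid)
        # Walk the frame chain and count every function on the stack.
        while frame is not None:
            counts[frame.f_code.co_name] += 1
            frame = frame.f_back
        time.sleep(interval)

def busy_work(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

counts = collections.Counter()
stop = threading.Event()
sampler = threading.Thread(
    target=sample_stacks, args=(threading.get_ident(), counts, stop)
)
sampler.start()
busy_work(2_000_000)
stop.set()
sampler.join()
print(counts.most_common(3))  # busy_work should appear among the hottest frames
```

The trade-off the thread is arguing about is visible here: the sampler's cost is fixed by the sampling interval regardless of how many function calls the workload makes, whereas a tracing profiler pays a cost on every single call.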
I used FunctionTrace as an example and evidence for my position that tracing Python is low-overhead with proper design, to counter claims like: "You can not make it that low overhead or someone would have done it already, thus proving the negative." I am not the author or in any way related to it, so you can bring that up with them.
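For readers unfamiliar with what "full function tracing" means here: a tracing profiler hooks every call and return. FunctionTrace does this with a native extension to keep overhead low; purely as a sketch of the mechanism (not of FunctionTrace's implementation), the standard library's `sys.setprofile` hook can record call and return timestamps, and its pure-Python overhead is exactly the kind of cost the thread is debating:

```python
import sys
import time

events = []  # (event, function name, timestamp in ns)

def tracer(frame, event, arg):
    # Record only Python-level call/return events with a timestamp.
    if event in ("call", "return"):
        events.append((event, frame.f_code.co_name, time.perf_counter_ns()))

def work(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

sys.setprofile(tracer)
work(1000)
sys.setprofile(None)

# Pair the call and return events for `work` to get its wall-clock duration.
call_times = {name: t for ev, name, t in events if ev == "call"}
for ev, name, t in events:
    if ev == "return" and name in call_times:
        print(f"{name}: {(t - call_times[name]) / 1000:.1f} us")
```

Python 3.12's `sys.monitoring` API was added specifically to make this kind of instrumentation cheaper than the legacy `sys.setprofile`/`sys.settrace` hooks.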
Python is a bad choice for a system with such latency requirements. Isn't C++ or Rust the preferred language for algorithmic trading shops?
I don't disagree with you that Python might be the wrong choice for algorithmic trading, but I do think it depends. We did our stuff with turbodbc rather than pyodbc, which is used everywhere else, specifically because we analysed our bottlenecks.
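The "analyse your bottlenecks first" approach the commenter describes can be done with the standard library's `cProfile` before reaching for heavier tools. The function names below are hypothetical stand-ins (the commenter's real code used turbodbc for database access):

```python
import cProfile
import io
import pstats

def fetch_rows(n):
    # Hypothetical stand-in for a database fetch.
    return [(i, i * i) for i in range(n)]

def transform(rows):
    return [b - a for a, b in rows]

def pipeline():
    rows = fetch_rows(200_000)
    return transform(rows)

profiler = cProfile.Profile()
profiler.enable()
pipeline()
profiler.disable()

# Print the top entries by cumulative time to see where the pipeline spends it.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
report = out.getvalue()
print(report)
```

If a report like this showed the database driver dominating cumulative time, swapping pyodbc for turbodbc would be a targeted fix rather than a guess.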