The State of Machine Learning Frameworks in 2019
Key topics
The article discusses the state of machine learning frameworks in 2019, highlighting PyTorch's dominance in research and TensorFlow's dominance in industry, sparking a discussion on the implications and future directions of these trends.
Snapshot generated from the HN discussion
Discussion Activity
- Engagement: moderate
- First comment: 4d after posting
- Peak period: 9 comments in 84-96h
- Avg per period: 6.3
- Based on 19 loaded comments
Key moments
- Story posted: Oct 21, 2025 at 12:44 PM EDT (3 months ago)
- First comment: Oct 25, 2025 at 10:47 AM EDT (4d after posting)
- Peak activity: 9 comments in the 84-96h window, the hottest stretch of the conversation
- Latest activity: Oct 26, 2025 at 10:43 AM EDT (3 months ago)
Seems they were pretty spot on! https://trends.google.com/trends/explore?date=all&q=pytorch,...
But to be fair, it was kind of obvious around ~2023 without having to look at metrics/data, you just had to look at what the researchers publishing novel research used.
Any similar articles that are a bit more up to date, maybe even for 2025?
Unless you’re working at Google, then maybe you use JAX.
In my experience, a lot of it comes down to the community and the ease of use. Debugging in PyTorch feels way more intuitive, and I wonder if that’s why so many people are gravitating toward it. I’ve seen countless tutorials and workshops pop up for PyTorch compared to TensorFlow recently, which speaks volumes to how quickly things can change.
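To make the debugging point concrete, here is a minimal sketch (module name and shapes are illustrative, not from the thread) of why PyTorch's eager execution plays well with ordinary Python tooling:

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        h = self.fc(x)
        # Eager execution: h is a concrete tensor at this point, so a plain
        # print() or a pdb breakpoint works mid-forward.
        print(h.shape, h.mean().item())
        return torch.relu(h)

net = TinyNet()
out = net(torch.randn(3, 4))
```

In a graph-based framework, the forward pass only defines symbolic nodes, so inspecting intermediate values requires framework-specific machinery instead of a print statement.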
But then again, TensorFlow's got its enterprise backing, and I can't help but think about the implications of that. How long can PyTorch ride this wave before it runs into pressure from industry demands? And as we look toward 2025, do you think we'll see a third contender emerge, or will it continue to be this two-horse race?
PyTorch has a huge collection of companies, organizations, and other entities backing it; it's not gonna suddenly disappear anytime soon, that much is clear. Take a look at https://pytorch.org/foundation/ for a sample.
All the graph building and session running was far too complex: there was too much global state, and variable sharing was convoluted, built on naming conventions, variable scopes, name scopes, and so on.
It was a decent first attempt, but that design simply didn't work well for the quick prototyping, iterating, and debugging that's crucial in research.
PyTorch was much closer to just writing straightforward NumPy code. TensorFlow 2 then tried to catch up with eager mode, but under the hood it was still a graph: tracing often broke, and you had to write your code very carefully and within its limitations.
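A minimal sketch of the contrast the comment describes (toy computation, illustrative only; the TF1 style is shown via the compat shim so it still runs on modern TensorFlow):

```python
# TensorFlow 1.x style: build a symbolic graph first, then run it in a session.
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=(None, 4))   # symbolic input, no value yet
y = tf.reduce_sum(x * 2.0)                        # another node in the graph
with tf.Session() as sess:
    # Values only exist inside a session run, fed through feed_dict.
    print(sess.run(y, feed_dict={x: [[1., 2., 3., 4.]]}))

# PyTorch style: operations execute immediately, like NumPy.
import torch
xt = torch.tensor([[1., 2., 3., 4.]])
yt = (xt * 2.0).sum()                             # concrete tensor right away
print(yt.item())
```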
In the end, PyTorch also developed proper production and serving tools as well as graph compilation, so now there's basically no reason to go to TensorFlow. Not even Google researchers use it (they use JAX). I guess some industries still use it, but at some point I expect Google to shut down TF and focus on the JAX ecosystem, with some kind of conversion tools for TF.
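For reference, the graph-compilation path mentioned here looks roughly like this in PyTorch 2.x (toy function, illustrative only): eager code stays the default, and compilation is opt-in.

```python
import torch

def f(x):
    # A toy function; torch.compile captures it into a graph and can fuse ops.
    return torch.sin(x) ** 2 + torch.cos(x) ** 2

f_compiled = torch.compile(f)   # opt-in graph compilation, eager remains the fallback

x = torch.randn(1024)
torch.testing.assert_close(f(x), f_compiled(x))  # same results either way
```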
I think the only thing that could have saved TensorFlow at that point would have been some enormous performance boost that only worked with their computation model. I'm assuming Google's plan was to make it easy to run the same TensorFlow code on GPUs and TPUs, and then swoop in with TPUs that massively outperformed GPUs (at least on a performance-per-dollar basis). But that never really happened.
Nowadays it looks like YOLO absolutely dominates this segment. Can any data scientists chime in?
But the exciting new research is moving beyond the narrow task of segmentation. It's not just about new models that get better scores, but about building larger multimodal systems, broader task definitions, etc.
I think people have continued to work on it. There's no single lab or developer behind it; the metrics for comparison usually focus on the speed/mAP plane.
One nice thing is that even with modest hardware, it’s low enough latency to process video in real time.
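A sketch of what that per-frame video loop typically looks like, assuming the `ultralytics` package and a small YOLOv8 checkpoint (both assumptions on my part, not from the thread):

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # nano variant: fast enough on modest hardware
cap = cv2.VideoCapture(0)             # webcam; a video file path also works

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)        # one detection pass per frame
    cv2.imshow("detections", results[0].plot())  # frame annotated with boxes
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

The nano-sized checkpoint is the usual choice when latency matters more than the last few points of accuracy.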
Has anyone else had similar experiences?
Just goes to show that even when you've got everything going for you: a perfect team full of nice people, infinite resources (TPUs, anyone?), perfect marketing, your own people will still split off and take over the market.
Second place seems to always win the market
I gave MXNet a bit of an outsized score in hindsight, but outside of that I think I got things mostly right.
https://source.coveo.com/2018/08/14/deep-learning-showdown/