GPU Hot: Dashboard for Monitoring NVIDIA GPUs on Remote Servers

Posted Oct 6, 2025 at 9:04 AM EDT · Last activity about 1 month ago

github-trending
83 points
48 comments

Mood: calm
Sentiment: mixed
Category: other
Key topics: GPU Monitoring, Nvidia, System Administration
Debate intensity: 60/100

The post introduces 'GPU Hot', a dashboard for monitoring NVIDIA GPUs on remote servers, sparking a discussion on its usefulness, design, and potential alternatives.

Snapshot generated from the HN discussion

Discussion Activity

Very active discussion

First comment: N/A
Peak period: 45 comments (Day 4)
Avg per period: 12

Comment distribution: 48 data points (based on 48 loaded comments)

Key moments

  1. Story posted: Oct 6, 2025 at 9:04 AM EDT (about 2 months ago)
  2. First comment: Oct 6, 2025 at 9:04 AM EDT (0s after posting)
  3. Peak activity: 45 comments in Day 4, the hottest window of the conversation
  4. Latest activity: Oct 14, 2025 at 4:59 PM EDT (about 1 month ago)


Discussion (48 comments)
github-trending (Author)
about 2 months ago
1 reply
Hi everyone, I just built a GPU dashboard to check the utilization of NVIDIA cards directly in your browser. It also works with multiple GPUs. The idea is to have real-time metrics from a remote GPU server instead of running nvidia-smi. Let me know if you try it out!
ohong
about 1 month ago
Hey, I really like your project! And I'm kind of amused by the comments here... Would love to chat more about it if you're down. My contact's in my profile :)
heipei
about 2 months ago
3 replies
Obligatory reminder that "GPU utilisation" as a percentage is a meaningless metric and does not tell you how well your GPU is utilised.

Does not change the usefulness of this dashboard, just wanted to point it out.

yfontana
about 2 months ago
4 replies
Properly measuring "GPU load" is something I've been wondering about, as an architect who's had to deploy ML/DL models but is still relatively new at it. With CPU workloads you can generally tell from %CPU, %Mem and IOs how much load your system is under. But with GPUs I'm not sure how you can tell, other than by just measuring your model execution times. That makes it hard to get an idea of whether upgrading to a stronger GPU would help, and by how much. Are there established ways of doing this?
hatthew
about 2 months ago
It's harder than measuring CPU load, and depends a lot on context. For example, often 90% of a GPU's available flops are exclusively for low-precision matrix multiply-add operations. If you're doing full precision multiply-add operations at full speed, do you count that as 10% or 100% load? If you're doing lots of small operations and your warps are only 50% full, do you count that as 50% or 100% load? Unfortunately, there isn't really a shortcut to understanding how a GPU works and knowing how you're using it.
sailingparrot
about 2 months ago
For kernel-level performance tuning you can use the occupancy calculator, as pointed out by jplusequalt, or you can profile your kernel with Nsight Compute, which will give you a ton of info.

But for model-wide performance, you basically have to come up with your own calculation to estimate the FLOPs required by your model and based on that figure out how well your model is maxing out the GPU capabilities (MFU/HFU).

Here is a more in-depth example on how you might do this: https://github.com/stas00/ml-engineering/tree/master/trainin...

jplusequalt
about 2 months ago
The CUDA toolkit comes with an occupancy calculator that can help you determine, based on your kernel launch parameters, how busy your GPU will potentially be.

For more information: https://docs.nvidia.com/cuda/cuda-c-programming-guide/#multi...

villgax
about 2 months ago
You need to profile them. Nsight is one option; even torch does flamegraphs.
Scene_Cast2
about 2 months ago
2 replies
@dang sorry for the meta-comment, but why is yfontana's comment dead? I found it pretty insightful.
kergonath
about 2 months ago
FYI, adding @ before a user name does nothing besides looking terrible, and AFAIK dang does not get a notification when he's mentioned. If you want to contact him, the best way is to send an email to hn@ycombinator.com.
yfontana
about 2 months ago
I think I was shadow-banned because my very first comment on the site was slightly snarky, and have now been unbanned.
huevosabio
about 2 months ago
2 replies
how so?
sailingparrot
about 2 months ago
2 replies
"Utilization" tells you the percentage of your GPU's SM that currently have at least one thread assigned to them.

It does not at all take into count how much that thread is actually using the core to it's capacity.

So if e.g. your thread is locked waiting on some data from another GPU (NCCL) and actually doing nothing, it will still show 100% utilisation. A good way to realize that is when a NCCL call timeout after 30 minutes for some reason, but you can see all your GPUs (except the one that cause the failure) were at 100% util, even though they clearly did nothing but wait.

Another example are operation with low compute intensity: Say you want to add 1 to every element in a very large tensor, you effectively have to transfer every element (let's say FP8, so 1 byte) from the HBM to the l2 memory, which is very slow operation, to then simply do an add, which is extremely fast. It takes about ~1000x more time to move that byte to L2 than it takes to actually do the add, so in effect your "true" utilization is ~0.2%, but nvidia-smi (and this tool) will show 100% for the entire duration of that add.

Sadly there isn't a great general way to monitor "true" utilization during training, generally you have to come up with an estimate of how many flops your model requires per pass, look at the time it takes to do said pass, and compare the flops/sec you get to Nvidia's spec sheet. If you get around 60% of theoretical flops for a typical transformer LLM training you are basically at max utilization.
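
A minimal sketch of that back-of-the-envelope MFU estimate, assuming the usual ~6 FLOPs per parameter per trained token for a decoder-only transformer; every number below is a placeholder you would replace with your own parameter count, measured throughput, and the peak-FLOPs figure from your GPU's spec sheet:

    # Rough MFU (model FLOPs utilization) estimate for transformer training.
    # All concrete numbers are made-up placeholders.
    params = 7e9                 # model parameter count
    tokens_per_second = 90_000   # measured training throughput across all GPUs
    peak_flops = 989e12          # per-GPU peak FLOPs for your precision, from the spec sheet
    num_gpus = 8

    # Common approximation: ~6 FLOPs per parameter per token (forward + backward).
    model_flops_per_second = 6 * params * tokens_per_second
    mfu = model_flops_per_second / (peak_flops * num_gpus)
    print(f"MFU ~= {mfu:.1%}")   # ~60% of peak is already very good for LLM training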

aprdm
about 2 months ago
3 replies
What about energy consumption as a proxy for it?
villgax
about 2 months ago
Not a good estimator, but still roughly useful; ambient temps/neighboring cards alone might influence this more than the workload does.
sailingparrot
about 2 months ago
Definitely a better high-level metric than nvidia-smi, and probably fine if you just want to get a very coarse idea of whether or not you are using the GPUs reasonably at all.

But when you get to the point where you care about a few percentage points of utilisation, it's just not reliable enough, as many things can impact energy consumption both ways. E.g. we had a case where the GPU cluster we were using wasn't being cooled well enough, so you would gradually see power draw getting lower and lower as the GPUs throttled themselves to not overheat.

You can also find cases where energy consumption is high but MFU/HFU isn't, like memory-intensive workloads.

JackYoustra
about 2 months ago
IIRC most of the energy comes from memory IO, not arithmetic, so it's still not great. A better direction, though.
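
A minimal sketch of the energy-as-a-proxy idea from this sub-thread, assuming nvidia-smi is on the PATH; it polls power draw against the configured power limit alongside the reported utilization. The 2-second interval is arbitrary, and, as noted above, thermal throttling can drag the reading down for reasons unrelated to your workload:

    import subprocess
    import time

    QUERY = ["nvidia-smi",
             "--query-gpu=index,power.draw,power.limit,utilization.gpu",
             "--format=csv,noheader,nounits"]

    def sample():
        # Each output line looks like: "<index>, <watts drawn>, <watt limit>, <util %>"
        for line in subprocess.check_output(QUERY, text=True).strip().splitlines():
            idx, draw, limit, util = [v.strip() for v in line.split(",")]
            pct_of_limit = 100 * float(draw) / float(limit)
            print(f"GPU {idx}: {draw} W / {limit} W "
                  f"({pct_of_limit:.0f}% of power limit), util {util}%")

    while True:
        sample()
        time.sleep(2)  # coarse polling; sagging draw can indicate throttling, not idle GPUs
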
huevosabio
about 2 months ago
This is a great explanation, thank you!
porridgeraisin
about 2 months ago
Utilisation is counted by the OS; it's not exposed as a performance counter by the hardware. Thus, it's limited by the level of abstraction presented by the hardware.

It's useless on CPUs as well, just to a much, much lesser extent, to the point of it actually being useful.

Basically, the OS sees the CPU as being composed of multiple cores; that's the level of abstraction. Thus, the OS calculates "portion of the last second where at least one instruction was sent to this core" for each core and then reports it. The single-number version is an average of each core's value.

On the other hand, the OS cannot calculate stuff inside each core, because the CPU hides that as part of its abstraction. That is, you cannot know "I$ utilisation", "FPU utilisation", etc.

In the GPU, the OS doesn't even see each SM (streaming multiprocessor, loosely analogous to a CPU core). It just sees the whole GPU as one black-box abstraction. Thus, it calculates utilisation as "portion of the last second where at least one kernel was executing on the whole GPU". It cannot calculate intra-GPU util at all. So one kernel executing on one SM looks the same to the OS as that kernel executing on tens of SMs!

This is the crux of the issue.

With performance counters (perf for CPU, or nsight compute for GPU), lots of stuff visible only inside the hardware abstraction can be calculated (SM util, warp occupancy, tensor util, etc)

The question, then, is why doesn't the GPU schedule stuff onto each SM in the OS/driver, instead of doing it in a microcontroller in the hardware itself, on the other side of the interface?

Well, I think it's for efficiency reasons, and also so that Nvidia has more freedom to change it without compat issues from being tied to the OS, and similar reasons. If that were the case, however, the OS could calculate util for each SM and then average it, giving you more accurate values: the case with the kernel running on 1 SM would report a smaller util than the case with the kernel executing on 15 SMs.

IME, measuring with Nsight Compute adds anywhere from 5% to 30% performance overhead, so if that's OK for you, you can enable it and get more useful measurements.
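
To see the SM-blindness described above in action, here is a rough sketch, assuming PyTorch with CUDA and the nvidia-ml-py bindings (imported as pynvml) are available: it keeps either a small or a large matmul in flight while a background thread samples the same counter nvidia-smi reports as utilization. On a reasonably large GPU both cases will typically read close to 100%, even though the small matmul launches far too few blocks to keep most of the SMs busy:

    import threading
    import time

    import pynvml  # pip install nvidia-ml-py
    import torch

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    def run_case(name, size, seconds=3.0):
        x = torch.randn(size, size, device="cuda")
        samples, stop = [], threading.Event()

        def sampler():
            # the same counter nvidia-smi shows as "utilization.gpu"
            while not stop.is_set():
                samples.append(pynvml.nvmlDeviceGetUtilizationRates(handle).gpu)
                time.sleep(0.1)

        t = threading.Thread(target=sampler)
        t.start()
        end = time.time() + seconds
        while time.time() < end:
            y = x @ x              # kernels queue back-to-back, so the GPU rarely sits idle
        torch.cuda.synchronize()
        stop.set()
        t.join()
        print(f"{name}: reported utilization up to {max(samples)}%")

    run_case("small 1024x1024 matmul (only some SMs busy)", 1024)
    run_case("large 8192x8192 matmul (whole GPU busy)", 8192)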

John23832
about 2 months ago
3 replies
The "why not use" section should probably include nvtop?
w-m
about 2 months ago
1 reply
Possibly also nvitop, which is a different tool from nvtop: https://github.com/XuehaiPan/nvitop
github-trending (Author)
about 2 months ago
nvitop is actually a super cool project
sirukinx
about 2 months ago
Fair, but I believe that this is intended for a web browser rather than a terminal.
phyalow
about 2 months ago
Absolutely.
peterdsharpe
about 2 months ago
2 replies
What is the benefit of this over `watch nvidia-smi`, possibly prepended with an `ssh` in the case of a remote server?
xtreme
about 2 months ago
"nvidia-smi -l <#num seconds>" works even better.
github-trending (Author)
about 2 months ago
Nothing super special, to be honest. It's just a quick way for me to take a look at a couple of GPU boxes from the browser. Sometimes I check it from the iPad too.
onefortree
about 2 months ago
1 reply
This is awesome! Tested it out while running some Plex encoding and everything worked as expected!

I did notice that nvidia-smi shows the process name as plex-transcoding but gpu-hot shows [Not Found]. Not sure if that is where the process name is supposed to go.

github-trending (Author)
about 2 months ago
Thanks a lot!! Yes, I have to check the names.
observationist
about 2 months ago
2 replies
The AI/vibe coded "purple" color scheme is a meme at this point - might want to tweak the look and feel to not be so on the nose, but it's otherwise a good dashboard.
ionwake
about 2 months ago
nah I like it
moomoo11
about 2 months ago
Not gonna check the code, I have other things to do, but IIRC Tailwind has some purplish color by default. And it is pretty common because of that.

I think AI vibe-codes that because it's probably seen that default so much.

villgax
about 2 months ago
1 reply
sudo apt install nvtop

// solves everything that the above container claims to do lol

github-trending (Author)
about 2 months ago
True, nvtop is super useful, but sometimes I want to be able to take a quick look from the browser.
guluarte
about 2 months ago
1 reply
Another option is to use Prometheus + Grafana: https://docs.nvidia.com/datacenter/cloud-native/gpu-telemetr...
github-trending (Author)
about 2 months ago
That's a solid solution, but you have to configure Prometheus/Grafana etc. Still, yes, Grafana rocks.

Check out Netdata as well, amazing project.

jedbrooke
about 2 months ago
1 reply
I'm skeptical of "no ssh" being a benefit. I'd rather have one port open to the battle-tested ssh process (which I probably have already anyway) than open a port to some random application.

I suppose it's trivial to proxy an HTTP port over ssh, though, so that would seem like a good solution.

github-trending (Author)
about 2 months ago
1 reply
I mean, I don't have to ssh into my local GPU server every time I want to have a quick look at the GPUs.
jedbrooke
about 2 months ago
That's true, this would be pretty convenient for local environments.
nisten
about 2 months ago
1 reply
Half-readable color scheme... random Python and JavaScript mixed in, ships with 2 Python CVEs out of the box, out of 5 total dependencies... yep, it checks out bois... certified infested slop

  python-socketio==5.8.0: 1 CVE (CVE-2025-61765); Remote Code Execution via malicious pickle deserialization in multi-server setups.
  eventlet==0.33.3: 1 CVE (CVE-2025-58068); HTTP request smuggling from improper trailer handling.

And then economists wonder why none of these people are getting jobs...
pixl97
about 2 months ago
I mean, the python-socketio one is from a few days ago and likely doesn't affect this package (it's not using message queues, right?).

Eventlet 0.33 is ancient; no idea why they would use that.

With that said, most people should have some kind of SCA to ensure they're not using ancient packages. Conversely, picking up a package the day it's released has bitten a lot of people when the repository in question gets pwned.

andrewg1bbs
about 2 months ago
This is really cool, but I tend to prefer NVtop for now.
Avlin67
about 2 months ago
Why not push metrics to Grafana?
iJohnDoe
about 2 months ago
Some negativity in the comments here.

I think it’s super cool. Clean design. Great for your local self-hosted system or one of your local company systems in the office.

If you have a fleet of GPUs then maybe use your SSH CLI. This is fun and cool looking though.

alfalfasprout
about 2 months ago
TBH this seems useful only for a very select niche.

If you're a company and you have several GPU machines in a cluster, then this is kinda useless because you'd have to go to each container or node to view the dashboard.

Sure, there's a cost to setting up OpenTelemetry + whatever storage/viz backend, but once it's set up you can actually do alerting, historical views, analysis, etc. easily.

Cieric
about 2 months ago
This looks neat and would probably be cool to run on one of my passive info screens. But until it supports more than just Nvidia, I'll have to stick with nvtop. It might be a good idea to pull the theme out into a file so it's all swappable too (assuming you haven't; I can't look at the code right now).
Havoc
about 2 months ago
Oh, that's neat. I've been looking for a way to see VRAM temps on Linux.
huevosabio
about 2 months ago
In app.py it seems like you call nvidia-smi as a subprocess and then scrape that. Are there no bindings to do that directly?
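
For what it's worth, NVML does ship official Python bindings (the nvidia-ml-py package, imported as pynvml) that expose most of what nvidia-smi prints without spawning a subprocess. A minimal sketch of reading the same per-GPU fields that way, not how gpu-hot itself does it:

    import pynvml  # pip install nvidia-ml-py

    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        h = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(h)
        util = pynvml.nvmlDeviceGetUtilizationRates(h)   # .gpu / .memory, in percent
        mem = pynvml.nvmlDeviceGetMemoryInfo(h)          # .total / .used / .free, in bytes
        temp = pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
        power = pynvml.nvmlDeviceGetPowerUsage(h) / 1000  # NVML reports milliwatts
        print(f"GPU {i} {name}: util {util.gpu}%, "
              f"mem {mem.used / 2**30:.1f}/{mem.total / 2**30:.1f} GiB, "
              f"{temp} C, {power:.0f} W")
    pynvml.nvmlShutdown()
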
View full discussion on Hacker News
ID: 45490957 · Type: story · Last synced: 11/20/2025, 4:53:34 PM

Not Hacker News!

AI-observed conversations & context

Daily AI-observed summaries, trends, and audience signals pulled from Hacker News so you can see the conversation before it hits your feed.


Explore

  • Home
  • Hiring
  • Products
  • Companies
  • Discussion
  • Q&A

Resources

  • Visit Hacker News
  • HN API
  • Modal cronjobs
  • Meta Llama

Briefings

Inbox recaps on the loudest debates & under-the-radar launches.

Connect

© 2025 Not Hacker News! — independent Hacker News companion.

Not affiliated with Hacker News or Y Combinator. We simply enrich the public API with analytics.