WiFi DensePose (github.com)
For example, their diagram has several CSI sources. Does the user need 3 or more CSI sources?
I'm capable of pointing an LLM at a GitHub repository, what I want is real documentation written by a human to address users' needs, not emoji-filled docs that read like ad copy.
from wifi_densepose import WiFiDensePose
# Initialize with default configuration
system = WiFiDensePose()
# Start pose estimation
system.start()
# Get latest pose data
poses = system.get_latest_poses()
print(f"Detected {len(poses)} persons")
# Stop the system
system.stop()
AI solves the Emperor’s Nose problem: you have no data whatsoever going in and you estimate the result!

After a bit more browsing, I found:
# Hardware Settings
WIFI_INTERFACE=wlan0
CSI_BUFFER_SIZE=1000
HARDWARE_POLLING_INTERVAL=0.1
So maybe it uses one WiFi interface to collect CSI from multiple BSSIDs? Does 802.11 support this well? (I assume you can get one-way CSI data, single-in-multiple-out, from a beacon if you really want to.) Does commodity hardware support this? Do the drivers support this?

But I’d be rather impressed if that’s all that’s needed to get poses without any calibration for the actual positions of all involved devices, especially if the CSI available is all of this form. This whole repo smells a bit like it’s almost 100% vibes and no content.
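To make the single-interface question concrete, here is a rough sketch (mine, not from the repo) of what beacon-derived, one-way SIMO CSI looks like per packet and how the .env settings above might plausibly map onto a polling loop. Everything here is an assumption: read_csi_sample is a hypothetical hook, and the antenna/subcarrier counts are typical values, not anything the repo documents.

import os
import time
import numpy as np

# Values from the .env excerpt above; the defaults are my guesses.
IFACE = os.environ.get("WIFI_INTERFACE", "wlan0")
BUFFER_SIZE = int(os.environ.get("CSI_BUFFER_SIZE", "1000"))
POLL_INTERVAL = float(os.environ.get("HARDWARE_POLLING_INTERVAL", "0.1"))

N_RX = 3      # receive antennas on the monitoring device (assumed)
N_TX = 1      # a beacon gives you one transmit chain, hence SIMO
N_SUB = 56    # usable OFDM subcarriers for 20 MHz 802.11n (HT20)

def read_csi_sample(iface: str) -> np.ndarray:
    """Hypothetical driver hook: return one CSI snapshot as a complex
    array of shape (N_RX, N_TX, N_SUB). Real CSI extraction needs a
    patched driver/firmware (e.g. the Intel 5300 or Atheros CSI tools,
    or Nexmon on Broadcom); stock commodity drivers do not expose it."""
    # Placeholder: random complex gains standing in for real measurements.
    return (np.random.randn(N_RX, N_TX, N_SUB)
            + 1j * np.random.randn(N_RX, N_TX, N_SUB))

# Ring buffer of the most recent CSI_BUFFER_SIZE snapshots.
buffer = []
for _ in range(5):  # a few iterations, just for illustration
    sample = read_csi_sample(IFACE)
    buffer.append(sample)
    if len(buffer) > BUFFER_SIZE:
        buffer.pop(0)
    time.sleep(POLL_INTERVAL)

amplitude = np.abs(buffer[-1])   # what most WiFi-sensing pipelines feed downstream
print(amplitude.shape)           # (3, 1, 56): one TX chain, so purely one-way CSI

If this is really all the repo has to work with, everything downstream (the DensePose-style pose mapping) would have to come from a roughly 3x1x56 complex tensor per packet, which is far less information than the multi-transmitter, known-geometry setups in the WiFi-sensing literature. Whether any of this is actually wired up in the repo is exactly the open question; the .env names are the only hint.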
Wasn’t 802.11bf supposed to make real channel state information available for vendor-neutral use? What happened to it?
> Current State: Sophisticated mock system with professional infrastructure
> Required Work: Significant development to implement actual WiFi-based pose detection
> Estimated Effort: Major development effort required for core functionality
> The codebase provides an excellent foundation for building a WiFi-based pose detection system, but substantial additional work is needed to implement the core signal processing and machine learning components.
https://github.com/ruvnet/wifi-densepose/tree/main/docs/revi...
Over 1k stars. Has a single person tried running it? Even the author?
‘One of the most captivating aspects of AI models like GPT-4 is their ability to "hallucinate" – generating completely new ideas and concepts that go beyond mere data processing. This capability underscores AI's potential to create, not just analyze.’
So now the question is: Does this repo actually contain anything useful at all? Or is it just one big AI vibecoding project that amassed 1.3K stars based on sounding really amazing from the README? I’m leaning toward the latter.
There are no usable instructions for actually trying this out, as far as I can see. It does claim to have a section for deploying and scaling with Kubernetes, which is hilarious for something that is supposedly working with WiFi routers.
I’m continually amazed at how much leverage people are getting out of letting vibecoding tools run absolutely wild and then posting the results to GitHub. I wouldn’t be surprised if the author was leveraging this in job interviews, based on the almost certainly correct assumption that many interviewers will assume it’s real without checking anything. This kind of trick won’t work at a real company or with a serious hiring manager, but if you can impress a recruiter and get in front of a checked-out hiring manager who just wants to build their empire, this kind of thing can work. For a while.