Sniffly – Claude Code Analytics Dashboard
Posted 4 months ago · Active 4 months ago
github.com · Tech · story
calm · mixed
Debate: 60/100
Key topics
AI-Assisted Coding
Code Analytics
Claude Code
The Sniffly dashboard analyzes Claude Code usage data, sparking discussions about AI-assisted coding, code quality, and the implications of AI-generated code.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 1m after posting
Peak period: 5 comments in 6-8h
Avg / period: 2.1
Comment distribution: 19 data points (based on 19 loaded comments)
Key moments
- 01 Story posted: Aug 31, 2025 at 5:13 AM EDT (4 months ago)
- 02 First comment: Aug 31, 2025 at 5:14 AM EDT (1m after posting)
- 03 Peak activity: 5 comments in 6-8h (hottest window of the conversation)
- 04 Latest activity: Sep 1, 2025 at 4:53 AM EDT (4 months ago)
ID: 45081711 · Type: story · Last synced: 11/20/2025, 4:38:28 PM
Want the full context? Read the primary article or dive into the live Hacker News thread.
For example, error type distribution or intervention rates: these can tell me how efficiently I'm using Claude.
But currently the error types are a bit too broad, and I haven't discovered much yet.
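A minimal sketch of how such an error-type tally could be computed, assuming Claude Code keeps per-session JSONL transcripts under ~/.claude/projects/ and that failed tool results carry an is_error flag; the path and field names here are illustrative guesses, not Sniffly's actual schema:

# Sketch only: tally error-ish tool results from Claude Code's local JSONL
# transcripts. LOG_DIR, "tool_name", and "is_error" are assumptions for
# illustration, not Sniffly's real parsing logic.
import json
from collections import Counter
from pathlib import Path

LOG_DIR = Path.home() / ".claude" / "projects"  # assumed transcript location


def error_type_distribution(log_dir: Path = LOG_DIR) -> Counter:
    """Count error tool results per (assumed) tool name across all sessions."""
    counts: Counter = Counter()
    for log_file in log_dir.rglob("*.jsonl"):
        for line in log_file.read_text(errors="ignore").splitlines():
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip malformed lines
            content = (entry.get("message") or {}).get("content")
            if not isinstance(content, list):
                continue
            for block in content:
                if (
                    isinstance(block, dict)
                    and block.get("type") == "tool_result"
                    and block.get("is_error")
                ):
                    counts[entry.get("tool_name", "unknown")] += 1  # assumed field
    return counts


if __name__ == "__main__":
    for tool, n in error_type_distribution().most_common():
        print(f"{tool}: {n}")

Run against your own transcripts, this prints a per-tool error count, which is roughly the kind of distribution the comment is asking for.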
Who cares who actually "typed" it? Shit code will be shit code regardless of author; there is just more of it now than before, just like there was more 10 years ago than 20 years ago, because the barrier to getting started keeps getting lower. Hopefully it'll be a net positive, just like previous times: it's never been easier to write code to solve your own specific personal problems.
Developers who have strict requirements on the code they "produce" will make the LLM fit those requirements when needed, and "sloppy" developers will continue to publish spaghetti code, regardless of LLMs' existence.
I don't get the whole "vibe-coding" thing, because clearly most of the code LLMs produce is really horrible, but good prompting, strict reviews, and not accepting bad changes just to move forward let you mold the code into something acceptable.
(I have not looked at this specific project's code, so I'm not sure this applies here; it's more of a general view, obviously.)
But now the signals are much harder to read. People post polished-looking libraries and tools with boastful, convincing claims about what they can do for you, yet they didn't even bother to check whether they work at all, just wasting everyone's time.
It's a weird new world.
If anything, the smelly READMEs that all look like Claude wrote them are a good tell to check deeper.
On the whole meta-discussion thing: I have been reading HN for at least 15 years. Posts with lots of comments are meta discussions; HN is not really a place to discuss the technical details of a project.
However, not all code requires the same quality standards (think perfectionism). The tools in this project are like blog posts written by an individual that haven’t been reviewed by others, while an ASF open-source project is more like a peer-reviewed article. I believe both types of projects are valid.
Moreover, this kind of project is like a cache. If no one else writes it, I might want to quickly vibe-code it myself. In fact, without vibe coding, I might not even do it at all due to time constraints. It's totally reasonable to treat this project as a rough draft of an idea. Why should we apply the same standards to every project?
In fact, their approach to using vibe coding in production comes with many restrictions and requirements. For example:
1. Acting as Claude's product manager (e.g., asking the right questions)
2. Using Claude to implement low-dependency leaf nodes, rather than core infrastructure systems that are widely relied upon
3. Verifiability (e.g., testing)
BTW, their argument for the necessity of vibe coding does make some sense:
As AI capabilities grow exponentially, the traditional method of reviewing code line by line won’t scale. We need to find new ways to validate and manage code safely in order to harness this exponential advantage.
Nobody should publish slop code, AI-assisted or not, tbh.