Curated LLM Prompts for Debugging with Runtime DOM Snapshots
Posted Sep 2, 2025 (4 months ago) · Source: github.com
Key topics: Artificial Intelligence, Debugging Tools, LLM Prompts
A GitHub repository collects curated LLM prompts for debugging with runtime DOM snapshots, making it easier to apply large language models to debugging tasks.
Unlike static code analysis, this approach captures what users actually see: computed styles, real DOM state, and actual layout calculations.
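To make the idea concrete, here is a minimal browser-side sketch (my own, not from the repo) that walks the DOM and records a subset of computed styles plus layout boxes as JSON suitable for pasting into a prompt; the `NodeSnapshot` shape and the tracked style list are assumptions.

```ts
// Record what the user actually sees: a subset of computed styles and
// the layout box for each element, serialized as JSON for an LLM prompt.
interface NodeSnapshot {
  selector: string;
  rect: { x: number; y: number; width: number; height: number };
  styles: Record<string, string>;
  children: NodeSnapshot[];
}

// Style properties most often implicated in layout/visibility bugs.
const TRACKED_STYLES = ["display", "position", "overflow", "z-index", "visibility"];

function snapshot(el: Element): NodeSnapshot {
  const rect = el.getBoundingClientRect();
  const computed = window.getComputedStyle(el);
  const styles: Record<string, string> = {};
  for (const prop of TRACKED_STYLES) {
    styles[prop] = computed.getPropertyValue(prop);
  }
  return {
    selector: el.tagName.toLowerCase() + (el.id ? `#${el.id}` : ""),
    rect: { x: rect.x, y: rect.y, width: rect.width, height: rect.height },
    styles,
    children: Array.from(el.children).map(snapshot),
  };
}

// Usage: capture the whole page and embed the JSON in a debugging prompt.
console.log(JSON.stringify(snapshot(document.body), null, 2));
```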
The repo includes battle-tested prompts for:
- Bug analysis and form debugging
- UX audits and conversion optimization
- Test automation and code review
- Integration examples with Playwright/Cypress (see the sketch below)
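As one illustration of the Playwright integration pattern, here is a minimal sketch (not taken from the repo) that captures runtime state for a single element and embeds it in a debugging prompt; the page URL, the `#submit-button` selector, and the `askLlm` client mentioned in the comments are all hypothetical.

```ts
import { chromium } from "playwright";

async function main() {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com/checkout"); // hypothetical page under test

  // Capture computed styles and layout for the problem element --
  // the runtime state a static analyzer never sees.
  const snapshot = await page.evaluate(() => {
    const el = document.querySelector("#submit-button"); // hypothetical selector
    if (!el) return null;
    const rect = el.getBoundingClientRect();
    const styles = window.getComputedStyle(el);
    return {
      rect: { x: rect.x, y: rect.y, width: rect.width, height: rect.height },
      display: styles.display,
      visibility: styles.visibility,
      pointerEvents: styles.pointerEvents,
    };
  });

  const prompt = `The submit button is unclickable for some users.
Runtime DOM snapshot: ${JSON.stringify(snapshot)}
Identify the likely cause and suggest a specific CSS or markup fix.`;
  console.log(prompt); // hypothetical: pass to askLlm(prompt) instead

  await browser.close();
}

main();
```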
Each prompt is designed to elicit specific, actionable fixes rather than generic advice.
Would love feedback from the community on which prompts work best for your workflows.