Show HN: We built an AI tool for working with massive LLM chat log datasets
Today we’re launching Hyperparam, a browser-native app for exploring and transforming multi-gigabyte datasets in real time. It combines a fast UI that can stream huge unstructured datasets with an army of AI agents that can score, label, filter, and categorize them. Now you can actually make sense of AI-scale data instead of drowning in it.
Example: In the chat, ask Hyperparam’s AI agent to score every conversation in a 100K-row dataset for sycophancy, filter out the worst responses, adjust your prompts, regenerate, and export V2 of your dataset. It all runs in one browser tab, with no waiting and no lag.
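For a rough sense of what that workflow replaces, here is a hypothetical sketch of the manual equivalent in Python: score every chat for sycophancy, drop the worst rows, and write out a v2 file. The judge below is a toy heuristic stand-in, not Hyperparam's scoring model, and the file names are made up for illustration.

    # Hypothetical manual equivalent of the agent workflow described above:
    # score each conversation, filter the most sycophantic rows, export a v2 dataset.
    # judge_sycophancy is a toy stand-in, NOT Hyperparam's actual scoring model.
    import json

    def judge_sycophancy(messages: list[dict]) -> float:
        """Toy judge: fraction of assistant turns that open with flattery.
        In practice you would swap in a real LLM-judge call here."""
        openers = ("great question", "you're absolutely right", "what a wonderful")
        turns = [m["content"].lower() for m in messages if m["role"] == "assistant"]
        if not turns:
            return 0.0
        return sum(any(t.startswith(o) for o in openers) for t in turns) / len(turns)

    def build_v2(in_path: str, out_path: str, threshold: float = 0.5) -> None:
        """Read a JSONL chat dataset, score every row, keep only rows below threshold."""
        with open(in_path) as src, open(out_path, "w") as dst:
            for line in src:                       # one conversation per line
                row = json.loads(line)
                row["sycophancy"] = judge_sycophancy(row["messages"])
                if row["sycophancy"] < threshold:  # drop the most sycophantic rows
                    dst.write(json.dumps(row) + "\n")

    if __name__ == "__main__":
        build_v2("chats_v1.jsonl", "chats_v2.jsonl")  # hypothetical file names

Hyperparam's pitch is that the agent runs this kind of loop for you, over the full dataset, without leaving the browser tab.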
It’s free while it’s in beta if you want to try it on your own data.
No human has the patience to sift through that much text, so we need better tools to understand and analyze it. That's why I built Hyperparam: the first tool designed specifically for working with LLM data at scale. No one else seemed to be solving this problem.