Not Hacker News!
AI companion for Hacker News
Nov 24, 2025 at 3:11 AM EST

Show HN: Norma – build good datasets (using an objective)

noelfranthomas · 1 point · 0 comments

Mood: excited
Sentiment: positive
Category: startup_launch
Key topics: Data Platform, Machine Learning, Data Engineering, Data Optimization, AI Assistant

My team has worked for F500s, startups, and everything in between.

In every case, we found it almost impossible to assemble an ideal dataset for training models. In real-world systems, the information you actually need is scattered across 30–300+ tables stored in different warehouses, Parquet files, CSVs, and legacy DBs that nobody fully understands anymore.

We realized the real job isn’t ETL (too wide) or feature engineering (too narrow); it’s constructing the ideal representation of the problem so downstream models can actually learn something meaningful.

So we built Norma, an optimization-first data platform. It does the things every ML team wishes their stack would do:

1. Unity Catalog integration that works out of the box: connect a warehouse and instantly browse tables with lineage, schemas, and metadata.
2. A unified SQL/Python pipeline engine: both languages execute in the same memory buffer (via DuckDB), so no more glue code or brittle data hops (see the first sketch after this list).
3. An AI assistant for transformations: ask for a feature, a join, an explanation, or a visualization, and it generates the pipeline steps.
4. Multi-bandit 5-fold cross-validation: fast, automatic evaluation of transformed datasets with XGBoost (see the second sketch after this list).
5. Visual lineage + shared datasets: every step is inspectable, reproducible, and shareable across teams.
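To make the shared-buffer idea in point 2 concrete, here is a minimal sketch of SQL and Python steps running against one DuckDB in-memory database. This is an illustration, not Norma's actual engine; the table, columns, and the `orders_py` view name are invented for the example.

```python
# Sketch: SQL and Python steps sharing one DuckDB in-memory buffer.
# Requires the duckdb and pandas packages; the schema is illustrative.
import duckdb

con = duckdb.connect()  # one in-memory database shared by every step

# SQL step: materialize a table directly in the shared buffer.
con.execute("""
    CREATE TABLE orders AS
    SELECT * FROM (VALUES (1, 'alice', 10.0),
                   (2, 'bob',   24.5)) AS t(id, customer, amount)
""")

# Python step: pull the result as a pandas DataFrame (no file export,
# no network hop) and add a derived column in plain Python.
orders_df = con.execute("SELECT * FROM orders").df()
orders_df["amount_with_tax"] = orders_df["amount"] * 1.07

# SQL step again: register the DataFrame so SQL can query it in place.
con.register("orders_py", orders_df)
summary = con.execute("""
    SELECT customer, SUM(amount_with_tax) AS total
    FROM orders_py
    GROUP BY customer
""").df()
print(summary)
```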
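Point 4 is described only at a high level, so here is a hedged sketch of one way "multi-bandit" evaluation could work: an epsilon-greedy bandit spends a fixed budget of 5-fold XGBoost cross-validation runs across candidate transformed datasets, favoring the current leader. The candidate dict, budget, and epsilon are assumptions; Norma's actual scheduler may differ.

```python
# Illustrative only: epsilon-greedy bandit over candidate datasets,
# each arm scored by 5-fold cross-validation with XGBoost.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

def cv_score(X, y, seed):
    """One 5-fold CV accuracy estimate for a candidate feature matrix."""
    folds = KFold(n_splits=5, shuffle=True, random_state=seed)
    model = XGBClassifier(n_estimators=50, max_depth=4, verbosity=0)
    return cross_val_score(model, X, y, cv=folds).mean()

def pick_best(candidates, y, budget=20, eps=0.2):
    """Spend `budget` CV runs across candidates, exploiting leaders."""
    scores = {name: [] for name in candidates}
    for t in range(budget):
        unseen = [n for n in scores if not scores[n]]
        if unseen or rng.random() < eps:        # explore
            name = str(rng.choice(unseen or list(scores)))
        else:                                   # exploit the best arm so far
            name = max(scores, key=lambda n: np.mean(scores[n]))
        scores[name].append(cv_score(candidates[name], y, seed=t))
    return max((n for n in scores if scores[n]),
               key=lambda n: np.mean(scores[n]))
```

Calling `pick_best({"raw": X0, "joined": X1, "aggregated": X2}, y)` would return the name of the candidate with the best mean CV score under that budget.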

That’s what we have today.

We’re still building:

• Automatic leakage detection (timestamp violations, post-outcome signals, unsafe joins); see the sketch after this list
• Relevant table discovery (find the tables that actually matter for predicting your target)
• Relevant row selection (especially for PFN-style models with row limits)
• Automated feature representation (scaling, encoding, aggregation, embeddings)
• AutoGluon + TabPFN integration (train strong models on normalized, optimized datasets)
• Differential privacy guardrails for LLM usage inside your data workflows
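For the leakage item above, here is a minimal sketch of what a timestamp-violation check could look like. The column names (`entity_id`, plus a `ts` timestamp in both frames) and the join shape are assumptions for illustration, not Norma's API.

```python
# Hypothetical timestamp-violation check: flag feature rows recorded
# after the outcome they would be joined to (a classic leakage source).
# Both frames are assumed to carry an entity key and a `ts` column.
import pandas as pd

def timestamp_violations(features: pd.DataFrame,
                         labels: pd.DataFrame,
                         key: str = "entity_id") -> pd.DataFrame:
    """Return feature/label pairs where the feature postdates the outcome."""
    joined = features.merge(labels, on=key, suffixes=("_feat", "_label"))
    return joined[joined["ts_feat"] > joined["ts_label"]]
```

Any rows this returns would leak future information into training, so the offending features would need point-in-time filtering before use.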

We’re trying to build the equivalent of a representation compiler: raw warehouse → optimal feature space → any model or BI tool.

If you’ve ever lost days hunting through a schema, debugging leakage, redoing feature pipelines, or trying to understand why a model plateaus even though your data is “fine,” I’d genuinely love your feedback. We’re still working closely with teams to refine our features and capabilities, and we’d love to share a private beta with your team. Please join the waitlist!

Happy to answer anything here.


Discussion (0 comments)

Discussion hasn't started yet; comments are still syncing from Hacker News.

ID: 46031558 · Type: story · Last synced: 11/24/2025, 8:12:09 AM

