My Trick for Getting Consistent Classification From LLMs
Posted 3 months ago · Active 3 months ago
verdik.substack.com · Tech story · High profile
Tone: calm, positive · Debate: 60/100
Key topics
Large Language Models
Classification
Natural Language Processing
The author shares a technique for getting consistent classification from Large Language Models (LLMs) by using embeddings and caching, sparking a discussion on alternative approaches and potential improvements.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 7 days after posting
Peak period: 67 comments in Day 8
Avg / period: 17.5
Comment distribution: 70 data points (based on 70 loaded comments)
Key moments
1. Story posted: Oct 13, 2025 at 2:01 PM EDT (3 months ago)
2. First comment: Oct 20, 2025 at 5:34 PM EDT (7 days after posting)
3. Peak activity: 67 comments in Day 8, the hottest window of the conversation
4. Latest activity: Oct 23, 2025 at 3:20 PM EDT (3 months ago)
ID: 45571423 · Type: story · Last synced: 11/20/2025, 4:32:26 PM
I wrote a categorization script that sorts customer-service calls into one of 10 categories. I wrote descriptions of each category, then translated each into an embedding.
Then I created embeddings for the call notes and matched each to the closest category using cosine_similarity.
[1] https://huggingface.co/sentence-transformers/all-MiniLM-L6-v...
[2] https://huggingface.co/BAAI/bge-m3
In my recent project I used OpenAI's embedding model for that because of its convenient API and low cost.
Formatting the input text to have a consistent schema is optional but recommended to get better comparisons between vectors.
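As an illustration, a minimal sketch of that setup using the all-MiniLM-L6-v2 model from [1]; the category descriptions and example note are hypothetical, not from the original script:

```python
from sentence_transformers import SentenceTransformer, util

# Hypothetical category descriptions; the real script had ten of its own.
categories = {
    "billing_dispute": "Customer disagrees with a charge or invoice amount.",
    "technical_outage": "Service is down or the customer cannot log in.",
    "cancellation_request": "Customer wants to cancel or downgrade their plan.",
}

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Embed the category descriptions once, up front.
names = list(categories)
category_embeddings = model.encode([categories[n] for n in names], normalize_embeddings=True)

def classify_call(note: str) -> str:
    """Return the category whose description is closest to the call note."""
    note_embedding = model.encode(note, normalize_embeddings=True)
    scores = util.cos_sim(note_embedding, category_embeddings)[0]
    return names[int(scores.argmax())]

print(classify_call("I was charged twice this month and want a refund."))
```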
- Fetch a list of my unique tags to get a sense of my topics of interests
- Have the AI dig into those specific niches to see what people have been discussing lately
- Craft a few random tweets that are topic-relevant and present them to me to curate
It's a very powerful workflow that is hard to deliver on without the class labels.
If your categories are dynamic, the way OP handles it will be much cheaper as the number of tweets (or customer service calls in your case) grows, as long as the cache hit rate is >0%. Each tweet gets its own label, e.g. "joke_about_bad_technology_choices". Each of these labels gets put into a category, e.g. "tech_jokes". If you add/remove a category you would still need to re-calculate, but only the label-to-category assignments rather than every single tweet. Since similar tweets can share the same labels, you end up with fewer labels than the total number of tweets. As you reach the asymptotic ceiling mentioned in OP's post, your cost to re-embed labels against categories also approaches an asymptotic ceiling.
If the number of items you're categorizing is a couple thousand at most and you rarely add/remove categories, it's probably not worth the complexity. But in my case (and OP's) it's worth it as the number of items keeps growing without bound.
The idea is also that this would be a classification system used in production whereby you classify data as it comes, so the "rolling labels" problem still exists there.
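A tiny illustrative sketch of that two-level structure (the tweets, labels, and categories below are made up): only the second mapping has to be touched when categories change.

```python
# Level 1: each tweet resolves to a fine-grained label; similar tweets share one
# (in the real system this lookup is the embedding cache, not a literal dict).
tweet_to_label = {
    "PHP in 2025? bold choice": "joke_about_bad_technology_choices",
    "friend picked MongoDB for a bank app lol": "joke_about_bad_technology_choices",
    "Zuck renamed the company again": "joke_about_zuckerberg_rebranding",
}

# Level 2: labels roll up into a much smaller set of categories.
label_to_category = {
    "joke_about_bad_technology_choices": "tech_jokes",
    "joke_about_zuckerberg_rebranding": "tech_jokes",
}

def category_for(tweet: str) -> str:
    return label_to_category[tweet_to_label[tweet]]

# Adding or removing a category means re-assigning only the labels above
# (re-embedding labels against categories), never re-processing every tweet.
print(category_for("Zuck renamed the company again"))
```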
In my experience though, you can dramatically reduce unwanted bias by tuning your cosine similarity filter.
So the cache check tries to find if a previously existing text embedding has >0.8 match with the current text.
If you get a cache hit here, iiuc, you return that matched text's label right away. But do you also insert a text embedding of the current text into the text embeddings table? Or do you only insert it in the case of a cache miss?
From reading the GitHub readme it seems you only "store text embedding for future lookups" in the case of a cache miss. Is this by design, to keep the text embedding table from growing too big?
For instance: "I love McDonalds" (1). "I love burgers" (0.99). "I love cheeseburgers with ketchup" (?).
This is a bad example, but in this case the last text could end up right at the boundary of similarity to that first label if we did not store the second, which could cause a cluster miss we don't want.
We only store the text on cache misses, though you could do both. I had not considered that idea but it makes sense. I'm not very concerned about the dataset size because vector storage is generally cheap (~$2/mo for 1M vectors), and the savings from tokens not spent generating labels cover that expense generously.
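For concreteness, a hedged sketch of that flow (the 0.8 threshold and store-on-miss behaviour come from the thread; the in-memory lists, the model choice, and `call_llm_for_label` are placeholders for the real vector store and LLM call):

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
cached_embeddings: list[np.ndarray] = []  # stands in for the vector store
cached_labels: list[str] = []

def call_llm_for_label(text: str) -> str:
    raise NotImplementedError("placeholder for the real LLM call")

def classify(text: str, threshold: float = 0.8) -> str:
    vec = model.encode(text, normalize_embeddings=True)
    if cached_embeddings:
        sims = np.stack(cached_embeddings) @ vec  # cosine similarity (unit vectors)
        best = int(sims.argmax())
        if sims[best] > threshold:
            return cached_labels[best]           # cache hit: no LLM call
    label = call_llm_for_label(text)             # cache miss: ask the LLM
    cached_embeddings.append(vec)                # store embedding for future lookups
    cached_labels.append(label)
    return label
```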
1. Curate a large representative subsample of tweets.
2. Feed all of them to an LLM in a single call with the prompt along the lines of "generate N unique labels and their descriptions for the tweets provided". This bounds the problem space.
3. For each tweet, feed it to an LLM along with the prompt "Here are labels and their corresponding descriptions: classify this tweet with up to X of those labels". This creates a synthetic dataset for training.
4. Encode each tweet as a vector as normal.
5. Then train a bespoke small model (e.g. an MLP) using tweet embeddings as input to create a multilabel classification model, where the model predicts, for each label, the probability that it applies.
The small MLP will be super fast and costs effectively nothing beyond what it takes to create the embedding. It saves time/cost compared to performing a vector search or even maintaining a live vector database (see the sketch below).
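A compact sketch of steps 4-5, assuming scikit-learn's MLPClassifier (which accepts multilabel indicator targets) and a couple of made-up tweets standing in for the synthetic dataset from step 3:

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Step 3 output (illustrative): each tweet with the labels the LLM assigned.
tweets = ["my bank's app is down again", "just shipped a new feature, feels great"]
llm_labels = [["complain_about_technology", "banking"], ["celebrate_shipping"]]

# Step 4: encode each tweet as a vector.
encoder = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
X = encoder.encode(tweets)

# Step 5: train a small MLP as a multilabel classifier on the embeddings.
binarizer = MultiLabelBinarizer()
Y = binarizer.fit_transform(llm_labels)
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500).fit(X, Y)

# Inference: embedding + tiny MLP, no vector search needed.
probs = clf.predict_proba(encoder.encode(["another outage, switching banks"]))
print(dict(zip(binarizer.classes_, np.round(probs[0], 3))))
```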
Just using embeddings you can get really good classifiers for very cheap
You can use small embeddings models too, and can engineer different features to be embedded as well
Additionally, with email at least, depending on the categories you need, you only need about 50-100 examples for 95-100% accuracy
And if you build a simple CLI tool to fetch/label emails, it’s pretty easy/fast to get the data
How big should my sample size be to be representative? It's a fairly large list of docs across several products and deployment options. I wanted to pick a number of docs per product. Maybe I'll skip steps 4/5, as I only need to repeat the process occasionally once everything has been labelled.
For training the model downstream, the main constraint on dataset size is how many distinct labels you want for your use case. The rules of thumb are:
a) ensuring that each label has a few samples
b) at least N^2 data points total for N labels, to avoid issues akin to the curse of dimensionality
However, modern LLMs, even the cheaper ones, do handle the "up to X" constraint correctly without always returning exactly X labels.
That way you can effectively handle open sets and train a more accurate MLP model.
With your approach I don't think you can get a representative list of N tweets which covers all possible categories. Even if you did, the LLM would be subject to context rot and token limits.
Would it be any better if you sent a list of existing tags with each new text to the LLM, and asked it to classify into one of them or generate a new tag? Possibly even skipping embeddings and vector search altogether.
I actually built a project for tagging posts exactly the way you described.
I was thinking of giving the LLM a tool `(query: string) => string[]` to retrieve a list of matching labels to check if they already exist.
But the above approach sounds similar to OP, where they use embeddings to achieve that.
The OP has 6k labels and discusses time + cost, but what I found is:
- a small, good enough locally hosted embedding model can be faster than OpenAI's embedding models (provided you have a fast GPU available), and it doesn't cost anything
- for just 6k labels you don't need Pinecone at all, with Python it took me like a couple of seconds to do all calculations in memory
For classification + embedding you can use locally hosted models, it's not a particularly complex task that requires huge models or huge GPUs. If you plan to do such classification tasks regularly, you can make a one-time investment (buy a GPU) and then you'll be able to run many experiments with your data without having to think about costs anymore.
Reference: https://blog.invidelabs.com/how-invide-analyzes-deep-work/
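As a rough illustration of the in-memory variant (the labels and texts below are stand-ins for the real ~6k): with normalized embeddings, one matrix product scores every text against every label, no Pinecone needed.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")  # small, local

# Stand-ins for the ~6k labels and a batch of incoming texts.
labels = ["joke_about_bad_technology_choices", "complain_about_political_party", "tech_news_commentary"]
texts = ["lol imagine picking XML over JSON in 2025", "the new tax bill is a disaster"]

L = model.encode(labels, normalize_embeddings=True)   # (n_labels, dim), computed once
T = model.encode(texts, normalize_embeddings=True)    # (n_texts, dim)

sims = T @ L.T                                        # all pairwise cosine similarities
for text, row in zip(texts, sims):
    print(text, "->", labels[int(row.argmax())])
```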
In GitHub you show stats that say a "cache hit" is 200ms and a miss is 1-2s (LLM call).
I don't think I understand how you get a cache hit off a novel tweet. My understanding is that you
1) get a snake case category from an LLM
2) embed that category
3) check if it's close to something else in the embedding space via cosine similarity
4) if it is, replace og label with the closest in embedding space
5) if not, store it
Is that the right sequence? If it is, it looks to me like all paths start with an LLM, and therefore are not likely to be <200ms. Do I have the sequence right?
This is the kind of work you typically hire cheap social managers overseas to do through Fiverr. However, the variance in quality is very high, and the burden of managing people on the other side of the world can be a lot for solo entrepreneurs.
> cluster the inconsistent labels by embedding them in a vector space
Why not embed all tweets, cluster them with an algorithm of your choice and have an LLM provide names for each cluster?
Cheaper, better clusters and more accurate labels.
The main reason why is that I needed the classification to be ongoing. My system pulled in thousands of tweets per day and they all needed to be classified as they came in for some downstream tasks.
Thus, I couldn't embed all tweets, then cluster, then ...
There are other clustering algorithms that try to fit variable-size clusters or hierarchically organized clusters, which may or may not produce better clusters but generally take more resources than k-means; k-means is just getting started at 20,000 documents, while others might be struggling at that point.
Having the LLM write a title for the clusters is something you can do uniquely with big LLMs and prompt engineering.
It's wrong to say "don't waste your time collecting the data to train and evaluate a model because you can always prompt a commercial LLM and it will be 'good enough'", because you at the very least need the evaluation data to prove that your system is 'good enough' and to decide if one is better than another (swap out Gemini vs Llama vs Claude).
In the end, though, you might wish that the classification is not something arbitrary that the system slapped on it but rather a "class" in some ontology which has certain attributes (e.g. a book can have a title, and a "heavy book" weighs more than 2 pounds by my definition). If you are going the formal ontology route you need the same evaluation data so you know you're not doing it wrong. If you've accepted that, though, you might as well collect more data and train a supervised model, and what I see in the literature is that the many-shot approach still outperforms one-shot and few-shot.
[1] which is on the scale of the training data in most applications
If your end goal is to show an audience of nontechnical stakeholders an overview of your dataset in a static medium (like a slide), I would suggest you do the cluster labeling yourself, with the help of interactive tooling to make the semantic cluster structure explorable. One option is to throw the dataset into Apple's recently published and open-sourced Embedding Atlas (https://github.com/apple/embedding-atlas), take a screenshot of the cluster viz, poke around in the semantic space, and manually annotate the top 5-10 most interesting clusters right in Google Slides or PowerPoint. If you need more control over the embedding and projection steps (and you have a bit more time), write your own embedding and projection, then use something like Plotly to build a quick interactive viz just for yourself; drop a screenshot into a slide and annotate it. Feels super dumb, but is guaranteed to produce human-friendly output you can actually present confidently as part of your data story and get on with your life.
This is really interesting to me.
So in essence, the process is what I might call 'eventually modelled' (to borrow from the concept of eventual consistency). I use the LLM entities as is, and gradually conform them to my desired ontology as I discover the correct ontology over time.
But when I read this:
> In order to train an AI model to tweet like a real human
Ugh, we’re doing this again, trying to fool people to believe some AI Twitter account is a real person, presumably for personal gain. Am I wrong?
Many solo entrepreneurs you see on Twitter with large audiences are busy people, so they have hired cheap labor from India / the Philippines to be their social media manager. They often take on the task of keeping up with the niches and drafting post ideas. The big issue is that the variance in quality of who you hire is very high, and it's also a mental and energy toll to manage an employee who works on the other side of the world.
So the AI scours for "here is what all the tech bros have been talking about over the last 3 days", then drafts 3-5 posts and shows them to me so I can curate. I get to keep my page and audience engaged while protecting my time for actual deep work instead of scrolling the feed all day.
Normally you’d ask the judge LLM to “rate this output out of 5” or whatever the best practice is this week.
Vectorizing the output you’re trying to judge, then judging on semantic similarity to a desired output - instead of asking a judge “how good was this output” - avoids so many challenges. Instead of a “rating out of 5” you get more precise semantic similarity and you get it faster.
No doubt obvious to folks in the space, but seemed like a huge insight to me.
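A minimal sketch of that judging-by-similarity idea, assuming a sentence-transformers model and made-up reference/candidate strings:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Hypothetical desired output and a candidate output to be judged.
desired = "Politely decline the refund and offer a 10% discount on the next order."
candidate = "We can't refund this order, but here's 10% off your next purchase."

score = util.cos_sim(
    model.encode(desired, normalize_embeddings=True),
    model.encode(candidate, normalize_embeddings=True),
).item()

# Compare against a tuned threshold instead of asking a judge for a 1-5 rating.
print(f"semantic similarity: {score:.3f}")
```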
Your class label should be between 30 and 60 characters and be precise, in snake_case format. For example:
- complain_about_political_party
- make_joke_about_zuckerberg_rebranding
Now, classify this tweet: {{tweet}}"
I stopped reading here. It's a bit obvious that you need to define your classification schema beforehand, not on a per-message basis. And if you do, you need a way to remember your schema. Of course you will generate an inconsistent and non-orthogonal set of labels. I expected the next paragraphs to immediately fix this with something like
"Classify the tweet into one of: joke, rant, meme..." but instead the post went on intellectualizing with math? It's like a chess player hanging a queen and then going on about bishop pairs and the London System.
This is sensitive to the initial candidate set of labels that the LLM generates.
Meaning if you ran this a few times over the same corpus, you'll probably get different performance depending on the order in which you input the data and the classification tags the LLM ultimately decided upon.
Here's an idea that is order-invariant: embed first, take samples from clusters, and ask the LLM to label the 5 or so samples you've taken. The clusters serve as soft candidate labels and the LLM turns them into actual, interpretable, explicit labels.
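A sketch of that order-invariant route, assuming KMeans over sentence-transformer embeddings; the tweets, cluster count, and the `name_cluster_with_llm` helper are all placeholders:

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def name_cluster_with_llm(samples: list[str]) -> str:
    # Placeholder: in practice, prompt an LLM with the samples and return its label.
    return "cluster_of: " + samples[0][:30]

# Illustrative corpus; the real one would be much larger.
tweets = [
    "PHP in 2025? bold choice",
    "friend picked MongoDB for a banking app lol",
    "the new tax bill is a disaster",
    "congress can't agree on anything",
]

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
X = model.encode(tweets, normalize_embeddings=True)

k = 2  # placeholder; pick by elbow/silhouette in practice
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

for c in range(k):
    idx = np.where(km.labels_ == c)[0]
    # Pick the few members closest to the cluster centroid as representatives.
    dists = np.linalg.norm(X[idx] - km.cluster_centers_[c], axis=1)
    reps = [tweets[i] for i in idx[np.argsort(dists)[:5]]]
    print(c, name_cluster_with_llm(reps))
```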
Labeling can be done by asking the LLM to label each tweet from a predefined set of labels; the order doesn't matter. We can generate these labels either manually or by sampling a small subset of tweets and asking the LLM to tag them (we could be using a word dictionary too). From these labels, we can form a label-length vector whose entries are sorted alphabetically (the ordering is arbitrary but must be consistent).
To populate it, we ask the LLM to tag each tweet by restricting its JSON output to enums composed of our labels. From that, we can form one-hot vectors, effectively embedding and tagging each tweet at the same time.
We get tags for "free", and as a byproduct we also get a vector embedding that can be further refined using techniques such as PCA; then you can cosim as usual. The main cost is asking the LLM to label each tweet from the predefined set of tags; you can also micro-optimize that by batching prompts, I guess.
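A small sketch of that encoding (the label set and tags are illustrative; in practice the tag list would come from the LLM's enum-constrained JSON output):

```python
import numpy as np

# Predefined label set, sorted alphabetically so vector positions stay consistent.
LABELS = sorted(["complain_about_politics", "tech_joke", "product_announcement", "sports"])
INDEX = {label: i for i, label in enumerate(LABELS)}

def multi_hot(tags: list[str]) -> np.ndarray:
    """Turn the LLM's tag list for one tweet into a fixed-length vector."""
    vec = np.zeros(len(LABELS))
    for tag in tags:
        vec[INDEX[tag]] = 1.0
    return vec

# Example: tags returned for two tweets (made up).
v1 = multi_hot(["tech_joke"])
v2 = multi_hot(["tech_joke", "product_announcement"])

# The same vectors double as crude embeddings: cosine similarity as usual.
cos = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
print(LABELS, v1, v2, round(float(cos), 3))
```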
It's fairly obvious tbh: an agent needs a way to search, rather than just being expected to produce the same labels that magically match prior results. Have we been so blinded that we've forgotten this kind of stuff?
There's certainly more tweaking that needs to be done but I've been pretty happy with the results so far.
1: jesterengine.com