Honda: 2 Years of ML vs 1 Month of Prompting - Here's What We Learned
Posted about 2 months ago · Active about 2 months ago
levs.fyi · Tech · story · High profile
calm · positive
Debate: 60/100
Key topics
Large Language Models
Text Classification
Machine Learning
Honda's experience with using LLMs for text classification of warranty claims shows promising results, sparking discussion on the potential and limitations of LLMs in similar applications.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion
- First comment: 4d after posting
- Peak period: 83 comments in 96-108h
- Avg / period: 33 comments
- Comment distribution: 99 data points
Based on 99 loaded comments
Key moments
- 01 Story posted: Nov 10, 2025 at 8:11 AM EST (about 2 months ago)
- 02 First comment: Nov 14, 2025 at 7:48 AM EST (4d after posting)
- 03 Peak activity: 83 comments in 96-108h (hottest window of the conversation)
- 04 Latest activity: Nov 15, 2025 at 6:10 AM EST (about 2 months ago)
ID: 45875618 · Type: story · Last synced: 11/20/2025, 6:24:41 PM
https://www.anthropic.com/engineering/contextual-retrieval
They also found improvements from augmenting the chunks with Haiku by having it add a summary based on extra context.
That seems to benefit both the keyword search and the embeddings by acting as keyword expansion. (Though it's unclear to me if they tried actual keyword expansion and how that would fare.)
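In case it helps, here is a minimal sketch of that augmentation step as I understand it from the post: for each chunk, ask a small model to write a short situating summary from the whole document and prepend it to the chunk before indexing (for both BM25 and embeddings). The Anthropic SDK call is real, but the model ID and prompt wording below are my own placeholders, not the ones from the post.

```python
# Rough sketch of contextual chunk augmentation (not Anthropic's exact prompt).
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

def contextualize_chunk(document: str, chunk: str) -> str:
    prompt = (
        "Here is a document:\n<document>\n" + document + "\n</document>\n\n"
        "Here is a chunk from that document:\n<chunk>\n" + chunk + "\n</chunk>\n\n"
        "Write one or two sentences that situate this chunk within the document, "
        "to improve search retrieval. Reply with the context only."
    )
    msg = client.messages.create(
        model="claude-3-haiku-20240307",  # illustrative model choice
        max_tokens=150,
        messages=[{"role": "user", "content": prompt}],
    )
    # Prepend the generated context so both keyword and vector indexes see it.
    return msg.content[0].text.strip() + "\n\n" + chunk
```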
---
Anyway what stands out to me most here is what a Rube Goldberg machine it is. Embeddings, keywords, fusion, contextual augmentation, reranking... each adding marginal gains.
But then the whole thing somehow works really well together (~1% fail rate on most benchmarks. Worse for code retrieval.)
I have to wonder how this would look if it wasn't a bunch of existing solutions taped together, but actually a full integrated system.
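For concreteness, the "fusion" piece of that pipeline is often just reciprocal rank fusion over the keyword ranking and the embedding ranking; a toy version (my own sketch, with placeholder inputs) looks like this, with a reranker then rescoring the top fused candidates:

```python
# Minimal reciprocal rank fusion (RRF) over two rankings, e.g. BM25 and embeddings.
# `bm25_ranked` and `vector_ranked` are lists of chunk IDs, best first; k=60 is the
# commonly used RRF constant.
def rrf_fuse(bm25_ranked, vector_ranked, k=60, top_n=20):
    scores = {}
    for ranking in (bm25_ranked, vector_ranked):
        for rank, chunk_id in enumerate(ranking):
            scores[chunk_id] = scores.get(chunk_id, 0.0) + 1.0 / (k + rank + 1)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```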
And, agreed, each individual technique seems marginal but they really add up. What seems to be missing is some automated layer that determines the best way to chunk documents into embeddings. My use case is mostly normalized mostly technical documents so I have a pretty clear idea of how to chunk to preserve semantics. But I imagine that for generalized documents it is a lot trickier.
LLMs still beat a classifier, because they're able to extract more signals than a text embedding.
It's very difficult to beat an LLM + prompt in terms of semantic extraction.
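As a rough illustration of what "an LLM + prompt" classifier means here, a toy sketch with the OpenAI client follows; the label set and prompt are invented for illustration, not Honda's.

```python
# Toy LLM-as-classifier: constrain the model to a fixed label set via the prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
LABELS = ["engine", "transmission", "electrical", "body", "other"]  # illustrative

def classify_claim(claim_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any capable chat model works here
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Classify the warranty claim into exactly one of: "
                        + ", ".join(LABELS) + ". Reply with the label only."},
            {"role": "user", "content": claim_text},
        ],
    )
    label = resp.choices[0].message.content.strip().lower()
    return label if label in LABELS else "other"  # guard against off-list replies
```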
If all the bullshit hype and marketing would evaporate already (“LLMs will replace all jobs!”), stuff like this would float to the top more and companies with large data sets would almost certainly be clamoring for drop-in analysis solutions based on prompt construction. They’d likely be far happier with the results, too, instead of fielding complaints from workers about it (AI) being rammed down their throats at every turn.
Ask away. Best method I’ve found so far for this.
Now when I ask questions about design decisions, the LLM refers to the original paper and cites the decisions without googling or hallucinating.
With just these two things in my local repo, the LLM created test scripts to compare our results versus the paper and fixed bugs automatically, helped me make decisions based on the paper's findings, helped me tune parameters based on the empirical outcomes, and even discovered a critical bug in our code that was caused by our training data being random generated versus the paper's training data being a permutation over the whole solution space.
All of this work was done in one evening and I'm still blown away by it. We even ported our code to golang, parallelized it, and saw a 10x speedup in the processing. Right before heading to bed, I had the LLM spin up a novel simulator using a quirky set of tests that I invented using hypothetical sensors and data that have not yet been implemented, and it nailed it first try - using smart abstractions and not touching the original engine implementation at all. This tech is getting freaky.
It helps to give it a little context and suggest where to look in the repo. The tools also have mechanisms where you can leave directions and notes in the context for the project. Updating that over time as you discover where the LLM stumbles helps a lot.
In my current role this seems like a very interesting approach to keep up with pop culture references and internet speak that can change as quickly as it takes the small ML team I work with to train or re-train a model. The limit is not a tech limitation, it’s a person-hours and data labeling problem like this one.
Given I have some people on my team that like to explore this area I’m going to see if I can run a similar case study to this one to see if it’s actually a fit.
Edit: At the risk of being self deprecating and reductive: I’d say a lot of products I’ve worked on are profitable/meaningful versions of Silicon Valley’s Hot Dog/Not Hot Dog.
But I think it's still an interesting result, because related and similar tasks are everywhere in our modern world, and they tend to have high importance in both business and the public sector; with the older generation of machine learning techniques for handling these tasks, even very capable and experienced practitioners might need an R&D cycle just to conclude whether the problem was solvable to the desired standard with the available data.
LLMs represent a tremendous advancement in our ability as a society to deal with these kinds of tasks. So yes, it's a limited range of specific tasks, and success is found within a limited set of criteria, but these are very important tasks, and enough of those criteria are met in practice that I think this result is interesting and generalizable.
That doesn't mean we should fire all of our data scientists and let junior programmers just have at it with the LLM, because you still need to put together a good dataset, make sense of the results, and iterate intelligently, especially given that these models tend to be expensive to run. It does, however, mean that existing data teams must be open to adopting LLMs instead of traditional model fitting.
“Fun fact: Translating French and Spanish claims into German first improved technical accuracy—an unexpected perk of Germany’s automotive dominance.”
Given that it was inside a 9-step text preprocessing pipeline, it would be surprising if the AI had that much autonomy.
Does it make the text clearer? How exactly? Is the German language more descriptive? Does it somehow expand the context?
So many questions in this fun fact.
Looks like they were limited by AWS Bedrock options.
But no, they want to pay $0.1 per request to recognize if a photo has a person in it by asking a multimodal LLM deployed across 8x GPUs, for some reason, instead of just spending some hours with CLIP and run it effectively even on CPU.
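For reference, the CLIP route really is only a handful of lines with the Hugging Face weights; this is a hedged sketch, and the prompt strings and 0.5 threshold are arbitrary choices rather than anything tuned.

```python
# Zero-shot "person / no person" check with CLIP on CPU (Hugging Face weights).
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
prompts = ["a photo containing a person", "a photo with no people in it"]

def has_person(image_path: str) -> bool:
    image = Image.open(image_path)
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
    return probs[0, 0].item() > 0.5  # probability mass on the "person" prompt
```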
There have been some developments in the image-of-text/other-than-photograph area though recently. From Meta (although they seem unsure of what exactly their AI division is called): https://arxiv.org/abs/2510.05014 and Qihoo360: https://arxiv.org/abs/2510.27350 for instance.
I think you've just identified, in a set-theoretic complementary manner, the TAM for GenAI.
This is the bottleneck in my experience. Going for the expensive per-request LLM gets something shipped now that you can wow the execs with. Setting up a whole process to gather and annotate data, train models, run evals, and iterate takes time. The execs who hired those expensive AI engineers want their results right now, not after a process of hiring more people to collect and annotate the data.
I won’t lie that I’ve been unreasonably annoyed that I have to use a lot more compute than I need, for no other reason than an LLM API exists and it’s good enough in a relatively small throughput application.
> We didn’t just replace a model. We replaced a process.
That line sticks out so much now, and I can't unsee it.
> That’s not a marginal improvement; it’s a different way of building classifiers.
They've replaced an em-dash with a semi-colon.
/s if it wasn't obvious
https://arxiv.org/abs/2510.15061
I thought maybe they did it on purpose at first, like a cheeky but too subtle joke about LLM usage, but when it happened twice near the end of the post I just acknowledged, yeah, they did the thing. At least it was at the end or I might have stopped reading way earlier.
I think we're getting into reverse slop discrimination territory now. LLMs have been trained on so much of what we consider "good writing", that actual good writing is now attributed by default to LLMs.
I'm afraid that people will draw the wrong conclusion from "We didn’t just replace a model. We replaced a process." and see it as an endorsement of the zero-shot-uber-alles "Prompt and Pray" approach that is dominant in the industry right now and the reason why an overwhelming fraction of AI projects fail.
If you can get good enough performance out of zero shot then yeah, zero shot is fine. Thing is that to know it is good enough you still have to collect and annotate more data than most people and organizations want to do.
This has been the bottleneck in every ML (not just text/LLM) project I’ve been part of.
Not finding the right AI engineers. Not getting the MLops textbook perfect using the latest trends.
It’s the collecting enough high quality data and getting it properly annotated and verified. Then doing proper evals with humans in the loop to get it right.
People who only know these projects through headlines and podcasts really don’t like to accept this idea. Everyone wants synthetic data with LLMs doing the annotations and evals because they’ve been sold this idea that the AI will do everything for you, you just need to use it right. Then layer on top of that the idea that the LLMs can also write the code for you and it’s a mess when you have to deal with people who only gain their AI knowledge through headlines, LinkedIn posts, and podcasts.
This isn't my first CV project, but it's the most successful one. And that's chiefly because my client pulled out their wallets and let an army of annotators create all the training data I asked for, and more.
Supervised learning. Took a while to make that work well.
And then every few years someone comes up with a way to distill data out of unsupervised examples. GPT is these days the big example of that, but there was "ImageNet (unlabeled)" and LAION before that too. The issue is that there is just so much unsupervised data.
Now LLMs use that pretty well (even though stuffing everything into an LLM is getting old, and as this article points out, in any specific application they tend to get bested by something like XGBoost with very simple models)
The next frontier is probably "world models", where you first train unsupervised, not to train your model but to predict the world. THEN you train the model in this simulated, predicted world. That's the reason Yann Lecun really really wants to go down this direction.
You can't blame the users for that though, for instance, OpenAI's ChatGPT uses 'Ask Anything' as their home page prompt. Zero specialization, expert at anything. And people totally believe it.
Having the agent, and treating it carelessly, helps one believe this.
Making it is another story.
1. Data collection technique.
2. Data annotation(labelling).
3. The classifier can learn from your "good" negatives (quantitatively, depending on the residual/margin/contrastive/triplet losses), i.e. learn the difference between a negative and a positive at train time, where the optimization minimum is higher than at test time (sketched below).
4. Calibration/Reranking and other Post Processing.
My guess is that they hit a sweet spot with the first 3 techniques.
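To make point 3 concrete, here is a minimal PyTorch sketch of a triplet loss over anchor/positive/hard-negative embeddings; the random tensors are stand-ins for whatever text encoder you would actually be training.

```python
# Illustration of point 3: a triplet loss pushes an anchor embedding toward a
# positive example and away from a "good" (hard) negative by at least a margin.
import torch
import torch.nn as nn

loss_fn = nn.TripletMarginLoss(margin=1.0)

anchor = torch.randn(32, 128, requires_grad=True)    # stand-in for encoder outputs
positive = torch.randn(32, 128, requires_grad=True)  # same-class examples
negative = torch.randn(32, 128, requires_grad=True)  # mined hard negatives
loss = loss_fn(anchor, positive, negative)
loss.backward()  # in practice this updates the text encoder producing the embeddings
```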
An overwhelming amount of software projects fail, AI just helps them get there faster.
The text says, "...no leaks..." The case statement says, "...AND LOWER(claim_text) NOT LIKE '%no leak%...'"
It would've properly been marked as a "0".
Perhaps I could say, it isn't just generated--it is also hallucinated!
It’s not X it’s Y. We didn’t just do A we did B.
There’s definitely a lot of hard work that has gone in here. It’s gotten hard to read because of these sentence patterns popping up everywhere.
At the same time, as a nonnative speaker of English, this is literally how we were taught to write eye-catching articles and phrases. :P
A lot of formulaic writing is what we were taught to do, especially with more formal things. (This is more of a sidenote to this example)
So in a hunt for LLMs, we also get hit.
I learned it from MtG and I do believe it's a very cool word and I hate that I can't use it without people raising their eyebrows.
Overall I think things have gotten better. I noticed, maybe 3 years before ChatGPT hit the scene, that I would frequently land on pages that definitely didn't seem written by a native English speaker. The writing was just weird. I see less of that former style now.
Probably the biggest new trend I notice is this very prominent "Conclusion" block that seems to show up now.
Honestly I'd love to see some data on it. I suspect a lot of "that's LLM slop" isn't, a lot of actual slop goes unnoticed, and lots of LLM tropes were rife in online content long before LLMs; we're just now hypersensitive to certain things since they're overused by LLMs.
Also we may have already reached a point where people are exposed so much to it they start talking naturally like AI.
We've seen it before: with the advent of the internet, short text messages on mobile phones, and the evolution of music genres, the writing and speaking capacity of the general population has gone downhill over the last three decades. I was watching video archives from the 70s and 80s a few days ago. It was striking to see that, bar a few illiterate ones, most random people from any social class interviewed in the streets 40-50 years ago would talk in a much more intelligible, eloquent and pleasant way than the best public orators of the 2020s.
https://arxiv.org/abs/2510.15061
... Wait a minute!
(Even ironically sometimes observed in cases when the writing is disparaging of AI and the use of AI).
If the subject matter is AI, you should instantly pay attention and look for the signs it was AI assisted or generated outright.
Even if it took $10 to run everything to handle each request, that’s far cheaper than even a minimum wage employee when you consider all of the employment overhead.
[1] specifically https://www.warrantyweek.com/archive/ww20230817.html claims the expectation value of warranty claims for a car is around $650.
Being an automaker, I can almost smell the silos where data resides, the rigidly defended lines between manufacturing, sales and post-sales, the intra-departmental political fights.
Then you have all the legacy of enterprise software.
And the result is this shitty warranty claims data.
Warranty data flows up from the technicians - good luck getting any auto technician to properly tag data. Their job is to fix a specific customer’s problem, not identify systematic issues.
There’s a million things that make the data inherently messy. For example, a technician might replace 5 parts before they finally identify the root cause.
Therefore, you need some sort of department to sit between millions of raw claims and engineering. I would be curious what kind of alternative you have in mind?
In fact there are companies such as Medallia which specialize in CX and have really strong classification solutions for specifically these use cases (plus all the generative AI stuff for closing the loop).
Their AI implementations are also awful. Just sample 100 contacts with someone who actually understands the business and see their reaction.
> Translating French and Spanish claims into German first improved technical accuracy—an unexpected perk of Germany’s automotive dominance.
It brings up an interesting idea that some languages are better suited for different domains.
I'm curious about some kind of notion of "prompt overfitting." It's good to see the plots of improvement as the prompts change (although error bars probably would make sense here), but there's not much mention of hold-out sets or other approaches to mitigate those concerns.
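One simple way to mitigate that is a frozen hold-out split; this is my own sketch, where `claims`, `labels`, and `classify_claim` are hypothetical placeholders for the annotated data and the prompted classifier.

```python
# Guard against "prompt overfitting": freeze a held-out set up front, tune the
# prompt only against the dev split, and score the held-out set once at the end.
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# claims: list[str], labels: list[str] -- annotated data (hypothetical names)
dev_x, test_x, dev_y, test_y = train_test_split(
    claims, labels, test_size=0.3, stratify=labels, random_state=0
)

# Iterate on the prompt against the dev split only.
dev_acc = accuracy_score(dev_y, [classify_claim(c) for c in dev_x])

# Touch the held-out split once, with the final prompt.
test_acc = accuracy_score(test_y, [classify_claim(c) for c in test_x])
```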
> in domains where the taxonomy drifts, the data is scarce, or the requirements shift faster than you can annotate
It's not actually clear if warranty claims really meet these criteria.
For warranty claims, the difficulty is in detecting false negatives, when companies have a strong incentive and opportunity to hide the negatives.
Companies have been trusted to do this kind of market surveillance (auto warranties, drug post-market reporting) largely based on faith that the people involved would do so in earnest. That faith is misplaced when the process is automated (not because the implementors are less diligent, but because they are too removed to tell).
Then the backlash to a few significant injuries might be a much worse regime of bureaucratic oversight, right when companies have replaced knowledge with automation (and replacement labor costs are high).
* "2 years vs 1 month" is a bit misleading because the work that enabled testing the 1 month of prompting was part of the 2 years of ML work.
* xgboost is an ensemble method... add the llm outputs as inputs to xgboost and probably enjoy better results.
* vectorize all the text data points using an embedding model and add those as inputs to xgboost for probably better results (rough sketch below).
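A rough sketch of what those two bullets could look like together; `emb`, `llm_pred`, and `y_encoded` are hypothetical inputs (embedding matrix, LLM-predicted labels, integer-encoded ground truth), not anything from the article.

```python
# Ensemble idea: feed both text embeddings and the LLM's predicted label
# (one-hot encoded) into xgboost as features.
import numpy as np
import xgboost as xgb
from sklearn.preprocessing import LabelBinarizer

# emb: (n_samples, d) embedding matrix; llm_pred: list of LLM-predicted labels;
# y_encoded: integer-encoded ground-truth labels -- all assumed to exist already.
llm_onehot = LabelBinarizer().fit_transform(llm_pred)
X = np.hstack([emb, llm_onehot])

clf = xgb.XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
clf.fit(X, y_encoded)
```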
Over the past couple of years people have made attempts with NLP (let's say standard ML workflows), but NLP and word temperature scores are hard to integrate into a reliable data pipeline, much less an operational review workflow.
Enter LLMs: the world is a data guru's oyster for building a detection system on warranty claims. Passing data to prompted LLMs means capturing and classifying records becomes significantly easier, and these data applications can flow into more normal analytic work streams.
8 more comments available on Hacker News