Meta Superintelligence Labs' First Paper Is About RAG
Posted 3 months ago · Active 2 months ago
paddedinputs.substack.com · Tech story · High profile
skeptical / mixed
Debate
70/100
Key topics
AI Research
Meta AI
RAG (Retrieval-Augmented Generation)
Meta Superintelligence Labs released their first paper on REFRAG, a new approach to Retrieval-Augmented Generation (RAG), sparking discussion on its significance and the lab's direction.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 26m after posting
Peak period: 139 comments (Day 1)
Avg / period: 32
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- 01 Story posted: Oct 11, 2025 at 7:16 PM EDT (3 months ago)
- 02 First comment: Oct 11, 2025 at 7:41 PM EDT (26m after posting)
- 03 Peak activity: 139 comments in Day 1 (hottest window of the conversation)
- 04 Latest activity: Oct 24, 2025 at 12:42 AM EDT (2 months ago)
ID: 45553577 · Type: story · Last synced: 11/20/2025, 8:23:06 PM
IMO vector embedding is the most important innovation in computing of the last decade. There's something magical about it. These people deserve some kind of prize. The idea that you can reduce almost any intricate concept including whole paragraphs to a fixed-size vector which encapsulates its meaning and proximity to other concepts across a large number of dimensions is pure genius.
[1] https://en.wikipedia.org/wiki/Latent_semantic_analysis
[2] https://en.wikipedia.org/wiki/Singular_value_decomposition
The fact that simple vector arithmetic (additions, subtractions, and dot products) can encode concepts like royalty and gender (among all sorts of others) is kind of magic to me.
Here, play around[1]
Or try some that should be trivial. Working in very high dimensions is funky stuff, and embedding high dimensions into low dimensions results in even funkier stuff[0].
[0] https://projector.tensorflow.org/
[1] https://www.cs.cmu.edu/~dst/WordEmbeddingDemo/
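For anyone who wants to poke at the analogy arithmetic without the demo, here's a minimal sketch using made-up toy vectors (not real trained embeddings): shift the "king" vector by the man-to-woman offset and find the nearest remaining word by cosine similarity.

```python
import numpy as np

# Toy 4-dimensional "embeddings" invented for illustration only;
# real word vectors are learned and have hundreds of dimensions.
vocab = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9, 0.1]),
    "apple": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy: king - man + woman ~ queen
target = vocab["king"] - vocab["man"] + vocab["woman"]

# The query words themselves are excluded from the candidate set,
# which is exactly the "cheating" point raised a few comments below.
candidates = {w: v for w, v in vocab.items() if w not in {"king", "man", "woman"}}
best = max(candidates, key=lambda w: cosine(target, candidates[w]))
print(best)  # -> "queen" for these toy vectors
```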
There is far less structure here than you are assuming, and that's the underlying problem. There is local structure and so the addition operation will work as expected when operating on close neighbors, but this does greatly limit the utility.
And if you aren't aware of the terms I'm using here I think you should be extra careful. It highlights that you are making assumptions that you weren't aware were even assumptions (an unknown unknown just became a known unknown). I understand that this is an easy mistake to make since most people are not familiar with these concepts (including many in the ML world), but this is also why you need to be careful. Because even those that do are probably not going to drop these terms when discussing with anyone except other experts as there's no expectation that others will understand them.
[0] https://ncatlab.org/nlab/show/monoid
This led me to do a bit more research, and I see the queen result is in fact "cheating" a bit: https://blog.esciencecenter.nl/king-man-woman-king-9a7fd2935...
#TheMoreYouKnow
Having a set of "king - male + female = queen"-like relations, including more complex phrases, to align embeddings.
It seems like a terse, lightweight, information-dense way to capture the essence of knowledge.
But similar ways to reduce huge numbers of dimensions to a much smaller set of "interesting" dimensions have been known for a long time.
Examples include principal component analysis / singular value decomposition, which was the first big breakthrough in face recognition (in the early 90s), and which was also used in latent semantic indexing, the Netflix prize, and a large pile of other things. And the underlying technique was invented in 1901.
Dimensionality reduction is cool, and vector embedding is definitely an interesting way to do it (at significant computational cost).
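For reference, a minimal sketch of that older approach, PCA via SVD, on random data (the numbers are purely illustrative): center the data, decompose it, and keep the top components.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))          # 1000 samples, 50 original dimensions

# PCA via SVD: center, decompose, project onto the top-k right singular vectors.
X_centered = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

k = 5
X_reduced = X_centered @ Vt[:k].T        # (1000, 5) low-dimensional representation

# Fraction of variance retained by the first k components.
explained = (S[:k] ** 2).sum() / (S ** 2).sum()
print(X_reduced.shape, round(float(explained), 3))
```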
Non-software devs are actually making functional programs for themselves for the first time ever. The value is crazy.
The value of AI is in having a scalable, human-like decision maker that you can plug into anything, anywhere. This has unlocked countless use cases for my team, that we could scarcely imagine a few years ago.
But it's not my job to convince you, my lived experience working with the tech is enough to convince me, and that's all I care about, to be honest. Everyone else will get there sooner or later.
You're missing the forest for the trees. Most people can't even make a block diagram, but they can explain what they have and what they want to do with it.
2. Wild claim that the companies that sell LLMs are actually downplaying their capabilities instead of hyping them
Again, personal experience, but in my team ~40-50% of the PRs are generated by Codex.
Ready for the impending layoff, fella?
https://www.infoworld.com/article/4061078/the-productivity-p...
The real value of AI isn't in helping coding. It's in having a human-like intelligence to automate processes. I can't get into details but my team is doing things that I couldn't dream of three years ago.
Meme thinking like this, repeating something you've heard as reflex without regard to whether it fits a situation, is the exact kind of unoriginality we can't allow to become the default mode of thinking.
However, in your moral crusade against using AI you are missing the big picture. No one is making you code with AI. But there are many things that you can only build if you use AI as a component.
The ability to plug a human-like decision-maker into anything, anywhere massively expands what we can build. There are applications and use cases that you cannot even conceptualize without the ability to plug AI in. This does not impact critical thinking whatsoever.
Be original. Put your engineer hat on and think about what this new tool lets you build that you couldn't beforehand.
A bit of this is true at every major lab. There's tons of untapped potential, but these organizations are very risk averse. I mean, why not continue with the strategy that got us to where we are in the first place? Labs used to hire researchers and give them a lot of free rein. But those times ended, and AI progress also slowed down. Maybe if you want to get ahead you have to stop thinking like everyone else.
Well, Meta... you can "hold me hostage" for a lot cheaper than those guys. I'm sure this is true for hundreds of passionate ML researchers. I'd take a huge pay cut to have autonomy and resources. I know for a fact there are many working at Meta right now who would do the same. So maybe if you're going to throw money at the problem, diversify a bit and look back at what made SV what it is today and what made AI take leaps forward.
Doesn't really scream CEO of AGI to me.
Why do you say that?
1. She has 2 BAs, one in math and one in mechanical engineering.
2. She was an "Advanced Concepts Engineer at Zodiac Aerospace from 2012 to 2013".
3. She was a product manager at Tesla on the Model X
4. She was VP of product and engineering at Leap Motion.
Going from the fact that she wasn't a deep learning researcher to "her history was entirely non-technical up until OpenAI" is plainly false. Plus, the job of CTO is 90%+ people management, and she appears more than smart enough and experienced enough to evaluate the technical decisions of her team.
The right people to deliver immense progress don't exist right now.
The people are always there, you just need to find them and enable them.
I learnt the hard way that communications/image/signal processing research basically doesn’t care about Computer Architecture at the nuts and bolts level of compiler optimization and implementation.
When they encounter a problem whose normal solution requires excessive amounts of computation, they reduce complexity algorithmically using mathematical techniques, and quantify the effects.
They don't quibble over a 10x speed-up; they reduce the "big O()" complexity. They couldn't care less whether it was implemented in interpreted Python or hand-optimized assembly code.
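A classic signal-processing illustration of that mindset (my example, not the commenter's): swapping the naive O(n^2) DFT for the O(n log n) FFT matters far more than any constant-factor tuning of the naive loop.

```python
import numpy as np

def naive_dft(x):
    """Direct O(n^2) evaluation of the discrete Fourier transform."""
    n = len(x)
    k = np.arange(n)
    # n x n matrix of complex exponentials; each output is one dot product.
    W = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return W @ x

x = np.random.default_rng(0).normal(size=1024)

X_naive = naive_dft(x)
X_fft = np.fft.fft(x)                # O(n log n) algorithmic improvement

print(np.allclose(X_naive, X_fft))   # True: same result, very different complexity
```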
On one hand, I know there’s a lot of talent in AI today. But throwing hardware at the problem is the dumbest way forward.
WiFi adapters would be the size of wheeled luggage if we'd had the same mentality during their development.
Then, in parallel to that, looking at compiler optimizations and other higher-level algorithmic innovations such as Flash Attention (a classic at this point), which had a drastic impact on performance thanks to cache awareness, without changing the O() complexity.
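To make the cache-awareness side concrete (again my own illustration): Flash Attention-style gains come from tiling the computation so data gets reused while it sits in fast memory, leaving the big-O untouched. Here's a sketch of the same idea for plain matrix multiplication; in pure Python it only shows the structure of the transformation, since the real speed-up shows up in compiled kernels.

```python
import numpy as np

def matmul_tiled(A, B, tile=64):
    """O(n^3) matrix multiply with loop tiling.

    Same asymptotic complexity as the naive triple loop, but each tile of
    A and B is reused many times while it is hot in cache. In a compiled
    kernel this is a large constant-factor win.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i0 in range(0, n, tile):
        for j0 in range(0, m, tile):
            for k0 in range(0, k, tile):
                # Accumulate the contribution of one tile pair.
                C[i0:i0+tile, j0:j0+tile] += (
                    A[i0:i0+tile, k0:k0+tile] @ B[k0:k0+tile, j0:j0+tile]
                )
    return C

rng = np.random.default_rng(0)
A, B = rng.normal(size=(256, 256)), rng.normal(size=(256, 256))
print(np.allclose(matmul_tiled(A, B), A @ B))  # True
```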
I worked for a small, research-heavy AI startup for a bit, and it was heartbreaking how many people I interacted with in that general space had worked hard and passionately on research only to be beaten to the punch by a famous lab that could rush the paper out quicker and at a larger scale.
There were also more than a few instances of high-probability plagiarism. My team had a paper that had been out for years basically re-written without citation by a major lab. After some complaining they added a footnote. But it doesn't really matter, because no big lab is going to have to defend itself publicly against some small startup, and their job at the big labs is to churn out papers.
I always felt like my works were being evaluated as engineering products, not as research.
I was reviewing a work once and I genuinely couldn't tell if the researchers knew they had ripped me off. They compared to my method, citing it and showing figures using it, but then dropped the performance metrics from the table. So I asked. I got them in return and saw that there was no difference... So I dove in and worked out that they were just doing 99% my method with additional complexity (computational overhead). I was pretty upset. I was also upset because otherwise the paper was good. The results were nice and they even tested our work in a domain we hadn't. Had they just been upfront, I would have gladly accepted the work. Though I'm pretty confident the other reviewers wouldn't have, due to "lack of novelty."
It's a really weird system that we've constructed. We're our own worst enemies.
I'd modify this slightly. Their job is to get citations. Churning out papers really helps with that, but so does all the tweeting and evangelizing of their work. It's an unfortunate truth that as researchers we have to sell our work, and not just on the scientific merit it holds. People have to read it, after all. But we should also note that it is easier for some groups to get noticed than others. Prestige doesn't make a paper good, but it sure acts as a multiplying factor for all the metrics we use to determine whether it is good. Shareholders should be livid if they knew a single thing about what was going on.
https://en.wikipedia.org/wiki/Goodhart%27s_law
Naja naja has Least Concern conservation status, so there isn't much funding in doing a full count, but there are concerns as encroachment both reduces their livable habitat and puts them into more frequent contact with humans and livestock.
https://en.wikipedia.org/wiki/Perverse_incentive
Context: track athlete
Does it cease to be a good metric? No. After this you can likely come up with many examples of target metrics which never turn bad.
You're misunderstanding the root cause. Your example works because the metric is well aligned. I'm sure you can also think of many examples where the metric is not well aligned and maximizing it becomes harmful. How do you think we ended up with clickbait titles? Why was everyone so focused on clicks? Let's think about engagement metrics. Is that what we really want to measure? Do we have no preference between users being happy and users being angry or sad? Or are those things much harder to measure, if not impossible, and thus we focus on our proxies instead? So what happens when someone doesn't realize it is a proxy and becomes hyper-fixated on it? What happens if someone does realize it is a proxy but is rewarded via the metric, so they don't really care?
Your example works in the simple case, but a lot of things look trivial when you only approach them from a first order approximation. You left out all the hard stuff. It's kinda like...
Edit: Looks like some people are bringing up metric limits that I couldn't come up with. Thanks!
I never said that. Someone said the law collapses, someone asked for a link, and I gave an example to prove it does break down in at least some cases, and in many cases once you think more about it. I never said all cases.
If it works sometimes and not others, it's not a law. It's just an observation of something that can happen or not.
But there are many "laws" used in the same way. They're eponymous laws[0], not scientific ones. Read "adage". You'll also find that word used in the opening sentence of the Wiki article I linked, as well as in most (if not all) of the entries in [0].
[0] https://en.wikipedia.org/wiki/List_of_eponymous_laws
*There are no /objective/ metrics*, only proxies.
You can't measure a meter directly, you have to use a proxy like a tape measure. Similarly you can't measure time directly, you have to use a stop watch. In a normal conversation I wouldn't be nitpicking like this because those proxies are so well aligned with our intended measures and the lack of precision is generally inconsequential. But once you start measuring anything with precision you cannot ignore the fact that you're limited to proxies.
The difference of when we get more abstract in our goals is not too dissimilar. Our measuring tools are just really imprecise. So we have to take great care to understand the meaning of our metrics and their limits, just like we would if we were doing high precision measurements with something more "mundane" like distance.
I think this is something most people don't have to contend with because frankly, very few people do high precision work. And unfortunately we often use algorithms as black boxes. But the more complex a subject is the more important an expert is. It looks like they are just throwing data into a black box and reading the answer, but that's just a naive interpretation.
Sure, if you get a ruler from the store it might be off by a fraction of a percent in a way that usually doesn't matter and occasionally does, but even if you could measure distance exactly that doesn't get you out of it.
Because what Goodhart's law is really about is bureaucratic cleavage. People care about lots of diverging and overlapping things, but bureaucratic rules don't. As soon as you make something a target, you've created the incentive to make that number go up at the expense of all the other things you're not targeting but still care about.
You can take something which is clearly what you actually want. Suppose you're commissioning a spaceship to take you to Alpha Centauri and then it's important that it go fast because otherwise it'll take too long. We don't even need to get into exactly how fast it needs to go or how to measure a meter or anything like that, we can just say that going fast is a target. And it's a valid target; it actually needs to do that.
Which leaves you already in trouble. If your organization solicits bids for the spaceship and that's the only target, you better not accept one before you notice that you also need things like "has the ability to carry occupants" and "doesn't kill the occupants" and "doesn't cost 999 trillion dollars" or else those are all on the chopping block in the interest of going fast.
So you add those things as targets too and then people come up with new and fascinating ways to meet them by sacrificing other things you wanted but didn't require.
What's really happening here is that if you set targets and then require someone else to meet them, they will meet the targets in ways that you will not like. It's the principal-agent problem. The only real way out of it is for principals to be their own agents, which is exactly the thing a bureaucracy isn't.
I've just taken another step to understand the philosophy of those bureaucrats. Clearly they have some logic, right? So we have to understand why they think they can organize and regulate from the spreadsheet. Ultimately it comes down to a belief that the measurements (or numbers) are "good enough" and that they have a good understanding of how to interpret them. Which with many bureaucracies that is the belief that no interpretation is needed. But we also see that behavior with armchair experts who try to use data to evidence their conclusion rather than interpret data and conclude from that interpretation.
Goodhart had focused on the incentive structure of the rule, but that does not tell us how this all happens and why the rule is so persistent. I think you're absolutely right that there is a problem with agents, and it's no surprise that when many introduce the concept of "reward hacking" that they reference Goodhart's Law. Yes, humans can typically see beyond the metric and infer the intended outcome, but ignore this because they don't care and so fixate on the measurement because that gives them the reward. Bureaucracies no doubt amplify this behavior as they are well known to be soul crushing.
But we should also be asking ourselves whether the same effect can apply in settings where we have the best of intentions and all the agents are acting in good faith, trying to interpret the measure instead of just gaming it. The answer is yes. Idk, call it Godelski's Corollary if you want (I wouldn't), but it relates to Goodhart's Law at a fundamental level. You can still have metric hacking even when agents aren't aware of it or intending it. Bureaucracy is not required.
Yes if you run anything other than the 100m
> Context: track athlete
> Does it cease to be a good metric? No.
What do you mean? People start doping or showing up with creatively designed shoes and you need to layer on a complicated system to decide if that's cheating, but some of the methods are harder to detect and then some people cheat anyway, or you ban steroids or stimulants but allow them if they're by prescription to treat an unrelated medical condition and then people start getting prescriptions under false pretexts in order to get better times. Or worse, someone notices that the competition can't set a good time with a broken leg.
You are welcome to prove me wrong though. You might even restore some faith in humanity, too!
There seem to be two types:
- Specification failure: the signal is bad-ish, a completely broken behavior --> local optima are achieved for policies that phenomenologically do not represent what was expected/desired --> a sign that the reward signal definition can be improved
- Domain constraint failure: the signal is still good and the optimization is "legitimate", but you are prompted with the question "do I need to constrain my domain of solutions?"
This is of course inevitable if the goal cannot be directly measured but is composed of many constantly moving variables such as education or public health.
This doesn't mean we shouldn't bother having such goals, it just means we have to be diligent at pivoting the incentives when it becomes evident that secondary effects are being produced at the expense of the desired effect.
I agree with you, this doesn't mean we shouldn't bother with goals. They are fantastic tools. But they are guides. The better aligned our proxy measurement is with the intended measurement then the less we have to interpret our results. We have to think less, spending less energy. But even poorly defined goals can be helpful, as they get refined as we progress in them. We've all done this since we were kids and we do this to this day. All long term goals are updated as we progress in them. It's not like we just state a goal and then hop on the railroad to success.
It's like writing tests for code. Tests don't prove that your code is bug free (can't write a test for a bug you don't know about: unknown unknown). But tests are still helpful because they help evidence the code is bug free and constrain the domain in which bugs can live. It's also why TDD is naive, because tests aren't proof and you have to continue to think beyond the tests.
[0] https://news.ycombinator.com/item?id=45555551
The other day I was spending some time with a researcher from DeepMind and I was surprised to find that while they were sharp and curious to an extent, nearly every ounce of energy they expended on research was strategic. They didn't write about research they were fascinated by; they wrote and researched on topics they strategically felt had the highest probability of getting into a major conference in a short period of time to earn them a promotion. While I was a bit disappointed, I certainly didn't judge them, because they are just playing the game. This person probably earns more than many rooms of smart, passionate people I've been in, and that money isn't for smarts alone; it's for appealing to the interests of people with the money.
You can see this very clearly by comparing the work being done in the LLM space to that being done in the Image/Video diffusion model space. There's much more money in LLMs right now, and the field is flooded with papers on any random topic. If you dive in, most of them are not reproducible or make very questionable conclusions based on the data they present, but that's not of very much concern so long as the paper can be added to a CV.
In the stable diffusion world it's mostly people driven by personal interest (usually very non-commercial personal interests), and you see tons of innovation in that field but almost no papers. In fact, if you really want to understand a lot of the most novel work coming out of the image generation world, you often need to dig into PRs made by anonymous users with anime-themed profile pics.
The bummer of course is that there are very hard limits on what any researcher can do with a home GPU training setup. It does lead to creative solutions to problems, but I can't help but wonder what the world would look like if more of these people had even a fraction of the resources available exclusively to people playing the game.
Please do judge them for being parasitical. They might seem successful by certain measures, like the amount of money they make, but I for one simply dislike it when people only think about themselves.
As a society, we should be more cautious about narcissism and similar behaviors. Also, in the long run, this kind of behaviour makes them an annoying person at parties.
You dislike them because they don’t benefit you indirectly by benefiting society at large.
The incentive structure is wrong, incentivizing things that benefit society would be the solution not judging those that exist in the current system by pretending altruism is somehow not part of the same game.
As for whether that expectation is "selfish" on my part, I think that question has been debated for centuries in ethics, and I'm quite comfortable landing on the side that says not all disapproval is self-interest. In my own case, I'm not benefiting much either :)
To me this is an insane position to take or to expect from anyone; it's some just-world-fallacy thing perpetuated by too much Hollywood.
I am going to flip the script for a minute. I am a killer, driver, pilot, mechanic, one of the best ones out there; I beat the game, I won. So let me just stop and change the world, for what?
We have a long history in science of seeing that sticking your neck out, taking risks, and being different are successful tools to progressing science[0]. Why? Because you can't make paradigm shifts by maintaining the current paradigm. We've also seen that this behavior is frequently combated by established players. Why? Because of the same attitude, ego.
So we've created this weird system where we tell people to think different and then punish them for doing so. Yeah, people are upset about it. I find that unsurprising. So yeah, fuck you, stop pulling the ladder up behind you. You're talking as if they just leave the ladder alone, but these are the same people who end up reviewing papers, grants, and are thus the gatekeepers of progress. Their success gives them control of the ladders and they make the rules.
[0] Galileo, Darwin, Gauss, Kepler, Einstein, and Turing are not the only members of this large club. Even more recently we have Karikó who ended up getting the 2023 Nobel prize in Medicine and Akerlof, Spence, Stiglitz who got the 2001 Nobel prize in economics for their rejected work. This seems to even be more common among Nobel laureates!
You can call this difference whatever you want, don't pretend that they are morally or effectively equivalent.
The key word there is only. Nothing in the post suggested only. You have one vignette about one facet of this guy's life.
I really dislike the resurgence in Puritanism.
Please read my sibling comment where I expand a bit on what I meant to say.
You consider the person who expects eventual ethical behavior from people that have 'won' capitalism (never have to labour again) to be privileged.
The problem is once people's livelihoods depend on their research output rather than the research process, the whole research process becomes steadily distorted to optimise for being able to reliably produce outputs.
Anyone who has invested a great deal of time and effort into solving a hard problem knows that the 'eureka' moment is not really something that you can force. So people end up spending less time working on problems that would contribute to 'breakthroughs' and more time working on problems that will publish.
https://jerrypournelle.com/reports/jerryp/iron.html
I genuinely think science would be better served if scientists got paid modest salaries to pursue their own research interests and all results became public domain. So many universities now fancy themselves startup factories, and startups are great for some things, no doubt, but I don't think pure research is always served by this strategy.
We made a mistake by making academia a business. The point was that certain research creates the foundation for others to stand on. It is difficult to profit off those innovations directly, but by making them public, society at large profits by several orders of magnitude more than you ever could have on your own. Newton and Leibniz didn't become billionaires by inventing calculus, yet we wouldn't have the trillion-dollar businesses and half the technology we have today if they hadn't. You could say the same about Tim Berners-Lee's innovation.
The idea that we have to justify our research and sell it as profitable is insane. It is as if we are unaware of our own past. Yeah, there are lots of failures in research; it's hard to push the bounds of human knowledge (surprise?). But there are hundreds, if not millions, of examples where an innovation results in so much value that the entire global revenue is not enough, because the entire global economy stands on that very foundation. I'm not saying scientists need to be billionaires, but it's fucking ridiculous that we have to fight so hard to justify buying a fucking laptop. It is beyond absurd.
[0] https://news.ycombinator.com/item?id=45422828
[1] https://news.ycombinator.com/item?id=43959309
I persist because I'm fantastic at politics while being good enough to do my job. Feels weird man.
I can't think of it ever really paying off. Bell Labs is the best example: amazing research that was unrelated to the core business of the parent company. Microsoft Research is another great one: lots of interesting research that... got MS some nerd points? But it has materialized into very, very few actual products and revenue streams. Moving AI research forward doesn't help Meta build any moats or revenue streams. It just progresses our collective knowledge.
On the "human progress" scale it's fantastic to put lots of smart people in a room and let them do their thing. But from a business perspective it seems to almost never pay off. Waiting on the irrational charity of business executives is probably not the best way to structure things.
I'd tell them to go become academics.. but all the academics I know are just busy herding their students and attending meetings
Also it is what big tech was doing until LLMs hit the scene
So I'm not sure what you mean by it never paying off. We were doing it right up until one of those things seemed to pay off, and then we hyper-focused on it. I actually think this is a terrible thing we frequently do in tech: we find promise in a piece of tech and hyper-focus on it. Specifically, we hyper-focus on how to monetize it, which ends up stunting the technology because it hasn't had time to mature; we're trying to monetize the alpha product instead of trying to get that thing to beta.
So this is actually what I'm trying to argue: it actually does pay off. It has paid off. Seriously, look again at Silicon Valley and how we got to where we are today. And look at how things changed in the last decade... Why is it that we like off-the-wall thinkers? Programmers used to be known as a bunch of nerds and weirdos. How many companies were started out of garages (Apple)? How many started as open source projects (Android)? Why did Google start giving work-lifestyle perks and 20% time?
So I don't know what you're talking about. It has frequently paid off. Does it always pay off? Of course not! It frequently fails! But that is pretty true for everything. Maybe the company stocks are doing great[0], but let's be honest, the products are not. Look at the last 20 years and compare them to the 20 years before that: the last 20 years have been much slower. Now maybe it is a coincidence, but the biggest innovation of the last 20 years has been in AI, and from 2012 to 2021 there were a lot of nice free-rein AI research jobs at these big tech companies where researchers got paid well, had a lot of autonomy in their research, and had a lot of resources at their disposal. It really might be a coincidence, but a number of times in history things like this have happened and they tend to be fairly productive. So idk, you be the judge. It's hard to conclude that this is definitely what creates success, but I find it hard to rule out.
Same problem, different step of the ladder[0]
[0] https://news.ycombinator.com/item?id=45555175
This is very true, and more than just in AI.
I think if they weren’t so metric focused they probably wouldn’t have hit so much bad publicity and scandal too.
Well for starters you need a leader who can rally the troops who "think(s) different" - something like a S Jobs.
That person doesn't seem to exist in the industry right now.
Quite the statement for anybody who follows developments (without excluding xAI).
In general we need to make it simpler for LLMs to take in different forms of embeddings. At least frameworks that simplify it.
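For what it's worth, Hugging Face transformers already accepts raw embeddings in place of token IDs via inputs_embeds. A minimal sketch (GPT-2 picked arbitrarily; the random "chunk embeddings" here are just stand-ins for whatever a retriever/encoder would actually produce):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Ordinary token embeddings for the question text.
ids = tok("Question: what does the retrieved context say?", return_tensors="pt").input_ids
tok_embeds = model.get_input_embeddings()(ids)              # (1, seq_len, hidden)

# Stand-in for precomputed, LLM-aligned chunk embeddings (one vector per chunk).
chunk_embeds = torch.randn(1, 4, model.config.n_embd)

# Mixed input: compressed chunks first, then the real tokens.
mixed = torch.cat([chunk_embeds, tok_embeds], dim=1)
out = model(inputs_embeds=mixed)
print(out.logits.shape)  # (1, 4 + seq_len, vocab_size)
```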
It means you're reading into it too much and need to be let down, gently, from the hype train.
TL;DR
• MSI’s first paper, REFRAG, is about a new way to do RAG.
• This slightly modified LLM converts most retrieved document chunks into compact, LLM-aligned chunk embeddings that the LLM can consume directly.
• A lightweight policy (trained with RL) decides which chunk embeddings should be expanded back into full tokens under a budget; the LLM runs normally on this mixed input.
• The net effect is far less KV cache and attention cost, much faster first-byte latency and higher throughput, while preserving perplexity and task accuracy in benchmarks.
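To make the TL;DR concrete, here is a rough, hypothetical sketch of the data flow it describes. The shapes, the mean-pooling encoder, and the plain top-k selection are all stand-ins of mine; per the summary, the actual paper trains the selection policy with RL and learns chunk embeddings aligned to the LLM.

```python
import torch
import torch.nn as nn

d_llm, n_chunks, chunk_len, budget = 768, 8, 64, 2

# Stand-ins: token embeddings for each retrieved chunk (from the LLM's embedding
# table) and a lightweight encoder that compresses a chunk to a single vector.
chunk_token_embeds = torch.randn(n_chunks, chunk_len, d_llm)
chunk_encoder = nn.Sequential(nn.Linear(d_llm, d_llm), nn.ReLU(), nn.Linear(d_llm, d_llm))

# 1. Compress: one LLM-aligned embedding per chunk (mean-pool then project).
chunk_embeds = chunk_encoder(chunk_token_embeds.mean(dim=1))        # (n_chunks, d_llm)

# 2. Select: a tiny policy scores chunks; the top `budget` chunks get expanded
#    back to full tokens (the paper trains this policy with RL, not top-k).
policy_head = nn.Linear(d_llm, 1)
scores = policy_head(chunk_embeds).squeeze(-1)                      # (n_chunks,)
expand_idx = scores.topk(budget).indices

# 3. Assemble the mixed input the LLM actually sees: expanded chunks contribute
#    chunk_len positions each, compressed chunks contribute a single position.
pieces = []
for i in range(n_chunks):
    if i in expand_idx:
        pieces.append(chunk_token_embeds[i])            # (chunk_len, d_llm)
    else:
        pieces.append(chunk_embeds[i].unsqueeze(0))     # (1, d_llm)
mixed_context = torch.cat(pieces, dim=0)

# Sequence shrinks from n_chunks*chunk_len to budget*chunk_len + (n_chunks - budget),
# which is where the KV-cache and attention savings come from.
print(n_chunks * chunk_len, "->", mixed_context.shape[0])
```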
I wish more long posts followed this model of a scientific paper.
Doesn't this tie the two layers together in a way that they can't evolve separately?
111 more comments available on Hacker News