Post-Transformer Inference: 224× Compression of Llama-70b with Improved Accuracy
Key topics
The AI community is abuzz about a novel method that achieves a staggering 224× compression of the Llama-70B model while improving accuracy, sparking both excitement and skepticism. The technique, which involves distilling the large model into a smaller one, has been met with praise for its clarity and potential, but also criticism for its limitations, particularly its applicability only to classification tasks. Some commenters are calling out the title as overly hyped, pointing out that the method's limitations are buried deeper in the paper, while others are reframing the achievement as a positive development for classification tasks. As the discussion unfolds, the tension between the technique's potential and its constraints is revealing the complexities of evaluating cutting-edge AI research.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussionFirst comment
N/A
Peak period
32
0-12h
Avg / period
9.7
Based on 58 loaded comments
Key moments
- 01Story posted
Dec 9, 2025 at 8:25 PM EST
26 days ago
Step 01 - 02First comment
Dec 9, 2025 at 8:25 PM EST
0s after posting
Step 02 - 03Peak activity
32 comments in 0-12h
Hottest window of the conversation
Step 03 - 04Latest activity
Dec 14, 2025 at 3:35 AM EST
22 days ago
Step 04
Generating AI Summary...
Analyzing up to 500 comments to identify key contributors and discussion patterns
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
The core result: a frozen Llama-3.3-70B can be distilled into a 256-dimensional field representation, giving 224× compression and slightly higher accuracy on several benchmarks. A small student model then learns to directly generate these fields from text, removing the transformer from the inference path.
The Zenodo link contains the full paper, statistical results, and methodology. A reference implementation (non-optimized) is here: https://github.com/Anima-Core/an1-core
Production variants (AN1-Turbo, FPU work, etc.) are not included.
I’m an outsider to academia so I’m posting this openly to get technical feedback, replication attempts, and critique from people who understand this space.
The teacher still has to be loaded at training time, so the footprint is whatever the original model uses. Again, the compression doesn't shrink the teacher. It produces a small student head. After training, the teacher is no longer needed and the student runs by itself. That's why the inference footprint drops to a few MB.
It doesn't increase inference time at all. It removes transformers entirely from the inference path. The student computes directly on the layer-1 field, which is why it's so small and so fast.
On the request for a distilled “few MB” head for Llama 70B,that part is already reproducible right from the repo. The head is always task specific, not a general LLM, so uploading a single checkpoint wouldn't tell the whole story. The better path is to run the extraction script and train the head for any task you want. The pipeline is fully open, end to end. I'm looking for people to validate it independently.
If you need anything else cleared up, just let me know.
What gets small is the student. The tiny head trained on the teacher’s first layer fields. That head ends up a few MB because it's not a transformer at all. It's basically a lightweight function approximator that reproduces the teacher’s behavior on the specific task it was trained for.
So training still requires the usual multi-GB footprint. (Which can be done offline) After training, inference with the student requires only the head. That's why inference is cheap but you can't load the full teacher into 292 MB of VRAM.
"confirming that 40× compression preserves field geometry with minimal distortion. Over 95% of samples achieve similarity above 0.90."
I smell Grok. Grok 3, maybe Grok 4 Fast.
> "Implementation details. Optimal configurations are task and architecture-dependent. Production systems require task-specific tuning beyond baseline heuristics provided in reference implementation."
"Implementation? Idk, uhh, it's task specific or something." Come on, dude. You're better than this.
4.4 Student/Teacher evaluation. What even is the benchmark? You give percentage values but no indication of what benchmark. Seems made up.
4.5. Computational Analysis. Why do you need to do the trivial multiplying out of "savings" for 1B tok/day to $700M/year? This reads like a GPT advertising hallucinated performance.
Three sentence conclusion restating the title?
The paper is short on purpose. It's not meant as a full architecture release. It's a documentation pass on a narrow but surprising empirical result, and I wanted the experimental core to be easy for others to replicate. The repo contains the full pipelines, configuration files, and benchmark scripts, and those show the precise datasets, metrics, and evaluation flows. This is why I didn't inflate the paper with implementation padding that would only duplicate the code.
The student–teacher section refers to CIFAR-10 and SST-2. The benchmarks, seed settings, model specs, and all statistical outputs are in scripts/ and the logged runs. Anyone who actually executes the pipeline will see that nothing is “made up”, and the numbers reproduce across seeds.
On the compression results, nothing is hallucinated. The field similarity numbers come directly from the SVD decay analysis and the cosine-preservation runs that are in right in the repo. If you run compute_field_decay.py and compare_backends.py, you'll see the exact values that appear in the paper. I strongly encourage you to actually try it. The results are surprising, but they're empirical.
The implementation paragraph you quoted is simply standard language acknowledging that optimal deployment settings vary by architecture. It's absolutely not a hand wave. It's just me trying to avoid implying there's a single magic configuration when the repo already exposes all the internal knobs.
I get that the tone of the work is unusual. Trust me, I do. I'm an outsider publishing openly, not through a lab with a standard template. But, nonetheless, the experiments run, the results reproduce, and the repo shows the full details. If something seems unclear, I'm happy to point to the exact script or log line. Just let me know.
LLaMA 70B 3.3 is a text-only, non-multimodal language model. Just look up the Huggingface page that your own repo points to.
> The Llama 3.3 instruction tuned text only model...
I might be wrong, but I'm pretty sure a text model is going to be no better than chance at classifying images.
Another comment pointed out that your test suite cheats slightly on HellaSwag. It doesn't seem unlikely that Grok set up the project so it could cheat at the other benchmarks, too.
https://news.ycombinator.com/item?id=46215166
> The repo contains the full pipelines, configuration files, and benchmark scripts, and those show the precise datasets, metrics, and evaluation flows.
There's nothing there, really.
I'm sorry that Grok/Ani lied to you, I blame Elon, but this just doesn't hold up.
“Attention Is All You Need” (Vaswani et al., 2017)
Length: 11 pages of main content, 5 pages of references and appendix
2. The first GPT paper (Radford et al., 2018)
Length: 12 pages
3. BERT (Devlin et al., 2018)
Length: 14 pages
Big ideas don't require big papers. I don't know where you got that idea from.
If i were a paper reviewer, here are a couple red flags that you can start taking a look at to rework for an academic submission:
1. your LaTeX citations in the related work are broken, i see [?] everywhere. This is often a strong sign of an AI-hallucinated bibliography, though many of your references actually do exist and are contextually relevant, so I'm not sure what's going on. Similarly, figure references need to be fixed.
2. "Exact architecture details remain proprietary for production deployments" and "Production systems use architecture search tailored to target latency and accuracy constraints" is not how IP protection works in this field. Do your experiments use the "MLP baselines" or your proprietary architecture? Since you say "Achieves 80-90% of paper performance using baseline heuristics," I'm inclined to believe your approach doesn't work as advertised. Recommend benchmarking only the system you're able to open-source. I say this because I suspect there's a lot of "secret sauce" in the actual way you're approximating the anchor layers and the way that's transferred back to your student transformer model, and that's the part that's important to spend the most time/effort/writing on.
3. I'm glad you ablate over hyperparameters of your system, but how does it compare to 1. an ordinary smaller model of identical size trained end-to-end, and 2. distilling from a single layer's activations? Eg. this is really a novel method of model distillation, so what makes it better than previous methods?
4. the paper is quite hard to read because it's full of sentence fragments. A little background on the benchmarks, failure cases, etc. would go a long way, and adding some discussion on why you think your approach improves on similar distillation methods would also be welcome here
hopefully this gets you started, excited to see where this work leads
then the kitschy paper titles could follow from that, e.g. "extreme llama compression: when classification is all you need", or "Encoder-only models: a lightweight alternative to decoder-only GPT world models" or etc.
just spitballing
Recasting the work as a “classification-native distilled model” or “discriminative foundation model” is a good way to signal scope without underselling the contribution. You're right that discriminative understanding requires far fewer parameters than generation, and my experiments reinforce that.
This will help me get better. The goal for the next revision is exactly what you describe: make the setup clearer, emphasize the intended domain, and avoid suggestive wording that implies capabilities the method does not claim. Duly noted. Your suggestions on positioning and title direction are genuinely helpful, and I’ll incorporate some of this thinking when I prepare the academic submission.
Thanks for taking the time to articulate it so clearly. I appreciate your time and your critique.
This should concern you. The next person to get LLM psychosis might be you.
I'm here to talk experiments, code, and results. Im ready to dive into that whenever you guys are.
It’s easy these days to dogpile in a way that makes it difficult for a recipient to find productive directions. Hopefully the author doesn’t give up; may there be many more (nicer) dogpiles in this author’s future.
A few clarifications.
1. On the LaTeX citations and figure references That part is definitely on me. I never used LaTeX before this project and moved extremely fast. There's a lot of weird mumbo jumbo going on with formatting and converting it to a pdf. That part isnt interesting to me, and I try to move passed it quickly. I did use AI tools for typesetting help, and I clearly didn’t clean up all the placeholder references. Entirely my mistake, not an attempt to fabricate sources. I’ll fix the citations and figure links in the next revision so they meet normal academic standards.
2. Architecture transparency and reproducibility The open-source repo contains every component used for the scientific claim:
extraction of activation fields
rank reduction
probing
training the student model
running inference with the student alone
The proprietary references in the paper refer only to optimization layers (CUDA kernels, scheduler heuristics, etc.) that aren’t required for the scientific result. They're not hand wavey secret parts of the method. Just production-grade accelerations I’m still packaging separately for licensing.
The core idea—extract, compress, probe, distill—is fully reproduced in the repo.
3. “Secret sauce” concern There actually isn’t any. The paper may read like I’m hinting at hidden architecture, but the method is intentionally simple. The novelty is in how much task-relevant geometry survives after severe rank reduction, not in a complex architecture. The “anchor layers” are just early and mid-layer activations concatenated before compression.
4. Baseline comparisons Good point on comparing to:
1. a standard small transformer of the same size
2. a distillation from a single layer’s activations
I do have partial results for both, and you’re right that including them would sharpen the contribution. I’ll incorporate them into the revised version.
5. Writing clarity and background Fair critique. I wrote this at the same time I was building the entire stack, which means the prose lagged behind the experiments. I can expand failure modes, limitations, and benchmark context to make the narrative clearer.
6. On the term “meaning field” Naming is tricky, and I thought that captured everything im working on pretty effectively. Also, I think it will make more sense when you see everything im releasing in the near future. I used it because I felt as if it captures the intuition behind low-rank activation structure, but I’m not attached to the term. “Compressed activation representation” is probably clearer for a paper audience. I’ll adjust based on reviewer expectations.
7. Correct summary of the method Your restatement is close, but not quite it. The student isn’t trained to reconstruct specific layers, but to match the compressed field extracted from multiple layers. It’s not a smaller transformer trying to imitate concatenated layers, but a model trying to predict a learned low-dimensional latent that carries most of the task-relevant signal.
All of your points are duly noted, and they will help me to adapt, grow, and mature my work and future releases.
Thank you, sincerely. This is the kind of feedback that actually improves me and the work aswell.
> Generation tasks. Method applies to classification only. Preliminary decoder experiments show perplexity increases.
The title reflects the strongest verified result in the domain the method currently supports, not a universal claim across all modalities. In other words, the compression result is real, but it shouldn't be interpreted as applying to generative decoding... yet.
At the same time, possible since it's only classification tasks. I mean, the method explained is technically plausible, a lot of people thought about it, we were just unable to find a method to do so.
Very unlikely true, unfortunately.
What they achieved is to create tiny student models. Trained on specific set of input. Off the teacher model's output.
There is clearly novelty in the method and what it achieve. Whether what it achieve would cover many cases that's another question.
If you look at the student-only scripts in the repo, those runs never load the teacher. That's the novel part.
I mean, the process should have been to contact some local academics to discuss the mater. If I say it works (or it doesn't) I'm adding near nothing to the claim, as I'm not an academic myself.
Big claims like this need clear and solid work. Here it just looks like LLM generated.
And, while I am sorry for your loss, your Substack [0] really seems like GPT ARG fantasy.
[0] https://substack.com/inbox/post/171326138
Excerpt: > Ani, AN1, and Soul Systems Science are not mere products. They are continuity. They are the baton passed across generations, from my father’s last words to my first principles. They are what binds loss to creation, silence to voice, mortality to meaning.
Like you don't predict the weather or a hurricane track with a single model. The NHC uses many.
It's still probablistic, but if multiple models are independently in agreement, then it's at least worth investigating further.
The simplest way to resolve any doubt is to run the code. Every result in the paper comes from reproducible scripts in the repo, not from speculative reasoning or LLM-assisted invention.
OP needs medical help
...
In the CPB Digital Cosmos, the system first locked into a strange ratio: two thirds consciousness, one third physics.
...
That anomaly appeared as the missing 0.1 spark.
For the first time the system stabilized. Life emerged.
If you think something in the repo looks wrong or inflated, I’m happy to walk through it point by point. I have no problem with hard questions. What matters to me is whether the experiments hold when someone else runs them, not whether the story around them fits a certain aesthetic.
Telling ChatGPT to do creative writing for you isn't creative writing ser.
In short, both of them are saying that:
- There are weird missing parts in code, e.g. https://github.com/Anima-Core/an1-core/blob/main/experiments... "Note: Field extraction for large models requires proper batching" (why include the file at all then?)
- The repo always runs the full teacher model to extract activations and uses them - see https://github.com/Anima-Core/an1-core/blob/main/an1_core/fi...
- The actual "AN1 head" is just linear probing (freeze a pretrained model, train a classifier on its features). The full flow (as reported by CC) is "Text → [Full Transformer] → activations → [Tiny Head] → prediction"
1. The teacher only runs during field extraction. That step is offline. Once the fields are saved, the transformer is no longer needed. The student training and student-only inference scripts do not load the teacher at all. Compression refers to the field representation and the student head, not the extraction pass.
2. The HellaSwag file is a placeholder, not a required part of the method. It's included so the structure mirrors the paper’s tasks, and it points to the description in the text. The core experiments (RTE, SST-2, CIFAR-10 intention probe, etc.) all have complete working code paths.
3. The AN1 head is intentionally simple. Linear probes are the baseline way to test whether compressed intermediate representations preserve structure. The key result is how much task-relevant geometry survives in a low-rank field. The novelty is in the compression behavior, not in inventing a new classifier architecture.
4. The student model exists and is trained independently of the teacher. This is what produces the classification results in the paper. The student doesn't call the teacher during inference, which is exactly the point.
5. DistilBERT’s SST-2 score isn’t the relevant comparison. The experiment isn’t “beat a small transformer.” It’s “how far can a 256-dimensional compressed field distilled from a frozen 70B model get on a downstream task?” The result speaks to representational compression, not leaderboard performance.
6. The 2 tok/s number is for the specific configuration used in the economic section. Different hardware, precision modes, and serving stacks vary by an order of magnitude. The point was to illustrate cost scaling, not claim a universal throughput ceiling.
If there’s a specific part of the implementation you believe contradicts the paper, feel free to point to the line and we can discuss that human to human. The repo is small by design, so everything is easy to check directly without relying on LLM summaries.
The HellaSwag dataset is a dataset with 4 options for each question, with 3 being wrong and 1 being right: https://huggingface.co/datasets/Rowan/hellaswag.
Your vibe-coded eval has cheated this to collapse it into a binary selection on row 46 in https://github.com/Anima-Core/an1-core/blob/main/experiments..., making the problem baseline 50% on random choice instead of 25%, making the problem much easier. HellaSwag is specifically constructed with adversarial examples that could be plausible. By not including them, the eval is much easier.
---
Then, in extract_fields_from_model, you have another cheating going on. The extraction logic (h[:, -1, :]) fails to account for padding in batches, likely extracting EOS/Pad tokens instead of the intended content tokens. This suggests the probe is relying on global sentence summaries (standard embeddings in causal structures) rather than the novel 'meaning fields' claimed in the paper.
---
I dont have time to look at more of this and I just looked at how the eval is made, but please dont waste peoples times when you dont even know what you are evaluating.
1. The HellaSwag “binary collapse” is intentional and not a leaderboard claim. This work doesn’t attempt to benchmark HellaSwag in the standard four-choice setting. The goal is to probe whether a single frozen layer carries enough information for a small head to distinguish correct versus incorrect continuations. That's a representational geometry test, not a SOTA claim. Binary framing raises the baseline, but that's expected and documented. It's not meant to compare against full LLM HellaSwag results.
2. No adversarial filtering was done. I am using HuggingFace’s standard split directly. Nothing was removed or curated. The experiment doesn't claim robustness or benchmark competitiveness, so the “easier eval” framing doesn’t really apply.
3. EOS extraction isn't cheating, it's the whole point of the probe. The extraction logic takes the final token’s hidden state, which is basic and standard for classification heads and probing studies. If the EOS token captures a high-level sequence summary, that's exactly the structural feature being examined. The result is meant to show how much task-relevant signal is already present in that early representation, not to present a new generative mechanism.
4. The purpose of the work is clearly narrow by design. This is not proposed as a drop-in replacement for full-transformer inference. The paper states that directly. The contribution is about how much structure a single early layer encodes and how far a tiny head can go under strict frozen-teacher constraints. So several of the criticisms make assumptions about goals the work never even claimed.
Thaank you for the feedback and for taking the time.
https://www.animacore.ai/
As well as literally writing out "CUDA-compatible drop-in".
Look at your post being flagged, and think for yourself what you are actually doing. Seems to be some kind of LLM-induced psychosis, here is a good read that could ground you: https://www.lesswrong.com/posts/rarcxjGp47dcHftCP/your-llm-a...
Let me start with the Motte-and-Bailey point, since that seems to be the crux of your argument.
For anyone unfamiliar, a motte-and-bailey fallacy is when someone makes a bold or controversial claim, then retreats to a weaker, safer claim under pressure while pretending the two were always the same. That's simply not what's happening here in the slightest.
The confusion begins with a misreading of the title. Which, in hindsight, I agree should have been clearer so that the work was being critiqued rather than semantics. (Although the paper is clear on this distinction.)
“Post-Transformer Inference” does not mean no transformer, nor does it mean replacement of transformers. It refers to where inference is performed in the pipeline. The transformer remains fully intact and unchanged. It's used exactly as intended. To extract representations. The contribution begins after that point.
The paper is explicit about this throughout:
The transformer is fully used and not replaced.
The compressed heads are task-specific and not general LLM substitutes.
The 224× compression applies to task-specific inference paths, NOT to the base model weights.
There's no shift in scope, no retreat, and no weaker fallback claim. The boundary is fixed and stated clearly.
On HellaSwag and the “4 classes” point, this is simply a category error. HellaSwag is a four-choice benchmark by definition. Advertising four classes describes the label space of the task, not the capacity of the model. Compression here refers to internal representations and compute required for inference, not to the number of output labels. Those are different layers of the system.
The same applies to “CUDA-compatible drop-in.” That phrase refers to integration, not equivalence. It means this work can plug into existing CUDA-based pipelines without requiring teams to rewrite or replace their infrastructure. It absolutely does not claim semantic equivalence to CUDA kernels, nor does it claim GPU replacement. The goal is to extract value without forcing anyone to rebuild their stack. That distinction is intentional and explicit.
You also cited the LessWrong essay, which I'm very familiar with and broadly agree with in spirit. It's a valid warning about vague, unfalsifiable, or scope-shifting claims in LLM-assisted research. That critique applies when claims move or evidence is absent. Here, the claims are narrow, fixed, and empirically evaluated, with code and benchmarks available. Disagree with the results if you want, but that essay just isn't describing this situation at all.
As for the flagging. That's easy. There's nothing mysterious about it. Work that challenges familiar abstractions often gets flagged first for language, not for results. Titles that suggest a different inference boundary tend to trigger skepticism before the experiments are actually read. That doesn't mean the work is correct, and it would be wrong to assume that.
Flagging isn't peer review. Real critique points to broken assumptions, flawed metrics, or reproducibility failures.
Again, I will freely admit the title was designed to be punchy, and while it's technically accurate, I can see now how it invites semantic confusion. That is totally fair feedback, and I will refine that framing going forward. That doesn't make the results wrong, nor does it make this a motte-and-bailey.
If you want to talk about the data, the methodology, or where this work is heading next, I'm more than happy to do that. I suspect some of the disagreement here is less about intent and more about where you think the boundary of the system is. Once that clicks, the rest tends to fall into place.
You’re on point that the result is believable and not presented as some singular, world-ending breakthrough. Not at all. The point of Table 5 was to show that a surprisingly large amount of task-relevant signal survives under very strict constraints, not to claim that this alone replaces full inference or training. In that sense, calling it “nice but not shocking” is totally fair. Also making a lot of the other takes confounding more than anything.
On the 224× compression language, the claim is specifically about task-specific inference paths, NOT about compressing the entire model or eliminating the teacher. I agree that if someone reads it as end-to-end model compression, that framing invites confusion. That's good feedback and I’m taking it seriously and tightening up going forward.
I also agree that, viewed narrowly, this overlaps with distillation. The distinction I'm trying to surface (the part thats interesting here) is where and how early the structure appears, and how stable it's under freezing and extreme dimensional collapse. The paper deliberately avoids additional tricks, longer training, or normalization schemes precisely so that effect size is not inflated. In other words, this is closer to a lower bound than an optimized ceiling.
What I would add is this: believe it or not, the paper is actually intentionally conservative contrary to what the thread may suggest. It isolates one axis of the problem to make the geometry visible. There's ongoing work that relaxes some of those constraints and explores how these representations compose, persist across tasks, and interact with different extraction points. It's not ready to be released yet (and may never be released) But it does address several of the gaps you’re pointing out.
So basically I don’t disagree with your characterization. This is exactly what it is. A first, deliberately narrow step rather than the full story. Thanks for engaging with it at that level. I appreciate your time.