VibeThinker-1.5B
Posted about 2 months ago · Active about 2 months ago
github.com · Tech · story
calm · mixed
Debate: 60/100
Key topics
Artificial Intelligence
Large Language Models
NLP
The VibeThinker-1.5B AI model is released on GitHub, sparking discussion about its performance, real-world applications, and technical details.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 7h after posting
Peak period: 6-9h (4 comments)
Avg / period: 2 comments
Comment distribution: 14 data points (chart omitted)
Based on 14 loaded comments
Key moments
- 01 Story posted: Nov 12, 2025 at 10:52 PM EST (about 2 months ago)
- 02 First comment: Nov 13, 2025 at 6:05 AM EST (7h after posting)
- 03 Peak activity: 4 comments in 6-9h (hottest window of the conversation)
- 04 Latest activity: Nov 14, 2025 at 12:15 PM EST (about 2 months ago)
ID: 45910410 · Type: story · Last synced: 11/20/2025, 12:26:32 PM
Is this hosted online somewhere so I can try it out?
On math questions, though, aside from a marked tendency toward rambling thinking, it's just plain implausibly good for a 1.5B model. This is probably just rote learning; otherwise this might well be a breakthrough.
Used their recommended temperature, top_k, top_p, and related sampling settings (see the sketch below).
Overall it still seems extremely good for its size and I wouldn't expect anything below 30B to behave like that. I mean, it flies with 100 tok/sec even on a 1650 :D
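For reference, applying a model card's recommended sampling settings through Hugging Face Transformers could look roughly like the sketch below. The repo name and the specific temperature/top_p/top_k values are placeholders for illustration, not VibeThinker's actual recommendations, so check the model card before copying them.

```python
# Hypothetical sketch: loading a small model and generating with explicit
# sampling settings via Hugging Face Transformers. The model ID and the
# sampling values are placeholders, not the card's actual numbers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "WeiboAI/VibeThinker-1.5B"  # assumed repo name; verify on the GitHub page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "What is the sum of the first 100 positive integers?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    do_sample=True,    # sampling must be enabled for the settings below to apply
    temperature=0.6,   # placeholder value
    top_p=0.95,        # placeholder value
    top_k=40,          # placeholder value
    max_new_tokens=512,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that with do_sample=False (greedy decoding) Transformers ignores temperature, top_p, and top_k, which is a common reason a model appears to "ignore" the recommended settings.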
from
```
excerpt of text or code from some application or site
```
What is the meaning of excerpt?
Just doesn't seem to work at a usable level. Coding questions get code that runs, but it almost always misses so many things that finding out what it missed and fixing it takes a lot more time than writing the code by hand.
>Overall it still seems extremely good for its size and I wouldn't expect anything below 30B to behave like that. I mean, it flies with 100 tok/sec even on a 1650 :D
For its size, absolutely. I've not seen 1.5B models that even form sentences correctly most of the time, so this is miles ahead of most small models, though not quite at the hinted-at levels the benchmarks would have you believe.
Overall, if you're memory-constrained it's probably still worth trying to fiddle around with it if you can get it to work. Speed-wise, if you have the memory, a 5090 can get ~50-100 tok/s for a single query with a 32B AWQ model, and way more if you run something parallel like open-webui.
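As a general point of reference for the parallel-throughput part, serving a quantized checkpoint with vLLM batches concurrent requests automatically. The sketch below is a generic example; the model name, quantization flag, and sampling values are assumptions for illustration, not settings taken from this thread.

```python
# Hypothetical sketch: serving an AWQ-quantized model with vLLM so that
# multiple prompts are batched in parallel. Model name and parameters are
# illustrative assumptions.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/Qwen2.5-32B-Instruct-AWQ",  # example AWQ checkpoint
    quantization="awq",                     # use the AWQ kernels
    gpu_memory_utilization=0.90,            # leave a little headroom on the GPU
)

params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=256)

# vLLM schedules these prompts together, which is where the extra throughput
# from parallel queries comes from.
prompts = [
    "Write a haiku about GPUs.",
    "Explain AWQ quantization in one sentence.",
]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```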
I gave it two tasks: "Create a new and original story in 500 words" and "Write a Python console game". Both resulted in an endless loop with the model repeating itself.
To be honest, given that a 1B Granite Nano model has only minor problems (word count) with such tasks, and given that VibeThinker is announced as a programming model, it's disappointing to see a 1.6B model fail multiple times.
And it fails at one of the simplest coding tasks, one where a Granite model at nearly half the size has no problems.
It's probably an important discovery but seemingly only usable in an academic context.
So during the final stage they try to ensure the model doesn't get the right answer every time, but only about 50% of the time, so as to avoid killing all variability, which is very sensible. They then compute a measure of this, take the negative exponential of that measure, and scale the advantage by it (see the sketch below).
So a question matters in proportion to the variability of the answers. Isn't this more curriculum learning stuff than actually suppressing things that don't vary enough?
Basically, focusing on questions that are still hard, instead of trying to push the probability of problems it can already solve up to 99.99%?
Also very reasonable, but this isn't how they describe it. Instead, from their description I would think they're sort of forcing entropy to be high somehow.
I think the way I'd have titled it would be something like "Dynamic curricula to preserve model entropy".
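A minimal sketch of the weighting described in the comment above, assuming the "measure" is simply the distance of a question's pass rate from the 50% target; the function name, the target, and the scale constant are all assumptions based on this paraphrase, not the paper's actual formula.

```python
import math

def variability_weighted_advantage(advantage: float, p_correct: float,
                                    target: float = 0.5, scale: float = 1.0) -> float:
    """Hypothetical sketch: questions whose pass rate sits near the 50% target
    (maximal answer variability) keep their full advantage, while questions the
    model already solves almost always (or almost never) are down-weighted via
    a negative exponential of the distance from that target."""
    distance = abs(p_correct - target)    # how far from the 50% sweet spot
    weight = math.exp(-distance / scale)  # negative exponential of the measure
    return weight * advantage

# A problem solved 50% of the time keeps its full advantage; one solved 99%
# of the time keeps only ~61% of it (with scale=1.0).
print(variability_weighted_advantage(1.0, 0.50))  # 1.0
print(variability_weighted_advantage(1.0, 0.99))  # ~0.61
```

Read this way, the weighting concentrates gradient signal on questions the model is still uncertain about, which is why the comment frames it as a form of dynamic curriculum.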