Qwen3 30B A3B Hits 13 Tokens/s on 4x Raspberry Pi 5
Posted 4 months ago · Active 4 months ago
github.com · Tech · Story · High profile
Excited, positive
Debate: 60/100
Key topics
Distributed Computing
LLMs
Raspberry Pi
Edge AI
The Qwen3 30B A3B model achieves 13 tokens/s on 4x Raspberry Pi 5 using distributed-llama, sparking discussion on its potential applications, scalability, and comparison to other hardware.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2h
Peak period: 60 comments (6-12h)
Avg / period: 12.3
Comment distribution: 135 data points
Based on 135 loaded comments
Key moments
- 01 Story posted: Sep 6, 2025 at 6:59 AM EDT (4 months ago)
- 02 First comment: Sep 6, 2025 at 9:07 AM EDT (2h after posting)
- 03 Peak activity: 60 comments in the 6-12h window, the hottest stretch of the conversation
- 04 Latest activity: Sep 9, 2025 at 11:57 AM EDT (4 months ago)
ID: 45148237 · Type: story · Last synced: 11/20/2025, 8:32:40 PM
If that problem gets solved, even if only for a batch approach that enables parallel batch inference resulting in high total tokens/s but low per-session throughput, and for bigger models, then it would be a serious game changer for large-scale, low-cost AI automation without billions in capex. My intuition says it should be possible, so perhaps someone has done it or started on it already.
If we can get this down to a single Raspberry Pi, then we have crazy embedded toys and tools. Locally, at the edge, with no internet connection.
Kids will be growing up with toys that talk to them and remember their stories.
We're living in the sci-fi future. This was unthinkable ten years ago.
We're at the precipice of having a real "A Young Lady's Illustrated Primer" from The Diamond Age.
What a radical departure from the social norms of childhood. Next you'll tell me that they've got an AI toy that can change their diaper and cook Chef Boyardee.
I think there's something beautiful and important about the fact that parents shape their kids, leaving with them some of the best (and worst) aspects of themselves. Likewise with their interactions with other people.
The tech is cool. But I think we should aim to be thoughtful about how we use it.
Graphics cards with a decent amount of memory are still massively overpriced (even used), big, noisy, and draw a lot of energy.
Apple really is #2 and probably could be #1 in AI consumer hardware.
I'd try the whole AI thing on my work Macbook but Apple's built-in AI stuff isn't available in my language, so perhaps that's also why I haven't heard anybody mention it.
The hard part is identifying those filter functions outside of the code domain.
> Think about it like a multiple-choice test. If you do not know the answer but take a wild guess, you might get lucky and be right. Leaving it blank guarantees a zero. In the same way, when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say "I don't know."
> As another example, suppose a language model is asked for someone's birthday but doesn't know. If it guesses "September 10," it has a 1-in-365 chance of being right. Saying "I don't know" guarantees zero points. Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty.
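The arithmetic in that quote is easy to make concrete; a quick sketch (the question count is arbitrary):

```python
# Back-of-the-envelope math for the quoted example: under accuracy-only grading,
# guessing strictly dominates abstaining, even at 1-in-365 odds.
p_guess = 1 / 365      # wild guess at an unknown birthday
p_abstain = 0.0        # "I don't know" always scores zero

n_questions = 10_000
print(f"guesser expects   ~{n_questions * p_guess:.1f} correct")    # ~27.4
print(f"abstainer expects  {n_questions * p_abstain:.1f} correct")  # 0.0
```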
The problem isn't getting your Killer AI App in front of eyeballs. The problem is showing something useful or necessary or wanted. AI has not yet offered the common person anything they want or need! The people have seen what you want to show them, they've been forced to try it, over and over. There is nobody who interacts with the internet who has not been forced to use AI tools.
And yet still nobody wants it. Do you think that they'll love AI more if we force them to use it more?
Nobody wants the one-millionth meeting transcription app and the one-millionth coding agent constantly, sure.
It's a developer creativity issue. I personally believe the lack of creativity is so egregious that if anyone were to release a killer app, the entirety of the lackluster dev community will copy it into eternity to the point where you'll think that that's all AI can do.
This is not a great way to start off the morning, but gosh darn it, I really hate that this profession attracted so many people that just want to make a buck.
---
You know what was the killer app for the Wii?
Wii Sports. It sold a lot of Wiis.
You have to be creative with this AI stuff, it’s a requirement.
You have to get into the highest 16-core M4 Max configurations to begin pulling away from that number.
The ROCm and Vulkan stacks are okay, but they're definitely not fully optimized yet.
Strix Halo's biggest weakness compared to Mac setups is memory bandwidth. M4 Max gets something like 500+ GB/s, and M3 Ultra gets something like 800 GB/s, if memory serves correctly.
I just ordered a 128 GB Strix Halo system, and while I'm thrilled about it, in fairness, for people who don't have an adamant insistence against proprietary kernels, refurbished Apple Silicon does offer a compelling alternative with superior performance options. AFAIK there's nothing like AppleCare for any of the Strix Halo systems either.
I have a Mac Mini M4 Pro 64GB that does quite well with inference on the Qwen3 models, but is hell on networking with my home K3s cluster, which going deeper on is half the fun of this stuff for me.
I was initially thinking this way too, but I realized a 128GB Strix Halo system would make an excellent addition to my homelab / LAN even once it's no longer the star of the stable for LLM inference - i.e. I will probably get a Medusa Halo system as well once they're available. My other devices are Zen 2 (3600x) / Zen 3 (5950x) / Zen 4 (8840u), an Alder Lake N100 NUC, a Twin Lake N150 NUC, along with a few Pi's and Rockchip SBC's, so a Zen 5 system makes a nice addition to the high end of my lineup anyway. Not to mention, everything else I have maxed out at 2.5GbE. I've been looking for an excuse to upgrade my switch from 2.5GbE to 5 or 10 GbE, and the Strix Halo system I ordered was the BeeLink GTR9 Pro with dual 10GbE. Regardless of whether it's doing LLM, other gen AI inference, some extremely light ML training / light fine tuning, media transcoding, or just being yet another UPS-protected server on my LAN, there's just so much capability offered for this price and TDP point compared to everything else I have.
Apple Silicon would've been a serious competitor for me on the price/performance front, but I'm right up there with RMS in terms of ideological hostility towards proprietary kernels. I'm not totally perfect (privacy and security are a journey, not a destination), but I am at the point where I refuse to use anything running an NT or Darwin kernel.
Love that AMD seems to be closing the gap on the performance _and_ power efficiency of Apple Silicon with the latest Ryzen advancements. Seems like one of these new mini PCs would be a dream setup to run a bunch of data and AI centric hobby projects on, particularly workloads like geospatial imagery processing in addition to the LLM stuff. It's a fun time to be a tinkerer!
NVIDIA is so greedy that doling out $500 will only get you 16 GB of VRAM at half the speed of an M1 Max. You can get a lot more speed with more expensive NVIDIA GPUs, but you won't get anything close to a decent amount of VRAM for less than $700-1500 (well, truly, you will not get close to 32 GB even).
Makes me wonder just how much secret effort is being put in by MAG7 to strip NVIDIA of this pricing power, because they are absolutely price gouging.
Seems like at the consumer hardware level you just have to pick your poison or what one factor you care about most. Macs with a Max or Ultra chip can have good memory bandwidth but low compute, but also ultra low power consumption. Discrete GPUs have great compute and bandwidth but low to middling VRAM, and high costs and power consumption. The unified memory PCs like the Ryzen AI Max and the Nvidia DGX deliver middling compute, higher VRAMs, and terrible memory bandwidth.
Also, I don't think power consumption is important for AI. Typically you do AI at home or in the office, where there is a lot of electricity.
Being able to quickly calculate a dumb or unreliable result because you're VRAM starved is not very useful for most scenarios. To run capable models you need VRAM, so high VRAM and lower compute is usually more useful than the inverse (a lot of both is even better, but you need a lot of money and power for that).
Even in this post with four RPis, the Qwen3 30B A3B is still an MoE model and not a dense model. It runs fast with only 3B active parameters and can be parallelized across computers, but it's much less capable than a dense 30B model running on a single GPU.
> Also I don't think power consumption is important for AI. Typically you do AI at home or in the office where there is a lot of electricity.
Depends on what scale you're discussing. If you want to get similar VRAM as a 512 GB Mac Studio Ultra with a bunch of Nvidia GPUs like RTX 3090 cards, you're not going to be able to run that on a typical American 15-amp circuit; you'll trip a breaker halfway there.
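Rough numbers behind that breaker claim; the wattage figures here are ballpark assumptions, not measurements:

```python
# Ballpark sketch: how many ~350 W GPUs fit on a single US 15 A / 120 V branch circuit.
circuit_w = 15 * 120             # ~1800 W total
continuous_w = circuit_w * 0.8   # common 80% continuous-load rule of thumb -> 1440 W
system_w = 300                   # CPU, drives, fans, PSU losses (assumed)
gpu_w = 350                      # stock RTX 3090 under sustained load (approx.)

max_gpus = int((continuous_w - system_w) // gpu_w)
print(max_gpus, "GPUs ->", max_gpus * 24, "GB of VRAM")  # 3 GPUs -> 72 GB, far from 512 GB
```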
If you're going with a Mac Studio Max, you're going to be paying twice the price for twice the memory bandwidth, but the kicker is you'll be getting the same amount of compute as the AMD AI chips have, which is comparable to a low-to-mid-range GPU. Even midrange GPUs like the RX 6800 or RTX 3060 are going to have 2x the compute. When the M1 chips first came out, people were getting seriously bad prompt-processing performance, to the point that it was a legitimate consideration to make before purchase, and this was back when local models could barely manage 16k of context. If money wasn't a consideration and you decided to get the best possible Mac Studio Ultra, 800 GB/s won't feel like a significant upgrade when it still takes a minute to process every 80k of uncached context that you'll absolutely be using on 1M-context models.
Depends on what you're doing, but at FP4 that goes pretty far.
I would recommend sticking to macOS if compatibility and performance are the goal.
Asahi is an amazing accomplishment, but running native optimized macOS software including MLX acceleration is the way to go unless you’re dead-set on using Linux and willing to deal with the tradeoffs.
On a 5090, the same model produces ~170 tokens/s.
More devices mean faster performance, leveraging tensor parallelism and high-speed synchronization over Ethernet.
The maximum number of nodes is equal to the number of KV heads in the model (#70).
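A minimal sketch (not distributed-llama's actual code) of why the node count is capped this way: with tensor parallelism the KV heads are split across workers, so any worker beyond the head count would have nothing to compute. The head count used below is an illustrative assumption, not read from the model config.

```python
# Illustrative only: assigning grouped-query KV heads to worker nodes.
def assign_kv_heads(n_kv_heads: int, n_nodes: int) -> list[list[int]]:
    if n_nodes > n_kv_heads:
        raise ValueError("more nodes than KV heads: extra nodes would sit idle")
    buckets = [[] for _ in range(n_nodes)]
    for head in range(n_kv_heads):
        buckets[head % n_nodes].append(head)
    return buckets

print(assign_kv_heads(n_kv_heads=4, n_nodes=4))  # [[0], [1], [2], [3]] -> one head per Pi
```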
I found this[1] article nice for an overview of the parallelism modes.
[1]: https://medium.com/@chenhao511132/parallelism-in-llm-inferen...
I'm curious about the applications though. Do people randomly buy 4xRPi5s that they can now dedicate to running LLMs?
You'll be much better off spending that money on something else more useful.
Yeah, like a Mac Mini or something with better bandwidth.
Karpathy said in his recent talk, on the topic of AI developer-assistants: don't bother with less capable models.
So ... using an rpi is probably not what you want.
My use case is custom software that I build and host that leverages LLMs, for example for home automation, where I use my Apple Watch shortcuts to issue commands. I also created a VS2022 extension called Bropilot to replace Copilot with my locally hosted LLMs. Currently I'm looking at fine-tuning these types of models for work, where I'm a senior dev in finance.
Have a great week.
Karpathy elides that he is an individual. We expect to find a distribution of individuals, such that a nontrivial number of them are fine with 5-10% off leading-edge performance. Why? At the least, because it's free as in beer. At most, concerns about connectivity, IP rights, and so on.
[1] GPT-5 finally dethroned Sonnet after 7 months
Interesting because he also said the future is small "cognitive core" models:
> a few billion param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing.
https://xcancel.com/karpathy/status/1938626382248149433#m
In which case, a Raspberry Pi sounds like what you need.
For an LLM, size is a virtue - the larger a model is, the more intelligent it is, all other things equal - and even aggressive distillation only gets you so far.
Maybe with significantly better post-training, a lot of distillation from a very large and very capable model, and extremely high quality synthetic data, you could fit GPT-5 Pro tier of reasoning and tool use, with severe cuts to world knowledge, into a 40B model. But not into a 4B one. And it would need some very specific training to know when to fall back to web search or knowledge databases, or delegate to a larger cloud-hosted model.
And if we had the kind of training mastery required to pull that off? I'm a bit afraid of what kind of AI we would be able to train as a frontier run.
We could go back and forth on this all day.
The high end Pis aren’t $25 though.
One of the bigger problems with Pi 5, is that many of the classic Pi use cases don't benefit from more CPU than the Pi 4 had. PCIe is nice, but you might as well go CM5 if you want something like that. The 16GB model would be more interesting if it had the GPU/bandwidth to do AI/tokens at a decent rate, but it doesn't.
I still think using any other brand of SBC is an exercise in futility though. Raspberry Pi products have the community, support, ecosystem behind them that no other SBC can match.
Though I must admit to first noticing the trend decades before discovering Arduino when I looked at the stack of 289, 302, and 351W intake manifolds on my shelf and realised that I need the width of the 351W manifold but the fuel injection of the 302. Some things just never change.
though at what quality?
An Intel Pro B50 in a dumpster PC would do much better at this model (not enough RAM for dense 30B, alas), getting close to 20 tokens a second, and so much cheaper.
I get 8.2 tokens per second on a random orange pi board with Qwen3-Coder-30B-A3B at Q3_K_XL (~12.9GB). I need to try two of them in parallel ... should be significantly faster than this even at Q6.
fantastic! what are you using to run it, llama.cpp? I have a few extra opi5's sitting around that would love some extra usage
I'm mostly interested in the NPU, to run a vision head in parallel with an LLM to speed up time to first token with VLMs (kinda want to turn them into privacy-safe vision devices for consumer use cases).
Using llama-bench and Llama 2 7B Q4_0 like https://github.com/ggml-org/llama.cpp/discussions/10879, how does yours compare? Cuz I'm also comparing it with a few Ryzen 5 3000 Series mini-PCs for less than $150, and that gets 8 t/s on this list, which matches what I've gotten myself.
With my Rock 5B and this bench, I get 3.65 t/s. On my Orange Pi 5 (not B) 8GB LPDDR4 (not X), I get 2.44 t/s.
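For anyone who wants to reproduce those comparisons, a minimal wrapper around llama.cpp's llama-bench, run the way the linked discussion does; the binary and model paths are assumptions for your own build:

```python
# Sketch: run llama-bench on Llama 2 7B Q4_0 (default pp512 / tg128 tests).
# Paths below are placeholders for wherever your build and model live.
import subprocess

subprocess.run(
    [
        "./build/bin/llama-bench",
        "-m", "models/llama-2-7b.Q4_0.gguf",  # reference model used in that thread
        "-p", "512",   # prompt-processing length (pp512)
        "-n", "128",   # token-generation length (tg128)
    ],
    check=True,
)
```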
[1] https://github.com/llm-d/llm-d/
Meaning that if you can structure a range of situations and tasks clearly in natural language with a pseudo-code type of structure and fit it in model context, then you can have an LLM perform a huge amount of work, with human-in-the-loop oversight and quality control for edge cases.
Think of office jobs, white-collar work, where business process documentation, employee guides, and job aids already fully describe 40% to 80% of the work. These are the tasks most easily structured with scaffolding prompts and more specialized RLHF-enriched data, letting a model then perform those tasks more consistently.
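A hypothetical sketch of that kind of scaffolding; the procedure text, the call_llm stub, and the escalation rule are all made up for illustration, not any particular product's API:

```python
# Illustrative human-in-the-loop scaffolding: the documented procedure goes into the
# prompt, and anything the model flags as ambiguous is routed to a person.
PROCEDURE = """You are processing expense reports. Follow these steps exactly:
1. Check that the receipt total matches the claimed amount.
2. If the category is 'travel', verify a trip ID is present.
3. If anything is missing or ambiguous, reply 'ESCALATE: <one-line reason>'.
Otherwise reply 'APPROVE' or 'REJECT' with a one-line justification."""

def call_llm(prompt: str) -> str:
    # Stand-in for whatever locally hosted model you use; returns a canned answer here.
    return "ESCALATE: receipt total is illegible"

def human_review(report: str, note: str) -> str:
    print("Needs a human:", note)
    return "APPROVE"  # in real use this would be an actual reviewer's decision

def process(report: str) -> str:
    verdict = call_llm(f"{PROCEDURE}\n\nReport:\n{report}")
    if verdict.startswith("ESCALATE"):
        return human_review(report, verdict)  # edge cases stay with a person
    return verdict

print(process("Lunch, $23.50, receipt attached"))
```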
This is what I describe when I'm asked "But how will they do $X when they can't answer $Y without hallucinating?"
I explain the above capability, then I ask the person to do a brief thought experiment: How often have you heard, or yourself thought, something like "That is mind-numbingly tedious" and/or "a trained monkey could do it"?
In the end, I don't know anyone who is aware of the core capabilities in the structured natural-language sense above who doesn't see at a glance just how many jobs can easily go away.
I'm not smart enough to see where all the new jobs will be, or to be certain there will be as many of them; if I were, I'd start or invest in such businesses. But maybe not many new jobs get created, and then so what?
If the net productivity and output (essentially the wealth) of the global workforce remains the same or better with AI assistance and therefore fewer work hours, that means... what? Less work on average, per capita. More wealth per work hour worked, per capita, than before.
Work hours used to be longer; they can shorten again. The problem is getting there: overcoming not just the "sure, but only the CEOs will get wealthy" side of things but also the "full time means 40 hours a week minimum" attitude held by more than just managers and CEOs.
It will also mean that our concept of the "proper wage" for unskilled labor that can't be automated will have to change too. Wait staff at restaurants, retail workers, countless low-end service workers in food and hospitality? They'll now be providing, and giving up, something much more valuable than outdated white-collar skills. They'll be giving their time to what I've heard described as "embodied work"; the term is jarring to my ears, but it is what it is, and I guess it fits. And anyway, I've long considered my time to be something I'll trade with a great deal more reluctance than my money, and so I demand a lot of money for it when it's required, so I can use that money to buy more time (by not having to work) somewhere in the near future, even if it's just by covering the cost of getting groceries delivered instead of spending the time to go shopping myself.
Wow, this comment got away from me. But seeing Qwen3 30B level quality at 13 tok/s on dirt-cheap hardware struck a deep chord of "heck, the global workforce could be rocked to the core for cheap, quality 13 tok/s." And that alone isn't the sort of comment you can leave as a standalone drive-by on HN and have it be worth the seconds to write it. And I'm probably wrong on a little or a lot of this, and seeing some ideas on how I'm wrong will be fun and interesting.
26 more comments available on Hacker News