OrangePi 6 Plus Review
Key topics
The OrangePi 6 Plus review sparked a lively debate about the merits of this single-board computer, particularly its power management and performance. Commenters were quick to point out that the board's 15W idle power consumption is alarmingly high, with some comparing it unfavorably to mini PCs that draw far less. While some defended the board's strong multi-core performance, others argued that at $200 for the 16GB model it's hard to justify the cost when mini PCs offer similar performance at a comparable price. As the discussion unfolded, it became clear that the OrangePi 6 Plus is a niche product that may appeal to ARM64 enthusiasts, but its high power consumption and premium pricing make it a tough sell for more practical users.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h
Peak period: 28 comments (6-9h)
Avg / period: 10.7
Based on 160 loaded comments
Key moments
- Story posted: Dec 27, 2025 at 7:51 AM EST (6d ago)
- First comment: Dec 27, 2025 at 8:55 AM EST (1h after posting)
- Peak activity: 28 comments in 6-9h (hottest window of the conversation)
- Latest activity: Dec 29, 2025 at 5:52 AM EST (4d ago)
> 15W at idle, which is fairly high
2. The review says single core Geekbench performance is 1290, same as i5-10500 which is also similar to N150, which is 1235.
Single core, yes. Multi core score is much higher for this SBC vs the N150.
You're probably right about "most workloads", but as a single counter-example, I added several seasons of shows to my N305 Plex server last night, and it pinned all eight threads for quite a while doing its intro/credit detection.
I actually went and checked if it would be at all practical to move my Plex server to a VM on my bigger home server where it could get 16 Skymont threads (at 4.6ghz vs 8 Gracemont threads at ~3ghz - so something like 3x the multithreaded potential on E-cores). Doesn't really seem workable to use Intel Quick Sync on Linux guests with a Hyper-V host though.
if you are talking about ancient hardware, yes, it's mostly driven by single core performance. But any console more recent than the 2000s will hugely benefit from multiple cores (because of the split between CPU and GPU, and the fact that more recent consoles also had multiple cores, too).
ARM actually has a spec in place called SystemReady that standardizes on UEFI, which should make bringup of ARM systems much less jank. But few have implemented it yet. I keep saying, the first cheap Chinese vendor that ships a SystemReady-compliant SBC is gonna make a killing.
Agree. When ARM announced the initiative, I thought that the raspberry pi people would be quick but they haven't even announced a plan to eventually support it. I don't know what the hold up is! Is it really that difficult to implement?
For 90% of use cases, ARM SBCs are not appropriate and will not meet expectations over time.
People expect them to be little PCs, and intend to use them that way, but they are not. Mini PCs, on the other hand, are literally little PCs and will meet the expectations users have when dealing with PCs.
Because they have a great watt/performance ratio along with a GPU that is very well supported by a wide range of devices and mainline kernel support. In other words a great general purpose SBC.
Meanwhile people are using ARM SBCs, with SoCs designed for embedded or mobile devices, as general purpose computers.
I will admit, with RAM and SSD prices skyrocketing, these ARM SBCs look more attractive.
It has basically the same single-core performance as an N150 box
Random N150 result: https://browser.geekbench.com/v6/cpu/10992465
> BTW what's up with people pushing N150 and N300 in every single ARM SBC thread?
At this point I expect a lot of people have been enticed by niche SBCs and then discovered that driver support is a nightmare, as this article shows. So in time, everyone discovers that cheap x86-64 boxes accomplish their generic computing goals easier than these niche SBCs, even if the multi-core performance isn't the same.
Being able to install a mainline OS and common drivers and just get to work is valuable.
Why would the A720 at 2.8 GHz run circles around the N150 that boosts up to 3.6 GHz in single-threaded workloads, while the 12-core chip wouldn't beat the 4-core chip in multithreaded workloads?
Obviously, the Intel chip wins in single-threaded performance while losing in multi-threaded: https://www.cpubenchmark.net/compare/6304vs6617/Intel-N150-v...
I can't speak to why other people bring up the N150 in ARM SBC threads any more than "AMD doesn't compete in the ~$200 SBC segment".
FWIW, as far as SBC/NUCs go, I've had a Pi 4, an RK3399 board, an RK3568 board, an N100 NUC from GMKTec, and a N150 NUC from Geekom, and the N150 has by far been my favorite out of those for real-world workloads rather than tinkering. The gap between the x86 software ecosystem and the ARM software ecosystem is no joke.
P.S. Stay away from GMKTec. Even if you don't get burned, your SODIMM cards will. There are stoves, ovens, and hot plates with better heat dissipation and thermals than GMKTec NUCs.
I noticed nuance is the first thing discarded in the recurring x86 vs Arm flamewars, with each side minimizing the strength of the "opposing" platform. Pick the right tool for the right job, there are use-cases where the Orange Pi 6 is the right choice.
The problem isn't support for the ARM architecture in general, it's the support for this particular board.
Other boards like the Raspberry Pi and many boards based on Rockchip SoCs have most of the necessary support mainlined, so the experience is quite painless. Many are starting to get support for UEFI as well.
I'm not a compiler expert... But it seems each ARM64 board needs its own custom kernel support, but once that is done, it can support anything compiled to ARM64 as a general target? Or will we still need to have separate builds for RPi, for this board, etc?
Once you get into the CPU though, the Aarch64 registers become more standardized. You still have drivers and such to worry about and differing memory offsets for the peripherals - but since you have the kernel running it's easier to kind of poke around until you find it. Pi 5 added some complexity to this with the RP1 south bridge, which adds another layer of abstraction.
Hopefully that all makes sense. Basically the Pi itself is backwards while everything else should conform. It’s not Arm specific, but how the Pi does things.
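Not from the thread, but a minimal sketch of the "generic build, per-board DTB" idea: on a device-tree based aarch64 Linux system the kernel exposes the firmware-supplied DTB under /proc/device-tree, so the same generic ARM64 binary can discover at runtime which board it landed on.

```c
/* Sketch (assumes a DT-based aarch64 Linux system): read the board's
 * device-tree "model" string at runtime. The same generic ARM64 binary
 * runs on a Pi, a Rockchip board, etc.; only the DTB handed over by the
 * board-specific firmware/bootloader differs. */
#include <stdio.h>

int main(void)
{
    char model[256] = {0};
    FILE *f = fopen("/proc/device-tree/model", "r");

    if (!f) {
        perror("no device tree exposed");
        return 1;
    }
    fread(model, 1, sizeof model - 1, f);
    fclose(f);
    printf("Running on: %s\n", model);
    return 0;
}
```

The board-specific work lives in the kernel drivers and the DTB the bootloader hands over, not in the userspace build.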
Often it's an outright mediocre software development culture generally, one that sees software as a pure cost centre. The "product" is seen to be the chip; the software is "just" a side show.
The Rockchip stuff is better, but still has similar problems.
I’m not saying one approach is better than the other but there is definitely a lot of art in each camp. I know the one I innately prefer but I’ve definitely had eyebrows raised at me in a professional setting when expressing that view; Some places value upgrading dependencies while others value extreme stability at the potential cost of security.
Both are valid. The latter is often used as an excuse, though. No, your $50 wifi-connected camera does not need the same level of stability as the WiFi-connected medical device that allows a doctor to remotely monitor medication. Yes, you should have a moderately robust way to build, update, and distribute a new FW image for that camera.
I can't tell you the number of times I've gotten a shell on some device only to find that the kernel/os-image/app-binary or whatever has build strings that CLEARLY feature `some-user@their-laptop` betraying that if there's ever going to be an updated firmware, it's going to be down to that one guy's laptop still working and being able to build the artifact and not because a PR was merged.
Manufacturers hack it together, flash it to the device, and publish the sources, but don't bother with upstreaming and move on.
Same story as android devices not having updates two years after release.
It's a problem that's inherent to mobile computing and will likely never change without regulation, or an open-standards device line somehow hitting it out of the park and setting new expectations a la PCs.
The problem is zero expectation of ever running anything other than the vendor supplied support package/image and how fast/cheap it is to just wire shit together instead of worrying about standards and interoperability with 3rd party integrators.
Any SBC could buy an extra flash chip and burn an outdated U-Boot with the manufacturer's DTB baked in. Then U-Boot would boot Linux, just like UEFI does, and Linux would read the firmware's fixed DTB, just like it reads x86 firmware's fixed ACPI tables.
But - cui bono?
You need drivers in your main OS either way. On x86 you are not generally relying on your EFI's drivers for storage, video or networking.
It's actually nice that you can go without, and have one less layer.
At some point the "good" boards get enough support and the situation slowly improves.
We've reached the state where you don't need to spec-check a laptop if you want to run Linux on it; I hope the same will happen with ARM SBCs.
For example I have an Orange Pi 5 Plus running the totally generic aarch64 image of Home Assistant OS [0]. Zero customization was needed, it just works with mainline everything.
There's even UEFI [1].
Granted this isn't the case for all boards but Rockchip at least seems to have great upstream support.
[0]: https://github.com/home-assistant/operating-system/releases
[1]: https://github.com/edk2-porting/edk2-rk3588
It supports NVMe SSDs same as an N100.
Maintenance is exactly the same; they both run mainline Linux.
Where the N100 perhaps wins is in performance.
Where the Orange Pi 5 Plus (and other RK3588-based boards) wins is in power usage, especially for always-on, low-utilization applications.
For power, I don't know about the Orange Pi 5, but for many SBCs power was a mixed bag. I had pretty bad luck with random SBCs taking way more power for random reasons and not putting devices in idle mode. Even the Raspberry Pi was pretty bad when it launched.
It's frustrating because it's hard to fix. With x64 you can often go into the BIOS and enable power modes, but that's not the case with ARM. For example, PCIe 4 can easily draw 2W+ when active. (The interface itself!)
See for example here:
https://github.com/Joshua-Riek/ubuntu-rockchip/issues/606
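As an aside (my own sketch, not from the linked issue): if the kernel was built with CONFIG_PCIEASPM, the active ASPM link-power policy is at least visible from user space, so you can check whether PCIe links are allowed to idle even when there's no firmware menu to poke at.

```c
/* Sketch: print the kernel's current PCIe ASPM policy. Assumes a Linux
 * kernel built with CONFIG_PCIEASPM; the active choice is shown in
 * [brackets], e.g. "default [performance] powersave powersupersave". */
#include <stdio.h>

int main(void)
{
    char buf[128];
    FILE *f = fopen("/sys/module/pcie_aspm/parameters/policy", "r");

    if (!f) {
        perror("pcie_aspm policy not exposed");
        return 1;
    }
    if (fgets(buf, sizeof buf, f))
        printf("ASPM policy: %s", buf);
    fclose(f);
    return 0;
}
```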
My N100 takes 6W and 8W (8GB and 16GB). If a Pi 5 takes 3W, that's not a large enough difference to matter, especially when it's so inconsistent.
Now one place where I used to like rpi zero was gpio access. However I’m transitioning to rp2350 as it’s just better suited for that kind of work, easier to find and cheaper.
I never ran into that bug but I came to the Orange Pi 5 Plus in 2025, so there's a chance the issues were all worked out by the time I started using it.
Looking at a couple of reviews, the Orange Pi 5 Plus drew ~4W idle [0] while an N100 system drew ~10W [1].
1W over a year is 8.76kWh, which here costs ~$2. If those numbers hold (and I'm not saying they do necessarily but for the sake of argument) and with an estimated lifespan of 5 years, you might be looking at a TCO of $140 hardware + $40 power = $180 for an Orange Pi 5 vs. $140 hardware + $100 power = $240 for an N100. That would put an N100 at 33% more expensive. Even if it draws just 6W compared to 4W, that's $200 vs. $180, 11% more expensive.
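A quick sketch of that arithmetic (mine, plugging in the commenter's assumptions: ~$0.23/kWh so a watt-year costs ~$2, a 5-year lifespan, $140 of hardware either way, and the idle draws quoted above):

```c
/* Sketch: 5-year TCO = hardware price + idle-power cost, using the
 * figures assumed in the comment above. All numbers are assumptions. */
#include <stdio.h>

int main(void)
{
    const double usd_per_kwh = 0.23;   /* assumed rate (~$2 per watt-year) */
    const double years = 5.0, hours_per_year = 8760.0;

    const struct { const char *name; double hw_usd, idle_w; } box[] = {
        { "Orange Pi 5 Plus",  140.0,  4.0 },
        { "N100 mini PC",      140.0, 10.0 },
        { "N100 (optimistic)", 140.0,  6.0 },
    };

    for (int i = 0; i < 3; i++) {
        double kwh = box[i].idle_w * hours_per_year * years / 1000.0;
        double tco = box[i].hw_usd + kwh * usd_per_kwh;
        printf("%-20s %6.1f kWh  ~$%.0f TCO\n", box[i].name, kwh, tco);
    }
    return 0;
}
```

That reproduces the ~$180 vs. ~$240 (or ~$200 at 6W) figures above.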
I'm not saying the Orange Pi 5 Plus is clearly better but I don't think it's as simple as one might think.
[0]: https://magazinmehatronika.com/en/orange-pi-5-plus-review/
[1]: https://www.servethehome.com/fanless-intel-n100-firewall-and...
The shape of historically delivered ARM artifacts has been embedded devices. Embedded devices usually work once in one specific configuration. The shape of historically delivered ARM Linux products is a Thing that boots and runs. This only requires a kernel that works on one single device in one single configuration.
The shape of historically delivered x86 artifacts is socketed processors that plug into a variety of motherboards with a variety of downstream hardware, and the shape of historically delivered x86 operating systems is floppies, CDs, or install media that is expected to work on any x86 machine.
As ARM moves out of this historical system, things improve; I believe that for example you could run the same aarch64 Linux kernel on Pi 2B 1.2+, 3, and 4, with either UEFI/ACPI or just different DTBs for each device, because the drivers for these devices are mainline-quality and capable of discovering the environment in which they are running at runtime.
People commonly point to ACPI+UEFI vs DeviceTree as causes for these differences, but I think this is wrong; these are symptoms, not causes, and are broadly Not The Problem. With properly constructed drivers you could load a different DTB for each device and achieve similar results as ACPI; it's just different formats (and different levels of complexity + dynamic behavior). In some ways ACPI is "superior" since it enables runtime dynamism (ie - power events or even keystrokes can trigger behavior changes) without driver knowledge, but in some ways it's worse since it's a complex bytecode system and usually full of weird bugs and edge cases, versus DTB where what you see is what you get.
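To make the "properly constructed drivers" point concrete, here's a rough sketch of my own (the "hypothetical,sensor" binding and its property are made up): the mainline kernel's firmware-agnostic property API lets one driver read the same description whether it arrives as a DTB node or as ACPI tables.

```c
/* Sketch of a platform-driver probe using the firmware-agnostic property
 * API: the same code reads "poll-interval-ms" whether it comes from a
 * device-tree node or an ACPI _DSD entry. Binding name is hypothetical. */
#include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <linux/platform_device.h>
#include <linux/property.h>

static int hypo_sensor_probe(struct platform_device *pdev)
{
	u32 poll_ms = 1000;	/* default if the firmware says nothing */

	/* Works for both DT- and ACPI-described devices. */
	device_property_read_u32(&pdev->dev, "poll-interval-ms", &poll_ms);
	dev_info(&pdev->dev, "polling every %u ms\n", poll_ms);
	return 0;
}

static const struct of_device_id hypo_sensor_of_match[] = {
	{ .compatible = "hypothetical,sensor" },	/* made-up binding */
	{ }
};
MODULE_DEVICE_TABLE(of, hypo_sensor_of_match);

static struct platform_driver hypo_sensor_driver = {
	.probe = hypo_sensor_probe,
	.driver = {
		.name = "hypo-sensor",
		.of_match_table = hypo_sensor_of_match,
	},
};
module_platform_driver(hypo_sensor_driver);
MODULE_LICENSE("GPL");
```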
I believe some other distros also have UEFI booting/installers set up for Pi 4 and newer devices because of this, though there's a good chance you'll still want some of the other libraries that come with Raspberry Pi OS (aka Raspbian) for some of the hardware-specific features like CSI/DSI and some of the GPIO features that might not be fully upstreamed yet.
There's also a port of Proxmox called PXVirt (Formerly Proxmox Port) that exists to use a number of similar ARM systems now as a virtualization host with a nice ui and automation around it.
If ARM cannot outdo x86 on power draw anymore then it really is entirely pointless to use it because you're trading off a lot, and it's basically guaranteed that the board will be a useless brick a few years down the line.
Of course it is not. That's why almost every ARM board comes with its own distro, and sometimes its own bootloader and kernel version. Because "it is supported". /s
N100 boxes are cheap and use so little power, while having normal OS support and boot setup.
4b / 5 for the camera stuff.
I don't think using these boards for just compute makes a lot of sense unless it's for toy stuff like an SSH shell or Pi-hole.
Likewise my VPS @ Hetzner is running Aarch64. No drama. Only pain is how brutal the Rust cross-compile is from my x86 machine.
I mean, here's Geerling running a bunch of Steam games flawlessly on an Aarch64 NVIDIA GB10 machine: https://www.youtube.com/watch?v=FjRKvKC4ntw
(Those things are expensive, but I just ordered one [the ASUS variant] for myself.)
Meanwhile Apple is pushing the ARM64 architecture hard, and Windows is apparently actually quite viable now?
Personally... it's totally irrational, but I have always had a grudge against x86 since it "won" in the early 90s and I had to switch from 68k. I want diversity in ISAs
I know the concept has been around for a while but no idea if it actually means anything. I assume that people are targeting ones in common devices like Apple, but what about here?
I've not found Neon to be fun or easy to use, and I frequently see devices ignoring the NPU and inferring on CPU because it's easier. Maybe you get lucky and someone has made a backend for something specific you want, but it's not common.
"you cannot simply use standard versions of PyTorch or TensorFlow out of the box. You must use the NeuralONE AI SDK."
Neon is a SIMD instruction set for the CPU, not a separate accelerator. It doesn't need an SDK to use, it's supported by compiler intrinsics and assembly language in any modern ARM compiler.
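For example (a minimal sketch of my own; on aarch64 NEON is a baseline feature, so any standard compiler with arm_neon.h builds this, no vendor SDK involved):

```c
/* Sketch: add two float arrays four lanes at a time with NEON intrinsics.
 * Needs nothing beyond a standard aarch64 C compiler -- no NPU, no SDK. */
#include <arm_neon.h>
#include <stdio.h>

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float out[4];

    float32x4_t va = vld1q_f32(a);      /* load 4 floats into a 128-bit register */
    float32x4_t vb = vld1q_f32(b);
    vst1q_f32(out, vaddq_f32(va, vb));  /* vector add, then store */

    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```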
https://www.arm.com/products/silicon-ip-cpu/ethos/arm-nn
Even if it worked though, they're usually heavily bandwidth bottlenecked and near useless for LLM inference. CPU wins every time.
Upstream the drivers to the mainline kernel or go bankrupt. Nobody should buy these.
Yet again, OrangePi crank out half-baked products and tech enthusiasts who quite understandably lack the deep knowledge to do more than follow instructions on how to compile stuff talk about it as if their specifications actually matter.
Yet again the HN discourse will likely gather around stuff like "why not just use an N1x0" and side quests about how the Raspberry Pi Foundation has abandoned its principles, or is just a cynical Broadcom psyop, or is "lagging behind" in hardware.
This stuff can be done better and the geek world should be done excusing OrangePi producing hardware abandonware time after time. Stop buying this crap and maybe they will finally start focussing on doing more than shipping support for one or two old kernels and last year's OS while kicking vague commitments about future support just far enough down the road that they can release another board first.
Please stop falling for it :-/
The reality is that they spam the market with a large number of products with little consistency, poor (if labyrinthine) documentation, random google drive links for firmware etc., and there are the same issues with hardware support.
I dunno, maybe the situation there is better than it was. But the broad picture is the same: better hardware but you are basically on your own.
Both ARM64 devices run headless, make use of GPIO, and have more than enough CPU. In fact, these are stable enough that I run BSDs on them and don't bother with Linux.
The Rock64 runs FreeBSD for SDR applications (e.g. ADS-B receiver). FreeBSD has stable USB support for RTL-SDR devices.
The RockPro64 runs NetBSD with ZFS with a PCIe SSD. NetBSD can handle ARM big.LITTLE well. I run several home lab workloads on this. Fun device.
I also have an N150 device running the latest Debian 13 as my main home lab server for home automation, Docker, MQTT broker, etc.
In short: SBCs are cheap enough that you can choose more than one, each for the right task, including IoT.
They seem uninterested in trying to get their hardware supported by submitting their patches for inclusion in the Linux kernel, and popular distros. Instead, you have to trust their repos (based in PRC).
I am somewhat amazed how you can manufacture such expensive high-tech equipment yet be too cheap to set up a proper download service for the software, which would be very simple and cheap compared to making the hardware itself.
Maybe it is a Chinese mentality thing where the first question is always "What is the absolutely cheapest way to do this?" and all other concerns are secondary at best.
..which does not inspire confidence in the hardware either.
Maybe Chinese customers are different, see this, and think "These people are smart! Why pay more if you don't have to!".
My company hosts our docker images on quay.io and docker hub, but we also have a tarball of images that we post to our Github releases. Recently our release tooling had a glitch and didn't upload the tarballs, and we very quickly got Github issues opened about it from a user who isn't able to access either docker registry and has to download the tarball from Github instead.
It doesn't surprise me that a lot of these companies have the same "release process" as Wii U homebrew utilities, since I can imagine there's not a lot of options unless you're pretty big and well-experienced (and fluent in English).
It always works if you log in to a Google account prior to downloading. If you don't, the downloads will indeed regularly fail.
That was not my experience, at least for very large files (100+ GB). There was a workaround (that has since been patched) where you could link files into your own Google drive and circumvent the bandwidth restriction that way. The current workaround is to link the files into a directory and then download the directory containing the link as an archive, which does not count against the bandwidth limit.
"Chinese repos" refer to the fact that the debian repos links for updates point to custom Huawei servers.
I mean, I'm sure there's some bad hardware out there too, but it's usually the software that is letting things down more than the hardware.
I always thought that one day we would get completely open-source RISC-V chips that another company could, if it wanted, manufacture on its own chip-making process (I imagine that would be beyond extremely difficult, but it still opens up a pathway).
What's the progress of RISC-V nowadays?
Also, can you please link me to other such projects? It would be good to have a bookmarked list of them all too.
I would never buy one of these things without upstream kernel support for the SoC and a sane bootloader. Even the Raspberry Pi is not great on this front TBH (kernel is mostly OK but the fucked up boot chain is a PITA, requires special distro support).
I feel like the Raspberry Pi has the most community support for everything, so I had the intuition that most things would just work out of the box on it, or that it would have the best ARM support (I assumed the boot chain to be part of that as well).
What do you mean by the boot chain being painful to work with, and can you provide some examples perhaps?
Ok that's mostly a joke, I'm just not up to date on what platforms exist these days that are done properly. Back in my day the Texas Instruments platforms (BeagleBoard) were decent. I think there are probably Rockchip-based SBCs today (Pine64 maybe?) that add up to something sensible but I dunno.
The thing with the boot chain is that e.g. the Pi has a proprietary bootloader that runs via the GPU. You cannot just load a normal distro onto the storage; it needs to be a special build that matches the requirements of this proprietary bootloader. If your distro doesn't provide a build like that, well, hopefully you're OK with changing distro or ready to invest many hours getting your preferred distro working.
For projects like this I've gone back to $75 Lenovo SFFs. Good enough if you have wall power and beats dealing with the hyper-fragmentation of these niche ecosystems.
After the pandemic, the "$25" SBC suddenly became $100+ with low availability. The main thing that made RPis worth it is gone now, and they're all chasing number-go-up on benchmarks.
However, SD cards are really terrible devices to run a general-purpose computer on; they are designed for storing large files like photos, videos and MP3s sequentially, not the swap, logs, and databases that a full operating system is constantly writing and accessing in a random fashion.
I think if you are running a base 2GB model, then maybe the absolute value makes sense, but once you start hitting the larger RAM configurations, an M.2 slot is a no-brainer.
I think the cheapest working SBC is really the zero line.
Whenever I would have a problem, and it was more often than not, I would search for a solution and come across something that worked for rpi that I could try to port across.
Double the hardware spec matters little if you can’t get the software to even compile
You can get any software to compile on this SBC. On the Raspberry Pi platform you usually don't need to compile anything.
For what it's worth though the v5 did have Talos support, so you could just throw that on there, connect it to a cluster and have a decent arm node that is fanless and has 32gb
https://docs.siderolabs.com/talos/v1.12/platform-specific-in...
No thanks.
Seems this machine is more powerful than it; definitely attractive to me as a physical aarch64 self-hosted runner.
I was pleased to learn that Radxa and Orange Pi have similar, compatible boards.
I have wanted to see more RISC SBCs, so I may toy with these, but I'd rather wait for the software support to get much richer.
19 more comments available on Hacker News