486Tang – 486 on a Credit-Card-Sized FPGA Board
Posted 4 months ago · Active 4 months ago
nand2mario.github.io
Key topics
FPGA
Retro Computing
486 Processor
A developer has successfully implemented a 486 processor on a credit-card-sized FPGA board, sparking discussion about retro computing, FPGA capabilities, and potential applications.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion · First comment: 47m after posting
Peak period: 18 comments in 0-3h
Avg / period: 4.8 comments
Comment distribution: 57 data points
Based on 57 loaded comments
Key moments
- 01 Story posted: Sep 13, 2025 at 10:52 AM EDT (4 months ago)
- 02 First comment: Sep 13, 2025 at 11:39 AM EDT (47m after posting)
- 03 Peak activity: 18 comments in 0-3h (hottest window of the conversation)
- 04 Latest activity: Sep 15, 2025 at 6:16 AM EDT (4 months ago)
ID: 45232565 · Type: story · Last synced: 11/20/2025, 4:56:36 PM
Quite apart from the increased complexity, the most important difference is that there's a minimum speed as well as a maximum speed for modern DDR RAM, which means there's usually quite a narrow window of achievable clock rates when getting an FPGA to talk to DDR3.
I suspect that's why the author chose to use the DDR for video: It's usually easy to keep plain old SDRAM in lockstep with a soft-CPU, since you can run it at anything between 133MHz (sometimes even more) and walking pace, so there's no need to deal with messy-and-latency-inducing clock domain crossing.
Streaming video data in bursts into a dual-clock FIFO and consuming it on the pixel clock is a much more natural fit.
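The burst-to-FIFO pattern is easy to sanity-check with a toy occupancy model. This is a rough sketch under assumed numbers (a 100 MHz memory side delivering 16-word bursts every 64 cycles into a 64-entry FIFO drained by a 25 MHz pixel clock — none of these figures come from the actual project):

```python
# Toy model of a dual-clock FIFO feeding a video output.
# All rates are illustrative, not taken from 486Tang.
MEM_HZ, PIX_HZ = 100_000_000, 25_000_000
BURST_LEN, BURST_PERIOD = 16, 64     # words per burst, memory cycles between bursts
FIFO_DEPTH = 64

def simulate(total_pixels=10_000):
    """Step both clock domains on a common timebase and watch FIFO occupancy."""
    ratio = MEM_HZ // PIX_HZ         # pixel clock ticks once per `ratio` mem cycles
    occupancy = underruns = pixels_read = mem_cycle = 0
    worst = FIFO_DEPTH
    while pixels_read < total_pixels:
        if mem_cycle % BURST_PERIOD == 0:      # a burst lands in the FIFO
            occupancy = min(FIFO_DEPTH, occupancy + BURST_LEN)
        if mem_cycle % ratio == 0:             # pixel-clock tick: consume one word
            if occupancy == 0:
                underruns += 1                 # the display would glitch here
            else:
                occupancy -= 1
                pixels_read += 1
        worst = min(worst, occupancy)
        mem_cycle += 1
    return underruns, worst

underruns, worst = simulate()
print(f"underruns={underruns}, worst occupancy={worst}")
```

With these numbers the average write bandwidth (16 words per 64 cycles at 100 MHz) exactly matches the 25M words/s the pixel side consumes, so occupancy sawtooths but never underruns; shrink BURST_LEN or the FIFO depth and the underrun counter starts climbing.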
https://github.com/viti95/FastDoom
1: https://en.wikipedia.org/wiki/Intel_Quark#Segfault_bug
And just today, I received the Intel Edison dev kit that I'd purchased on eBay.
The Galileo uses the Quark X1000 SoC, a single P54C-class core: in-order, essentially the original 32-bit Pentium.
https://en.m.wikipedia.org/wiki/Intel_Galileo
The Edison is a modern system-on-module about the size of an SD card, but about 3x the thickness of one. It's far more capable: dual 64-bit Silvermont Atom cores, superscalar and out-of-order, plus an additional Quark core acting as a system monitor and running an independent RTOS. There's also 4GB eMMC, 1GB RAM, WiFi, and Bluetooth on the module. It's quite a remarkable curiosity.
Ten years ago, Intel tried to catch up to ARM in tablets and smartphones, but it was already too late, and this entire segment of Intel was cancelled within a year or two.
https://en.m.wikipedia.org/wiki/Intel_Edison
Next up is building more recent Linux images for these via the Yocto Project and the now cancelled Intel Board Support Packages (BSP).
If you like low power tiny systems, there's a strange amount of fun to be had.
Looks like an opportunity for a cluster in a picture frame. Did you get them?
I have one, a 4 RPI Zero W cluster in an Ikea picture frame:
https://x.com/0xDEADBEEFCAFE/status/1163378341610688513
Are you kidding? Of course I got them! :-)
It was actually better than that: they were in bulk boxes of five, and there were at least eight such boxes. I took two, just to have one to play with and to get them out of the elements (they were on a shelf that's partially protected on one side by a metal storage shed, but the shelf is just standing outdoors).
I stopped by the local makerspace a day or two later, to let them know. I don't know what became of them.
My boxes got shoved to the back of the project queue and it's been about a year now. I just got them out about a week ago, looking into building the new firmware images.
These Intel Galileo dev kits are quite a bit bigger than a typical Arduino. They have the standard dual-row inline headers of an Arduino (I think it's pin-compatible), but lots more IO. There's Ethernet, for instance. Nearly the size of a Pico-ITX form factor.
https://en.m.wikipedia.org/wiki/Pico-ITX
But, a plain answer: VIA Eden boards. They still use a north/southbridge architecture and are from the mid-2000s.
It's just modern Windows/Linux that have discontinued the ability. Or perhaps you have 16/32 and 32/64 and are unable to do 16-bit on 64-bit machines, which still boils down to "operating system."
By far the biggest issue, though, is that even the VIA Eden processor is significantly faster than a 486, and lots of software (especially games) from that era used no-op instruction loops for timing and timers. This results in games like The Incredible Machine having their level timer run out in half a second or less.
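The timing-loop failure mode is easy to see with back-of-the-envelope arithmetic. The iteration rates below are assumptions for illustration, not measurements:

```python
# A calibrated busy-wait loop bakes the CPU speed into the program.
# Assume a 486 retires roughly one empty loop iteration per 8 clocks
# at 33 MHz (a rough guess, good enough to make the point).
IT_PER_SEC_486 = 33_000_000 // 8     # ~4.1M iterations/s (assumed)
N = IT_PER_SEC_486                   # tuned so the loop takes ~1 s on the 486

def wall_time(iterations, iter_per_sec):
    """Seconds a busy-wait of `iterations` takes at a given loop rate."""
    return iterations / iter_per_sec

old = wall_time(N, IT_PER_SEC_486)           # ~1.0 s on the 486
new = wall_time(N, IT_PER_SEC_486 * 500)     # same N on a CPU ~500x faster
print(f"486: {old:.2f}s -> modern: {new * 1000:.0f}ms")
```

A game that uses N as its "one second" timer sees that second collapse to a couple of milliseconds on modern hardware, which is exactly the compressed-timer behavior described above.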
Linux isn't really relevant given the time frame.
Also, DOSBox is an emulator, whereas VMs run on hardware, no? I suspect a VM won't fix the "no-op loop for timing" issue: with modern processors' lowest clock being 600-800 MHz before they get C6/C7'd, 30 years of IPC improvement, and the possibility of the CPU itself optimizing such loops (I'm unsure for various reasons), I expect the UX of "just limit how many scheduler slices it gets" to be nasty.
https://en.wikipedia.org/wiki/DOSBox#Hardware_emulation
They weren't even that bad considering how little power they needed.
https://retrodreams.ca/
Review - https://www.youtube.com/watch?v=L9UdU89DDvY
It says this, however:
"Expected to ship to customers in February 2025."
I wonder if that's a typo or if batch 2 is already gone too...
Neither is a fit. SDRAM was a Pentium/K6 standard (PC66); the DIMMs ran faster than a non-overclocked 486 bus, which ran at half the clock of the CPU. The 486 "natural fit" would be FPM or EDO, if you wanted to be era-correct.
There were probably some off-the-wall 486 motherboards back then that supported SDR (post-1993...), but those would have been toward the very end of the 486's consumer life cycle. Hybrid boards did exist in the 486 era: some had an embedded 386 (or the option to run one) using FPM alongside an open 486 socket, with the option, but not the requirement, to use EDO.
Anyway, this is someone's project, so they can do whatever the heck they want.
https://classic.sipeed.com/tangconsole
https://archive.is/UD0vH
What's the smallest SoC you could design to run DOOM? What power envelope would that consume (excluding display/speakers/etc.)? At that size and (optimized) transistor count, what speeds could we realistically achieve?
What would a massively-multicore (gpu-style with multi-hundreds or more of cores) one of these run like?
Every time I see a project like this, these thoughts run through my head.
> What's the smallest SOC you could design to run DOOM?
Depending on your definition of "modern", more than you think has been done. Intel's Quark was basically a 486/Pentium hybrid, but fabbed on a fairly modern (at the time) process. While Quark is no longer available as a standalone product, a derivative is part of every modern Intel processor in the form of the Intel ME system co-processor, and it's likely that a number of other Intel products (network cards, QAT accelerators, the Arc GPUs, etc.) use them as system controllers as well (Quark essentially came into existence as a "formalization" of the multiple "micro-x86" implementations inside Intel being used as embedded controllers for various non-CPU products).
> What would a massively-multicore (gpu-style with multi-hundreds or more of cores) one of these run like?
This is close to what the original Xeon Phi was. Essentially 60-ish Pentium cores, with modern SMT and 512-bit vector units added. It worked ... OK? If the software development story had been better (e.g. actual first-class support in GCC) I think they could have been a much bigger success, but the need for ICC back in the ICC-costs-real-money days and initially very expensive hardware certainly held them back. At times I do miss some of their behavior.
Arguably a number of the RISC-V-based "AI accelerators" on the market are basically new spins on the same idea: a bunch of small cores, plus large vector/tensor units.
> x86 vs. ARM. Working with ao486 deepened my respect for x86’s complexity. John Crawford’s 1990 paper “The i486 CPU: Executing Instructions in One Clock Cycle” is a great read; it argues convincingly against scrapping x86 for a new RISC ISA given the software base (10K+ apps then). Compatibility was the right bet, but the baggage is real. By contrast, last year’s ARM7‑based GBATang felt refreshingly simple: fixed‑length 32‑bit instructions, saner addressing, and competitive performance. You can’t have your cake and eat it.