Linux Computer Designed with AI Boots on First Attempt
Key topics
The Linux computer designed with AI that booted on the first try has sparked a lively debate about what it means to be "designed by AI." While the original article claimed a remarkable achievement, commenters pointed out that the AI was used to route a PCB from an existing schematic, not create the design from scratch. Notably, the project's creators started with NXP's published schematics and CAD files, raising questions about the true extent of the AI's role. As one commenter quipped, "Designed with AI, not by AI," highlighting the nuance lost in the original headline.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
- First comment: 2h after posting
- Peak period: 6 comments in 0-12h
- Avg / period: 3 comments
Based on 24 loaded comments
Key moments
1. Story posted: Dec 16, 2025 at 3:50 PM EST (17 days ago)
2. First comment: Dec 16, 2025 at 5:23 PM EST (2h after posting)
3. Peak activity: 6 comments in 0-12h, the hottest window of the conversation
4. Latest activity: Dec 21, 2025 at 11:20 PM EST (12 days ago)
Took me a few reads to realize this wasn’t some sort of Irish slang
1. A schematic of a reference design with all components specified, and a library of components with correct footprints.
2. A block diagram with the major components, but nothing too specific. Free rein of Digikey.com.
3. "Computer, make me a Linux board, and make it snappy!"
(I think 1 is closest)
The only reason people usually route PCBs by hand is that defining the constraints for an autorouter is generally more work than just manually routing a small PCB; within semiconductors, autorouting overtook manual routing decades ago.
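For context on why autorouting is tractable at all: the classic grid autorouter is Lee's wave-expansion algorithm, a breadth-first search over routing cells. A minimal sketch (illustrative only; real PCB routers add layers, vias, design rules, and net ordering, which is exactly where the constraint-definition work mentioned above goes):

```python
from collections import deque

def lee_route(grid, start, goal):
    """Lee algorithm: BFS wave expansion over a routing grid.
    grid[r][c] == 1 marks an obstacle (pad, keep-out, existing trace).
    Returns the shortest rectilinear path as a list of cells, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == goal:
            # Backtrace from goal to start along recorded predecessors.
            path, cur = [], goal
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                q.append((nr, nc))
    return None  # net is unroutable on this grid

# Route around a blocked center cell on a 3x3 grid.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
path = lee_route(grid, (0, 0), (2, 2))
```

The algorithm itself is simple; the hard part in practice is encoding clearances, layer changes, and net priorities into that grid, which is the constraint-definition overhead the comment refers to.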
I guess maybe there are fewer degrees of freedom and more 'regularity' in the semiconductor space? Sort of like a fish swimming in an amorphous ocean vs. having to navigate uneven terrain with legs and feet. The fish is in some sense operating in a much more 'elegant' space, and that is reflected in the (beautiful?) simplicity of fish vs. all the weird 'nonlinear' appendages sticking out of terrestrial animals; the guys who walk are facing a more complicated problem space.
I guess with PCBs you have 'weird' or annoying constraints like package dimensions, via size, hole size, trace thickness, limited layer count, etc.
With PCBs it's all still quite manageable; even something like a whole PC motherboard is easily doable by two or three EEs specializing in different niches (power, thermals, high-speed digital design).
For simple stuff where there is plenty of room, you can get great results with automation. For complex and dense designs, automation is very useful but is a tool to be wielded with caution, in the context of a carefully considered strategy for EMC, thermal, and signal-integrity trade-offs. When there is strong cost pressure, it adds a confounding element at every step as well.
In short: yes, it will boot. No, it will not be as performant once longevity, speed, cost, and reliability are exhaustively characterized.
Creative marketing speak. It's most likely true in a corporate environment with teams trying to coordinate their little fiefdoms, but not the case for a single engineer. Overestimated by roughly one order of magnitude.
>With just one week of AI-powered processing, augmented by 38.5 hours of human expert assistance, the Project Speedrun computer was completed.
40 hours of human expert supervising. For reference: https://www.kickstarter.com/projects/1714585446/chronos-14-h... You can watch a time lapse of the layout process for the most difficult part of this product's PCB by its creator, Tesla500: https://www.youtube.com/watch?v=41r3kKm_FME
"Total time to layout ~38 hours." That was _13 years ago_; most of the things one struggled with back then have since been automated. 40 hours for a Zynq-to-DDR3 interface; what's left are power supplies and low-speed stuff. Overview of the project: https://www.youtube.com/watch?v=jU2aHMbiAkU
It took Ben almost as long to clean up after the AI as it took Tesla500 to design a SOM from the ground up, back when DDR3 was still quite new and state of the art.
>Engineers preferred larger polygons for power distribution than Quilter originally produced. Enlarging these pours required opening space, shifting traces, and re-routing small regions to accommodate the changes.
No kidding: their tool generated nice fat power traces up to the first tight spot, then gave up and, bam, 2 mil tracks (VDDA_1V8, VDD_1V8) :D Almost un-manufacturable at JLCPCB/PCBWay (they have asterisks at 2 mil) and very bad for power distribution (brownouts).
>The goal was to match human comfort levels for power-distribution robustness.
Nah, in this particular case the goal was making it manufacturable and able to function at all. A human replaced those hilarious 2 mil traces with proper 15 mil ones. And you can't just click on a track and brrrrt it from 2 to 15 mil, as they themselves admit:
>Enlarging polygons often required freeing routing channels, which triggered additional micro-moves and refinements
A human EE had to go in, rip out more than half (the actually time-consuming half) of the generated garbage, and lay it out manually. Those "micro-moves" involved completely re-arranging the layer stack, moving whole swaths of signals to different layers, shuffling vias, etc.
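To put the 2 mil vs. 15 mil complaint in numbers: the commonly used IPC-2221 fitted curve estimates a trace's current capacity as I = k · ΔT^0.44 · A^0.725, with cross-sectional area A in mil² and k ≈ 0.048 for external layers. A minimal sketch, assuming 1 oz copper (~1.378 mil thick) and a 10 °C temperature rise:

```python
def ipc2221_current(width_mil, temp_rise_c=10.0,
                    thickness_mil=1.378, k=0.048):
    """Approximate current capacity (amps) of an external PCB trace
    using the IPC-2221 fitted curve: I = k * dT^0.44 * A^0.725.
    A is cross-sectional area in mil^2; k = 0.048 for external layers;
    1 oz/ft^2 copper is ~1.378 mil thick."""
    area = width_mil * thickness_mil
    return k * temp_rise_c ** 0.44 * area ** 0.725

i_2mil = ipc2221_current(2)    # the AI's trace width
i_15mil = ipc2221_current(15)  # the human replacement
```

By this estimate a 2 mil external trace handles roughly a quarter of an amp before a 10 °C rise, while a 15 mil trace handles well over an amp, about 4x more, which is why the manual rework mattered for power distribution.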
>Once via delays were included, several tuned routes no longer met their targets. The team re-balanced these nets manually.
"Re-balanced" being a colloquialism for: ripped out all the actually difficult-to-route parts and re-did them manually.
The AI didn't even try to length-match the flash. It just autorouted it like you would an 8 MHz Arduino board.
ENET_TD2: what the hell happened there? :D The signal does a loop/knot over itself while crossing 3 layers. Ben was probably too tired of the AI's shenanigans at this point and didn't catch it, instead elongating ENET_TD1 to length-match this lemon.
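The via-delay problem quoted above is easy to see with arithmetic. Assuming a rough FR-4 propagation delay of ~6 ps/mm (~150 ps/in) and a nominal ~8 ps per via (both illustrative figures, not from the article), a net tuned to the right copper length can still miss its delay target once its vias are counted:

```python
def matching_lengths(nets, prop_delay_ps_per_mm=6.0, via_delay_ps=8.0):
    """For each net given as name -> (length_mm, via_count), compute the
    extra serpentine length in mm needed so every net's total delay
    (trace delay + via delays) matches the slowest net.
    Delay constants are rough, illustrative values for FR-4."""
    delays = {name: length * prop_delay_ps_per_mm + vias * via_delay_ps
              for name, (length, vias) in nets.items()}
    target = max(delays.values())
    return {name: (target - d) / prop_delay_ps_per_mm
            for name, d in delays.items()}

# Hypothetical nets: TD2 is shorter in copper but carries more vias.
extra = matching_lengths({"TD1": (30, 2), "TD2": (25, 4)})
```

With these made-up numbers, TD2 needs ~2.3 mm of serpentine added to match TD1 even though it was "tuned" by length alone; that is the sense in which including via delays knocked previously tuned routes out of spec.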
Comparing the SOM AI output vs. the human "expert assistance," there is very little left from the AI. Almost every important track was touched or re-done from scratch by human hand.
This is my impression after a quick glance. I didn't try looking for problems very hard, didn't look into component placement (caps; that would require reading datasheets), or run any FEM tools.
https://www.reddit.com/r/gaming/comments/ncmegl/doom_running...
Just imagine a Beowulf cluster of these.
Holy clickbait, Batman! The hard parts were done for them! All the fast signals like DDR are on the SoM, designed by real humans who understand EE. To make it all even more of a lie, their design is basically a copy of the reference base board for the SoM.
"Boots on first attempt": well, duh! The SoM is self-contained; it boots all by itself as-is... so no wonder it boots.
https://www.quilter.ai/blog/preparing-an-ai-designed-compute...