AMD Entered the CPU Market with Reverse-Engineered Intel 8080 Clone 50 Years Ago
Key topics
As AMD celebrates 50 years in the CPU market with its reverse-engineered Intel 8080 clone, the conversation turns to the future of the x86 ISA and the possibility of it becoming open source or available for licensing. Some commenters, like ksec, envision a subset of the x86 ISA being open-sourced for forward compatibility, while others, like holowoodman, argue that a subset would be incompatible with existing software, effectively making it a new ISA. The debate highlights the complexity of x86 and the challenges of simplifying it, with some pointing out that software or microcode emulation could be a viable workaround. Meanwhile, recent developments, such as the x86 Ecosystem Advisory Group, suggest that industry players are already working together to standardize x86 features.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2m after posting
Peak period: 84 comments in 0-12h
Avg / period: 14.6
Based on 102 loaded comments
Key moments
- 01 Story posted: Dec 24, 2025 at 9:28 AM EST (9 days ago)
- 02 First comment: Dec 24, 2025 at 9:30 AM EST (2m after posting)
- 03 Peak activity: 84 comments in 0-12h, the hottest window of the conversation
- 04 Latest activity: Dec 29, 2025 at 7:44 PM EST (3d ago)
Want the full context? Jump to the original sources: read the primary article or dive into the live Hacker News thread when you're ready.
And x86 isn't that nice to begin with; if you do something incompatible, you might as well start from scratch and create a new, homogeneous, well-designed and modern ISA.
So it would be faster and more efficient when sticking to the new subset, and Nx slower when using the emulation path.
Most architectures other than x86 have fixed-size machine instructions now, making decoding fast and predictable.
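A minimal sketch (in C, not from the thread) of why fixed-width encodings make instruction boundaries trivial to find while x86's variable lengths serialize the process; `x86_length_of` is a hypothetical helper, not a real library call:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Fixed-width ISA (assuming a 32-bit encoding, as in most RISC designs):
 * instruction i always starts at byte offset i * 4, so a decoder can
 * locate many instruction boundaries independently and in parallel. */
static uint32_t fetch_fixed(const uint8_t *code, size_t i) {
    uint32_t insn;
    memcpy(&insn, code + i * 4, sizeof insn);  /* avoids unaligned-access UB */
    return insn;
}

/* Variable-length ISA (x86 instructions span 1..15 bytes): where
 * instruction i+1 starts is only known after decoding the length of
 * instruction i, so boundary finding is inherently serial without extra
 * predecode hardware. */
static size_t next_boundary(const uint8_t *code, size_t offset,
                            size_t (*x86_length_of)(const uint8_t *)) {
    return offset + x86_length_of(code + offset);
}
```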
i.e., software compiled for the subset should work on x86. The value of backward compatibility is kept for both Intel and AMD. If the market wants something in between, they now have an option.
I know this isn't a sexy idea because HN and most tech people like something shiny and new. But I have always liked the idea of extracting value from the "old and tried" solutions.
But thankfully I could install an old bin and lock it out from updating.
This isn't an issue in any way. Vendors have been routinely taking out rarely used instructions from the hardware and simulating them in the software for decades as part of the ongoing ISA revision.
Unimplemented instruction opcodes cause a CPU trap, and the missing instruction(s) are then emulated in the kernel's emulation layer.
In fact, this is what was frequently done for «budget» 80[34]86 systems that lacked the FPU – it was emulated. It was slow as a dog but worked.
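As a rough illustration of the trap-and-emulate pattern these comments describe, here is a user-space analogue assuming Linux/x86-64 signal handling; `insn_length` and `emulate_insn` are hypothetical stubs standing in for a real decoder and emulator:

```c
/* Sketch only: the CPU traps on an opcode it cannot execute, software
 * reproduces the instruction's effect, then execution resumes past it.
 * The kernel x87 emulators for FPU-less 386/486 systems worked on the
 * same principle, just in kernel mode on the real trap vector. */
#define _GNU_SOURCE
#include <signal.h>
#include <stddef.h>
#include <stdint.h>
#include <ucontext.h>

static size_t insn_length(const uint8_t *pc) { (void)pc; return 2; } /* stub */
static void emulate_insn(ucontext_t *uc)     { (void)uc; }           /* stub */

static void on_sigill(int sig, siginfo_t *si, void *ctx) {
    (void)sig; (void)si;
    ucontext_t *uc = (ucontext_t *)ctx;
    uint8_t *pc = (uint8_t *)(uintptr_t)uc->uc_mcontext.gregs[REG_RIP];

    emulate_insn(uc);  /* reproduce the missing instruction's effect */

    /* Skip the faulting opcode so the program continues after it. */
    uc->uc_mcontext.gregs[REG_RIP] = (greg_t)(uintptr_t)(pc + insn_length(pc));
}

int main(void) {
    struct sigaction sa = {0};
    sa.sa_sigaction = on_sigill;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGILL, &sa, NULL);
    /* ... run code that may contain opcodes the CPU does not implement ... */
    return 0;
}
```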
>AMD and Intel Celebrate First Anniversary of x86 Ecosystem Advisory Group Driving the Future of x86 Computing
Oct 13, 2025
Standardizing x86 features
Key technical milestones include:
https://www.tomshardware.com/pc-components/cpus/intel-termin...
My linked article is 2025.
But it's fortunate that they realised the main attraction to x86 is backwards-compatibility, so attempting to do away with that will lead to even less marketshare.
I suspect we'll see somebody -- a phone manufacturer or similar device maker -- make a major transition to RISC-V from ARM etc in the next 10 years that we won't even notice.
My biggest issue was the number of broken apps in Docker on ARM-based Macs, and even then I was mostly able to work around it without much trouble.
These days, even fairly low-level system software is surprisingly portable. Entire GNU/Linux distributions are developed this way, for the majority of architectures they support.
Some distributions like Debian or Fedora will make newer features (such as AVX/VEX) mandatory only after the patents expire, if ever. So a new entrant could implement the original x86-64 ISA (maybe with some obvious extensions like 128-bit atomics) in that time frame and preempt the patent-based lockout due to ISA evolution. If there was a viable AMD/Intel alternative that only implements the baseline ISA, those distributions would never switch away from it.
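A small sketch of how baseline-targeted software typically copes: detect optional extensions at run time and fall back to the plain x86-64 path. The function names are invented for the example; `__builtin_cpu_supports` is the GCC/Clang builtin backed by CPUID:

```c
/* Baseline path: original x86-64, no optional extensions assumed. */
static void scale_scalar(float *dst, const float *src, int n) {
    for (int i = 0; i < n; i++)
        dst[i] = src[i] * 2.0f;
}

/* Placeholder: a real build would supply an AVX implementation compiled
 * with AVX enabled (e.g. via a target attribute or a separate object). */
static void scale_avx(float *dst, const float *src, int n) {
    scale_scalar(dst, src, n);
}

void scale(float *dst, const float *src, int n) {
    if (__builtin_cpu_supports("avx"))  /* present only on post-baseline CPUs */
        scale_avx(dst, src, n);
    else
        scale_scalar(dst, src, n);      /* still works on a baseline-only part */
}
```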
It's just not easy to build high-performance CPUs, regardless of ISA.
I was thinking more like if it falls to 10% of desktop/laptop/server market share, which is still waaaaaay more than the nearly-dead architectures you listed.
Things that have < 10% market share
- macOS
- all car manufacturers except Toyota
Things that history considers obliterated:
- The city of Pompeii
- districts of Hiroshima within the bomb's blast radius
A mature gallery of software that would have to be ported from TSO to a weak memory model is a soft moat. So is AVX/SIMD maturity vs NEON/SVE. x86-64 is a duopoly and a stable target vs the fragmented landscape of ARM. ARM's whole spiel is performance per watt, scale-out type of thing vs scale-up. In that sense the market has kind of already moved. With ARM, if you start pushing for sustained high throughput, high performance, the 5GHz+ envelope, all the advantages are gone in favor of x86 so far.
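To make the TSO point concrete, here is a minimal (illustrative, not from the thread) producer/consumer hand-off. Legacy x86 code often got away with plain stores because TSO keeps stores visible in program order; porting to a weakly ordered ARM core means adding the acquire/release discipline shown here, and auditing a large codebase for such spots is the cost being called a moat:

```c
#include <stdatomic.h>
#include <stdbool.h>

int payload;
atomic_bool ready;

void producer(void) {
    payload = 42;
    /* Release store: on a weakly ordered core this is what guarantees the
     * payload write is visible before the flag; under x86 TSO even a plain
     * store would have appeared in order. */
    atomic_store_explicit(&ready, true, memory_order_release);
}

int consumer(void) {
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;               /* spin until the flag is published */
    return payload;     /* guaranteed to observe 42 */
}
```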
What might be interesting is if, let's say, AMD adds an ARM frontend decoder to Zen. In one of Jim Keller's interviews that was shared here, he said it wouldn't be that big of a deal to give such a CPU an ARM-decoding front end. That'd be interesting to see.
Laptops. Apple already owned the high margin laptop market before they switched to ARM. With phones, tablets, laptops above 1k, and all the other doodads all running ARM, it's not that x86 will simply disappear. Of course not. But the investments simply aren't comparable anymore with ARM being an order of magnitude more common. x86 is very slowly losing steam, with their chips generally behind in terms of performance per watt. And it's not because of any specific problem or mistake. It's just that it no longer makes economic sense.
I like RISC-V (it's my job and I'm very involved in the community) but even now it isn't ready for laptops/desktop class applications. RVA23 is really the first profile that comes close and that was only ratified very recently. But beyond that there are a load of other things that are very much work in progress around the periphery that you need on a laptop. ACPI, UEFI, etc. If you know RISC-V, what does mconfigptr point to? Nothing yet!
Anyway the question was why would anyone switch from one proprietary ISA to another, as if nobody would - despite the very obvious proof that yes they absolutely would.
So this is kind of a useless question, because in such a timespan anything can happen. 20 years ago computers had somewhere around 512MB of RAM and a single core, and had a CRT on the desk.
Lunar Lake shows that x86 is capable of getting that energy efficiency
Panther Lake, which will be released in around 30 days, is expected to show a significant improvement over Lunar Lake
So... why switch to ARM if you will get similar perf/energy eff?
While my PoV is US centered, I feel that other nations should largely optimize for the same as much as possible. Many of today's issues stem from too much centralization of commercial/corporatist power as opposed to fostering competition. This shouldn't be in the absence of a baseline of reasonable regulation, just optimizing towards what is best for the most people.
Now apply that to weapons systems in conflict against an enemy that DOES have modern production that you (no longer) have... it's a recipe for disaster/enslavement/death.
China, though largely hamstrung, is already well ahead of your hypothetical 2005 tech breakpoint.
Beyond all this, it's not even a matter of just being slower; it's a matter of what's even practical... You couldn't viably create a lot of websites that actually exist on 2005-era technology. The performance and memory headroom just weren't there yet. Not that a lot of things weren't possible... I remember Windows 2000 pretty fondly, and you could do a LOT if you had 4-8x what most people were buying in RAM.
How do you maintain this production with a sudden influx of ballistic missiles at the production facility - or a complete naval blockade of all food calories to your country?
If society as a whole reverted to 2005, we would be fine.
In 2004 Iraq, we had guided missiles, night vision, explosives, satellites. What advantages would 3nm transistors give the enemy in combat?
See Ukraine drone warfare ... there's a lot going on there that is more than just miniaturized motors, etc. A lot of it is efficient power use of the semiconductors in those drones, the image processors attached to the cameras, etc., which I suspect relies on newer processes.
If you took today's software and tried running it on a memory constrained, slow, 2005 era system, you'd be in for some pain.
Electron, as bad as it can be, has allowed a level of cross-platform application support in practice that has never existed before... It's bloated on several levels.
Most of that ease in being able to deliver software that works well enough and quickly doing so wouldn't be possible without the improvements in technology.
See tangentially related topic from yesterday: https://news.ycombinator.com/item?id=46362927
Another approach was Transmeta's, where the target ISA was microcoded and therefore handled in "software".
"Apple created a chip which is not an X86! Its awesome! And the best thing about it is ... it does TSO does like an X86! Isn't that great?"
I think the last time I ran amd64 on my mac was months ago, a game.
There's also the CFINV instruction (architectural, part of FEAT_FLAGM), which helps with emulating the x86-64 CMP instruction.
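A rough model of why that helps, assuming the usual x86 vs AArch64 carry conventions: the two ISAs define the carry flag for subtraction in opposite senses, so an emulator that maps x86 CMP onto a native SUBS has to invert the carry afterwards, which is what a single CFINV accomplishes. This is an illustrative flag model in C, not Rosetta's actual code:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct { bool cf, zf, sf, of; } x86_flags;

/* x86 CMP semantics for 64-bit operands, computed in software. */
static x86_flags emulate_cmp64(uint64_t a, uint64_t b) {
    uint64_t r = a - b;
    x86_flags f;
    f.cf = a < b;                              /* x86: a borrow sets CF      */
    f.zf = (r == 0);
    f.sf = ((int64_t)r < 0);
    f.of = (((a ^ b) & (a ^ r)) >> 63) != 0;   /* signed overflow of a - b   */
    return f;
}
/* AArch64 SUBS would produce carry = !f.cf (C set means "no borrow"),
 * hence the need to invert it, in hardware via CFINV. */
```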
Customer needs don't really matter in cases where monopolist (ab)uses the law to kill competition. That's the MAIN reason why monopolies are problematic.
The licensing deals that legitimized AMD's unlicensed clones came later.
Those worked in 4-bit slices, and you could use them as LEGO blocks to build your own design (e.g. 8, 12, or 16 bits) with far fewer parts than using standard TTL gates (or ECL NANDs, if you were Seymour Cray).
The 1980 Mick & Brick book Bit-slice Microprocessor Design later gathered together some "application notes" - the cookbooks/crib sheets that semiconductor companies wrote and provided to get buyers/engineers started after the spec sheets.
In 1975, AMD introduced both its NMOS 8080 clone and the bipolar bit-slice 2900 family.
I do not know which of these two AMD products was launched earlier, but in any case there were at most a few months between them, so it cannot be said that AMD "was already in the CPU market". The launch of both products was prepared at a time when AMD was not yet in the CPU market, and Intel had been earlier than AMD both in the NMOS CPU market and in the market for sets of bipolar bit-slice components.
While the Intel 8080 was copied by AMD, the AMD 2900 family was much better than the Intel 3000 family, so it was used in a lot of PDP-11 clones and competitors.
For example, the registers+ALU component of the Intel 3000 implemented only a 2-bit slice and few ALU operations, while the registers+ALU component of the AMD 2900 implemented a 4-bit slice and many more ALU operations.
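For readers unfamiliar with bit-slice parts, here is a toy model (C, purely illustrative) of how 4-bit slices cascade into a wider ALU through the carry chain; a real Am2901 slice also contained registers and a much richer function set:

```c
#include <stdint.h>

/* One "slice": 4 bits of data plus carry in/out. */
typedef struct { uint8_t sum4; uint8_t carry_out; } slice_out;

static slice_out slice_add4(uint8_t a4, uint8_t b4, uint8_t carry_in) {
    uint8_t s = (uint8_t)((a4 & 0xFu) + (b4 & 0xFu) + (carry_in & 1u));
    slice_out r = { (uint8_t)(s & 0xFu), (uint8_t)(s >> 4) };
    return r;
}

/* A 16-bit adder built from four cascaded 4-bit slices (ripple carry),
 * the way slices were chained on a board to make 8-, 12- or 16-bit CPUs. */
static uint16_t add16(uint16_t a, uint16_t b) {
    uint16_t result = 0;
    uint8_t carry = 0;
    for (int i = 0; i < 4; i++) {
        slice_out o = slice_add4((a >> (4 * i)) & 0xF,
                                 (b >> (4 * i)) & 0xF, carry);
        result |= (uint16_t)o.sum4 << (4 * i);
        carry = o.carry_out;
    }
    return result;
}
```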
Moral: Awesome productivity happens when IP doesn't get in the way.
I remember when the Am386-40MHz came out in the early 90s. Everyone was freaking out about how we were now breaking the sound barrier. There was a company, Twinhead(?), that came out with these 386-40MHz motherboards with buses so overclocked that most video cards would fry. Only the mono Hercules cards could survive. We thought our servers were the shizzle.
I was interested in this and followed the links to the original interview at: https://web.archive.org/web/20131111155525/http://silicongen... which was interesting:
> "Xerox being more of a theoretical company than a practical one let us spend a whole year taking apart all of the different microprocessors on the market at that time and reverse engineering them back to schematic. And the final thing that I did as a project was to, we had gotten a pre-production sample of the Intel 8080 and this was just as Kim and I were leaving the company. On the last day I took the part in and shot ten rolls of color film on the Leica that was attached to the lights microscope and then they gave us the exit interview and we went on our way. And so that summer we got a big piece of cardboard from the, a refrigerator came in and made this mosaic of the 8080. It was about 300 or 400 pictures altogether and we pieced it together, traced out all the logic and the transistors and everything and then decided to go to, go up North to Silicon Valley and see if there was anybody up there that wanted to know about that kind of technology. And I went to AMI and they said oh, we're interested, you come on as a consultant, but nobody seemed to be able to take the project seriously. And then I went over to a little company called Advanced Micro Devices and they wanted to, they thought they'd like to get into it because they had just developed an N-channel process and this was '73. And I asked them if they wanted to get into the microprocessor business because I had schematics and logic diagrams to the Intel 8080 and they said yes."
From today's perspective, just shopping a design lifted directly from Intel CPU die shots around to valley semi companies sounds quite remarkable but it was a very different time then.
The difference with the 386, I think, is that AFAIK the second-sourced 8086 and 286 CPUs from non-Intel manufacturers still made use of licensed Intel designs. The 386 (and later) had to be reverse engineered again and AMD designed their own implementation. That also meant AMD was a bit late to the game (the Am386 came out in 1991 while the 80386 had already been released in 1985) but, on the other hand, they were able to achieve better performance.
It is, yes. I meant to mention that detail!
> The 386 (and later) had to be reverse engineered … That also meant AMD was a bit late to the game
There were also legal matters that delayed the release of their chips. Intel tried to claim breach of copyright with the 80386 name¹ and so forth, to try to stymie the competition.
> they were able to achieve better performance.
A lot of that came from clocking them faster. I had an SX running at 40MHz. IIRC they were lower power for the same clock than Intel parts, able to run at 3.3V, which made them popular in laptops of the time. That, and they were cheaper! Intel came out with a 3.3V model that had better support for cache to compete with this.
--------
[1] This failed, which is part of why the i386 (and later i486 and number-free names like Pentium) branding started (though only in part - starting to market directly to consumers rather than just OEMs was a significant factor in that too).
>AMD said Friday that its “independently derived” 486 microprocessor borrowed some microcode from Intel’s earlier 386 chip.
Borrowed, hehe. It ended up in a 1995 settlement where AMD fully admitted copying and agreed to pay a $58 million penalty in exchange for an official license to the 386 & 486 microcode and the infamous '338 (MMU) patent. Intel really wanted a legal win confirming the validity of patent '338 to threaten other competitors. The '338 patent is what prevented the sale of the UMC Green 486 in the USA. Cyrix bypassed the issue by manufacturing at SGS and TI, who had full Intel licenses https://law.justia.com/cases/federal/district-courts/FSupp/8...
>were able to achieve better performance
Every single Am386 instruction executes at the same cycle count as its Intel counterpart; the only difference is the official ability to run at 40MHz.
Then there was the big licensing deal for Intel 8088 and its successors, which was forced by IBM upon Intel, in order to have a second source for the critical components of the IBM PC.
IP is one of those things you invent once you made it to the top.
https://www.amazon.com/Kicking-Away-Ladder-Development-Persp...
The US industrial revolution was from Samuel Slater memorizing detailed plans of British textile mills and their machines and bringing them here.
Apparently by ripping off their military customers.
>says Wikipedia.
Why is that a primary source?
- Mario Puzo, The Godfather
instant 20% speed boost replacing the 8088 with the V20 chip
bought a sleeve of them cheap and went around to all the PCs and popped them out
only problem was software that relied on clocks ran too fast
Definitely read that wrong the first time I skimmed the article
> The processor was reverse-engineered by Ashawna Hailey, Kim Hailey and Jay Kumar. The Haileys photographed a pre-production sample Intel 8080 on their last day in Xerox, and developed a schematic and logic diagrams from the ~400 images.