Best Options for Using AI in Chip Design
Posted 5 months ago · Active 5 months ago
semiengineering.com · Tech · story
Tone: calm, positive · Debate: 20/100
Key topics
AI in Chip Design
Semiconductor Industry
Machine Learning Applications
Hardware Development
Discussion on using AI in chip design, exploring current options and future possibilities.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion · First comment: 2h after posting · Peak period: 5 comments in 2-4h · Average per period: 3
Comment distribution: 12 data points (based on 12 loaded comments)
Key moments
- Story posted: Aug 20, 2025 at 12:29 PM EDT (5 months ago)
- First comment: Aug 20, 2025 at 2:33 PM EDT (2h after posting)
- Peak activity: 5 comments in 2-4h (the hottest window of the conversation)
- Latest activity: Aug 21, 2025 at 2:00 PM EDT (5 months ago)
ID: 44963391 · Type: story · Last synced: 11/20/2025, 6:39:46 PM
> We essentially have rolled out an L1 through L5, where L5 is the Holy Grail with fully autonomous end-to-end workflows. L1 is where we are today, and maybe heading into L2. L3 involves orchestration and then planning and decision-making. When we get to L5, we’ll be asking questions like, ‘Are junior-level engineers really needed?’
We're seeing this in the software development world too, where it's becoming harder and harder for junior engineers both to learn programming and to be successful in their careers. If only senior engineers are needed, how do people grow to become senior engineers? It's a harrowing prospect.
As in: by the time this becomes an issue, AI will begin to displace senior engineers - the same way it's displacing junior engineers now.
Considering where AI was a decade ago? I'd be reluctant to bet on this happening within a decade from now, but I certainly wouldn't bet against it.
Humans trying to build and navigate systems that they do not understand is going to be a disaster.
With an FPGA, you can have your purpose-built chip overnight.
Thus, in my not so humble opinion, one should use whatever means one can to make FPGAs more efficient.
Commercial companies that may be interested in AI tools for EDA do have these things, of course, but are any of them going through the expensive process of fine-tuning LLMs with them?
Indeed, perhaps it's important to include a high-quality corpus in pretraining? I doubt anyone wants to train an LLM from scratch for EDA.
Perhaps Nvidia is doing experiments here? They've got the unique combination of access to a decent corpus, cheaper training costs, and in-house know-how.
I fine-tuned reasoning models (o1-mini and o3-mini), which were already strong at instruction-following and reasoning. The dataset I prepared took this into account, but it was just simple prompt/response pairs. Defining the task tightly, ensuring the dataset was of high quality, picking the right hyperparameters, and preparing the proper reward function (and modeling that against the API provided) were the keys to success.
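As an illustration of the "simple prompt/response pairs" mentioned above, here is a minimal sketch of writing such a dataset out as a JSONL file in the chat format most hosted fine-tuning APIs accept. The file name and the example records are hypothetical, not from the comment.

```python
import json

# Hypothetical prompt/response pairs; in practice these would be
# curated, task-specific examples reviewed for quality.
pairs = [
    {
        "prompt": "Summarize this timing report and list the three worst paths.",
        "response": "Worst negative slack is -0.12 ns on clk_core; the three worst paths are ...",
    },
    {
        "prompt": "Rewrite this SDC constraint so the false path applies only to the scan chain.",
        "response": "set_false_path -from [get_pins scan_en_reg/Q] ...",
    },
]

# One JSON object per line, each example expressed as a short chat
# transcript -- the layout most hosted fine-tuning endpoints accept.
with open("finetune_train.jsonl", "w", encoding="utf-8") as f:
    for pair in pairs:
        record = {
            "messages": [
                {"role": "system", "content": "You are an assistant for EDA workflows."},
                {"role": "user", "content": pair["prompt"]},
                {"role": "assistant", "content": pair["response"]},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

For plain supervised fine-tuning, the assistant response itself is the training signal; the reward function the commenter mentions only enters the picture for reinforcement-style fine-tuning against a provider's grading API.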
That does sound reasonable to me. The main problem is that (at least for software) you can't train on source code alone, as comments are human language, so you need some corpus of human language as well, so that the LLM learns that alongside the programming language(s). I'd assume the same holds here.
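A small sketch of the data-mixing step that point implies: interleaving source files with natural-language documentation so the model sees both. The directory layout and the 70/30 ratio are assumptions for illustration, not recommendations.

```python
import random
from pathlib import Path

# Hypothetical corpus layout: RTL sources on one side, prose docs on the other.
CODE_DIR = Path("corpus/rtl")    # e.g. *.v / *.sv files
PROSE_DIR = Path("corpus/docs")  # e.g. datasheets and app notes as .txt
CODE_FRACTION = 0.7              # assumed mixing ratio, not a recommendation

code_docs = [p.read_text(errors="ignore") for p in CODE_DIR.rglob("*.v")]
prose_docs = [p.read_text(errors="ignore") for p in PROSE_DIR.rglob("*.txt")]

def sample_mixed(n_docs: int, seed: int = 0) -> list[str]:
    """Draw a shuffled mix of code and natural-language documents."""
    rng = random.Random(seed)
    return [
        rng.choice(code_docs if rng.random() < CODE_FRACTION else prose_docs)
        for _ in range(n_docs)
    ]

train_docs = sample_mixed(10_000)
```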
Depending on what you're going for, you could take an existing pre-trained model and further pretrain it on your EDA corpus. That means you'll have to reinvent, or lift from somewhere else, the entire fine-tuning data and pipeline, which is significantly harder than just doing a fine-tune.
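A minimal sketch of that continued-pretraining step using the Hugging Face Trainer, assuming a plain-text EDA corpus on disk; the base model name, paths, and hyperparameters here are placeholders rather than anything the commenter specified.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "gpt2"  # placeholder; in practice a much larger pretrained model

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Assumed corpus layout: plain-text files gathered under eda_corpus/.
raw = load_dataset("text", data_files={"train": "eda_corpus/*.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False -> plain next-token (causal) language modeling objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="eda-continued-pretrain",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    learning_rate=1e-5,  # typically lower than the original pretraining rate
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

After this stage you would still need the fine-tuning data and pipeline the comment mentions to get instruction-following behavior on top of the domain-adapted base model.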