Fabrice Bellard Releases MicroQuickJS
Key topics
The release of MicroQuickJS, a tiny JavaScript engine by Fabrice Bellard, has sparked a lively discussion about its potential uses and the motivations behind its creation. Some commenters, like baudaux, are eager to port it to WebAssembly, while others, like MobiusHorizons, question the practicality of running a JS engine within another, citing performance concerns. However, others point out that having a fast JavaScript engine inside WebAssembly can be useful for sandboxing user-authored code, as seen in Figma's use of QuickJS, or for environments without a native JS interpreter. As the conversation unfolds, witty remarks about Bellard's coding abilities and potential AI-assisted coding tools add a lighthearted touch to the debate.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 17m
Peak period: 119 comments (0-6h)
Avg / period: 20
Based on 160 loaded comments
Key moments
- Story posted: Dec 23, 2025 at 12:33 PM EST (10 days ago)
- First comment: Dec 23, 2025 at 12:50 PM EST (17m after posting)
- Peak activity: 119 comments in 0-6h (hottest window of the conversation)
- Latest activity: Dec 26, 2025 at 4:41 PM EST (7 days ago)
Figma, for example, used QuickJS, the predecessor of the library this post is about, to sandbox user-authored JavaScript plugins: https://www.figma.com/blog/an-update-on-plugin-security/
It's pretty handy for things like untrusted user-authored JS scripts that run on a user's client.
That way, programs that embed WebAssembly in order to be scriptable can let people use their choice of languages, including JavaScript.
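To make the embedding model concrete, here is a minimal sketch of evaluating an untrusted script through the classic QuickJS C API, the predecessor mentioned above. Whether MicroQuickJS keeps these exact function names is an assumption; treat this as an illustration of the general pattern, not its documented API.

```c
/* Minimal embedding sketch using the classic QuickJS C API.
   MicroQuickJS may expose a different (smaller) surface; the calls
   below are illustrative, not its documented interface. */
#include <stdio.h>
#include <string.h>
#include "quickjs.h"

int main(void) {
    JSRuntime *rt  = JS_NewRuntime();
    JSContext *ctx = JS_NewContext(rt);

    /* In a real host, this source would come from an untrusted user. */
    const char *src = "[1, 2, 3].map(x => x * 2).join(',')";
    JSValue result = JS_Eval(ctx, src, strlen(src), "<plugin>", JS_EVAL_TYPE_GLOBAL);

    if (JS_IsException(result)) {
        JSValue err = JS_GetException(ctx);
        const char *msg = JS_ToCString(ctx, err);
        fprintf(stderr, "plugin error: %s\n", msg ? msg : "(unknown)");
        JS_FreeCString(ctx, msg);
        JS_FreeValue(ctx, err);
    } else {
        const char *out = JS_ToCString(ctx, result);
        printf("plugin returned: %s\n", out ? out : "(non-string)");
        JS_FreeCString(ctx, out);
    }

    JS_FreeValue(ctx, result);
    JS_FreeContext(ctx);
    JS_FreeRuntime(rt);
    return 0;
}
```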
That said, judging by the license file this was based on QuickJS anyway, making it a moot comparison.
It's more a model of what a really talented person can do when they apply themselves to building things they enjoy.
Why do you say that specifically is his specialty? He also started QEMU and ffmpeg which are foundational pieces of software for several industries, and his day job is as founder of a company that makes software defined radio test equipment for cellular networks. There isn't one thing I could point at as a specialty.
FFmpeg uses a ton of arcane C and assembly knowledge to make multimedia systems manageable and efficient. QEMU uses dynamic binary translation for hardware emulation and virtualization. Amarisoft's specialty is basically using software to do things that are usually done by hardware.
The intersection of programming languages and systems programming seems to me like a pretty fair description of what Fabrice Bellard is extremely good at.
https://www.macplus.net/depeche-82364-interview-le-createur-...
I also read one in English ~a decade ago. He keeps a low profile and lets his work speak for itself.
He really is brilliant.
One such award is the Turing Award [1], given "for contributions of lasting and major technical importance to computer science."
[1] https://en.wikipedia.org/wiki/Turing_Award
AIUI the Turing Award is primarily CS-focused.
Being an engineer and coding at this stage/level is just remarkable. Sadly, this tradecraft is missing in most (big?) companies as you get promoted away into oblivion.
Now I know that Markdown generally can include HTML tags, so probably it should be somewhat restricted.
It could allow a second web to be implemented in a way that's compatible with simple browsers.
With a Markdown-over-HTTP browser I could already almost browse GitHub through the READMEs, and probably other websites too.
Markdown is a well-loved and now quite popular format. It is sad that Gemini created a separate, closed format instead of just adopting it.
Browsers are complex because they solve a complex problem: running arbitrary applications in a secure manner across a wide range of platforms. So any "simple" browser you can come up with just won't work in the real world (yes, that means being compatible with websites that normal people use).
No, new adhering websites would emerge and word of mouth would do the rest: normal people would see this fast nerd-web and want to be rid of their bloated, day-to-day monster of a web life.
One can still hope...
Oh right. 99% of people don't do even that, much less switch their life over to entirely new websites.
In 2025, depending on the study, it is said that 31.5~42.7% of internet users now block ads. Nearly one-third of Americans (32.2%) use ad blockers, with desktop leading at 37%.
I understand this has been tried before (Flash, Silverlight, etc.). They weren't bad ideas; they were killed because of companies that were threatened by the browser as a standard target for applications.
But it was not insane, and it represented a clarity of thought that then went missing for decades. Several things that were in WML are quite reminiscent of interactions designed in web components today.
Gemini is not a good or sensible design. It's reactionary more than it is informed.
The embedded use case is obvious, but it'd also be excellent for things like documentation — with such a browser you could probably have a dozen+ doc pages open with resource usage below that of a single regular browser tab. Perfect for things that you have sitting open for long periods of time.
Also, legacy machines couldn't run it as fast as they could.
For a “lite web” browser that's built for a thin, select slice of the web stack (HTML/CSS/JS), dragging around the heft of a full-fat JS engine like V8 is extreme overkill. It's not going to be running things like React, just moderate use of light enhancing scripts; something like a circa-2002 website would skew toward the heavy side of what one might expect of a “lite web” site.
The JS engine for such a browser could be trimmed down and aggressively optimized, likely even beyond what has been achieved with MQJS and similar engines, especially if one is willing to toss out legacy compatibility and not keep themselves beholden to every design decision.
https://en.wikipedia.org/wiki/Progressive_enhancement
Or maybe just make it all a single lispy language
Work towards an eventual feature freeze and final standardisation of the web would be fantastic though, and a huge benefit to pretty much everyone other than maybe the Chrome developers.
No matter how much you hate LLM stuff, I think it's useful to know that there's a working proof of concept of this library compiled to WASM and working as a Python library.
I didn't plan to share this on HN but then MicroQuickJS showed up on the homepage so I figured people might find it useful.
(If I hadn't disclosed I'd used Claude for this I imagine I wouldn't have had any down-votes here.)
In this particular case AI has nothing to do with Fabrice Bellard.
We can have something different on HN like what Fabrice Bellard is up to.
You can continue AI posting as normal in the coming days.
... and that it provides a useful sandbox in that you can robustly limit both the memory and time allowed, including limiting expensive regular expression evaluation?
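For a sense of what those limits look like in an embedder, here is a rough sketch using the classic QuickJS C API (JS_SetMemoryLimit, JS_SetMaxStackSize, JS_SetInterruptHandler). Whether MicroQuickJS exposes the same calls, and whether its regex engine checks the interrupt handler mid-match, is an assumption based on the comment above.

```c
/* Sketch of resource-limited runtime setup via the classic QuickJS API;
   MicroQuickJS's actual embedding surface may differ. */
#include <time.h>
#include "quickjs.h"

/* Returning non-zero from the interrupt handler aborts evaluation.
   Here it enforces a simple wall-clock deadline; how often it fires
   during regex matching depends on the engine version. */
static int past_deadline(JSRuntime *rt, void *opaque) {
    (void)rt;
    time_t *deadline = opaque;
    return time(NULL) > *deadline;
}

JSRuntime *make_limited_runtime(time_t *deadline) {
    JSRuntime *rt = JS_NewRuntime();
    JS_SetMemoryLimit(rt, 8 * 1024 * 1024);  /* cap the JS heap at ~8 MB */
    JS_SetMaxStackSize(rt, 256 * 1024);      /* cap native stack usage   */
    *deadline = time(NULL) + 2;              /* allow ~2 seconds of work */
    JS_SetInterruptHandler(rt, past_deadline, deadline);
    return rt;
}
```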
I included the AI bit because it would have been dishonest not to disclose how I used AI to figure this all out.
I'm currently on a multi-year side-quest to find safe ways to execute untrusted user-provided code in my Python and web applications.
As such, I pay very close attention to any new language or library that looks like it might be able to provide a robust sandbox.
MicroQuickJS instantly struck me as a strong candidate for that, and initial prototyping has backed that up.
None of that was clear from my original comment.
Unfortunately it means those languages will be the permanent coding platforms.
Not really. I suspect training volume has a role in debugging a certain class of errors, so there is an advantage to Python/TS/SQL in those circumstances: if, as an old boss once told me, you code by the bug method :)
The real problems I've had that hint at training data vs logic have been with poorly documented old versions of current languages.
To me, the most amazing capability is not the code they generate but the facility for natural language analysis.
My experience is that agent tools enable polyglot systems, because we can now use the right tool for the job, not just the most familiar one.
https://github.com/libriscv/libriscv (I talked with the author of this project, fwsgonzo, who is amazing), and they told me that this has the lowest latency of any sandbox they know of, at only a minor cost in performance.
Btw, for sandboxing, KVM itself feels good too. I had discussed it with them on their Discord server when they mentioned that they were working on a minimal KVM server, which has since been open-sourced (https://github.com/varnish/tinykvm).
Honestly Simon, Deno hosting / the way Deno works is another interesting tidbit for sandboxing. I wish something like Deno's sandboxing capabilities came to Python, since Python fans would appreciate it.
I will try to look more into your GitHub repository too once I have more free time.
Maybe we HN users have minds in sync :)
https://news.ycombinator.com/item?id=46359396#46359695
Have a nice day! Awesome stuff, I'll keep an eye on your blog. Does your blog/website use Mataroa by any chance? There are some similarities, even though both are minimalist, but overall it's nice!
Maybe someone finds it useful: https://paste.ubuntu.com/p/rD6Dz7hN2V/
Thanks for sharing it.
Thanks a lot for checking out my blog/project. Have a great day!
My issue is that the cost, in terms of time, of these experiments has really gone down with LLMs. Earlier, if someone played around with the posted project, we knew they had spent a reasonable amount of time on it, and thus cared about the subject. With LLMs, this is not the case.
Compiling this to WASM and calling it from Python as a sandboxed runtime isn't tangential. I wouldn't have known from reading the project's README that this was possible, and it's a really interesting use case. We might as well get mad at simonw for using an IDE while he explored the limits of a new library.
I read this post of yours, https://simonwillison.net/2025/Dec/18/code-proven-to-work/, and although there's a point to be made that what you're doing isn't a job, and I myself create prototypes using AI, long term (in my opinion) what really matters is maintenance and accountability (like your article says, in a way: that I can pinpoint a person who's responsible for the code working).
If I find a bug right now, I wouldn't blame it on you but on the AI, and I have a varying amount of trust in it.
My opinion on the matter is that AI can be considered good for prototyping, but long term it definitely isn't, and I am sure you share a similar viewpoint.
I think AI is so polarizing that nuance stops existing. Read my recent comment (warning: it's long) (https://news.ycombinator.com/item?id=46359684)
Perhaps you could write a blog post about the nuance of AI? I imagine a lot of people share a similar AI policy where it's okay to tinker with it. I am part of the new generation and, truth be told, I don't see much incentive long term not to use AI, because using AI just feels so lucrative, especially for youngsters.
I am 17 years old and going into a decent college (with, might I add, immense competition to begin with). I have passion for these topics, only to be dissuaded because benchmarks like solving assignments are now done by AI, and the signal from universities themselves is shrinking. They haven't become irrelevant; rather, you need a university to try to get a job, yet companies have frozen hiring, which some attribute to LLMs.
If you ask me, long term it feels like more people might associate themselves with hobbyist computing, even using AI (to be honest, sort of like PewDiePie), without being in the industry.
I am not sure what the future holds for me (or for any of us, for that matter), but I guess the point I am trying to make is that there is nuance to the discussion on both sides.
Have a nice day!
If you care that much, write a blog post and post that; we don't need low-effort LLM show-and-tell all day, every day.
Your GitHub research repo and links are an interesting case of this. On one hand, late AI adopters may appreciate your example prompts and outputs. But it feels like trivially reproducible noise to expert LLM users.
The HN AI pushback then drowns out your true message in favor of squashing perceived AI fluff.
My simonw/research GitHub repo is deliberately separate from everything else I do because it's entirely AI-generated. I wrote about that here: https://simonwillison.net/2025/Nov/6/async-code-research/#th...
This particular case is a very solid use-case for that approach though. There are a ton of important questions to answer: can it run in WebAssembly? What's the difference to regular JavaScript? Is it safe to use as a sandbox against attacks like the regex thing?
Those questions can be answered by having Claude Code crunch along, produce and execute a couple of dozen files of code and report back on the results.
I think the knee-jerk reaction pushing back against this is understandable. I'd encourage people not to miss out on the substance.
If someone wants to read your blog, they will, they know it exists, and some people even submit your new articles here. There's no need to do what you're doing. Every day you're irritating more people with this behavior, and eventually the substance won't matter to them anymore, so you're acting against your own interests.
Unless you want people to develop the same kind of ad blindness mechanism, where they automatically skip anything that looks like self promotion. Some people will just see a comment by simonw and do the same.
A lot of people have told you this in many threads, but it seems you still don’t get it.
You're not pushing against an arbitrary taboo where people dislike self-links in principle. People already accept self-links on HN when they're occasional and clearly relevant. What people are reacting to is the pattern: when "my answer is a link to my site" becomes your default state, it stops reading like a helpful reference and starts reading like your distribution strategy.
And that's why "I'm determined to normalize it" probably won't work: you can't normalize your way out of other people's experience of friction. If your behavior reliably adds a speed bump to reading threads, forcing people to context-switch, click out, and wonder if they're being marketed to, then the community will develop the shortcut I mentioned in my previous comment, which is basically: this is self-promo, so just ignore it.
If your goal is genuinely to share useful ideas, you're better off meeting people where they are: put the relevant 2-6 sentences directly in the comment, and then add something like "I wrote more about it on my blog" or whatever and if anyone is interested they will scroll through your blog (you have it in your profile so anyone can find it with one click) or ask for a link.
Otherwise you're not "normalizing" anything, you're training readers to stop paying attention to you. And I assure you once that happens, it's hard to undo, because people won't relitigate your intent every time. They'll just scroll. It's a process that’s already started, but you can still reverse it.
I'm actively pushing back against the "don't promote your site, don't link to it, restate your content in the comments instead" thing.
I am willing to take on risk to my personal reputation and credibility in support of my goal here.
If everyone starts dropping their "relevant content" in the comments, most of it won't be relevant, and a lot of it will be spam. People don't have time to sift through hundreds of links in the comments and tens of thousands of words when the whole point of HN is that discussion and curation work in the opposite direction.
If your content is good, someone else will submit it as a story. Your blog is probably already read by thousands of people from HN, if they think a particular post belongs in the discussion in some comment, they'll link it. That's why other popular HN users who blog don't constantly promote or link their own content here, unlike you. They know that you don't need to do it yourself, and doing it repeatedly sends the wrong signal (which is obvious and plenty of socially aware people have already pointed out to you in multiple threads).
Trying to normalize that kind of self-promotion is like normalizing an annoying mosquito buzz: most people simply don't want it, and no amount of "normalizing" will change that.
The difference between LinkedIn slop and good content is not the presence or absence of a link to one’s own writing, but the substance and quality of the writing.
If simonw followed these rules you want him to follow, he would be forced to make obscure references to a blog post that I would then need to Google or hope that his blog post surfaces on HN in the next few days. It seems terribly inefficient.
I agree with you that self-promotion is off-putting, and when people post their competing project on a Show HN post, I don’t click those links. But it’s not because they are linking to something they wrote. It’s because they are engaged in “self-promotion”, usually in an attempt to ride someone else’s coattails or directly compete.
If simonw plugged datasette every chance he got, I’d be rolling my eyes too, but linking to his related experiments and demos isn’t that.
I'd chalk up the -4 to generic LLM hate, but I find examples of where LLMs do well to be useful, so I appreciated your post. It displays curiosity, and is especially defensible given your site has no ads, loads blazingly fast, and is filled with HN-relevant content, and doesn't even attempt to sell anything.
You can safely assume so. Bellard is the creator of jslinux. The news here would be if it _didn't_.
> What's the difference to regular JavaScript?
It's in the project's README!
> Is it safe to use as a sandbox against attacks like the regex thing?
This is not a sandbox design. It's a resource-constrained design like cesanta/mjs.
---
If you vibe coded a microcontroller emulation demo, perhaps there would be less pushback.
A lot of HN people got cut by AI in one way or another, so they seem to have personal beefs with AI. I am talking about not only job shortages but also general humbling of the bloated egos.
Especially so when it concerns AI theft of human music and visual art.
"Those pompous artists, who do they think they are? We'll rob them of their egos".
The problem is that these ego-accusations don't quite come from egoless entities.
AI brings clarity. This results in a lot of pain for those who tried to hijack the game in one way or another.
From the psychological point of view, AI is a mirror of one's personality. Depending on who you are, you see different reflections: someone sees a threat, others see the enlightenment.
Do you mean that kind of clarity when no audio/video evidence is a proof of anything anymore?
Programmers willingly put their projects into open source, and consent is the default unless there is a prohibitive license that explicitly denies AI training.
Artists, designers, musicians - I agree here, but this is the point where boosted egos usually enter the room.
When was the last time you heard a song without autotune? If it was a machine all along for the last 20 years, why should we care now? Why should we care, if 99% of artists do not care about us, pushing subpar material down our throats to "make it" (for them, of course)?
AI rebalances those asymmetries by bringing them back to ground zero. Some people consider it destruction, but some see a breeding ground for the future.
Right now, 2025-12-25, 09:00 UTC, I'm listening to a song without autotune.
Everything But The Girl - Missing (Todd Terry remix)
And I can also find thousands of songs without autotune that were released on Spotify yesterday.
Also it's not clear how autotune is related to ego. Does compressor relate to it too? Delay? Reverb?
"This song has too much ego, the reverb is 6dB louder than the dry signal"
How does the sheer number of subpar artists caring only about money (which is of course a thing, and maybe a problem for those too lazy to discern and search for themselves) justify robbing truly sincere artists who share their soul with the listener?
I'm gonna give you the benefit of the doubt here. Most of us do not dislike genAI because we were fired or "humbled". Most of us dislike it because of a) the terrible environmental impacts, b) the terrible economic impacts, and c) the general non-production-readiness of results once you get past common, well-solved problems.
Your stated understanding comes off a little bit like "they just don't like it because they're jealous".
(Keep posting please. Downvotes due to mentioning LLMs will be perceived as a quaint historic artifact in the not so distant future…)
388 more comments available on Hacker News