Key Takeaways
In theory, WASM could be a single cross-platform compile target, which is kind of a CS holy grail. It's easy to let your mind spin up a world where everything is WebAssembly: a desktop environment, a server, day-to-day software applications.
After I've imagined all of that, being told WebAssembly helps some parts of Figma run faster feels like a big letdown. Of course that isn't fair; almost nothing could live up to the expectations we have for WASM.
Its development is also by committee, which is maybe the best option for our current landscape, but isn't famous for getting things going quickly.
So basically Wasm is just an optimisation. That's fine, but it's not something groundbreaking.
And if we remove the web from the platform list, there have been many portable bytecodes: P-code from the Pascal era, JVM bytecode from the modern era, and plenty of others.
That's underselling it a bit IMO. There's a reason asm.js was abandoned.
The perfect article: https://hacks.mozilla.org/2017/03/why-webassembly-is-faster-...
Honestly the differences are less than I would have expected, but that article is also nearly a decade old so I would imagine WASM engines have improved a lot since then.
Fundamentally I think asm.js was a fragile hack and WASM is a well-engineered solution.
I agree 100% with the startup time arguments made by the article, though. No way around it if you're going through the typical JS pipeline in the browser.
The argument for better load/store addressing on WASM is solid, and I expect this to have higher impact today than in 2017, due to the huge caches modern CPUs have. But it's hard to know without measuring it, and I don't know how hard it would be to isolate that in a benchmark.
Thank you for linking it. It was a fun read. I hope my post didn't sound adversarial to any arguments you made. I wonder what asm.js could have been if it was formally specified, extended and optimized for, rather than abandoned in favor of WASM.
And AFAIK asm.js is the precursor to WASM, like the early implementations just built on top of asm.js's primitives.
I don't see how it'd be much different to compiling to JavaScript otherwise. Isn't it usually pretty clear where allocations are happening and how to avoid them?
Why reverse-engineer each JS implementation if you can just target a non-GC runtime instead?
The tooling is just not there yet. Everyone is just stuck on supporting Docker still.
- Building / moving file hierarchies around
- Compatibility with software that expects Linux APIs like /proc
- Port binding, DNS, service naming
- CLI / API tooling for service management
And about a gazillion other things. WASI, meanwhile, is just a very small subset of POSIX but with a bunch of stuff renamed so nothing works on it. It's not meaningfully portable in any way outside of UNIX so you might as well just write a real Linux app. WASI buys you nothing.
WASM is heavily overfit to the browser use case. I think a lot of the dissipated excitement is due to people not appreciating how true that is. The JVM is a much more general technology than WASM, which is why it was able to move between such different use cases successfully (starting on smart TV boxes, then applets, then desktop apps, then servers + smart cards, then Android), whereas WASM never made it outside the browser in any meaningful way.
WASM seems to exist mostly because Mozilla threw up over the original NaCl proposal (which IMO was quite elegant). They said it wasn't 'webby', a quality they never managed to define IMO. Before WASM Google also had a less well-known proposal to formally extend the web with JVM bytecode as a first-class citizen, which would have allowed fast DOM/JS bindings (Java has had an official DOM/JS bindings API for a long time due to the applet heritage). The bytecode wouldn't have had full access to the entire Java SE API like applets did, so the security surface area would have been much smaller and it'd have run inside the renderer sandbox like V8. But Mozilla rejected that too.
So we have WASM. Ignoring the new GC extensions, it's basically just regular assembly language with masked memory access and some standardized ABI stuff, with the major downside that no CPU vendor uses it so it has to be JIT compiled at great expense. A strange animal, not truly excellent at anything except pleasing the technical aesthetic tastes of the Mozillians. But if you don't have to care about what Mozilla think it's hard to come up with justifications for using it.
And a capability system and a brand new IDL, although I'm not sure who the target audience is...
> it's basically just regular assembly language
This doesn't affect your point at all, but it's much closer to a high-level language than to regular assembly language, isn't it? Nonaddressable, automatically managed stack, mandatorily structured control flow, local variables instead of registers, etc.
No, Mozilla's concerns at the time were very concrete and clear:
- NaCl was not portable - it shipped native binaries for each architecture.
- PNaCl (Portable Native Client, which came later) fixed that, but it only ran out of process, making it depend on PPAPI, an entirely new set of APIs for browsers to implement.
Wasm was designed to be PNaCl - a portable bytecode designed to be efficiently compiled - but able to run in-process, calling existing Web APIs through JS.
And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.
So the idea of portability is not and never has been a requirement for something to be "the web". There have been non-portable web pages for the entire history of the web. The sky didn't fall.
The idea that everything must target an abstract machine whether the authors want that or not is clearly key to Mozilla's idea of "webbyness", but there's no historical precedent for this, which is why NaCL didn't insist on it.
In the context of the web, portability means that you can, ideally at least, use any browser on any platform to access any website. Of course that isn't always possible, as you say. But adding a big new restriction, "these websites only run on x86" was very unpopular in the web ecosystem - we should at least aim to increase portability, not reduce it.
> And was NPAPI not a part of the web, and a key part of its early success? Was ActiveX not a part of the web? I think they both were.
Historically, yes, and Flash as well. But the web ecosystem moved away from those things for a reason. They brought not only portability issues but also security risks.
Security is similar. It sounds good, but is always in tension with other goals. In reality the web doesn't have a goal of ever increasing security. If it was, then they'd take features out, not keep adding new stuff. WebGPU expands the attack surface dramatically despite all the work done on Dawn and other sandboxing tech. It's optional, hardly any web pages need it. Security isn't the primary goal of the web, so it gets added anyway.
This is what I mean by saying it was vague and unclear. Portability and security are abstract qualities. Demanding them means sacrificing other things, usually innovation and progress. But the sort of people who make portability a red line never discuss that side of the equation.
As far back as I can remember well (~20 years) it was an explicitly stated goal to keep the web open. "Open" including that no single vendor controls it, neither in terms of browser vendor nor CPU vendor nor OS vendor nor anything else.
You are right that there has been tension here: Flash was very useful, once, despite being single-vendor.
But the trend has been towards openness: Microsoft abandoned ActiveX and Silverlight, Google abandoned NaCl and PNaCl, Adobe abandoned Flash, etc.
Portability and openness are opposing goals. A truly open system allows or even encourages anyone to extend it, including vendors, and including with vendor specific extensions. Maximizing the number of devices that can run something necessarily requires a strong central authority to choose and then impose a lowest common denominator: to prevent people adding their own extensions.
That's why the modern web is the most closed it's ever been. There are no plugin APIs. Browser extension APIs are the lowest power they've ever been in the web's history. The only way to meaningfully extend browsers is to build your own and then convince everyone to use it. And Google uses various techniques to ensure that whilst you can technically fork Chromium, in practice hardly anyone does. It's open source but not designed to actually be forked. Ask anyone who has tried.
So: the modern web is portable for some undocumented definition of portable because Google acts as that central authority (albeit is willing to compromise to keep Mozilla happy). The result is that all innovation happens elsewhere on more open platforms like Android or Linux. That's why exotic devices like VR headsets or AI servers run Android or Linux, not ChromeOS or WebOS.
WASI fixed well-known flaws in the POSIX API. That's not a bad thing.
> the major downside that no CPU vendor uses it so it has to be JIT compiled at great expense.
WASM was designed to be JIT-compiled into its final form at the speed it is downloaded by a web browser. JS JIT-compilers in modern web browsers are much more complex, often having multiple compilers in tiers so it spends time optimising only the hottest functions.
Outside web browsers, I'd think there are few use-cases where WASM couldn't be AOT-compiled.
The performance would be worse, and it would be harder to integrate with everything else. It might be more secure, I guess.
I also rather like the idea of deploying programs rather than virtual machines.
Docker's cardinal sin imo is that it was designed as a monetizable SaaS product, and suffers from the inner-platform effect, reinventing stuff (package management, lifecycle management, etc.) that didn't need to be reinvented.
Also WASI is a way of running a single process. If your app needs to run subprocesses you'll need to do more work.
The fact we haven't heard much about Wasm use is probably because it isn't as valuable as we think, or no one has played around with it yet to find out.
TFA has many examples of big tech companies using Wasm in production. It's not exhaustive either, e.g. the article doesn't mention:
- Google using it as a backend for Flutter and to implement parts of Google Maps, Earth, Meet, Sheets, Keep, YouTube, etc
- Microsoft using it in Copilot Studio
- eBay using it in their mobile app
- MongoDB using it for Compass
- Amazon supporting it in EKS
- 1Password using it in their browser extension
- Unity having it as a build target
(And this was just what I found with some quick web searches; I'm sure there are many other examples.)
---
> the fact we haven't heard much about Wasm use is probably because it isn't as valuable as we think
One of the conclusions of the article is that it's mostly used in ways that aren't very visible.
Media, and wasm, are really important when you need them, but usually you don't.
[0] https://www.destroyallsoftware.com/talks/the-birth-and-death...
It might be this one I'm thinking of, as it closely fits the bill. But something is telling me it's not, and that it was published earlier.
Any ideas?
Theory and practice don't match in this case, and many people have remarked that companies that sit on the WHATWG board have a vested interest in making sure their lucrative app stores are not threatened by a platform that can run any app just as well.
I remember when Native Client came onto the scene and allowed people to compile complex native apps to the web that ran at like 95% of native speed. While it was in many ways an inelegant solution, it worked better than WebAssembly does today.
Another one of WebAssembly's killer features was supposed to be native web integration. How JS engines work is that you have an IDL that describes the interface of JS classes, which is then used to generate code to bind to underlying C++ implementations. You could probably bind those to WebAssembly just as well.
I don't think cross-platform in the sense of cross-CPU-arch matters that much; if you meant 'runs on everything' then I concur.
Also the dirty secret of WebAssembly is that it's not really faster than JS.
> Also the dirty secret of WebAssembly is that it's not really faster than JS.
That is almost purely due to the amount of work it took to make that shitty language run fast. A naive WebAssembly implementation will beat interpreted JS many times over, but modern JIT implementations are a wonder.
V8 is a modern engineering marvel.
There is no reason why WASM couldn't be as fast as, or faster than, JS, especially now with WASM 3.0. Before, every program in a managed language had to be shipped with its own GC and exception handling framework in WASM, which was probably crippled by size constraints.
Any language with advanced GC algorithms, or interior pointers, will run poorly with current WASM GC.
It works as long as their GC model overlaps with JS GC requirements.
Some of the real GC tests will be strings support (because immutability/interning) and higher-level composite objects, which is all still in various draft/proposal states.
The WASM runtime ended up going from something that ingests pseudo-assembly, validates it, and turns it into machine code, into a full-fledged multi-tiered JIT, like what JS has, with crazy engineering complexity per browser, and similar startup performance woes (even though alleviating the load-time issues of huge applications was one of the major goals of NaCl/Wasm).
Starting from a target that is not only single-threaded but also memory-limited was... a weird decision.
I don't think you need conspiracy theories for that. DOM involves complex JS objects and you have to have an entirely working multi-language garbage collection model if you are expecting other languages to work with DOM objects otherwise you run the risk of memory leaking some of the most expensive objects in a browser.
That path to that is long and slow, especially with the various committees' general interest being in not requiring non-JS languages to entirely conform to JS GC (either implementing themselves on top of JS GC alone or having to implement their own complex subset of JS GC to interop correctly), so the focus has been on very low level tools over complex GC patterns. The first basics have only just been standardized. The next step (sharing strings) seems close but probably still has months to go. The steps after that (sharing simple structs) seem pretty complex with a lot of heated debate still to happen, and DOM objects are still some further complexity step past that (as they involve complex reference cycles and other such things).
Some joker who built Solana actually thought Berkeley Packet Filter language would be better than WASM for their runtime. But besides that dude, everyone is discovering how great WASM can be to run deterministic code right in people’s browsers!
No, WASM is deterministic, JS is fundamentally not. Your dislike of all things blockchain makes you say silly things.
https://takahirox.github.io/WebAssembly-benchmark/
JS is not always faster, but in a good chunk of cases it is.
WebAssembly, which was supposed to replace it, needs to be at least as good; that was the promise. We're a decade in, and Wasm is still nowhere near, while it has accumulated an insane amount of engineering complexity in its compilers, and its ability to run native apps without tons of constraints and modifications is still meh, as is the performance.
Also, Native Client started up so fast because it shipped native binaries, which was not portable. To fix that, Portable Native Client shipped a bytecode, like wasm, which meant slower startup times - in fact, the last version of PNaCl had a fast baseline compiler to help there, just like wasm engines do today, so they are very similar.
And, a key issue with Native Client is that it was designed for out-of-process sandboxing. That is fine for some things, but not when you need synchronous access to Web APIs, which many applications do (NaCl avoided this problem by adding an entirely new set of APIs to the web, PPAPI, which most vendors were unhappy about). Avoiding this problem was a major principle behind wasm's design, by making it able to coexist with JS code (even interleaving stack frames) on the main thread.
I don't see an issue with shipping uArch-specific assembly; nowadays there are really only two architectures in heavy use, and I think managing that level of complexity is tenable, considering the monster the current Wasm implementation became, which is still lacking in key ways.
As for out of process sandboxing, I think for a lot of things it's fine - if you want to run a full-fat desktop-app or game, you can cram it into an iframe, and the tab(renderer) process is isolated, so Chrome's approach was quite tenable from an IRL perspective.
But if seamless interaction with Web APIs is needed, that could be achieved as well, and I think quite similarly to how Wasm does it - you designate a 'slab' of native memory and make sure no pointer access goes outside by using base-relative addressing and masking the addresses.
For access to outside APIs, you permit jumps to validated entry points which can point to browser APIs. I also don't see why you couldn't interleave stack frames, by making a few safety and sanity checks, like making sure the asm code never accesses anything outside the current stack frame.
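Roughly, the masking part could be sketched like this in Rust (all names and sizes are made up for illustration; a real engine would bake this into the JIT-generated code rather than a helper function):

    // Made-up sketch of base-relative addressing + address masking:
    // every guest load/store goes through a helper that masks the offset
    // into a power-of-two "slab", so it can never escape the sandbox.
    const SLAB_SIZE: usize = 1 << 20;               // 1 MiB of guest-visible memory
    const ADDR_MASK: u32 = (SLAB_SIZE as u32) - 1;  // works because SLAB_SIZE is a power of two

    struct Sandbox {
        slab: Vec<u8>, // base-relative guest memory
    }

    impl Sandbox {
        fn new() -> Self {
            Self { slab: vec![0; SLAB_SIZE] }
        }

        // A guest "pointer" is just an offset; masking keeps it inside the slab.
        fn load_u8(&self, guest_addr: u32) -> u8 {
            self.slab[(guest_addr & ADDR_MASK) as usize]
        }

        fn store_u8(&mut self, guest_addr: u32, value: u8) {
            self.slab[(guest_addr & ADDR_MASK) as usize] = value;
        }
    }

    fn main() {
        let mut sb = Sandbox::new();
        sb.store_u8(0x1234, 42);
        // An "out of bounds" guest address wraps back into the slab
        // instead of ever touching host memory.
        assert_eq!(sb.load_u8(0x1234 + SLAB_SIZE as u32), 42);
    }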
Personally I thought that WebAssembly was what its name suggested - an architecture-independent assembly language that was heavily optimized, where only the register allocation passes and the machine instruction translation were missing - which is at the end of the compiler pipeline and can be done fairly fast compared to a whole compile.
But it seems to me Wasm engines are more like LLVM, an entire compiler consuming IR, and doing fancy optimization for it - if we view it in this context, I think sticking to raw assembly would've been preferable.
> I don't see an issue with shipping uArch specific assembly, nowadays you only have 2 really in heavy use today,
That is true today, but it would prevent other architectures from getting a fair shot. Or, if another architecture exploded in popularity despite this, it would mean fragmentation.
This is why the Portable version of NaCl was the final iteration, and the only one even Google considered shippable, back then.
I agree the other stuff is fixable - APIs etc. It's really portability that was the sticking point. No browser vendor was willing to give that up.
But that is really only common in small computational kernels. If you take a large, complex application like Adobe Photoshop or a Unity game, wasm will be far closer to native speed, because its compilation and optimization approach is much closer to native builds (types known ahead of time, no heavy dependency on tiering and recompilation, etc.).
In practice, WASM codebases won't be simply running a single pure function in WASM from JS but instead will have several data structures being passed around from one WASM function to another, and that's going to be faster than doing the same in JS.
By the way, if I remember correctly V8 can optimize function calls heuristically if every call always passes the same argument types, but because this is an implementation detail it's difficult to know what scenarios are actually optimized and which are not.
This is an entirely unnecessary jab. There’s a whole generation dealing with stuff like this because of economic and other forces outside their control.
The JVM says "Hello!" from 1995.
The JVM is a great parallel example. Anyone listening to the hype in the early days based around what the JVM could be would surely be disappointed now. It isn't faster than C, it doesn't see use everywhere due to practical constraints, etc.
But you'd be hard pushed to say the JVM is a total failure. It's used by lots all round the world, and solves real problems, just not the ones we were hoping it would solve. I suspect the future of WASM looks something like that.
None of the technical arguments for JVM matter any more. It's just bait to trick you into sticking your hand under the lawnmower and helping Larry Ellison solve his problems.
The two are so similar that Java bytecode to .NET bytecode translators exist. With some, it is possible to take a class defined in Java, subclass it with C#, call it from Java, etc...
Not really, when tools like Figma were not possible before it.
For developing brand new code, I don't think there's anything fundamentally impossible without Wasm, except SIMD.
Also, the ability to recompile existing code to wasm is often important. Unity or Photoshop could, in theory, write a new codebase for the Web, but recompiling their existing applications is much more appealing, and it also reuses all their existing performance work there.
Meanwhile, JavaScript will be much faster to download since it is smaller, and JavaScript can execute while it is downloading.
It will be a while before WASM GC will look close to any language's GC.
If size is your top priority, you can produce very small binaries, for example with C. Project [0] emulates an x86 architecture, including hardware, BIOS, and DOS compatibility, and ends up with a WebAssembly size of 78 kB uncompressed and a 24 kB transfer size.
Not many people are going to want to be rolling their own libc like that author. Most people just compile their app and ship megabytes of webassembly at the expense of their users. To me webassembly is just a shortcut to ship faster because you don't have to port existing code.
Emscripten provides a libc implementation based on musl, and so does wasi-libc (https://github.com/WebAssembly/wasi-libc).
If you explicitly list which functions you want to export from your WebAssembly module, the linker will remove all the unused code, in the same way that "tree-shaking" works for JS bundlers.
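To make that concrete, a hypothetical Rust sketch (a cdylib crate; the names are made up):

    // Hypothetical lib.rs for a cdylib crate, built with
    //     cargo build --release --target=wasm32-unknown-unknown
    // Only the explicitly exported symbol survives; anything it doesn't
    // reach is dropped, much like tree-shaking in a JS bundler.
    // (On the Rust 2024 edition the attribute is spelled #[unsafe(no_mangle)].)

    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }

    // Never referenced from an exported function, so it does not end up
    // in the final .wasm at all.
    #[allow(dead_code)]
    fn helper_nobody_calls(x: i32) -> i32 {
        x * 2
    }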
In my experience, a WebAssembly module (even with all symbols exported) is smaller than the equivalent native library. The bytecode is denser.
WebAssembly modules tend to be larger than JavaScript because AOT-compiled languages don't care as much about code size--they assume you only download the program/library once. In particular, LLVM (which I believe is the only mainstream WebAssembly-emitting backend) loves inlining everything.
Judicious use of `-Oz`, stripping debug info, and other standard code size techniques really help here. The app developer does have to care about code size, of course.
• WebAssembly is not huge. Fundamentally it’s generally smaller than JavaScript, but JavaScript comes with more of a standard library and more of a runtime, which unbalances comparisons. If you use something like Rust, it’s not difficult to get the basic overhead down to something like 10 kB, or for a larger project still well under 100 kB, until you touch things that need Unicode or CLDR tables; and it will generally scale similarly to JavaScript, once you take transport compression into account. If you use something like Go or .NET, sure, then there’s a heavier runtime, maybe a megabyte, maybe two, also depends on whether Unicode/CLDR tables are needed, and then JS will probably win handily on bundle size and startup time.
• JavaScript can’t execute while it’s downloading. In theory speculative parsing and even limited speculative execution is possible, but I don’t think any engine has tried that seriously. As for WebAssembly, it can be compiled and instantiated while streaming, generally at a faster rate than you can download it. The end result is that in an apples-to-apples comparison WebAssembly is significantly faster to start than JavaScript.
I always feel like I'm downloading megabytes of it whenever someone uses it. In practice it is. Even a basic hello world in Rust will set you back a few megabytes compared to the tens of bytes it takes in JavaScript.
>JavaScript comes with more of a standard library and more of a runtime, which unbalances comparisons.
Being able to make programs in a few bytes is a legitimate strength. You can't discount it because it's an effective way javascript saves size.
Lies. It’s 35 kB:
$ cargo new x
…
$ cd x
$ cat src/main.rs
fn main() {
println!("Hello, world!");
}
$ cargo build --release --target=wasm32-unknown-unknown
…
$ ls -l target/wasm32-unknown-unknown/release/x.wasm
… 34597 …
And that’s with the default allocator and all the std formatting and panic machinery. Without too much effort, you can get it to under 1 kB, if I remember correctly.

For the rest: I mention comparisons being unbalanced because people often assume it will scale at the rate they’ve seen: twice as much code, twice as much size. Runtimes and heavy tables make for non-scaling overhead. That 35 kB you’ve paid for once, and now can use as much as you like without further growth.
Meanwhile, an empty React project seems to be up to 190 kB now, 61 kB gzipped.
For startup performance, it’s fairly well understood that image bytes are cheap while JavaScript bytes are expensive. WebAssembly bytes cost similar to images.
That's definitely not true.
A debug build of a "hello wasm-bindgen" style Rust program indeed takes ~2MB, but most of that is debug info; disabling that and/or stripping gets it down to 45-80kB (depending how I did it). And a release build starts at 35kB, and after `wasm-opt -O` gets down to 25kB. AFAIK most of the remaining space is used by wasm-bindgen boilerplate, malloc and panic machinery.
...and then, running wasm-bindgen to generate JS bindings somehow strips most of that boilerplate too, down to 1.4kB.
Side note, I never understood how wasm-opt is able to squeeze so much on top of what LLVM already did (it's a relatively fast post-build step and somehow reduces our production binaries by 10-20% and gives measurable speedups).
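For what it's worth, most of that kind of shrinkage comes from the standard Cargo size knobs; a generic example (not necessarily the exact settings used above):

    # Cargo.toml: a typical size-oriented release profile for wasm builds
    [profile.release]
    opt-level = "z"     # optimize for size rather than speed
    lto = true          # whole-program link-time optimization
    codegen-units = 1   # better cross-crate optimization, slower builds
    panic = "abort"     # drop the unwinding machinery
    strip = true        # strip symbols and debug info

Running wasm-opt on the output afterwards, as described above, typically shaves off a bit more.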
Lots of people pretend they don't need ICU or TZDB, but that means leaving non-English speakers or people outside of the US in the cold without support, which isn't the case for JS applications.
I still think this is a major unsolved problem for WebAssembly and I've previously raised it. I understand why it's not solved though - specifying and freezing the bitstream for ICU databases is a big task, etc.
The native ecosystem never paid attention to binary size optimization, but the JS ecosystem paid attention to code size from the very beginning.
Plus, WASM game runtimes need to bundle redundant 2D or 3D stacks, audio, fonts, harfbuzz, etc., yet don't expose e.g. text rendering capabilities on par with those that browsers already have natively.
The whole thing is prioritizing developer experience over user experience.
WebAssembly makes it possible to:
* Run x86 binaries in the browser via JIT-ting (https://webvm.io)
* Run Java applications in the browser, including Minecraft (https://browsercraft.cheerpj.com)
* Run node.js containers in the browser (https://browserpod.io)
It's an incredibly powerful tool, but very much a power-user one. Expecting your average front-end logic to be compiled to WebAssembly does not make much sense.
Why not? .NET Blazor and others already do that. In my eyes this was the whole hype of WASM. Replace JS. I don't give a crap about running node/java/whatever in the browser, why would i want that? I can run those outside the browser. I mean sure if you have some use case for it that's fine and I'm glad WASM lets you do it but I really don't see why most devs would care about that. We use the browser for browsing the web and displaying our websites.
To me the browser is for displaying websites and I make websites but I loathe JS. So being able to make websites without JS is awesome.
Not every language is a good source for targeting WASM, in the sense that you don't want to bring a whole standard library, custom runtime etc with you.
High-level languages may fare better if their GC is compatible with Wasm's GC model, though, as in that case the resulting binaries could be quite small. I believe Java-to-wasm binaries can be quite lean for that reason.
In C#'s case, it's probably mostly Blazor's implementation, but it's not a good fit in this form for every kind of website (though it's very nice for e.g. an internal admin site and the like).
> we are at 2MB compressed with https://minfx.ai
That's still pretty bloated. That's enough size to fit an entire Android application a few years ago (before AndroidX) and simple Windows/Linux applications. I'll agree that it's justified if you're optimizing for runtime performance rather than first load, which seems to be appropriate for your product, right?!

What is this 2 MB for? It would be interesting to hear about your WebAssembly performance story!
Regarding the website homepage itself: it weighs around 767.32 kB uncompressed in my testing, most of which is an unoptimized 200+kB JPEG file and some insanely large web fonts (which honestly are unnecessary, the website looks _pretty good_ and could load much faster without them).
Modern Blazor can do server side rendering for SEO/crawlers and fast first load similar to next.js, and seamlessly transition to client side rendering or interactive server side rendering afterwards.
Your info/opinion may be based on earlier iterations of Blazor.
It's pretty impressive how far along CheerpJ is right now. I kinda wish this existed about five or ten years ago with this level of performance, maybe it would've allowed some things in the web platform to pan out differently.
Consider dropping in our Discord for further help: https://discord.leaningtech.com
Moreso than anything technical though, there sure seems to be a lot of bad blood between the group of people behind AssemblyScript and the people behind WASI. This feels like a classic case of small initial technical disagreements spiraling out of control and turning into a larger conflict fueled by personalities and organizational politics. I agree that overall this doesn't add confidence to the WebAssembly ecosystem as a whole, but it's not clear to me that the obvious conclusion is "WASI is controversial" as "WebAssembly seems like it might have a problem with infighting".
Furthermore, there are now competing interest groups within the Wasm camp. Wasm originally launched as a web standard: an extension of the JavaScript environment. However, some now want to use Wasm as the basis for replacing containers: an extension of a POSIX environment.
If you pulled the plug on WASM, a lot would stop working and it would heavily impact much of the JS frontend world.
What hasn't caught on is modern UI frameworks that are native wasm. We have plenty of old ones that can be made to work via WASM but it's not the same thing. They are desktop UI toolkits running in a browser. The web is still stuck with CSS and DOM trees. And that's one of the areas where WASM is still a bit weak because it requires interfacing with the browser APIs via javascript. This is a fixable problem. But for now that's relatively slow and not very optimal.
Solutions are coming, but that's not going to happen overnight. Web frontend teams being able to substitute something else for JavaScript is going to require more work. Mobile frontend developers cross-compiling to the web is becoming a thing, though. JetBrains' Compose Multiplatform now has native Android/iOS support, with a canvas-rendered web frontend currently in beta.
You can actually drive the DOM from WASM. There are some Rust frameworks. I've dabbled with using Kotlin's wasm support to talk to browser DOM APIs. It's not that hard. It's just that Rust is maybe not ideal (too low level/hard) for frontend work, and a lot of languages lack frameworks that target low-level browser APIs. That's going to take years to fix. But a lot compiles to wasm at this point. And you kind of have access to most of the browser APIs when you do, even if there is a little performance penalty.
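To give a flavour, here's a rough wasm-bindgen/web-sys sketch of driving the DOM from Rust (illustrative only; it assumes a cdylib crate with wasm-bindgen and web-sys dependencies and the corresponding web-sys cargo features enabled):

    // Rough sketch: create and attach a DOM node from Rust, no hand-written JS.
    // Assumes the web-sys features "Window", "Document", "Element",
    // "HtmlElement" and "Node" are enabled in Cargo.toml.
    use wasm_bindgen::prelude::*;

    #[wasm_bindgen(start)]
    pub fn run() -> Result<(), JsValue> {
        let window = web_sys::window().expect("no global window");
        let document = window.document().expect("no document");
        let body = document.body().expect("no body");

        let p = document.create_element("p")?;
        p.set_text_content(Some("Hello from Wasm"));
        body.append_child(&p)?;
        Ok(())
    }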
I'm pretty sure this is just plain false. Do you have an example?
1. The most obvious use case is replacing JS, but the stakeholders have made it clear that WASM is to be some kind of abstract bytecode, not wedded to the browser.
2. This bytecode doesn't appear to be headed to bare metal, nor is there any obvious room for it among the existing ISAs. The direction of travel appears to be the opposite, towards the JVM/CLR. (With the addition of GC, etc)
3. But we already have the JVM/CLR? And tons of other VMs? What does WASM offer here that is not already a solved problem in the high-level VM space? What is the out-of-browser use case here?
In as far as a high level VM with security sandboxing is neat, nothing is stopping MS/Oracle from replicating that functionality with a restricted feature set of their existing VMs.
In as far as it's "C++ in a browser" - sure, that's neat - but I wouldn't bet on that being 'the future', and if that's the play, why is there such intense reticence to actually surfacing browser interop?
I keep feeling like the folks behind WASM are just totally neglecting the obvious use case (fully replacing JS in the browser), in favour of some abstract adventure to being an ideal Platonic ISA, except there's no market or practical use case for that.
Any language can target a combination of JS and Wasm, right now, to get a "fully-featured JS replacement". How would adding more features to Wasm improve that situation?
Adding browser interop to Wasm that obviated the need for JS would achieve that goal.
I'm kind of at a loss trying to interpret your reply in good faith as logically coherent.
If someone wanted to replace C, I would strongly suggest compiling something else to C, yes. That seems kind of obvious. It's how many programming languages that aim to replace C get their start.
To put it another way: I can see how [replacing JS as the interface that the human deals with] can be valuable goal, but why is [replacing JS so completely that it doesn't even exist as generated code] so valuable to you?
Transpiling to C is the worst possible way of trying to address these issues. If you write `x + y` in your newlang, and that transpiles to `x + y` in C, you have simply inherited all of C's implicit type conversion nonsense. If, alternatively, you write a whole bunch of machinery to ensure x + y is always safe, congratulations, you're now writing a VM in C, which is even harder to get right and do safely. You still have all the cost / risk of dealing with C, except now you're writing an extremely complex program in it that isn't even integral to your business logic.
It's for this reason that languages trying to replace C don't generally transpile to C, despite your claim. The biggest C replacement candidates right now are Zig and possibly Rust, both of which target LLVM IR, not C.
Similarly, transpiling to JS inherits all of JS' baggage and issues, of which much has accumulated in the last thirty years. It would be an undoubted improvement to be able to bypass it, regardless of your opinion on JS itself.
It's really not. Compiler writers, in your words, "write a whole bunch of machinery" so that the code has the intended semantics. It's not fundamentally that different to generating LLVM IR.
I never said every single language compiles via C. Sure, you're right, Zig and Rust generate LLVM IR instead (edit: iirc Zig is moving to their own backend, but I don't think that's relevant), and there isn't much reason not to target LLVM these days, unless you want to target niche platforms that LLVM doesn't support.
> It would be an undoubted improvement to be able to bypass that layer when it isn't useful, regardless of one's opinion on JS.
I will ask you clearly: you are asking for a lot of work from browser makers. What use cases, concretely, will that work actually enable?
If you hate the baggage of JS -- which is fair enough, there's a big mismatch between it and many other languages -- Wasm can be used for most of the heavy lifting, and it lacks that baggage. The JS only needs to be a little layer between Wasm and the browser.
But what will getting rid of that layer enable? What will it let people do that couldn't be done before?
I'm not trying to be mean, so I'm sorry if it came off that way.
I think in general the JS layer is simply unnecessary, and I'm not a big fan of unnecessary things in programming.
The world in which I would like to live is one where any language can be compiled to Wasm without the need for JS bindings, to enable people to write their business logic in whatever language they prefer, and not fret about JS baggage.
I don't really think there's much of a use case for Wasm to become some kind of abstract ISA, so I don't understand why the Wasm stakeholders are so resistant to acknowledging Wasm largely lives in a browser, and actually adapting it for making the most of that environment.
I'm personally a big fan of Wasm; it has been one of my favorite technologies ever since the first time I called malloc from the JS console when experimenting with an early version of Emscripten. Modern JS engines can be almost miraculously fast, but Wasm still offers the best performance and much higher levels of control over what's actually running on the CPU. I've written about this in the past.
The only way it really fell short is that a lot of people were predicting it would become a sort of total replacement for JS+HTML+CSS for building web apps. In this regard, I'd have to agree. It could be the continued lack of DOM bindings, which have been considered a key missing piece for several years now, or maybe something else more fundamental.
I've tried out some of the Wasm-powered web frameworks like Yew and not found them to provide an improvement for me at all. It just feels like an awkwardly bolted-on layer on top of JS and CSS without adding any new patterns or capabilities. Like you still have to keep all of the underlying semantics of the way JS events work, you still have to keep the whole DOM and HTML element system, and you also have to deal with all the new stuff the framework introduces on top of that.
Things may be different with other frameworks like Blazor which I've not tried, but I just find myself wanting to write JS instead. I openly admit that it might just be my deep experience and comfort building web apps using React or Svelte though.
Anyway, I strongly feel that Wasm is a successful technology. It's probably in a lot more places than you think, silently doing its job behind the scenes. That, to me, is a hallmark of success for something like Wasm.
There are lots of good options that come with Windows preinstalled.
I for one hope that doesn't happen anytime soon. YouTube or Spotify could theoretically switch to Wasm drawing to a canvas right now (with a lot of development effort), but that would make the things that are currently possible thanks to the DOM (scraping, ad blockers etc.) harder or impossible.
However ads still need to be delivered over the net so there is still some way to block them (without resorting to router/firewall level blocking).
Not gonna happen.
This is a cat mouse fight, and Facebook already does some ultra-shady stuff like rendering a word as a list of randomly ordered divs for each character, and only using CSS to display in a readable way.
But it can't be made impossible, at the worst case we can always just capture the screen and use an AI to recognize ads, wasting a lot of energy. The same is true for cheating in video games and many forms of online integrity problems - I can just hire a good player who would play in my place, and no technology could recognize that.
I wonder how much the developers writing that are being paid to be complete assholes.
I've personally resigned from positions for less and it hasn't cost me much comfort in life (maybe some career progression perhaps but, meh).
Perhaps require monitoring of the arm muscle electrical signals, build a profile, match the readings to the game actions and check that the profile matches the advertised player
The meat of the article is informative, but the headline and motivation are based on this statement. It doesn't reflect my experience, but maybe I just don't hang out in the same internet spots as the OP.
> We don’t yet see major websites entirely built with webassembly-based frameworks
I don’t know why this entered into the zeitgeist. I don’t think this was ever a stated goal of the WebAssembly project. I get the sense that some people assumed it and then keep wondering why this non-goal hasn’t been realized.
As far as I know, we are the fastest on the market. The multithreaded support is a pain though.
But what happened? Why am I not using it for all of my other random side projects? I posit that the JS ecosystem got so incredibly good that it's a no-brainer for a very large percentage of workflows. React + Vite + TypeScript is an incredibly productive stack. I can use it to build all but the most demanding apps productively. Additionally, JS is pretty fast these days, so the speed boost from WASM isn't actually that meaningful for most use cases. Only really heavy use cases like media editing or Figma-like apps really benefit from what WASM has to offer.
Which almost no-one cares about.
Where it has worked is as infrastructure: fast, sandboxed, portable code for the parts that actually need it. A lot of people are already using it indirectly without realizing. So it’s less "what happened to Wasm?" and more "it didn’t become the silver bullet people imagined."
There were several articles that promoted it heavily - aka the hype phase.
And then ... nothing really materialized. If you look at, for instance, ruby WASM, https://github.com/ruby/ruby.wasm - there is virtually zero real documentation. Granted, this is a specific problem of Ruby, and Japanese devs not understanding English; but when you search for WebAssembly, contrast it with the numerous tutorials we have for HTML, CSS and JavaScript. I get it, it is younger, it is harder than the other three tech stacks, but virtually nothing really improves here. It is like a stillborn technology that has only a tiny niche, e.g. Rust developers. That's about it. And I fear this is also not going to change anymore. After a while, if the hype fails to deliver, people lose interest - and the technology eventually subsides. That also happened to e.g. XHTML and the heavy use of XML in general in, say, 2000. I also don't think WebAssembly can be brought back now that the hype stage has worn off.