Cap'n Web: A New RPC System for Browsers and Web Servers
Key topics
Cap'n Web is a new RPC system for browsers and web servers that simplifies remote procedure calls and allows for bidirectional communication, sparking discussion on its features, comparisons to other RPC systems, and potential applications.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 2h after posting
- Peak period: 71 comments in 0-6h
- Avg / period: 16
- Based on 160 loaded comments
Key moments
- Story posted: Sep 22, 2025 at 9:05 AM EDT (3 months ago)
- First comment: Sep 22, 2025 at 11:14 AM EDT (2h after posting)
- Peak activity: 71 comments in 0-6h (hottest window of the conversation)
- Latest activity: Sep 24, 2025 at 10:12 PM EDT (3 months ago)
But it may be tough to justify when we already have working Cap'n Proto implementations speaking the existing protocol, which took a lot of work to build. Yes, the new implementations will be less work than the original, but it's still a lot of work that is essentially running in place.
OTOH, it might make it easier for Cap'n Proto RPC to be implemented in more languages, which might be worth it... idk.
That makes sense. There is some opportunity though, since Cap'n Proto had always lacked a JavaScript RPC implementation. For example, I had always been planning on using the Cap'n Proto OCaml implementation (which had full RPC) and one of the two mature OCaml->JavaScript frameworks to get a JavaScript implementation. Long story short: not now, but I'd be interested in seeing if Cap'n Web can be ported to OCaml. I suspect other language communities may be interested. Promise chaining is a killer feature and was (previously) difficult to implement. Aside: promise chaining is quite undersold in your blog post; it is co-equal to capabilities in my estimation.
https://github.com/capnproto/capnproto/blob/v2/c%2B%2B/src/c...
That's just the RPC state machine -- the serialization is specified elsewhere, and the state machine is actually schema-agnostic. (Schemas are applied at the edges, when messages are actually received from the app or delivered to it.)
This is the Cap'n Web protocol, including serialization details:
https://github.com/cloudflare/capnweb/blob/main/protocol.md
Now, to be fair, Cap'n Proto has a lot of features that Cap'n Web doesn't have yet. But Cap'n Web's high-level design is actually a lot simpler.
Among other things, I merged the concepts of call-return and promise-resolve. (Admittedly, CapTP was already doing it that way before I even designed Cap'n Proto. It was a complete mistake on my part to turn them into two separate concepts in Cap'n Proto, but it seemed to make sense at the time.)
What I'd like to do is go back and revise the Cap'n Proto protocol to use a similar design under the hood. This would make no visible difference to applications (they'd still use schemas), but the state machine would be much simpler, and easier to port to more languages.
I love the no-copy serialization and object capabilities, but wow, the RPC protocol is incredibly complex; it took me a while to wrap my head around it, and I often had to refer to the C++ implementation to really get it.
[0] https://ocapn.org/
SturdyRefs are tricky. My feeling is that they don’t really belong in the RPC protocol itself, because the mechanism by which you restore a SturdyRef is very dependent on the platform in which you're running. Cloudflare Workers, for example, may soon support storing capabilities into Durable Object storage. But the way this will work is very tied to the Cloudflare Workers platform. Sandstorm, similarly, had a persistent capability mechanism, but it only made sense inside Sandstorm – which is why I removed the whole notion of persistent capabilities from Cap’n Proto itself.
The closest thing to a web standard for SturdyRefs is OAuth. I could imagine defining a mechanism for SturdyRefs based on OAuth refresh tokens, which would be pretty cool, but it probably wouldn’t actually be what you want inside a specific platform like Sandstorm or Workers.
The name "Cap'n Proto" came from "capabilities and protobuf". The first, never-released version was based on Protobuf serialization. The first public release (way back on April 1, 2013) had its own, all-new serialization.
There's also a pun with it being a "cerealization protocol" (Cap'n Crunch is a well-known brand of cereal).
Tiny remark for @kentonv if you're reading: it looks like you've got the wrong code sample immediately following the text "Putting it together, a code sequence like this".
The code was supposed to be:
That is, the client is not packaging up all its logic and sending a single blob that describes the fully-chained logic to the server on its initial request. Right?
When I first read it, I was thinking it meant 1 client message and 1 server response. But I think "one round trip" more or less means "1 server message in response to potentially many client messages". That's a fair use of "1 RTT", but it took me a moment to understand.
Just to make that distinction clear from a different angle, suppose the client were _really_ _really_ slow and it did not send the second promise message to the server until AFTER the server had computed the result for promise1. Would the server have already responded to the client with the result? That would be a way to incur multiple RTTs, albeit the application wouldn't care since it's bottlenecked by the client CPU, not the network in this case.
I realize this is unlikely. I'm just using it to elucidate the system-level guarantee for my understanding.
As always, thanks for sharing this, Kenton!
But the client can send all three messages back-to-back without waiting for any replies from the server. In terms of network communications, it's effectively the same as sending one message.
The client sends over 3 separate calls in one message, or one message describing some computation (run this function with the result of this function), and the server responds with one payload.
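For illustration, here is a minimal sketch of what that pipelining can look like on the client, assuming the `newWebSocketRpcSession` entry point from the README; the API methods (`whoami`, `getUserProfile`) are hypothetical:

```
import { newWebSocketRpcSession } from "capnweb";

// Hypothetical API: whoami() returns a user ID, getUserProfile(id) looks it up.
const api = newWebSocketRpcSession("wss://example.com/api");

const idPromise = api.whoami();                        // call 1, not awaited
const profilePromise = api.getUserProfile(idPromise);  // call 2, pipelined on call 1

// Only now do we wait. Both calls were already sent back-to-back,
// so the total network cost is a single round trip.
const profile = await profilePromise;
```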
See "But how do we solve arrays" part:
> > .map() is special. It does not send JavaScript code to the server, but it does send something like "code", restricted to a domain-specific, non-Turing-complete language. The "code" is a list of instructions that the server should carry out for each member of the array
Although it seems to solve one of the problems that GraphQL solved that trpc doesn't (the ability to request nested information from items in a list or properties of an object without changes to server side code), there is no included solution for the server side problem that creates that the data loader pattern was intended to solve, where a naive GraphQL server implementation makes a database query per item in a list.
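As a rough illustration of that server-side concern (hypothetical names; `db` is a made-up database handle), the naive per-element method pays the N+1 cost, while a dataloader-style batch method does not:

```
import { RpcTarget } from "capnweb";

// Hypothetical database handle and row type, for illustration only.
declare const db: { query(sql: string, params: unknown[]): Promise<any> };
type Photo = { url: string };

class Api extends RpcTarget {
  // Naive shape: a client-side .map() pipelines one call per friend,
  // so this runs one SELECT per element -- the classic N+1 pattern.
  async getUserPhoto(id: string): Promise<Photo> {
    return await db.query("SELECT url FROM photos WHERE user_id = ?", [id]);
  }

  // Dataloader-style shape: expose a batch method instead,
  // turning N lookups into a single query.
  async getUserPhotos(ids: string[]): Promise<Photo[]> {
    return await db.query("SELECT url FROM photos WHERE user_id IN (?)", [ids]);
  }
}
```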
Until the server side tooling for this matures and has equivalents for the dataloader pattern, persisted/allowlist queries, etc., I'll probably only use this for server <-> server (worker <-> worker) or client <-> iframe communication and keep my client <-> server communication alongside more pre-defined boundaries.
However, if your database is sqlite in a Cloudflare Durable Object, and the RPC protocol is talking directly to it, then N+1 selects are actually just fine.
https://www.sqlite.org/np1queryprob.html
I've been working on this issue from the other side. Specifically, a TS ORM that has the level of composability to make promise pipelining a killer feature out of the box. And analogous to Cap'n Web's use of classes, it even models tables as classes with methods that return composable SQL expressions.
If curious: https://typegres.com/play/
Have you considered making a sqlite version that works in Durable Objects? :)
Right now I'm focused on Postgres (biggest market-share for full-stack apps). A sqlite version is definitely possible conceptually.
You're right about the bigger picture, though: Cap'n Web + Typegres (or a "Typesqlite" :) could enable the dream dev stack: a SQL layer in the client that is both sandboxed (via capabilities) and fully-featured (via SQL composability).
If I run, in client-side Cap'n Web land (from the post):

```
let friendsWithPhotos = friendsPromise.map(friend => {
  return {friend, photo: api.getUserPhoto(friend.id)};
});
```
And I implement my server class naively, the server side implementation will still call `getUserPhoto` on a materialized friend returned from the database (with a query actually being run) instead of an intermediate query builder.
@kentonv, I'm tempted to say that in order for a query builder like typegres to do a good job optimizing these RPC calls, the RpcTarget might need to expose the pass by reference control flow so the query builder can decide to never actually run "select id from friends" without the join to the user_photos table, or whatever.
Agreed! If we use `map` directly, Cap'n Web is still constrained by the ORM.
The solution would be what you're getting at -- something that directly composes the query builder primitives. In Typegres, that would look like this:
```
let friendsWithPhotos = friendsPromise.select((f) => ({
  ...f,
  photo: f.photo(), // `photo()` is a scalar subquery -- it could also be a join
}));
```
i.e., use promise pipelining to build up the query on the server.
The idea is that Cap'n Web would allow you to pipeline the Typegres query builder operations. Note this should be possible in other fluent-based query builders (e.g., Kysely/Drizzle). But where Typegres really synergizes with Cap'n Web is that everything is already expressed as methods on classes, so the architecture is capability-ready.
P.S. Thanks for your generous offer to help! My contact info is in my HN profile. Would love to connect.
Building an operation description from the callback inside the `map` is wild. Does that add much in the way of restrictions programmers need to be careful of? I could imagine branching inside that closure, for example, could make things awkward. Reminiscent of the React hook rules.
So it turns out it's actually not easy to mess up in a map callback. The main thing you have to avoid is side effects that modify stuff outside the callback. If you do that, the effect you'll see is those modifications only get applied once, rather than N times. And any stubs you exfiltrate from the callback simply won't work if called later.
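A tiny sketch of that caveat, reusing the `friendsPromise`/`api` names from the earlier example in the thread:

```
let count = 0;

let friendsWithPhotos = friendsPromise.map(friend => {
  count += 1; // Side effect: the callback is recorded once on the client,
              // so count ends up 1 no matter how many friends there are.
  return { friend, photo: api.getUserPhoto(friend.id) }; // pipelined calls are fine
});
```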
edit: Downvoted, is this a bad question? The title is generically "web servers", obviously the content of the post focuses primarily on TypeScript, but i'm trying to determine if there's something unique about this that means it cannot be implemented in other languages. The serverside DSL execution could be difficult to impl, but as it's not strictly JavaScript i imagine it's not impossible?
* Use Cap'n Proto in your Rust backend. This is what you want in a type-safe language like Rust: generated code based on a well-defined schema.
* We'll build some sort of proxy that, given a Cap'n Proto schema, converts between Cap'n Web and Cap'n Proto. So your frontend can speak Cap'n Web.
But this proxy is just an idea for now. No idea if or when it'll exist.
It's usually best to ignore downvotes. Downvoted comments are noticeably grey. If people feel that's unfair, that'll attract upvotes in my experience.
Fwiw i think it was only once, and i was upvoted after mentioning it. You're right i could have worded it as something more ambiguous, aka "it seems this is unpopular" or w/e, but my edit was in reply to someones feedback (the downvote), so i usually mention it.
No complaint, just a form of wordless-feedback that i was attempting to respond to. Despite such actions being against HN will heh.
> as of this writing, the feature set is not exactly the same between the two. We aim to fix this over time, by adding missing features to both sides until they match.
do you think once the two reach parity, that that parity will remain, or more likely that Cap'n Web will trail cloudflare workers, and if so, by what length of time?
[1] https://github.com/cloudflare/capnweb/tree/main?tab=readme-o...
If anything I'd expect Cap'n Web to run ahead of Workers RPC (as it is already doing, with the new pipeline features) because Cap'n Web's implementation is actually much simpler than Workers'. Cap'n Web will probably be the place where we experiment with new features.
Cap'n Proto is inspired by Protobuf; Protobuf has gRPC and gRPC-Web.
At my last startup, we used ProtoBuf/gRPC/gRPC-Web both in the backends and for public endpoints powering React/TS UIs. It worked great, particularly with the GCP Kubernetes infrastructure. Basically both the API and operational aspects were non-problems. However, navigating the dumpster fire around protobuf, gRPC, and gRPC-Web, with the lack of community leadership from Google, was a clusterfuck.
This said, I'm a bit at a loss with the meaning of schemaless. You can have different approaches wrt schema (see Avro vs ProtoBuf), but otherwise you can't fundamentally eschew schemas/types. It's purely information tied to a communication channel that needs to live somewhere, whether that's explicit, implicit, handled by the RPC layer, passed to the type system, or, worse, pushed all the way to the user/dev. Moreover, schemas tend to evolve, and any protocol needs to take that into account.
Historically, ProtoBuf has done a good job managing the various tradeoffs here, but I have no experience using Cap'n Proto (I've mostly seen good stuff about it), so perhaps I'm just missing something.
But Cap'n Web itself does not need to know about any of that. Cap'n Web just accepts whatever method call you make, sends it to the other end of the connection, and attempts to deliver it. The protocol itself has no idea if your invocation is valid or not. That's what I mean by "schemaless" -- you don't need to tell Cap'n Web about any schemas.
With that said, I strongly recommend using TypeScript with Cap'n Web. As always, TypeScript schemas are used for build-time type checking, but are then erased before runtime. So Cap'n Web at runtime doesn't know anything about your TypeScript types.
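As a rough sketch of what that looks like in practice (assuming the `RpcTarget` and `newWebSocketRpcSession` exports described in the README; the method name is made up):

```
import { RpcTarget, newWebSocketRpcSession } from "capnweb";

// Server side: the "schema" is just a TypeScript class.
class MyApi extends RpcTarget {
  hello(name: string): string {
    return `Hello, ${name}!`;
  }
}

// Client side: the type parameter gives build-time checking only.
// It is erased at runtime; the protocol forwards whatever call you make.
const api = newWebSocketRpcSession<MyApi>("wss://example.com/api");
const greeting = await api.hello("World"); // checked by tsc, not by Cap'n Web
```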
So it's basically Stubby/gRPC.
From strictly an RPC perspective this makes sense (I guess to the same degree gRPC would be agnostic to the protobuf serialization scheme, which IIRC is the case; also thinking Stubby was called that for the same reason).
However, that would mean there's:
1. a ton of responsibility on the user/dev, i.e. the same amount that prompted protobuf to exist, after all.
You basically have the (independent) problem of clients, servers and data (in flight, or even persisted) that get different versions of the schema.
2. a missed implicit compression opportunity? IDK to what extent this actually happens on the fly or not.
Stubby / gRPC do not support object capabilities, though. I know that's not what you meant but I have to call it out because this is a huuuuuuuge difference between Cap'n Proto/Web vs. Stubby/gRPC.
> a ton of responsibility on the user/dev, i.e. the same amount that prompted protobuf to exist, after all.
In practice, people should use TypeScript to specify their Cap'n Web APIs. For people working in TypeScript to start with, this is much nicer than having to learn a separate schema format. And the protocol evolution / compatibility problem becomes the same as evolving a JavaScript library API with source compatibility, which is well-understood.
> a missed implicit compression opportunity? IDK to what extent this actually happens on the fly or not.
Don't get me wrong, I love binary protocols for their efficiency.
But there are a bunch of benefits to just using JSON under the hood, especially in a browser.
Note that WebSocket in most browsers will automatically negotiate compression, where the compression context is preserved over the whole connection (not just one message at a time), so if you are sending the same property names a lot, they will be compressed out.
I currently work in a place where the server-server API clients are generated based on TypeScript API method return types, and it's.. not great. The reality of this situation quickly devolves the types using "extends" from a lot of internal types that are often difficult to reason about.
I know that it's possible for the ProtoBuf types to also push their tendrils quite deep into business code, but my personal experience has been a lot less frustrating with that than the TypeScript return type being generated into an API client.
I'm confused. How is this a "protocol" if its core premises rely on very specific implementation of concurrency in a very specific language?
Anyway, the point here is that early RPC systems worked by blocking the calling thread while performing the network request, which was obviously a terrible idea.
https://youtu.be/bzkRVzciAZg
Some friends and I still jokingly troll each other in the vein of these, interjecting with "When async programming was discovered in 2008...", or "When memory safe compiled languages were invented in 2012..." and so forth.
Often, when something was discovered or invented is far less influential[1] than when it jumps on the hype train.
[1] the discovery is very important for historical and epistemological reasons of course, rewriting the past is bad
Meanwhile Go doesn't have async/await and never will because it doesn't need it; it does greenthreading instead. Java has that too now.
Either way, your code waits on IO like before and does other work while it waits. But instead of the kernel doing the context switching, your runtime does something analogous at a higher layer.
The problem is synchronization becomes extremely hard to reason about. With event loop concurrency, each continuation (callback) becomes effectively a transaction, in which you don't need to worry about anything else modifying your state out from under you. That legitimately makes a lot of things easier.
The Cloudflare Workers runtime actually does both: There's a separate thread for each connection, but within each thread there's an event loop to handle all the concurrent stuff relating to that one connection. This works well because connections rarely need to interact with each other's state, but they need to mess with their own state constantly.
(Actually we have now gone further and stacked a custom green-threading implementation on top of this, but that's really a separate story and only a small incremental optimization.)
If some other transaction commits at just the wrong time, it could change the result of some of these queries but not all. The results would not be consistent with each other.
But one thing I can't figure out: What would be the syntax for promise pipelining, if you aren't using promises to start with?
Oh, great point! That does seem really hard, maybe even intractable. That's definitely a reason to like cooperative concurrency, huh...
Just to tangent even further, but some ideas:
- Do it the ugly way: add an artificial layer of promises in an otherwise pre-emptive, direct-style language. That's just, unfortunately, quite ugly...
- Use a lazy language. Then everything's a promise! Some Haskell optimizations feel kind of like promise pipelining. But I don't really like laziness...
- Use iterator APIs; that's a slightly less artificial way to add layers of promises on top of things, but still weird...
- Punt to the language: build an RPC protocol into the language, and promise pipelining as a guaranteed optimization. Pretty inflexible, and E already tried this...
- Something with choreographic programming and modal-types-for-mobile-code? Such languages explicitly track the "location" of values, and that might be the most natural way to represent ocap promises: a promise is a remote value at some specific location. Unfortunately these languages are all still research projects...
My mental model is that it's the caller who decides how a call should be executed (synchronously or asynchronously). A synchronous call is when the caller waits for completion/error; an asynchronous call is when the caller puts the call in the background (whatever that means in that language/context) and handles the results later. The CSP concurrency model [1] is the closest fit here.
It's not a property of the function to decide how the caller should deal with it. This frustration was partly described in the viral article "What color is your function?" [2], but my main rant about this concurrency approach is that it doesn't match well how we think and reason about concurrent processes, and it requires cognitive gymnastics to reason about relatively simple code.
Seeing "async/await/Promises/Futures" being a justification of a "protocol" makes little sense to me. I can totally get that they reimagined how to do RPC with first-class async/await primitives, but that doesn't make it a network "protocol".
[1] https://en.wikipedia.org/wiki/Communicating_sequential_proce...
[2] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...
There's been a renaissance in the tools, but now we mainly use them like "REST" endpoints with the type signatures of functions. Programming language features like Future and Optional make it easier to clearly delineate properties like "this might take a while" or "this might fail" whereas earlier in RPC, these properties were kind of hidden.
RPC is "remote procedure call", emphasis on "remote", meaning you always necessarily gonna be serializing/deserializing the information over some kind of wire, between discrete/different nodes, with discrete/distinct address spaces
a client request by definition can't include anything that can't be serialized, serialization is the ground truth requirement for any kind of RPC...
a server doesn't provide "an object" in response to a query, it provides "a response payload", which is at most a snapshot of some state it had at the time of the request, it's not as if there is any expectation that this serialized state is gonna be consistent between nodes
edit: was skimming the github repo https://github.com/cloudflare/capnweb/tree/main?tab=readme-o...
and saw this which answers my question:
> Supports passing functions by reference: If you pass a function over RPC, the recipient receives a "stub". When they call the stub, they actually make an RPC back to you, invoking the function where it was created. This is how bidirectional calling happens: the client passes a callback to the server, and then the server can call it later.
> Similarly, supports passing objects by reference: If a class extends the special marker type RpcTarget, then instances of that class are passed by reference, with method calls calling back to the location where the object was created.
Gonna skim some more to see if i can find some example code.
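For reference, a minimal sketch of that callback-passing pattern (the `subscribeToUpdates` method is hypothetical; the entry point is from the README):

```
import { newWebSocketRpcSession } from "capnweb";

const api = newWebSocketRpcSession("wss://example.com/api");

// Passing a function by reference: the server receives a stub, and each time
// it invokes the stub, an RPC comes back here and runs this closure.
await api.subscribeToUpdates((update: string) => {
  console.log("server pushed:", update);
});
```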
The part that's most exciting to me is actually the bidirectional calling. Having set this up before via JSON RPC / custom protocol the experience was super "messy" and I'm looking forward to a framework making it all better.
Can't wait to try it out!
OTOH, JSON RPC is extremely simple. Cap'n Web is a relatively complicated and subtle underlying protocol.
Actually the author of JSON RPC suggested that method names could be dynamic, there's nothing in the spec preventing that.
https://groups.google.com/g/json-rpc/c/vOFAhPs_Caw/m/QYdeSp0...
So you could definitely build a cursed object/reference system by packing stuff into method names if you wanted. I doubt any implementations would allow this.
But yes, JSON RPC is very minimal and doesn't really offer much.
Is the server holding onto some state in memory that this specific client has already authenticated? Or is the API key somehow stored in the new AuthenticatedSession stub on the client side and included in subsequent requests? Or is it something else entirely?
This does mean the server is holding onto state, but remember the state only lasts for the lifetime of the particular connection. (In HTTP batch mode, it's only for the one batch. In WebSocket mode, it's for the lifetime of the WebSocket.)
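Concretely, the capability-based pattern being described might look roughly like this (a sketch with made-up names, not the library's or the blog's exact code):

```
import { RpcTarget } from "capnweb";

// Hypothetical lookup helper, just for illustration.
declare function lookUpApiKey(apiKey: string): Promise<string | null>;

class AuthenticatedSession extends RpcTarget {
  constructor(private userId: string) { super(); }
  whoami() { return this.userId; }
}

class PublicApi extends RpcTarget {
  async authenticate(apiKey: string): Promise<AuthenticatedSession> {
    const userId = await lookUpApiKey(apiKey);
    if (!userId) throw new Error("invalid API key");
    // Returned by reference: the client gets a stub whose methods work
    // only for the lifetime of this connection (or this HTTP batch).
    return new AuthenticatedSession(userId);
  }
}
```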
Thanks for the explanation!
You mention that it’s schemaless as if that’s a good thing. Having a well defined schema is one of the things I like about tRPC and zod. Is there some way that you get the benefits of a schema with less work?
Well, except you don't get runtime type checking with TypeScript, which might be something you really want over RPC. For now I actually suggest using zod for type checks, but my dream is to auto-generate type checks based on the TypeScript types...
(I do wish it could be the other way, though: Write only TypeScript, get runtime checks automatically.)
But my expectation is you'd use Zod to define all your parameter types. Then you'd define your RpcTarget in plain TypeScript, but for the parameters on each method, reference the Zod-derived types.
Although perhaps that's not what you mean.
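For what it's worth, the Zod-for-parameters pattern suggested above might look something like this (illustrative names only):

```
import { z } from "zod";
import { RpcTarget } from "capnweb";

const CreateUserInput = z.object({ name: z.string(), email: z.string().email() });
type CreateUserInput = z.infer<typeof CreateUserInput>;

class Api extends RpcTarget {
  createUser(input: CreateUserInput) {
    // TypeScript types are erased at runtime, so re-check the input here.
    const checked = CreateUserInput.parse(input); // throws on malformed input
    return { ok: true, name: checked.name };
  }
}
```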
I found these through this https://github.com/moltar/typescript-runtime-type-benchmarks
One thing about a traditional RPC system where every call is top-level and you pass keys and such on every call is that multiple calls in a sequence can usually land on different servers and work fine.
Is there a way to serialize and store the import/export tables to a database so you can do the same here, or do you really need something like server affinity or Durable Objects?
When using WebSockets, that's the lifetime of the WebSocket.
But when using the HTTP batch transport, a session is a single HTTP request, that performs a batch of calls all at once.
So there's actually no need to hold state across multiple HTTP requests or connections, at least as far as Cap'n Web is concerned.
This does imply that you shouldn't design a protocol where it would be catastrophic if the session suddenly disconnected in the middle and you lost all your capabilities. It should be possible to reconnect and reconstruct them.
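A small sketch of the HTTP batch transport described here (assuming the `newHttpBatchRpcSession` export; the API methods are hypothetical):

```
import { newHttpBatchRpcSession } from "capnweb";

// Each batch is its own short-lived session: the pipelined calls below go
// out together, and any stubs created do not outlive the batch.
const api = newHttpBatchRpcSession("https://example.com/api");
const name = api.whoami();                 // hypothetical method
const profile = api.getUserProfile(name);  // pipelined on the previous call
console.log(await profile);                // one HTTP request/response for the whole batch
```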
RPC SDKs should have session management, otherwise you end up in this situation:
"Any sufficiently complicated gRPC or Cap'n'Proto program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Akka"
FWIW the way I've handled this in a React app is, the root stub gets passed in as a prop to the root component, and children call the appropriate methods to get whatever objects they need from it. When the connection is lost, a new one is created, and the new root stub passed into the root component, which causes everything downstream to re-run exactly as you'd want. Seems to work well.
It looks like the server affinity is accomplished by using websockets. The http batching simply sends all the requests at once and then waits for the response.
I don't love this because it makes load balancing hard. If a bunch of chatty clients get a socket to the same server, now that server is burdened and potentially overloadable.
Further, it makes scaling in/out servers really annoying. Persistent long lived connections are beasts to deal with because now you have to handle that "what do I do if multiple requests are in flight?".
One more thing I don't really love about this, it requires a timely client. This seems like it might be trivial to DDOS as a client can simply send a stream of push events and never pull. The server would then be burdened to keep those responses around so long as the client remains connected. That seems bad.
Architecturally I don't think it makes sense to support this in a load balancer, you instead want to pass back a "cost" or outright decisions to your load balancing layer.
Also note the "batch-pipelining" example is just a node.js client; this already supports not just browsers as clients, so you could always add another layer of abstraction (the "fundamental theorem of software engineering").
That said, type checking is called out both in the blog post (in the section on TypeScript) and in the readme (under "Security Considerations"). You probably should use some runtime type checking library, just like you should with traditional JSON inputs.
In the future I'm hoping someone comes up with a way to auto-generate type checks based on TypeScript types.
> Similarly, supports passing objects by reference: If a class extends the special marker type RpcTarget, then instances of that class are passed by reference, with method calls calling back to the location where the object was created.
Can this be relaxed? Having to design the object model ahead of time for RpcTarget is constraining. If we could just attach a ThingClass.prototype[Symbol.for('RpcTarget')] = true, there would be a lot more flexibility: less need to design explicitly for RpcTarget, and the ability to use RpcTarget with the objects/classes of 3rd-party libraries.
With that said, I do think we ought to support `new RpcStub(myObject)` to explicitly create a stub around an arbitrary class, even if it doesn't extend `RpcTarget`. It would be up to the person writing the `new RpcStub` invocation to verify it's safe.
> .map() is special. It does not send JavaScript code to the server, but it does send something like "code", restricted to a domain-specific, non-Turing-complete language. The "code" is a list of instructions that the server should carry out for each member of the array.
> But the application code just specified a JavaScript method. How on Earth could we convert this into the narrow DSL? The answer is record-replay: On the client side, we execute the callback once, passing in a special placeholder value. The parameter behaves like an RPC promise. However, the callback is required to be synchronous, so it cannot actually await this promise. The only thing it can do is use promise pipelining to make pipelined calls. These calls are intercepted by the implementation and recorded as instructions, which can then be sent to the server, where they can be replayed as needed.
The only catch is your function needs to have no side effects (other than calling RPC methods). There are a lot of systems out there that have similar restrictions.
For any other function accepting a callback, the function on the server will receive an RPC stub, which, when called, makes an RPC back to the caller, calling the original version of the function.
This is usually what you want, and the semantics are entirely normal.
But for .map(), this would defeat the purpose, as it'd require an additional network round-trip to call the callback.
map() works for cases where you don't need to compute anything in the callback, you just want to pipeline the elements into another RPC, which is actually a common case with map().
If you want to filter server-side, you could still accomplish it by having the server explicitly expose a method that takes an array as input, and performs the desired filter. The server would have to know in advance exactly what filter predicates are needed.
But in the concrete:
* Looking up some additional data for each array element is a particularly common thing to want to do.
* We can support it nicely without having to create a library of operations baked into the protocol.
I really don't want to extend the protocol with a library of operations that you're allowed to perform. It seems like that library would just keep growing and add a lot of bloat and possibly security concerns.
(But note that apps can actually do so themselves. See: https://news.ycombinator.com/item?id=45339577 )
I did a spiritually similar thing in JS and Dart before where we read the text of the function and re-parsed (or used mirrors in Dart) to ensure that it doesn't access any external values.
I'm trying to understand how well this no-side-effects footgun is defended against.
https://github.com/cloudflare/capnweb/blob/main/src/map.ts#L... seems to indicate that if the special pre-results "record mode" call of the callback raises an error, the library silently bails out (but keeps anything already recorded, if this was a nested loop).
That catches a huge number of things like conditionals on `item.foo` in the map, but (a) it's quite conservative and will fail quite often with things like those conditionals, and (b) if I had `count += 1` in my callback, where count was defined outside the scope, now that's been incremented one extra time, and it didn't raise an error.
React Hooks had a similar problem, with a constraint that hooks couldn't be called conditionally. But they solved their DX by having a convention where every hook would start with `use`, so they could then build linters that would enforce their constraint. And if I recall, their rules-of-hooks eslint plugin was available within days of their announcement.
The problem with `map` is that there are millions of codebases that already use a method called `map`. I'd really, really love to see Cap'n Web use a different method name - perhaps something like `smartMap` or `quickMap` or `rpcMap` - that is more linter-friendly. A method name that doesn't require the linter to have access to strong typing information, to understand that you're mapping over the special RpcPromise rather than a low-level array.
Honestly, it's a really cool engineering solve, with the constraint of not having access to the AST like one has in Python. I do think that with wider adoption, people will find footguns, and I'd like this software to get a reputation for being resilient to those!
You can't perform computation on a promise. The only thing you can do is pipeline on it.
`user.updatedAt == date` is trying to compare a promise against a date. It won't type check.
`new Date(user.updatedAt)` is passing a promise to the Date constructor. It won't type check.
Since JS doesn't have this, they have to pass in a special placeholder value and try to record what the code is doing to that value.
I wonder why they don't just do `.toString()` on the mapping function and then parse the resulting Javascript into an AST and figure out property accesses from that. At the very least, that'd allow the code to properly throw an error in the event the callback contains any forbidden or unsupported constructs.
Unfortunately, "every object is truthy" and "every object can be coerced to a string even if it doesn't have a meaningful stringifier" are just how JavaScript works and there's not much we can do about it. If not for these deficiencies in JS itself, then your code would be flagged by the TypeScript compiler as having multiple type errors.
On a little less trivial skim, it looks like the intention here isn't to map property-level subsets of returned data (e.g., only getting the `FirstName` and `LastName` properties of a larger object) so much as it is to do joins; and it's not data entities being provided to the mapping function but RpcPromises, so individual property values aren't even available anyway.
So I guess I might argue that map() isn't a good name for the function, because it immediately made me think it's for doing a mapping transformation rather than basically just specifying a join (since you can't really transform the data), and a mapping transformation is what map() does everywhere else in JavaScript. But for all I know that's more clear when you're actually using the library, so take what I think with a heaping grain of salt. ;)
That sounds incredibly complicated, and not something we could do in a <10kB library!
The suggestion was to parse _JavaScript_. (That's what `.toString()` on a function does... gives you back the JavaScript.)
It feels like C# has an answer to every problem I’ve ever had with other languages - dynamic loading, ADTs with pattern matching, functional programming, whatever this expression tree is, reflection, etc etc. Yet somehow it’s still a niche language that isn't widely used (outside of particular ecosystems).
I've worked only at startups/small businesses since I graduated university and it's all been in C#.
fucking nice ecosystem
Pi types, existential types and built-in macros to name a few.
If you dare leave the safety of a compiler you'll find that Sublime Merge can still save you when rewriting a whole part of an app. That and manual testing (because automatic testing is also clutter).
If you think it's more professional to have a compiler I'd like to agree but then why did I run into a PHP job when looking for a Typescript one? Not an uncommon unfolding of events.
Granted, I started out on LISP. My version of "easy to read and write" might be slightly masochistic. But I love Perl and Python and Javascript are definitely "you can jump in and get shit done if you have worked in most languages. It might not be idiomatic, but it'll work"...
It does require twice the lines of PHP code to make a Ruby or Python program equivalent, or more if you add phpdoc and static types though, so it is easier to read/write Ruby or Python, but only after learning the details of the language. Ruby's syntax is very expressive but very complex if you don't know it by heart.
Specifically, I'd like to be able to have "inches" as a generic type, where it could be an int, long, float, double. Then I'd also like to have "length" as a generic type where it could be inches as a double, millimeters as a long, ect, ect.
I know they added generic numbers to the language in C# 7, so maybe there is a way to do it?
You were maybe already getting at it, but as a kitchen sink language the answer is "simplicity". All these diverse language features increase cognitive load when reading code, so it's a complexity/utility tradeoff
Along with https://pypi.org/project/pony-stubs/, you get decent static typing as well. It's really quite something.
It generally unrolls as a `for loop` underneath, or in this case LINQ/SQL.
C# was innovative for doing it first in the scope of SQL. I remember the arrival of LINQ... Good times.
I’ve ended up building similar things over and over again. For example, simplifying the worker-page connection in a browser or between chrome extension “background” scripts and content scripts.
There’s a reason many prefer “npm install” on some simple sdk that just wraps an API.
This also reminds me a lot of MCP, especially the bi-directional nature and capability focus.