Make Any Typescript Function Durable
Posted 3 months ago · Active 2 months ago
useworkflow.dev · Tech story · High profile
Heated, negative debate · 80/100
Key topics: TypeScript, Vercel, Workflows, Durable Functions
Vercel introduces a new 'durable' function feature for TypeScript using a novel syntax that has sparked controversy among HN users, with many criticizing its 'magic string' approach and lack of clarity.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 56m after posting
Peak period: 61 comments in 0-12h · Avg per period: 13.8
Comment distribution: 83 data points (based on 83 loaded comments)
Key moments
1. Story posted: Oct 23, 2025 at 1:03 PM EDT (3 months ago)
2. First comment: Oct 23, 2025 at 1:59 PM EDT (56m after posting)
3. Peak activity: 61 comments in 0-12h, the hottest window of the conversation
4. Latest activity: Oct 28, 2025 at 4:13 AM EDT (2 months ago)
ID: 45684217 · Type: story · Last synced: 11/20/2025, 3:44:06 PM
> I'm trying to find how they implemented the "use workflow" thing, and before I could find it I already found https://github.com/vercel/workflow/blob/main/packages/core/s...
> Telemetry is part of the core.
> Yuck.
At least understand what you're looking at before getting the ick.
JavaScript didn't have a lot of great options for this kind of statically-declared metaprogramming, which is why the "use X" syntax has become so popular in various spaces. It's definitely not my favourite approach, but I don't think there's any clear "best" solution here.
I suspect that's why they need to transform the function into individual step functions, and for that they need to do static analysis. And if they're doing static analysis, then all the concerns detailed here and elsewhere apply and you're basically just picking your favourite of a bunch of syntaxes with different tradeoffs. I don't like magic strings, but at least they clearly indicate that magic is happening, which is useful for the developer reading the code.
You can do that manually by defining a bunch of explicit steps, their outputs and inputs, and how to transform the data around. And you could use generators to create a little DSL around that. But I think the idea here is to transform arbitrary functions into durable pipelines.
Personally, I'd prefer the more explicit DSL-based approach, but I can see why alternatives like this would appeal to some people.
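The generator-based DSL idea in the comment above could look roughly like this. This is a hypothetical sketch, not Vercel's API: each `yield` hands a named step to a runner, which memoizes the result in a log before resuming the generator, so a rerun replays completed steps instead of re-executing them.

```typescript
// Hypothetical sketch of a generator-based durable-step DSL (not Vercel's
// API). The log stands in for a persistent store; a crash recovery is then
// just "re-execute the workflow with the persisted log".

type Step<T> = { name: string; run: () => Promise<T> };

function step<T>(name: string, run: () => Promise<T>): Step<T> {
  return { name, run };
}

async function runDurable<T>(
  workflow: () => AsyncGenerator<Step<unknown>, T, unknown>,
  log: Map<string, unknown>, // stand-in for durable storage
): Promise<T> {
  const it = workflow();
  let input: unknown;
  for (;;) {
    const { value, done } = await it.next(input);
    if (done) return value as T;
    const s = value as Step<unknown>;
    if (!log.has(s.name)) log.set(s.name, await s.run()); // run each step once
    input = log.get(s.name); // replays resume with the memoized result
  }
}
```

Rerunning with the same `log` skips every step that already completed, which is the durability property the parent comments describe, without any source transformation.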
These alternatives just hide the implementation and make debugging, extending, and configuring it unavailable.
If I recall correctly, other solutions in this space work by persisting & memoizing the results of the steps as they succeed, so the whole thing can be rerun and anything already completed uses the memoized result.
We don't have workflows and steps though; like async/await, it's just functions all the way down.
Disclaimer: I'm the CEO of Resonate
[0] https://ts-ast-viewer.com/#code/GYVwdgxgLglg9mABMOcAUBKRBvAU...
It's similar to the problem that `require` had in browsers, and the reason that the `import` syntax was chosen instead. In Node.js, `require` is just a normal function, so I can do anything with it that I could do with a normal function, like reassigning it or wrapping it or overwriting it or whatever. But in browsers (and in bundlers), all imports need to be statically known before executing the program, so we need to search for all calls to `require` statically. This doesn't work in JavaScript - there are just too many ways to dynamically call a function with arguments that are unknown until runtime or in such a way that the calling code is obscured.
That's why the `import` syntax was introduced as an alternative to `require` that had rules that made it statically analysable.
You'd end up with a similar situation here. You want to know statically which functions are being decorated, but JavaScript is a deeply dynamic language. So either you end up having a magic macro-like function that can only be called in certain places and gets transpiled out of the code so it doesn't appear at runtime, or you have a real function but you recognise that you won't be able to statically find all uses of that function and will need to set down certain rules about what uses can do with that function.
Either way, you're going to be doing some magic metaprogramming that the developer needs to be aware of. The benefit of the "use X" syntax is that it looks more magical, and therefore better indicates to the reader that magic is happening in this place. Although I agree that it's not my personal preference.
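A tiny illustration of the point about static analysability, using a stand-in `fakeRequire` rather than the real CommonJS `require`:

```typescript
// Why `require`-style calls resist static analysis: nothing prevents the
// function from being aliased or fed a computed string, so a bundler
// reading the source cannot enumerate the dependencies.

function fakeRequire(id: string): string {
  return `module:${id}`; // pretend module loading
}

const loaders = [fakeRequire];          // the function is aliased...
const name = ["lo", "dash"].join("");   // ...and the argument is computed
const mod = loaders[0](name);           // no literal "lodash" at the call site

// An `import` declaration, by contrast, must sit at the top level with a
// string-literal specifier, so every dependency is known before execution:
//   import _ from "lodash";
```

A static tool scanning this file for `fakeRequire("...")` finds nothing, even though a module is loaded; the same dynamism is what makes it hard to statically find every function a workflow transform would need to rewrite.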
I don’t like having custom bundler logic for my code.
"don't use react"
I can understand making legit criticisms of React, no doubt the hooks transition had some issues and the library has a high level of complexity without a clear winner in the state management domain, but pretending React is peddling shit like "use workflow" is frankly low effort.
React was created to make low skilled people capable of shipping low quality code. If that is the only thing you can do, I'd be careful about calling yourself fullstackchris
Starting to scatter magic strings throughout a code base feels like a real step back.
I think "use client" is the only one that has to go at the top of a file.
At least in any other framework library I can just command click and see why things are not working, place breakpoints and even modify code.
I do wish that there was some kind of self-hostable World implementation at launch. If other PAAS providers jump onto this, I could see this sticking around.
But we did have convos in the last couple of days on what we can do next on the pg world ;D
just build state machines folks
With RSM you often need to keep the history as well, just in a different form: the message/event log that your state machine processed.
With RSM you need your code to be deterministic to rebuild the materialized state during failover, unless you snapshot and replicate the state on each processed event/message.
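The replicated-state-machine alternative from the comments above can be made concrete with a minimal sketch: state is a pure, deterministic fold over the event log, so failover is just replaying the log.

```typescript
// Minimal replicated-state-machine sketch: the durable artifact is the
// event log; the materialized state is rebuilt by a deterministic fold.

type Event =
  | { type: "deposit"; amount: number }
  | { type: "withdraw"; amount: number };

type State = { balance: number };

// Must be deterministic: replaying the same log must yield the same state.
function apply(state: State, ev: Event): State {
  switch (ev.type) {
    case "deposit":
      return { balance: state.balance + ev.amount };
    case "withdraw":
      return { balance: state.balance - ev.amount };
  }
}

function rebuild(log: Event[]): State {
  return log.reduce(apply, { balance: 0 });
}
```

If `apply` consulted `Date.now()` or `Math.random()`, replicas would diverge on replay, which is exactly the determinism constraint the comment describes; snapshotting after each event trades that constraint for storage.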
Looking at the docs and examples, I see Workflows and Steps and Retries, but I don't see any Durable yet. None of the examples really make it clear how or where anything gets stored.
or is this some extra compilation step to rewrite the code?
I strongly believe that being obvious about steps with `step.run` is important: it improves o11y, makes things explicit, and you can see transactional boundaries.
I guess in the end it's another abstraction layer for queues or state machines and another way to lock you into Vercel.
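For contrast with the "use workflow" transform, here is roughly what the explicit style looks like, loosely modeled on step-runner APIs such as Inngest's `step.run` (a sketch, not any specific SDK):

```typescript
// Sketch of the explicit `step.run` style. Every transactional boundary is
// a visible call site, which is what the comment above argues for.

interface StepRunner {
  run<T>(id: string, fn: () => Promise<T>): Promise<T>;
}

function makeStepRunner(store: Map<string, unknown>): StepRunner {
  return {
    async run<T>(id: string, fn: () => Promise<T>): Promise<T> {
      if (store.has(id)) return store.get(id) as T; // replay: reuse result
      const out = await fn();                       // first run: execute
      store.set(id, out);                           // persist before continuing
      return out;
    },
  };
}

// A workflow is plain code; the step boundaries are explicit and greppable.
async function checkout(step: StepRunner): Promise<number> {
  const order = await step.run("create-order", async () => ({ id: 7 }));
  const cents = await step.run("charge-card", async () => order.id * 100);
  return cents;
}
```

Nothing here requires a compiler pass or magic strings; the cost is that the author must name each step and thread the runner through, which is the explicitness/ergonomics trade-off the thread keeps circling.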
"use blackBoxWrapperForEverything";
What happens if you saw a bug, or you want to update and change a workflow? Is there a way to discard / upgrade the existing in-memory workflows that are being executed (and correspond to the previous version) so they are now "updated"?
I asked their main developer Dillon about the data/durability layer and also the compilation step. I wonder if adding a "DBOS World" will be feasible. That way, you get Postgres-backed durable workflows, queues, messaging, streams, etc all in one package, while the "use workflow" interface remains the same.
Here is the response from Dillon, and I hope it's useful for the discussion here:
> "The primary datastore is dynamodb and is designed to scale to support tens of thousands of v0 size tenants running hundreds of thousands of concurrent workflows and steps."
> "That being said, you don't need to use Vercel as a backend to use the workflow SDK - we have created a interface for anyone to implements called 'World' that you can use any tech stack for https://github.com/vercel/workflow/blob/main/packages/world/..."
> "you will require a compiler step as that's what picks up 'use workflow' and 'use step` and applies source transformations. The node.js run time limitations only apply to the outer wrapper function w/ `use workflow`"
Once you have the primitive of real durable compute, all the hard bits fall away. And it's not as if this is some fantasy: VM live migration is a real, working example. Then you just write your program in the grug way, use your language's built-in retry tools, and store state in normal variables, because the entire thing is durable, including memory, CPU state, GPU state, network state, open files, etc.
- where does the state and telemetry get stored?
- if something is sleeping for 7 days, and you release a new version in that time, what is invoked?
- how do you configure retries? Looks like it retries forever
And I echo the hatred of the magic strings. Terrible DX.
How do you create an environment where everything is deterministic? Do they invoke every supported nondeterministic function when creating the environment and rewrite those functions to return the values from the environment's creation time? Or is there something more complex happening?
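One common answer to this question, used by systems like Temporal (whether this SDK does the same is left open in the thread), is record-and-replay: nondeterministic calls execute once, their results are persisted, and replays read the recording instead of calling the real function.

```typescript
// Record-and-replay sketch for a single nondeterministic source (a clock).
// On the first execution the real clock is consulted and each value is
// recorded; on replay the recorded values are returned in order, so code
// that branches on the timestamp takes the same path both times.

function makeClock(recording: number[], replaying: boolean) {
  let i = 0;
  return {
    now(): number {
      if (replaying) return recording[i++]; // replay: read the recording
      const t = Date.now();                 // live run: real nondeterminism...
      recording.push(t);                    // ...captured for later replays
      return t;
    },
  };
}
```

The same trick generalizes to random numbers, network responses, and UUIDs; the workflow code stays ordinary as long as all nondeterminism flows through recorded interfaces.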
Durability is achieved by running the workflows in a wasm runtime.
It's probably a common async functional programming term that I don't know.
But when "algebraic effects" were all the rage, the people evangelizing them at least cared to explain first in their blog posts.
This one instead jumps straight, via AI agents (what does this have to do with TypeScript syntax extensions?), to "installation".
No thanks.
Edit:
I've read the examples after commenting and it's understandable, still explained in a post-hoc way that I really dislike, especially when it comes to a proprietary syntax extension.
Also the examples for their async "steps" look like they are only one step away from assuming that any async function is some special Next.js thing with assumptions about clients and servers, not "just" async functions with some annotation to allow the "use workflow" to do its magic.