JMAP for Calendars, Contacts and Files Now in Stalwart
Key topics: JMAP, Email, Calendar, Self-Hosting
Stalwart has implemented JMAP for Calendars, Contacts, and Files, sparking discussion on the benefits and challenges of adopting this new protocol, as well as the need for better client support.
Snapshot generated from the HN discussion
Discussion activity: very active. 160 comments; first comment 47 minutes after posting; peak of 149 comments on Day 1 (average 32 per period).
Key moments: story posted Oct 22, 2025 at 1:26 PM EDT; first comment Oct 22, 2025 at 2:12 PM EDT (47m after posting); peak activity of 149 comments in Day 1; latest activity Nov 2, 2025 at 9:10 AM EST.
https://stalw.art/compare/
> Stalwart Enterprise leverages AI technology to provide unparalleled email security and management. With AI-powered features, Stalwart Enterprise excels in accurately classifying spam, detecting sophisticated phishing attempts, and blocking various types of network attacks. This intelligent approach ensures that your email environment remains secure and reliable. Stalwart Enterprise comes equipped with a pre-trained large language model (LLM), offering robust out-of-the-box protection. Additionally, it supports integration with leading AI providers such as OpenAI, Anthropic, and other cutting-edge platforms, allowing you to enhance and customize your security measures. By utilizing AI, Stalwart Enterprise delivers a smarter, more efficient email solution that proactively safeguards your communications and data.
[0]: https://stalw.art/enterprise/
Still, it’s an interesting space, I think.
Email was never a binary protocol. Notoriously so, it's why MIME types and MIME encodings get so complicated.
Most of the "old internet" protocols (email, FTP, even HTTP itself) were bootstrapped on top of built-mostly-for-plaintext Telnet. HTTP as the new telnet has a bunch of improvements when it comes to binary data, request/response-based data flows, and some other considerations. HTTP/3 is even inherently a binary protocol, its lack of "telnet-compatibility" being one of the concerns about switching the majority of the web to it.
vCard/vCal/iCard/iCal were also deeply "plaintext formats". JSON is an improvement because it is more structured, and even more efficient, than those predecessors. JSON may not look efficient, but it compresses extremely well and can be quite efficient in gzip and Brotli streams.
I feel like "JSON over HTTP" is a subtle improvement over "custom text formats over telnet", even if it doesn't sound like "binary protocol efficiency" at first glance. Especially as HTTP/3 makes HTTP more efficient and more "binary", and arguably "more fundamental/basic", with HTTP/3 even taking over more roles in the TCP/UDP layer of the internet stack. (Telnet would never try to replace TCP.) HTTP isn't the worst bootstrap layer the internet could use to build new protocols and apps on top of. Sure, it would be neat to see more variety and experiments outside of the HTTP stack, too, but HTTP is too useful at this point not to build a bunch of things on top of it rather than as their own from-scratch protocols.
Additionally, as much as people like to harp about "telcos focusing on connection-oriented protocols while we ran loops around them with packets", the reality is that NCP and later TCP pretty much focused on emulating serial lines, and one of the earliest ways to access ARPAnet outside of machines directly on it was through calling into a TIP, which set up a bidirectional stream from your modem to a port on some host.
The idea with packets is that you don't need to reserve N bit/s of each link along the route to whatever system you're talking to; instead you just repeatedly say "here's a chunk of data, send it to X". It's not really relevant that the typical thing to do with these packets is to build a reliable stream on top of them, what matters is that everything except the endpoints can be a lot dumber.
This still requires you to set up a connection beforehand, but doesn't require you to reserve resources you might not be using.
Binary protocols just meant you actually needed to implement a serialiser/deserialiser and similar tooling, instead of writing the dumbest possible riff on strtok() and hoping your software wouldn't be in use anymore once the DoD internet matured
That's also why the majority of OIDs in SNMP are rooted in the 1.3.6 hierarchy, which belongs to the DoD.
And SNMP is explicitly the DoD Internet's simplified alternative to CMIS
Fortunately there is the 2.25 OID arc now, which you can use without any registration with anyone. There are also other ways to register OIDs for free. (I think that it is better than using domain names, which can be reassigned, and also require registration anyways. IDN is an even more severe problem (it could have been designed better, but they made it worse instead).)
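(For illustration, a small sketch of how the 2.25 arc works: it is {joint-iso-itu-t(2) uuid(25)} followed by a UUID's 128 bits written as one decimal number, which is why no registration is needed.)

```typescript
// Sketch: derive an OID under the 2.25 arc from a UUID (per ITU-T X.667).
import { randomUUID } from "node:crypto";

function uuidToOid(uuid: string): string {
  // Interpret the UUID's 128 bits as a single decimal integer under 2.25.
  const asInteger = BigInt("0x" + uuid.replaceAll("-", ""));
  return `2.25.${asInteger}`;
}

console.log(uuidToOid(randomUUID()));
// The X.667 example UUID f81d4fae-7dec-11d0-a765-00a0c91e6bf6 maps to
// 2.25.329800735698586629295641978511506172918
```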
I had an idea (which would later have to be standardized by ITU or ISO, preferably ITU) for a new OID arc that lets you combine an existing identifier (of many different types, such as international telephone numbers, amateur radio call signs, internet domain names (encoded as bijective base 37), IP addresses, ICAO airport codes, etc.) with a timestamp, and optional auto-delegation. (You can then add additional numbers, like you can with other OIDs.)
Binary protocols have other benefits as well, such as not requiring escaping, and allowing binary data to be transferred in a way that is not as messy, without causing problems with character sets, etc.
If anything, HTTP/3 running on top of QUIC forced shitty middlebox vendors to de-ossify by permitting any QUIC-based protocol, as they cannot practically distinguish a new HTTP/3 connection from a QUIC connection.
I recommend actually reading X.200 (the specification of the OSI model) at some point: it's quite approachable (especially for an ITU spec, which are notoriously dense reading), and will quickly make you realize how silly it is that we still use it as a reference for modern stacks.
HTTP sorta acts as a stump of ROSE with a bit of ACSE. In addition, it provides a bit of a basic layer for passing some extra attributes that might be considered in-band or out-of-band (or side-band?) relative to the actual exchange.
I made up ULFI because I thought MIME has some problems.
> JSON may not look efficient
Efficiency is not the only issue; there is also the consideration of e.g. what data types you want to use. JSON does not have a proper integer type, does not have a proper binary data type (you must encode it as hex or base64 instead), and is limited about what character sets can be used.
(Also, like other text formats, escaping will be needed.)
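(A quick sketch of the usual workaround, assuming Node's Buffer for the base64 step: raw bytes have to be smuggled through JSON as text, at roughly 4/3 the original size.)

```typescript
// Binary payloads can't go into JSON directly; base64 is the usual detour.
const payload = new Uint8Array([0x00, 0xff, 0x22, 0x5c]); // includes '"' and '\'
const wire = JSON.stringify({
  filename: "blob.bin",
  data: Buffer.from(payload).toString("base64"), // "AP8iXA==" -- JSON-safe text
});
const restored = Buffer.from(JSON.parse(wire).data, "base64"); // original bytes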
> I feel like "JSON over HTTP" is a subtle improvement over "custom text formats over telnet"
I think it can be, depending on the specific use; sometimes it isn't, and will make things worse. (HTTP does have the advantage of having URLs and virtual hosting, although I think it adds more complexity than should be needed.) However, I still think that DER is generally better than JSON.
> HTTP isn't the worst bootstrap layer the internet could use to build new protocols and apps on top of.
I think it depends on the specific application. However, even then, I think there are better ways than using HTTP with the complexity that it involves, most of which should not be necessary (even though a few parts are helpful, such as virtual hosting).
If the answer is monetary values, then those should never be floats, and should not be represented in JSON as such. E.g. a dollar and a half should be represented as 150 cents. This follows even for sub-cent precision.
Using cents instead of dollars sounds fine until you have to do math like VAT, you really need decimal math for that.
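A toy example of that tension, with illustrative numbers: 19% VAT on 150 cents lands on half a cent, and integer math forces the rounding decision (half-up here) into the open, while floats hide it in the representation.

```typescript
// VAT on integer cents: scale by the rate, round explicitly, stay in integers.
const netCents = 150n;
const vatCents = (netCents * 19n + 50n) / 100n; // 28.5 -> 29, half-up by hand
const grossCents = netCents + vatCents;         // 179 cents

// The float version makes the same decision implicitly and approximately:
const grossFloat = 1.5 * 1.19; // not exactly 1.785 in binary floating point
```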
While the grammar is specified (that’s what JSON is, after all), the runtime representation is unspecified. A conformant JSON parser can parse “1” as 1.0. They can be backed by doubles, or singles, or arbitrary precision.
Which parser? That’s the problem: if you’re using JSON as a data interchange format, you’ll need to carefully control both the serializers and deserializers, and whatever libraries you use, they will need to (at least internally) hold onto the number in a lossless way — I am not aware of any libraries that do this. They all parse the number as an f64 before any deserializers run. If your input JSON contains a u128, then you’ll have a loss of precision when your type is deserialized.
If you can set up (de)serialization to work the way you need it, then there’s no problem. But if you share your JSON serialized data with other parties, then you/they may be in for a bit of a surprise.
You might find it a worthwhile exercise to try parsing JSON containing an arbitrary unsigned 128-bit integer in your language of choice.
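In JavaScript/TypeScript, for instance, the exercise is short (a sketch; u128::MAX is 2^128 − 1):

```typescript
// JSON.parse reads every number into a 64-bit float before you ever see it.
const u128Max = "340282366920938463463374607431768211455"; // 2^128 - 1
const parsed = JSON.parse(`{"id": ${u128Max}}`);

console.log(parsed.id);                             // 3.402823669209385e+38
console.log(BigInt(parsed.id) === BigInt(u128Max)); // false: it rounded to 2^128
```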
That's just not true. Telnet and SMTP are built on top of TCP. They live on the same layer. They were originally both protocols that transmitted data as printable ASCII, hence why they look similar. There are many other protocols like Telnet and SMTP that worked like that, such as NNTP, IRC, and yes, even HTTP.
It shouldn't. In some cases it helps, but other times it doesn't. Sometimes it helps but there would be better ways to do it: building on a simpler protocol, or making an entirely new protocol (which might or might not use TCP; sometimes TCP is the better choice and sometimes not), depending on the specific case.
> Stuff like file sharing or groupware, mail, calendars, and so on—these things could be a lot more efficient and don’t really need the overhead of JSON as the message interchange format, IMHO
I dislike JSON. I think it has many problems, and that DER is a better format.
(There are also the "small web" protocols such as Gemini and Scorpion and Spartan and Titan, which avoid some of the complexity of HTTP; I had considered using DER-over-Scorpion rather than JSON-over-HTTP. It is also possible to use SSH, although SSH does not have virtual hosting.)
Absolutely yes, IMO. This significantly eases web client development.
JSON by comparison has simple, obvious limitations that more people are familiar with dealing with.
There's also the tendency to tie your protocol to implementation. The Microsoft Exchange "protocol" didn't get reverse engineered for so long because it's basically the COM structure of Outlook fed through (if I remember rightly) DCOM-RPC.
There's no magic. Nothing sacred. Nothing that you aren't allowed to understand, intuitively. Nothing where you aren't allowed to imagine "what if it also had X?" The web is yours. The computer is yours. As an industry, we burn some incremental percentage of bandwidth to give you the keys to the kingdom, and to allow you, new developer, to be one of us.
In an age when LLMs feel like magic boxes to tech-minded people new to development, we need this more than ever.
HTTP/2 and HTTP/3 are binary protocols. And if you replace the JSON with CBOR, then even the payload becomes binary.
The reason for using HTTP is that the semantics are right. HTTP is a state transfer protocol, and ultimately, that's 90% of what you need for sync.
The other 10% is for subscriptions, updates, with versioning, and patches. You can get these by adding the Braid extensions (see braid.org) which upgrade HTTP from a state transfer to a state synchronization protocol. (I work on Braid.)
I'm struggling to think of any real benefits to not using HTTP other than it would be more interesting.
I have asked sooo many times since Stalwart was first introduced, but never gotten a straight answer. It's just FastMail or Topicbox. I want something like Roundcube or WildDuck that can be used over HTTPS and that I can self-host!
The documentation is not great - I'd say it's just barely enough to get an overall idea, but there's no single definitive overview of what options exist, what their possible values are, what the defaults are, and how they relate to each other. Maddy docs, despite looking a bit sloppy, were a lot easier to get through. IMHO Stalwart makes it unnecessarily difficult to write a non-minimal static configuration file that hooks everything up correctly.
To be fair, maybe there is a page like that but I haven't found it, despite trying.
I know the Web UI allows you to do the configuration by clicking through the forms, but this approach conflicts with declarative deployment practices. In my case it's giving me nondescript 500 errors in the UI, with "Failed to write local configuration" in the logs, because the .toml file is read-only.
But in general, I agree that it has not been a very smooth experience. Having messed around with maddy and mox, Stalwart has had quite a few gotchas. Despite being a single binary promising simplicity, I'm finding it to be a real challenge figuring out how it all fits together, and I'm mostly learning by trial and error since the documentation is often outdated.
My biggest gripe is that it doesn't use the config.toml for every setting, or at least doesn't seem to have the option to do so. I broke my installation and had to find the PostgreSQL key-value pairs for the settings, which was made harder by the fact that everything was stored as binary, so I had to edit it as binary as well. These were very simple settings that would have been a breeze in a flat configuration file. I absolutely do not like how necessary the WebAdmin is for managing simple things.
That said, the integration with calendar/contacts is nice even without JMAP... Getting Thunderbird and Roundcube set up with plugins and proper settings made it so easy to get several users going with calendars, contacts, shared mailboxes, and shared contacts right upon first login.
The S3 storage is also working great (Hetzner Frankfurt VPS paired with AWS eu-central-1), and the AWS downtime a few days ago notwithstanding, I'm feeling good about the reliability that gives me, leaving the PostgreSQL data store as the main thing to keep backed up.
This is hugely ambitious software and, as such, there will be many things that I will have a hard time getting used to as a hobbyist, but also a lot to be gained. I'm sticking around for now, waiting for version 1, improved documentation, and more clarity on how it all works.
Also, I only have 5 mailboxes right now holding less than 15GB of data total... S3 is still cheaper than the minimum at Hetzner since I don't need anything close to a TB.
For example, it automatically handles Let's Encrypt certs for you. You get JMAP, CalDAV, WebDAV, CardDAV, IMAP4rev2, DKIM/SPF/DMARC, MTA-STS, DANE, spam filtering, SQL+blob+object storage backends, search, clustering, OpenTelemetry, etc all in one tiny binary.
Downsides: some features are gated behind an enterprise version and I think the dev team is one guy, or at least it was a while ago.
Having run both for a long time, I'm sticking with Stalwart from now on, as long as development continues.
I treat this as an insurance policy. Even in this thread people mentioned how Maddy, which is an alternative modern full stack email solution in a single binary, lacks development efforts.
This is why we have this fantastic release for Stalwart - free shit.
Also, as of now, enterprise is $0.20 per account per month, which is extremely cheap unless somebody wants to build a big spam farm, which as a civilized Internet user I don't support. Obviously this might change, but even then you can always build a multi-tenancy layer yourself if you really need it - the rest of the codebase is AGPL.
The only way to adopt Stalwart is to drop everything else and use a single monolithic do-it-all?
Messages are stored in a bespoke format and not easily accessible directly?
It doesn’t sound like it’s made to be usable with other software. This isn’t an advantage in my book.
Suits my needs, but I can see why it wouldn't suit everyone's.
Between all the options, you can design incremental backups, snapshots, or whatever with third-party tools, scripting backups of your mailboxes so they're restorable in any other email service or software. I have tested it with rsync, restic, database dumps, and mc/aws-cli, depending on the backends used (I have tried them all), and found it designed to be very straightforward.
The monolithic aspect is a necessary consequence of being built for HA and distributed environments, which makes it all the more impressive how versatile it is.
It sounds awesome but the way it is intro'd here:
...gave me pause. A protocol I've never heard of, even though I hang out here for an hour a day, was so successful that it launched 6 new projects? Sounds more like the parts of web dev that give me the ick (new and shiny; rush to copy new and shiny in other contexts; give it a year; and all of a sudden only 1 of the 6 was actually successful).
Now JMAP is quite a bit nicer to use than IMAP's API, but IMAP's gravitational field is too strong to be supplanted. IMAP is also becoming somewhat of a niche protocol, as the majority of users use vendor-proprietary protocols for accessing their emails on Gmail, Outlook/Hotmail, etc. So why invest the time to add a niche replacement for IMAP when the entire protocol is a second-class citizen to mainstream email clients?
If you want to push a new technology, you need to start somewhere. That's exactly what's happening with JMAP. It was created by Fastmail to use as a bridge between their servers and their own apps, a case for which popularity doesn't matter. It's basically a modern vendor-proprietary protocol, but done in the open.
From there, support is only a matter of someone being interested enough to implement it and manifestly it's working. There are now three servers (Apache James, Cyrus and Stalwart) and some clients.
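For a flavor of what "nicer to use" means on the wire, here's a sketch of a single JMAP request per RFC 8620/8621 (the endpoint URL, token, and account/mailbox ids are made up): one POST batches a query and a fetch via a back-reference, where IMAP would need a stateful exchange.

```typescript
// One JMAP round trip: find the 10 newest messages, then fetch their headers,
// chaining the two methods with a back-reference ("#ids") in the same request.
const res = await fetch("https://mail.example.com/jmap/api", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: "Bearer <token>",
  },
  body: JSON.stringify({
    using: ["urn:ietf:params:jmap:core", "urn:ietf:params:jmap:mail"],
    methodCalls: [
      ["Email/query", {
        accountId: "a1",
        filter: { inMailbox: "mb1" },
        sort: [{ property: "receivedAt", isAscending: false }],
        limit: 10,
      }, "0"],
      ["Email/get", {
        accountId: "a1",
        "#ids": { resultOf: "0", name: "Email/query", path: "/ids" },
        properties: ["subject", "from", "receivedAt"],
      }, "1"],
    ],
  }),
});
const { methodResponses } = await res.json();
```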
https://datatracker.ietf.org/wg/jmap/history/
Bron is the principal of Fastmail, who now own Pobox. This is a serious activity.
That's a really cruel response, because this is important work. I don't want my kids beholden to bigco.
I think it's real & important.
I also wanna make sure people like me, who have to keep tabs on the intersection of "how can I help liberate from BigCo" and "how can I make a livable wage doing so"
It is, quite literally, real, but also something you shouldn't waste time on if you're already busy. (c.f. https://jmap.io/software.html)
The modernization efforts of JMAP are interesting, too. Most of the old protocols are a mess of bespoke plaintext formats full of quirks evolved over decades in a giant mess of different software. Even the stuff that was already web tech like WebDAV and its extensions CalDAV and CardDAV were full of quirks, violated some REST "rules", and originally intended for a different purpose (file shares/FTP replacement). JMAP is much closer to "plain REST" than WebDAV's complex HTTP protocol extensions/changes.
Never hosted a Postfix/Dovecot stack - in fact this is the first time I'm hosting email - but from what I understand Stalwart is designed to handle inbound directly.
For very high-throughput inbound you could check out KumaMTA - it was designed specifically for that - but I think Stalwart doesn't have bottlenecks in its clustered topologies that would require it, unless you are doing something crazy.
They have very good docs in general IMO, here are docs on how to cluster - https://stalw.art/docs/cluster/configuration
Haven't looked into spam more closely yet. At first glance, on my most publicly shared email address there are around 2 spam messages per hour.
Here is a report prepared by an LLM that looked through the headers of the last 20 emails found in spam. All of them were categorized correctly; however, there were a few emails in the past few days that went to spam when they shouldn't have, but I think this is fixable.
- Critical Authentication Failures: A large number of the messages failed basic email authentication. We see many instances of SPF_FAIL and VIOLATED_DIRECT_SPF, meaning the sending IP address was not authorized to send emails for that domain. This is a major red flag for spoofing.
- Poor Sender IP Reputation: Many senders were listed on well-known Real-time Blackhole Lists (RBLs). Rules like RBL_SPAMCOP, RBL_MAILSPIKE_VERYBAD, and RBL_VIRUSFREE_BOTNET indicate the sending IPs are known sources of spam or are part of botnets.
- Suspicious Content and Links: The spam filter identified content patterns statistically similar to known spam (BAYES_SPAM) and found links to malicious websites (ABUSE_SURBL, PHISHING).
- Fundamental Technical Misconfigurations: Many sending servers had no Reverse DNS (RDNS_NONE), a common trait of compromised machines used for spam.
There have been a few messages that went to spam without meeting any of these spam criteria, but they were actually cold marketing emails, so that's good too. In addition, Stalwart emits an info log for each possible spam message ingested. Not sure if it can get any better than this.
(This should not be interpreted as a defense of IMAP.)
[1] https://www.rfc-editor.org/rfc/rfc5465.html
UIDs don't change, but of course they can be deleted so it's a gappy list, meaning you can request even quite a large looking range of UIDs and get nothing back.
Message numbers change in every session, and also change every time you get an EXPUNGE. They're basically an ordered list without gaps, so you do a memmove at the offset of the EXPUNGE each time you get an expunge.
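A toy model of that bookkeeping (hypothetical client-side code, not from any particular library):

```typescript
// Sequence numbers are just 1-based positions in a gap-free array of UIDs;
// an untagged EXPUNGE removes one slot and implicitly renumbers the rest.
const uids: number[] = [101, 105, 106, 240]; // seq 1..4 after SELECT

function onExpunge(seq: number): void {
  uids.splice(seq - 1, 1); // the "memmove": later messages shift down by one
}

onExpunge(2); // server sent "* 2 EXPUNGE"
// uids is now [101, 106, 240]; what was seq 3 (UID 106) is now seq 2.
```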
There are efforts like UIDONLY (RFC9586) to avoid having to keep that mapping at all, and there's OBJECTID (RFC8474) to let you cache a lot more even when UIDs are changed or when messages are moved between folders.
Realistically speaking, does any server ever rotate its UIDValidity?
You don't need major providers to support it; they support SMTP, and that's how messages are relayed. JMAP is just so you, the client, can fetch your mail from wherever you host it.
To be honest, I’m not sure why end-users would want JMAP for e-mail access.
It would be interesting if they do successfully roll out all of these additional RFC proposals providing a cohesive “groupware” protocol covering calendering, contacts, file shares, etc, we see notable server implementations, and interest is enough to drive client support.
That’s a lot of “ifs”.
People say things like that, and I wonder if I’ve just been living in a gilded tower of using Apple Mail with decent IMAP server implementations.
I’m also pretty familiar with the wire protocol and its implementation — it’s never struck me as particularly horrible.
A new protocol isn’t likely to solve the problem of poorly implemented clients and servers — e.g. Google doesn’t really care about good IMAP support, so they’re unlikely to care much about JMAP, either. They just want you to use their webapp.
Shameless plug for a client with true offline-first IMAP support:
https://marcoapp.io
Mail.app is what NeXT used internally, and Apple uses it to this day AFAIK. Steve Jobs historically paid a lot of attention to it and wasn't shy about weighing in on any changes.
Most of the complaints that I’ve heard about it seemed to stem from poor IMAP servers (e.g. Gmail), but it sounds like your knowledge in the space would be a lot more detailed and recent than mine, so I would be very interested in your thoughts.
I've written about my experience and motivations here:
https://marcoapp.io/blog/marco-an-introduction
Gmail does indeed _intentionally_ provide poor IMAP service. But the long and short of it is that Apple Mail simply isn't a first-class product. It's an afterthought.
For regular desktop software, I’m not sure that it’s really an improvement over existing protocols.
I’ve got a friend who’s been pitching me on building a new email client for years. “I’ll do it if we exclusively use JMAP.” “okay does that include Gmail and Apple/iCloud accounts?” “Nope.”
I could sort of see dual-supporting Gmail's proprietary API and JMAP, but unless the #2-5 competitors support it… what’s the point? (sorry to put on the pessimism hat)
P.S. ("New" Outlook already only connects to MS365 servers and then stores your credentials and data on Azure, while they proxy to your actual IMAP/SMTP server)
edit: we use it in very resource-constrained environments; the container version is too much overhead.
1. systemd timer
2. curl github api
3. if new release, fetch, verify checksum
4. update symlink
5. restart service
I don't think repackaging is actually easier here; for the main services of a system it's OK to skip the package manager.
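A rough sketch of steps 2-3 (assuming Node 18+ for fetch; the repo path and the local VERSION file location are hypothetical):

```typescript
// Check the latest GitHub release against the locally installed version.
import { readFile } from "node:fs/promises";

const res = await fetch(
  "https://api.github.com/repos/stalwartlabs/stalwart/releases/latest",
  { headers: { Accept: "application/vnd.github+json" } },
);
const { tag_name } = await res.json();

const current = (await readFile("/opt/stalwart/VERSION", "utf8")).trim();
if (tag_name !== current) {
  console.log(`new release ${tag_name} (have ${current})`);
  // steps 3-5: fetch tarball, verify checksum, update symlink, restart service
}
```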
This is not the case for all versions, but I've found it to be common enough that I have to read all of the release notes between point versions when upgrading.
It can definitely be improved.
https://stalw.art/docs/install/upgrade/
The goal is to stabilize the database layout/configuration format very soon so v1.0.0 can be released (hopefully before Q1/Q2 2026).
In that case the overhead is just a small amount of kernel accounting.
However, the Quadlets technology allows you to easily set up systemd, using systemd generators to initialize the containerized applications with podman and then monitor them for any crashes. Quadlets essentially do everything that docker compose does.
That aside, a container's main overheads aren't the compute or the memory. It's the storage overhead. You're essentially replicating the minimal Linux userland for each container, unless that's in a shared layer.
I am most curious
I guess contacts/calendar follows JMAP naturally when the clients already implement it, but that only applies in the 'already wrote a JMAP email client' case. Virtually any other case would rather stay with widely supported protocols?
I think we're about ten years past the point where "newer = better" was a reasonable starting presumption.
JMAP is better than IMAP because IMAP's design is too stateful, the IMAP/SMTP split allows misconfigurations where sending doesn't work, key extensions among its dozens are inconsistently supported, it doesn't have as many batched operations, etc. One could make an effort to improve IMAP - but the effort to do this consistently in server software would likely be comparable to adding JMAP, and the result worse...
OTOH, the new protocols intrude on areas that go far beyond email software (you're very unlikely to get support for these in older Androids/iOS/Windows even if the modern OSs ever consider them), and don't offer as much as JMAP offers over IMAP. The cost/benefit is worse. They may make sense for a JMAP email client but IMHO not elsewhere.
However, doesn't Stalwart already support WebDAV, though?
I looked into adding JMAP support to Thunderbird, but the client is so tied to the ideas and principles of IMAP that it needs surgical refactoring of many parts, and I don't love C++.
So instead in my spare time I am developing a JMAP only gnome email client, using many Stalwart libraries. Think Geary but Rust instead of Vala, GTK4 instead of GTK3 and JMAP instead of IMAP. It’s been mostly an excuse to play with Rust and gtk-rs and Relm4 (beautiful Elm inspired rust bindings for GTK4). Someday, it will be released.
Client support for a new protocol is never that quick, but I believe adoption will happen, at least outside of the big providers, who will never support it.
Here is a quote I found on https://thunderbird.topicbox.com/groups/planning/T437cd854af...:
> We have been experimenting with this for a while now and are using Stalwart as the software stack we are building upon. We have been working with the Stalwart maintainer to improve its capabilities (for instance, we have pushed hard on calendar and contacts being a core piece of the stack).
However, I am unfortunately unsure whether this is a good source or an official page.
The downsides of developing and testing this stuff as we were writing it up!
We've finished rewriting the objectid generation to give smaller, more sortable IDs (they're an inverse of nanosecond internaldates now, plus some extra magic on the low bits for IMAP appends to avoid clashes)... which we wanted in order to speed things up and reduce disk usage for the offline mode.
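(A toy reading of that scheme, purely illustrative and not Fastmail's actual code: subtracting the timestamp from a constant makes newer messages sort first, and random low bits keep same-nanosecond appends from colliding.)

```typescript
// Hypothetical sketch of an "inverse nanosecond internaldate" object id.
const CEILING = 2n ** 62n; // arbitrary constant > any realistic ns timestamp

function makeObjectId(internalDateNs: bigint): string {
  const lowBits = BigInt(Math.floor(Math.random() * 1024)); // 10 tiebreak bits
  const id = ((CEILING - internalDateNs) << 10n) | lowBits;
  return id.toString(36).padStart(14, "0"); // fixed width => sorts as text
}

makeObjectId(BigInt(Date.now()) * 1_000_000n); // ms -> ns for the sketch
```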
Next up is indeed updating to the latest spec on calendars and contacts. Files might take a bit longer, I really want to do some desktop clients for the files system, we have a really nice files backend at Fastmail which is only accessible via our interface or WebDAV right now.
The next, next big thing would be the Chatmail relays[1] supporting JMAP-based servers (right now it's Dovecot), and this new targeted push extension for faster notifications without battery drain on mobile. I can see how the Fastmail mobile client will benefit from this RFC as well (it's already incredibly battery efficient, thanks to the team).
[1] https://github.com/chatmail/relay
https://datatracker.ietf.org/doc/draft-ietf-jmap-calendars/
And Contacts was only 10 months ago.
https://www.rfc-editor.org/rfc/rfc9610.html
Can others confirm whether these problems are widespread? I get that these protocols are probably a pain to develop for, but given they are "robust, widely adopted and battle-tested", it seems that is probably a solved problem. It's better to have one standard that is used everywhere than to have to choose between two standards.
Always relevant: https://xkcd.com/927/
I haven’t been there in more than a decade. I really am curious what the response in Apple (and Google) is to this spec.
I researched what it would take to implement a full calendaring server once, and after reading all the RFCs, just backed away slowly from the whole idea and never thought about it again.
Nylas pricing has gotten better recently, but is still quite high though - at $1.50/connected account/month at scale, it's likely material to your per-user margin if it's part of your SaaS offering.
But if you have a use case where this is a no-brainer (like capturing/analyzing/building custom real-time UI around your internal sales team's emails) then it's remarkably powerful.