Flow (github.com)
This is more Node.js-like communication than Erlang.
BTW, Erlang does not implement CSP fully. Its interprocess communication is TCP-based in the general case, and because of this it is faulty.
> Its interprocess communication is TCP-based in the general case, and because of this it is faulty.
What? It's faulty because of TCP? No, in Erlang it is assumed that communication can be faulty for a lot of reasons, so you have to program to deal with that, and the standard library gives you the tools to do so.
This means that Erlang does not implement CSP; it implements something else.
Again, the general case of communication between Erlang processes includes communication between processes on different machines.
Specific evidence?
> Its interprocess communication is TCP-based in the general case
No, it is not. Only between machines is that true.
> and because of this it is faulty.
LOL, no. Why are you rolling with "speaking a whole lot of BS based on ignorance" today?
On the other hand, I now understand that one impediment to Elixir adoption is apparently "people repeating a lot of bullshit misinformation about it".
>> Its interprocess communication is TCP-based in the general case
> No, it is not. Only between machines is that true.
It is true for communication between two VMs on the same machine, isn't it? The general case includes same-VM processes, different-VM processes, and also different VMs on different machines.
> Why are you rolling with "speaking a whole lot of BS based on ignorance" today?
TCP is unreliable: https://networkengineering.stackexchange.com/questions/55581... That was acknowledged by Erlang's developers before 2012. I remember that the ICFP 2012 presentation about Cloud Haskell mentioned that "Erlang 2.0" apparently acknowledged TCP unreliability and tried to work around it.
Erlang circa 2012 was even less reliable than the TCP on which its interprocess communication was based.
Namely, TCP allows any prefix of the messages m1, m2, m3, ... to be received. But Erlang circa 2012 allowed m1, m3, ... to be received, dropping m2.
That may not be the case today, but it was the case about ten years ago.
Tigris uses it: https://www.tigrisdata.com/blog/building-a-database-using-fo...
A good collection of papers, blog posts, talks, etc.: https://github.com/FoundationDB/awesome-foundationdb
This "Who is hiring" post for Tesla mentions FoundationDB [0].
Firebolt [1] uses it.
FoundationDB is used at Datadog [2].
[0] https://news.ycombinator.com/item?id=26306170
[1] https://www.firebolt.io/blog/decomposing-firebolt-transactio...
("Legacy" products have a negative growth rate.)
I've never spent less time thinking about a data store that I use daily.
1: https://apple.github.io/foundationdb/configuration.html#choo...
The best system for this I've ever used was Thrift, which properly abstracts data formats, transports and so on.
https://thrift.apache.org/docs/Languages.html
Unfortunately Thrift is a dead (AKA "Apache") project, and it doesn't seem like anyone has tried to do this since. It probably didn't help that there are so many gaps in that support matrix. I think "Google have made a thing! Let's blindly use it!" also contributed to its downfall, despite Thrift being better than Protobuf (it even supports required fields!).
Actually I just took a look at the Thrift repo and there are a surprising number of commits from a couple of people consistently, so maybe it's not quite as dead as I thought. You never hear about people picking it for new projects though.
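For anyone who hasn't used it, here's roughly what that separation of transports and protocols looks like from Python. This is only a sketch: the user.thrift IDL, the UserService name, the get_name method, and the host/port are made up for illustration.

    # Hypothetical user.thrift, compiled with `thrift --gen py user.thrift`:
    #
    #   service UserService {
    #     string get_name(1: i64 user_id)
    #   }
    from thrift.transport import TSocket, TTransport
    from thrift.protocol import TBinaryProtocol, TCompactProtocol

    from user import UserService  # generated code (hypothetical module name)

    # Transport and protocol are independent layers: the same generated client
    # runs over a buffered or framed transport with a binary, compact or JSON
    # encoding -- you swap the pieces, not the service code.
    socket = TSocket.TSocket("localhost", 9090)
    transport = TTransport.TFramedTransport(socket)           # or TTransport.TBufferedTransport(socket)
    protocol = TCompactProtocol.TCompactProtocol(transport)   # or TBinaryProtocol.TBinaryProtocol(transport)

    client = UserService.Client(protocol)
    transport.open()
    print(client.get_name(42))
    transport.close()

The server side composes the same way, which is what makes that language/transport support matrix possible at all.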
Wanted to do unspeakable and evil things to the people responsible for choosing it, as well as its authors, the last time I worked on a project that used Thrift extensively.
I recall threatening that I'd rewrite everything with ONC-RPC out of pure pettiness and a wish to see the network stack not go crazy.
Actually never heard of Thrift until today, thanks for the insight :)
As an interesting historical note, Thrift was inspired by Protobuf.
> We wanted FoundationDB to survive failures of machines, networks, disks, clocks, racks, data centers, file systems, etc., so we created a simulation framework closely tied to Flow. By replacing physical interfaces with shims, replacing the main epoll-based run loop with a time-based simulation, and running multiple logical processes as concurrent Flow Actors, Simulation is able to conduct a deterministic simulation of an entire FoundationDB cluster within a single-thread! Even better, we are able to execute this simulation in a deterministic way, enabling us to reproduce problems and add instrumentation ex post facto. This incredible capability enabled us to build FoundationDB exclusively in simulation for the first 18 months and ensure exceptional fault tolerance long before it sent its first real network packet. For a database with as strong a contract as the FoundationDB, testing is crucial, and over the years we have run the equivalent of a trillion CPU-hours of simulated stress testing.
[1] https://pierrezemb.fr/posts/notes-about-foundationdb/#simula...
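To make the "time-based simulation" idea a bit more concrete, here is a minimal single-threaded sketch in Python. It is not FoundationDB's simulator and not Flow, just the shape of the trick the quote describes: a virtual clock instead of epoll, one seeded RNG for all nondeterminism, and a network shim, so the same seed replays the same run.

    import heapq
    import random

    class SimLoop:
        """Time-based run loop: pop the next scheduled event and jump the
        virtual clock forward instead of blocking on real I/O."""
        def __init__(self, seed=0):
            self.now = 0.0
            self.rng = random.Random(seed)  # all randomness flows through one seeded RNG
            self._queue = []                # (fire_time, seq, callback)
            self._seq = 0                   # tie-breaker keeps ordering deterministic

        def call_at(self, when, callback):
            heapq.heappush(self._queue, (when, self._seq, callback))
            self._seq += 1

        def run(self):
            while self._queue:
                when, _, callback = heapq.heappop(self._queue)
                self.now = when             # advance virtual time; no real sleeping
                callback()

    class SimNetwork:
        """Shim standing in for the physical network: delivery latency (and, in
        a real simulator, drops, duplicates and partitions) comes from the
        loop's seeded RNG, so a rare failure can be replayed exactly."""
        def __init__(self, loop):
            self.loop = loop

        def send(self, process, message):
            delay = self.loop.rng.uniform(0.001, 0.050)
            self.loop.call_at(self.loop.now + delay, lambda: process.receive(message))

    class EchoProcess:
        """A logical process, standing in (very loosely) for a Flow actor."""
        def __init__(self, name, loop):
            self.name, self.loop = name, loop

        def receive(self, message):
            print(f"t={self.loop.now:.3f}s {self.name} got {message!r}")

    loop = SimLoop(seed=42)                 # same seed => identical schedule every run
    net = SimNetwork(loop)
    a, b = EchoProcess("A", loop), EchoProcess("B", loop)
    for i in range(3):
        net.send(a, f"ping {i}")
        net.send(b, f"ping {i}")
    loop.run()

The real thing of course also swaps in simulated disks, clocks and process failures, and runs the actual database code on top, but the determinism comes from exactly this kind of single-threaded, seeded scheduling.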
But I wonder if this can be a better abstraction than async. (And whether I can build something like this in existing Rust.)