Why I Love OCaml (2023)
Posted about 2 months ago · Active about 2 months ago
mccd.space · Tech story · High profile
Tone: calm / mixed · Debate: 70/100
Key topics
OCaml
Functional Programming
Programming Languages
The post discusses the author's love for OCaml, a functional programming language; the comments explore various aspects of OCaml, its strengths and weaknesses, and comparisons with other languages.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2h after posting
Peak period: 116 comments in 0-6h
Avg / period: 20
Comment distribution: 160 data points
Based on 160 loaded comments
Key moments
- 01 Story posted: Nov 7, 2025 at 9:05 AM EST (about 2 months ago)
- 02 First comment: Nov 7, 2025 at 11:01 AM EST (2h after posting)
- 03 Peak activity: 116 comments in 0-6h (hottest window of the conversation)
- 04 Latest activity: Nov 10, 2025 at 2:53 PM EST (about 2 months ago)
ID: 45846517 · Type: story · Last synced: 11/20/2025, 8:18:36 PM
Because plenty of people have been shipping great projects in OCaml since it was released, it doesn't seem to be much of an issue to many.
I doubt OCaml will be surpassed soon. They just added an effect system in the multicore rewrite, so, all things considered, they seem to be pulling even further ahead.
Beginners face the following problems: there's multiple standard libraries, many documents are barely more than type signatures, and data structures aren't printable by default. Experts also face the problem of a very tiny library ecosystem, and tooling that's often a decade behind more mainstream languages (proper gdb support when?). OCaml added multicore support recently, but now there is the whole Eio/Lwt/Async thing.
I used to be a language nerd long ago. Many a fine hour spent on LtU. But ultimately, the ecosystem's size dwarfs the importance of the language itself. I'm sympathetic, since I'm a Common Lisp man, but I don't kid myself either: Common Lisp isn't (e.g.) Rust. I like hacking with a relic of the past, and that's okay too.
> there's multiple standard libraries
Scala has a far more fragmented ecosystem with Cats, Scalaz, ZIO and Akka. C++ and Java have a bunch of stdlib extensions like Boost, Guava, Apache Commons, etc.
> many documents are barely more than type signatures
Can be said of most of Java, Kotlin, Scala, Erlang etc etc. Just compiled javadocs, sometimes with a couple of unhelpful lines.
> data structures aren't printable by default
Neither are they in C++.
I think the real reason it's not popular is that there are languages that solve more or less the same systems-programming problems but look far more familiar to the average programmer who was raised on C++ and Java.
I had wanted to use OCaml since 2002, because it was a GC'd language with good performance, achieving a lot with relatively few lines of code. Being a language nerd, I was (am?) positively inclined toward the language. Yet there was always something that made it notably less pleasant to solve my current problem in than some other language.
If it had trouble getting traction with me, that's bad news.
Meanwhile the mainstream has progressed a lot since 2000. GC is the standard, closures are normal, pattern matching and destructuring are increasingly so. While HM-style type inference is not mainstream, local type inference is (and I'm no longer convinced that global type inference is the way). Algebraic data types aren't unusual anymore.
A few years back I just threw in the towel and went with Rust; this happened after I volunteered to improve OCaml's gdb support (including DWARF hell), which went nowhere. I wish Rust compiled faster, and a GC'd language is usually more productive for my problems, but in every other regard it stole what should have been OCaml's thunder. And when popular successor languages eventually appear, they'll do it even better.
(Rust has refcounting, but it's slow, and needing to handle cycles manually limits its usefulness.)
https://www.ponylang.io/ https://github.com/ponylang/ponyc
The Pony community is here:
https://ponylang.zulipchat.com/
Pony's garbage collection and runtime can yield performance faster than C/C++/Rust because the compiler eliminates data races at compile time, and thus no locks are needed.
It features implementations of Dmitry Vyukov's (www.1024cores.net) algorithms for work stealing and multi-producer single-consumer queues that need only a single atomic instruction to read. The designer, Sylvan Clebsch, is an ex-game-developer and ex-high-frequency-trading-infrastructure engineering lead; he knew what he was doing when he designed the language for high performance on multicore systems.
On the other hand, you do have to re-wire your thinking to think in terms of Actors and their state, and this is kind of hard to wrap one's head around if you are used to Go's CSP or the popular async/await approaches -- it is a somewhat different paradigm.
There is only one standard library, and it ships with the compiler.
> data structures aren't printable by default
So? That's the case with most languages. OCaml has a deriver to make types printable and a REPL that automatically prints values for testing.
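For anyone who hasn't seen that deriver in action, this is roughly the workflow; a minimal sketch, assuming the ppx_deriving package is installed and enabled as a preprocessor (e.g. `(preprocess (pps ppx_deriving.show))` in a Dune stanza):

```ocaml
(* [@@deriving show] asks the preprocessor to generate printers:
   here it produces [show_shape : shape -> string] and a
   formatter-based [pp_shape]. *)
type shape =
  | Circle of float
  | Rect of float * float
[@@deriving show]

let () =
  print_endline (show_shape (Circle 1.0));
  print_endline (show_shape (Rect (2.0, 3.0)))
```

In utop, `#require "ppx_deriving.show";;` should enable the same extension, and the toplevel already prints values of most types on its own.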
> tooling that's often a decade behind more mainstream languages
Opam is a fully featured package manager, Dune works fine, and Buck2 supports OCaml.
> proper gdb support when?
OCaml has had a rewindable debugger since approximately forever.
> OCaml added multicore support recently, but now there is the whole Eio/Lwt/Async thing.
Lwt was the default, and everyone agrees Eio is the future now that effects are there. Async is a Jane Street thing with pretty much no impact on the language outside of Jane Street.
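Since effects keep coming up in this thread, here is a minimal sketch of what an OCaml 5 effect handler looks like using the stdlib's Effect module; the Ask effect and the handler below are illustrative only, not taken from Eio or any other library:

```ocaml
(* Requires OCaml >= 5.0.  Declares an effect, performs it inside a
   computation, and handles it with Effect.Deep.match_with. *)
open Effect
open Effect.Deep

type _ Effect.t += Ask : string -> string Effect.t

let greet () =
  let name = perform (Ask "name") in
  "Hello, " ^ name ^ "!"

let () =
  let result =
    match_with greet ()
      { retc = (fun v -> v);   (* normal return *)
        exnc = raise;          (* re-raise exceptions *)
        effc =
          (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Ask _prompt ->
                (* Answer the effect and resume the suspended computation. *)
                Some (fun (k : (a, _) continuation) -> continue k "world")
            | _ -> None) }
  in
  print_endline result
```

Lightweight-thread schedulers like Eio build on this same mechanism, suspending a fiber at an effect and resuming its continuation later.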
Honestly, I maintain my initial point: OCaml's alleged frictions were always widely overstated.
When I was writing OCaml professionally 15 years ago, there was no Dune and no opam, and it was already fairly easy to use the language.
What applications are written in OCaml? All I can think of (which says more about me than it does about OCaml) is the original Rust compiler.
Even Haskell has Pandoc and Xmonad.
Generally, you want stuff where you have to build a fairly large core from scratch. Most programs out there don't really fit that too well nowadays. We tend to glue things together more than write from nothing.
For example, there is no OAuth2 client library for OCaml [1]
[1] https://ocaml.org/docs/is-ocaml-web-yet
So that does seem to be a good use-case for the language.
I don't build HFTs and my compilers are just for fun. None of my day jobs have ever been a situation where the smaller ecosystem and community of OCaml was offset by anything OCaml did better than the selected options like .NET, Java, Go, Rails, C or anything else I've touched. Heck, I've written more Zig for an employer than OCaml, and that was for a toy DSL engine that we never ended up using.
Yeah, it's more just extensions to support their use cases at scale. Think of it as bleeding-edge OCaml: once they work out the kinks/concerns, they'll get merged back into the language, or, if something remains ultra-specific, it'll stay in OxCaml.
Not a complete version of their own, lol.
Python gets forked in other investment banks as well. I wouldn’t say that is evidence of any deficiencies, rather they just want to deal with their own idiosyncrasies.
See https://calpaterson.com/bank-python.html
The main user has been writing extensions to the compiler that they test before pushing for integration, as they have done for the past twenty years or so. They have been publishing these versions since last year.
Hardly a failure and certainly not something mandatory to keep things from failing over. Your initial comment is extremely disingenuous.
The latter going much further than most mainstream languages.
There might be some power in attracting all the people who happen to love OCaml, if there are enough competent people to staff your company, but that's more a case of cornering a small niche than picking it on technical merits.
Regarding attracting talent, they've said they don't care about existing knowledge of OCaml as the language part is something they train staff on anyway. Their interviews are brutal from what I recall. I could be an OCaml expert and have no chance of making it through an interview without equal talent in high performance fintech and relevant mathematics.
Unless their hiring process has changed in the past few years, if you're a dev they're not hiring you for your financial skills, just general problem solving and software development ability. It is (was?) the usual Google-style algorithms/data structures rigamarole, but somewhat more challenging.
Like I said, my information might be a hair out of date, but it's first-hand.
Everybody in the company is expected to be able to sling numbers in text on a computer super efficiently.
[1]: https://quoteinvestigator.com/2016/03/01/velvet/
https://news.ycombinator.com/item?id=45646520#45752905
I know there are a great many Polish people in the world, but why does it matter so much in this case? They could have been any nationality, even French!
Why must you call me out in this way? How did I wrong you, that I deserve this?
I would never.
If people start using the term AI, we better be living in I, Robot. Not whatever the hell this is.
Tangential rant. Sorry.
If one can stand a language that is just a little bit older, there is always Standard ML. It is like OCaml, but perfect!
While it's not yet a standard, nearly all Standard ML implementations support what has become known as "Successor ML" [0]. A large subset of Successor ML is common to SML/NJ, MLton, Poly/ML, MLKit and other implementations.
That includes record update syntax, binary literals, and more expressive patterns, among other fixes for deficiencies in Standard ML.
For me the two big remaining issues are:
1) There's only limited Unicode support in both the core language and the standard library. This is a big issue for many real-world programs, these days including compilers, for which SML is otherwise a wonderful language.
2) The module system is a "shadow language" [1] which mirrors parts of SML but is less expressive: modules cannot be treated as first-class values in the program. Also, if you define infix operators in a module, their fixity isn't exported along with the function type. (A little annoyance that gets me every time I am inclined to write Haskell-style code with lots of operators. Though maybe that's just another hint from the universe that I shouldn't write code like that.) Of course, the fix for that would be a fundamentally different language, not a revised SML.
[0] http://mlton.org/SuccessorML
[1] https://gbracha.blogspot.com/2014/09/a-domain-of-shadows.htm...
I certainly agree that SML isn't really a production language, though.
Ironically probably because it had the "O"bjects in it, "which was the style of the time"... something that has since dropped off the trendiness charts.
{Ecosystem, Functors} - choose 1
F# is not stagnant, thankfully; it gets updates with each new version of .NET (though I haven't checked what is coming in .NET 10), but I don't recall anything on the level of the above OCaml changes in years.
Unfortunately, lots of the more advanced stuff seems to be blocked on the C# team making a decision. They want a smooth interop story.
But F# remains a solid choice for general purpose programming. It's fast, stable and .NET is mainstream.
C# has lots of anti-features that F# does not have.
There's nothing inherently wrong with using Jane Street's stdlibs if you miss the goodies they provide, but be aware the API suffers breaking changes from time to time and they support fewer targets than regular OCaml. I personally stopped using them, and use a few libraries from dbunzli and c-cube instead to fill the gaps.
Nobody wants to effectively learn a lisp to configure a build system.
I'd like to hear some practical reasons for preferring OCaml over F#. [Hoping I don't get a lot about MS & .NET which are valid concerns but not what I'm curious about.] I want to know more about day to day usage pros/cons.
Bigger native ecosystem. C#/.NET integration is a double-edged sword: a lot of libraries, but the libraries are not written in canonical F#.
A lot of language features F# misses, like effect handlers, modules, GADTs, etc.
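To make one of those concrete: a small sketch of a GADT in OCaml, a typed expression tree whose evaluator needs no runtime tag checks (the type and names below are illustrative only):

```ocaml
(* The type parameter records what evaluating each node produces,
   so ill-typed trees like [Add (Bool true, Int 1)] don't compile. *)
type _ expr =
  | Int  : int -> int expr
  | Bool : bool -> bool expr
  | Add  : int expr * int expr -> int expr
  | If   : bool expr * 'a expr * 'a expr -> 'a expr

(* The [type a.] annotation lets each branch refine the return type. *)
let rec eval : type a. a expr -> a = function
  | Int n -> n
  | Bool b -> b
  | Add (l, r) -> eval l + eval r
  | If (c, t, e) -> if eval c then eval t else eval e

let () =
  Printf.printf "%d\n" (eval (If (Bool true, Add (Int 1, Int 2), Int 0)))
```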
As for missing language features, they can also be a double-edged sword. I slid down that slippery slope in an earlier venture with Scala. (IIRC mostly implicits and compile times).
Meanwhile, OCaml got rid of its global lock, got a really fast-compiling native toolchain with stable and improving editor tooling, and has a cleaner language design with some really powerful features unavailable to F#, like modules/functors, GADTs, effects or preprocessors. It somehow got immutable arrays before F#!
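For anyone who has only used modules as namespaces, a brief sketch of what a functor buys you; the module names below are illustrative, while Map.Make, Int and Option come from the stdlib:

```ocaml
(* A functor is a module parameterized by another module; here, a tiny
   multiset built on top of any ordered element type. *)
module type ORDERED = sig
  type t
  val compare : t -> t -> int
end

module MakeMultiset (Elt : ORDERED) = struct
  module M = Map.Make (Elt)

  type t = int M.t

  let empty : t = M.empty

  let add x (s : t) : t =
    M.update x (function None -> Some 1 | Some n -> Some (n + 1)) s

  let count x (s : t) = Option.value ~default:0 (M.find_opt x s)
end

(* The stdlib's Int module satisfies ORDERED, so instantiation is a one-liner. *)
module IntMultiset = MakeMultiset (Int)

let () =
  let s = IntMultiset.(empty |> add 3 |> add 3 |> add 5) in
  Printf.printf "count 3 = %d\n" (IntMultiset.count 3 s)
```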
F# still has an edge on some domains due to having unboxed types, SIMD, better Windows support and the CLR's overall performance. But the first two are already in the OxCaml fork and will hopefully get upstreamed in the following years, and the third is improving already, now that the opam package manager supports Windows.
If a language makes "unboxed types" a feature, a specific distinction, and has to sell "removing the global lock" as a massive breakthrough rather than table stakes from 1.0, it can't possibly be compared to F# in a favourable light.
Let's not dismiss that their solution to remove the global lock has been both simpler than Python's (better funded?) ongoing work and backwards compatible; and the multicore support came with both a memory model with strong guarantees around data races, and effect handlers which generalize lightweight threads.
I agree that lack of unboxed types is a fatal flaw for several domains (Worse, domains I care about! I usually consider Rust/Go/.NET for those.) But when the comparison with OCaml is favourable to F#, the gap isn't much wider than just unboxed types. F# has an identity crisis between the ML parts and the CLR parts, its C# interop is decaying as the ecosystem evolves, and newer .NET APIs aren't even compatible with F# idioms (here's a .NET 6+ example [1]).
[1]: https://sharplab.io/#v2:DYLgZgzgNAJiDUAfA9gBwKYDsAEBlAnhAC7o...
Where's the equivalent to https://hackage-content.haskell.org/package/vector-0.13.2.0/... ?
These are usually defined with the [<Struct>] attribute over a regular type definition, or using the struct keyword before tuple/anonymous record types. Also, the original 'T option type now has a value-type successor 'T voption.
1. Interop with C# is great, but interop for C# clients using an F# library is terrible. C# wants more explicit types, which can be quite hard for the F# authors to write, and downright impossible for C# programmers to figure out. You end up maintaining a C#-shell for your F# program, and sooner or later you find yourself doing “just a tiny feature” in the C# shell to avoid the hassle. Now you have a weird hybrid code base.
2. The .NET ecosystem is comprehensive; you've got state-of-the-art web app frameworks, ORMs, what have you. But it's all OOP, state abounds, and referential equality is the norm. If you want to write OCaml/F#, you don't want to think like that. (And once you've used discriminated unions, C# error handling seems like it belongs in the 1980s.)
3. The Microsoft toolchain is cool and smooth when it works, very hard to wrangle when it doesn't. Seemingly simple things, like copying static files to output folders, require semi-archaic invocations in an XML file. It's about mindset: if development for you is clicking things in a GUI, Visual Studio is great (until it stubbornly refuses to do something); if you want a more Unix/CLI approach, it can be done, and VS Code will sort of help you, but it's awkward.
4. Compile-times used to be great, but are deteriorating for us. (This is both F# and C#.)
5. Perf was never a problem.
6. Light syntax (indentation defines block structure) is very nice until it isn't; then you spend 45 minutes figuring out how to indent record updates. (Incidentally, "nice-until-it-isn't" is a good headline for the whole .NET ecosystem.)
7. Testing is quite doable with .NET frameworks, but awkward. Moreover, you'll want something like QuickCheck and maybe fuzzing; they exist, but again, awkward.
We've been looking at OCaml recently, and I don't buy the framework/ecosystem argument. On the contrary, all the important stuff is there, and sometimes seems easier to use. Having written some experimental code in OCaml, I think the language ergonomics are better. It sort of makes sense: the OCaml guys have had 35 years or so to make the language /nice/. I think they succeeded; at least, writing it feels, somehow, much more natural and much less inhibited than writing F#.
Whether those features are more valuable to you than the ability to tap into .NET libraries depends largely on what you're doing.
OCaml did become popular, but via Rust, which took the best parts of OCaml and made the language more imperative feeling. That's what OCaml was missing!
It has no dogmatic inclination towards functional. It has a very pragmatic approach to mutation.
I'm not saying that Rust feels like OCaml, as some are interpreting; I said Rust is more imperative-feeling. They're not the same. The reason Rust has had success bringing these features to the mainstream where OCaml has not, I believe, is that Rust does not describe itself as a functional language, whereas OCaml does, right up front. Therefore, despite Rust having a reputation for being difficult, new learners are less intimidated by it than by something calling itself "functional". I see it all the time. By the time they learn Rust, they are ready to take on a language like OCaml, because they've already learned some of the best parts of that language via Rust.
Note my comment about their similarities is not at the level of borrow checkers and garbage collectors.
The similarities are fairly superficial, actually. It's just that Rust is less behind the PL forefront than the languages people are used to, and has old features, like variants, which look impressive when you discover them.
There is little overlap between what you would sanely use Rust for and what you would use OCaml for. It's just that, weirdly, people use Rust for things it's not really suited for.
I personally love ML languages and would be happy to keep developing in them, but the ecosystem support can be a bit of a hassle if you aren't willing to invest in writing and maintaining libraries yourself.
OCaml has some high profile use at Jane Street which is a major fintech firm. Haskell is more research oriented. Both are cool, but wouldn't be my choice for most uses.
Languages have features/constructs. It's better to look at what those are. And far more importantly: how they interact.
Take something like subtyping for instance. What makes this hard to implement is that it interacts with everything else in your language: polymorphism, GADTs, ...
Or take something like garbage collection. Its presence or absence has a large say in everything done in said language. Rust is uniquely not GC'd, but Go, OCaml and Haskell all are. That by itself creates some interesting behavior. With a GC, if we hand something to a process and get something back, we don't care whether the thing we handed over got changed. But in Rust, we do. We can avoid allocations and keep references if the process didn't change the thing after all. This permeates all of the language.
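A small sketch of that aliasing point in OCaml (the type and function are illustrative): under a GC, the caller keeps its reference to a mutable value it handed off and simply observes whatever happened to it, with no ownership bookkeeping.

```ocaml
(* Handing a mutable value to another function doesn't transfer
   ownership; caller and callee just share the same heap object. *)
type counter = { mutable hits : int }

let bump (c : counter) : counter =
  c.hits <- c.hits + 1;
  c

let () =
  let c = { hits = 0 } in
  let c' = bump c in
  assert (c == c');                   (* same object, freely aliased *)
  Printf.printf "hits = %d\n" c.hits  (* the caller sees the mutation *)
```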
It has basically all of the stuff about functional programming that makes it easier to reason about your code & get work done - immutability, pattern matching, actors, etc. But without monads or a complicated type system that would give it a higher barrier to entry. And of course it's built on top of the Erlang BEAM runtime, which has a great track record as a foundation for backend systems. It doesn't have static typing, although the type system is a lot stronger than most other dynamic languages like JS or Python, and the language devs are currently adding gradual type checking into the compiler.
The current tool to wrap your bytecode with a VM so that it becomes standalone is Burrito[1], but there's some language support[2] (I think only for the arch that your CPU is currently running? contra Golang) and an older project called Distillery[3].
1: https://github.com/burrito-elixir/burrito
2: https://hexdocs.pm/mix/Mix.Tasks.Release.html
3: https://hexdocs.pm/distillery/home.html
I always wanted to learn Elixir but never had a project where it could show its strengths. Good old PHP works perfectly fine.
Also corporations like their devs to be easily replaceable which is easier with more mainstream languages, so it is always hard for "newer" languages to gain traction. That said I am totally rooting for Elixir.
I know of a Haskell shop and everybody said they’d have a hell of a time finding people… but all them nerds were (and are) tripping over themselves to work there because they love Haskell… though some I’ve talked to ended up not liking Haskell in production after working there. There seems to be a similar dynamic, if a bit less extreme, in Elixir shops.
But even bog-standard business processes eventually find the need for data processing, crypto and parsing: the use cases where people write Elixir NIFs. That is why, for example, you have projects like html5ever_elixir for parsing HTML/XML. Another use case is crypto: there are NIFs for several crypto libraries. Data processing: there are NIFs for Rust's Polars.
From a technical perspective, this is the overwhelming majority of what makes the Net happen. BEAM is great for that and many other things, like extremely reliable communication streams.
Use the right tool for the job. Rust sucks for high-level systems automation, but that doesn't make it any less useful than bash. It's all about use cases, and Elixir fits many common use cases nicely, while providing some nice-to-haves that people often ask for in other common web dev environments.
If you use type guards correctly the performance can be surprisingly good. Not C/C++/Rust/Go/Java good, but not too far off either. This is definitely a corner-case though.
[1] Tackling the Awkward Squad: monadic input/output, concurrency, exceptions, and foreign-language calls in Haskell
Talk about "immutable by default". Talk about "strong typing". Talk about "encapsulating side effects". Talk about "race free programming".
Those are the things that programmers currently care about. A lot of current Rust programmers are people who came there almost exclusively for "strong typing".
Data is immutable, and that's much more important than whether local variables can be modified, imo.
"Type Providers" are an example of such negligence btw, it's something from the early 2010's that never got popular even though some of its ideas (Typed SQL that can generate compile-time errors) are getting traction now in other ecosystems (like Rust's SQLx).
My team used SQL providers in an actual production system, combined with Fable (to leverage F# on the front end), and people always commented on how our demos had literally zero bugs; maybe it was too productive for our own good.
In 2025, Elixir is a beautiful system for a niche that infrastructure has already abstracted away.
The throughput loss stems from a design that requires excessive communication. But such a design will always be slow, no matter your execution model. Modern CPUs simply don't cope well if cores need to send data between them. Neither does a GPU.
The grand design of BEAM is that you copy data rather than pass it by reference. A copy operation severs a data dependency by design. Once the copy is handed somewhere, that part can operate in isolation. And modern computers are far better at copying data around than people think. The exception is big-blocks-of-data(tm), but binaries are read-only in BEAM and thus not copied.
Sure, if you set up a problem which requires a ton of communication, then this model suffers. But so does your GPU if you do the same thing.
As Joe Armstrong said: our webserver is a thousand small webservers, each serving one request.
Virtually none of them have to communicate with each other.
Do you mean Kubernetes?
My mental model of Erlang and Elixir is programming languages where the qualities of k8s are pushed into the language itself. On the one hand this restricts you to those two languages (or other ports to BEAM); on the other hand it allows you to get the kinds of failover, scaling, and robustness of k8s at a much more responsive and granular level.
That's like complaining that unsafe{} breaks Rust's safety guarantees. It's true in some sense, but the breakage is in a smaller and more easily tested place.
What?
But if you write without the escape hatches in both languages, in my experience the safety is exactly the same and the cost of that safety is lower in TypeScript.
A very common example I've encountered is values in a const array which you want to iterate on and have guarantees about. TypeScript has a great idiom for this:
```
const arr = ['a', 'b'] as const;
type arrType = typeof arr[number];

for (const x of arr) {
  if (x === 'a') {
    // ...
  } else {
    // Type checker knows x === 'b'
  }
}
```
I haven't experienced the same with OCaml.
I'm still surprised it can do so many things so well, so fast.
136 more comments available on Hacker News