Elixir 1.19
Posted 3 months ago · Active 2 months ago
elixir-lang.org · Tech story · High profile
Sentiment: supportive/positive · Debate: 40/100
Key topics: Elixir, Programming Languages, Type Systems
The release of Elixir 1.19 brings new features and improvements, including progressive introduction of automated type checking, sparking discussion among developers about the language's design and ecosystem.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 24m · Peak period: 60 comments in 12-24h · Avg / period: 13.2
Comment distribution: 145 data points (based on 145 loaded comments)
Key moments
- Story posted: Oct 16, 2025 at 3:31 AM EDT (3 months ago)
- First comment: Oct 16, 2025 at 3:55 AM EDT (24m after posting)
- Peak activity: 60 comments in 12-24h (hottest window of the conversation)
- Latest activity: Oct 23, 2025 at 5:52 AM EDT (2 months ago)
ID: 45602428 · Type: story · Last synced: 11/20/2025, 6:45:47 PM
So many examples of programming languages have huge breaking changes between versions that end up creating a split in the ecosystem that takes years to resolve.
Thankfully José has been very clear about Elixir being done since at least 2018. The language is stable and the language/core foundation is not changing anymore.
https://www.youtube.com/watch?v=suOzNeMJXl0
Truly outstanding work and stewardship.
Python 3 was really, really needed to fix things in 2. Hence 2 became 3. They managed it pretty well, vaguely similar to Go, with automated update tools and compatibility-ish layers. It had its speed bumps and breakages as not everything went smoothly.
OTOH: Ruby 3 went the wrong way with types in separate files and fragmentation of tools. And that's not to mention having to opt in with boilerplate to change how String literals work. Or: gem signing exists but is optional, not centrally managed, and little used. Or: Ruby Central people effectively stole some gems because Shopify said so. PS: many years ago Hiroshi Shibata blocked me from all GH Ruby contributions for asking a clarifying question in an issue, for no reason. It seemed aggro, unwarranted, and abrupt. So the rubygems repository fragmentation drama seems like the natural conclusion of unchecked power abuse lacking decorum and fairness, so I don't bother with Ruby much these days because Rust, TS, and more exist. When any individual or group believes they're better than everyone else, conflict is almost certainly inevitable. No matter how "good" a platform is, bad governance with unchecked conduct will torpedo it. PSA: Seek curious, cooperative, and professional folks with mature conflict-resolution skills.
It's a good idea™ to think deeply and carefully, and to experiment with language tool design in the real world, before inflicting permanent, terrible choices rather than net-better but temporarily painful ones. PSA: Please be honest, thoughtful, and clear, and communicate changes in advance so breakage can be avoided or minimized, inflicting the least net pain on all users over time.
Honestly, I hope more development goes into making Phoenix/Elixir/OTP easier, more complete, more expressive, more productive, more testable, and more performant to the point that it's a safe and usable choice for students, hobbyists, startups, megacorps, and anyone else doing web, non-web, big data, and/or AI stuff.
Plug for https://livebook.dev, an app that brings Elixir workbooks to a desktop near you. And https://exercism.org/tracks/elixir
Although, what parts of Elixir itself are rough or missing creature comforts? I generally feel it's stable and fine, but I admittedly haven't written Elixir code in a couple of years, sadly.
I mean this describes every full stack web framework right? Like sure if the underlying language doesn't have macros or macro-like tools that limits how perverted the syntax can get but the line between "DSL" and "API" gets really blurry in all of these massive frameworks.
Wherever rails or phoenix has macro-defined syntax to handle a specific task, laravel or whatever will have a collection of related functions that need to be used in very specific ways to accomplish the same thing. Whether this collection is a "class" with an "api" or whether it is a "language" defined around this "domain" you will have the abstraction and the complexity.
Having a preference for one approach of managing this abstraction & complexity seems fine but "a collection of DSLs" is pretty much what a web framework is so that can't be the problem here.
It's kind of the standard way to paper over the protocol grit of HTTP and make people able to quickly pump out fresh plumbing between outbound socket and database.
You mean in the sense that the language's built-in syntax and available abstractions get abused so much that it approximates a DSL?
With macros, even language servers may need customization if they introduce new syntax. The code that runs doesn't exist until it runs, so you can't see it ahead of time.
This doesn't sound like too big a problem if you're familiar with the tooling already, but trying to figure out where some random method comes from in a rails code base when you're new to Ruby is somewhere between a nightmare and impossible without debugging and using the repl to tell you where the source is.
React has a JSX macro, and I love using it, so there's definitely room for them. There is a world of difference in developer experience when macros are used versus when not, however, and it is wrong to say that it is all the same.
The idea that Phoenix is also mostly macros does not hold in practice. Last time this came up, I believe less than 5% of Phoenix's public API turned out to be macros. You get this impression because the initial skeleton it generates contains the endpoint and the router, which are macro heavy, but once you start writing the actual application logic, your contexts, your controllers, and your templates are all regular functions.
No, but the framework does push you into using them. A good example is the `use MyAppWeb` pattern: that's a macro that nests other macros. The good news is that you can pretty much excise it and everything works fine, and LLMs have no problem with it either! (I think they slightly prefer it.)
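To make that nesting concrete, here is a hedged sketch of the pattern (the module names and the injected import are stand-ins for what `mix phx.new` actually generates, which pulls in Phoenix modules instead):

```elixir
defmodule MyAppWeb do
  def controller do
    quote do
      # A real app injects Phoenix here, e.g.:
      #   use Phoenix.Controller
      #   import Plug.Conn
      import String, only: [upcase: 1]
    end
  end

  # `use MyAppWeb, :controller` calls this macro, which looks up the
  # matching quoted block above and splices it into the caller.
  defmacro __using__(which) when is_atom(which) do
    apply(__MODULE__, which, [])
  end
end

defmodule PageController do
  use MyAppWeb, :controller

  # `upcase/1` is available only because the macro injected the import.
  def shout(msg), do: upcase(msg)
end

IO.puts(PageController.shout("hello"))
```

The indirection through `__using__/1` is exactly why "where does this function come from?" is harder to answer in macro-heavy entry points than in plain modules.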
A few cognitive pain points with Phoenix macros:
Plug: (love it dearly) but it's a bit confusing that it creates a `conn` variable out of whole cloth. A minor complaint; worth it, otherwise.
Phoenix.Router: is a plug but isn't quite a plug.
Anyways, that's it! The rest is ~fabulous. I think finding a framework where you have only two minor complaints is a blessing. Remember how ActiveRecord automagically pluralized tables for you?
A Conn just passes through a pipeline of functions: the initial Conn struct is created at request time and handed to each function in the pipeline in turn.
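A minimal sketch of that idea with no Phoenix dependency (MiniConn and the two stage functions are made up; real plugs take a `%Plug.Conn{}` plus options and return a conn in the same shape):

```elixir
defmodule MiniConn do
  # A bare-bones stand-in for %Plug.Conn{}: just a struct that each
  # pipeline stage receives and returns.
  defstruct assigns: %{}, halted: false
end

put_user = fn conn -> %{conn | assigns: Map.put(conn.assigns, :user, "ada")} end
authorize = fn conn -> if conn.assigns[:user], do: conn, else: %{conn | halted: true} end

# The "pipeline" is nothing more than function application in order.
conn = Enum.reduce([put_user, authorize], %MiniConn{}, fn plug, acc -> plug.(acc) end)

IO.inspect(conn.assigns.user)
```

There is no magic in the data flow itself; the macros in Plug/Phoenix only generate this kind of threading for you.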
> I believe less than 5% of Phoenix' public API turned out to be macros.
The idea may still be right, but I'm curious if that addresses the majority of the public API that users are greeted with. I have unfortunately not written Elixir in a few years (cries), and I've never fully grokked Phoenix, so perhaps I'm still wrong.
I don't know how you can say this honestly - it was turbulent and fraught with trouble and angst. It most certainly was NOT handled well.
> Honestly, I hope more development goes into making Phoenix/Elixir/OTP easier, more complete, more expressive, more productive, more testable, and more performant to the point that it's a safe and usable choice for students, hobbyists, startups, megacorps, and anyone else doing web, non-web, big data, and/or AI stuff.
Seriously, this has been the case all along. It's a great fit for AI, web (Phoenix), non-web (Nerves), students (PragStudio), hobbyists (hi), and megacorps (Discord, Bleacher Report).
What do you mean it's not testable, productive, expressive enough? Do you mean the entire elixir community is just fiddling about with unsafe software?
This comment seems just like a giant ragebait.
I usually find the Erlang/OTP upgrades to be a bit more problematic compatibility-wise.
So I'm often on the latest Elixir but one Erlang/OTP version behind, because I wait a few months for all the kinks to be worked out.
I can only think of 2: python 3 and perl 6.
Those two were very traumatic so it's not surprising it feels like more.
There was a rails upgrade around that time that was similarly painful, at least in the humongous rails app I was working in.
Also breaking changes do happen, see list of removed methods
https://docs.oracle.com/en/java/javase/17/migrate/removed-ap...
On paper, it really was just a few changes. In practice, it forced a massive transitive dependency and technical debt cleanup for many companies.
It's certainly the case that languages need to be championed by competent IDE writers, otherwise they fail to scale. You can't have 50 devs all using neovim, and only neovim, without making a huge gigantic mess. Large projects can sustain a few brilliant people working with one hand tied behind their back, but not everyone.
I tried a few times to check in on it, but I failed to find something that motivated me to continue.
But as I said above, I realized long ago that languages without IDEs by and large falter in the long term (that's why I'm currently concerned about JetBrains needing a buggy plugin to do Elixir), so JetBrains being behind it added a lot of gravitas.
And after fighting with Larry Ellison for a bit, Android phones moved to Kotlin to get around the lawyers.
The issue for me is that Scala design lacks focus. They say yes to too many features.
Full of Least Surprise violations, and just far too goddamned big. Did 3 try to pare that back into something reasonable?
See all platforms that have their identity tied with a specific language, the platform's language always has a guaranteed future as long as the platform continues to be industry relevant.
The others on top, come and go.
This caused quite a lot of work on the apps I worked on.
Ruby 1.8 to 1.9 was a major version change in the semver sense; Ruby wasn't using semver before, IIRC, 2.1.0. It was using a scheme that was loosely like semver with an extra prefix number: Ruby minor versions were equivalent to semver major (and also had a less-stable implication for odd numbers, more stable for even), Ruby "tiny" versions were equivalent to semver minor, and Ruby still had patch versions.
While C#, F#, VB, and C++/CLI were kept compatible, it doesn't help when the library stuff you want to call isn't there any longer.
C++: removal of exception specifiers and the GC API.
C: VLAs were made optional in C11, function prototypes changed meaning in C23, and K&R declarations were dropped from the standard.
Java: already mentioned by someone else.
D, the whole D1 => D2 transition, and Tango vs Phobos drama.
Bit of a quibble but I'm not sure I'd call that a "huge breaking change" given that that feature wasn't really implemented in the first place, let alone actually used.
https://cppreference.com/w/cpp/compiler_support/11.html
It was a bad feature, as the two main C++ commercial products that make use of GC, namely C++/CLI and Unreal C++, were never taken into account while designing it; a good example of how WG21 often does PDF-driven design.
Not so sure I'd call these huge breaking changes. They're breaking, sure, but I'd expect them to be trivial to fix in any existing codebase.
Maybe VLAs are a huge breaking change? Most code never used it due to no way at all to use them safely, so while it is a pain to replace all occurrences of them, the number of usages should be really low.
https://www.phoronix.com/news/Linux-Kills-The-VLA
Breaking changes are breaking changes, even if it is only fixing a comma, someone has to spend part of their time making it compile again, which many times maps to actual money for people working at a company, their salary mapped into hours.
No disagreement there, but the context ITT was specifically about huge breaking changes. I consider those breaking changes, but not necessarily huge ones.
Typescript has introduced breaking changes but they're not that bad
Dart 2.12 (released March 2021) introduced null-safety.
That started a 2 year transitionary period during which you could mix nullsafe (language versions 2.12 or above) and non-nullsafe (language versions below 2.12) code in one program.
Dart 3.0 (released May 2023) removed support for language versions prior to 2.12 - meaning that you can no longer opt out of null-safety.
It's overkill for some of our problems but it's working fine! We make mistakes, but they're mistakes we'd have made with most other languages.
I did have to buy Pragstudio licenses for anyone using Elixir on the team. I'd prefer a few books, but most Elixir/Phoenix books don't seem like they're keeping up with the rate of change.
Getting feedback directly from production has been helpful too to tell us when we didn't think something through. We don't use branches, everything is a commit to main and every push is a production deployment so all three of us are in the loop on what each other is doing.
Didn't really buy any books, but testing and trying things out in LiveBook has been huge for learning the nuances of the language. LiveDashboard has been great for monitoring things, especially its Postgres plugin. The Discord community has been very supportive, and the Elixir Forum as well.
I can't speak too much about Python, but immutable variables are a core prerequisite for many of the features of OTP (the platform underpinning Elixir and Erlang).
.ex compiles to .beam files to be run later.
.exs compiles to memory and runs right away.
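For example, a throwaway script (call it `hello.exs`, a hypothetical file name) can be run directly with `elixir hello.exs`; the modules it defines are compiled in memory and never written to disk, whereas `.ex` files under `mix compile` produce `.beam` artifacts:

```elixir
# hello.exs — run with: elixir hello.exs
defmodule Hello do
  def greet(name), do: "Hello, #{name}!"
end

IO.puts(Hello.greet("world"))
```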
You don’t need to know Erlang to use Elixir; I’m a few years in now and I’ve never had to write any Erlang.
A high-level language with a strict shared-nothing concurrency model doesn't need a GIL... but you naturally can't practically share very large objects between BEAM processes.
1. Regarding Python's GIL: The issue isn't memory sharing between threads. Java and Go allow you to do this, too, but they don't have GILs. The reason Python has a GIL is that it uses reference counting for memory management. If it didn't have a GIL, multiple threads could simultaneously manipulate reference counts, which would lead to memory corruption/leaks.
2. You can share massive "objects" between BEAM processes. For example, if you're running BEAM in a 64-bit environment, you can share maps, structs, and other data structures that are up to 2,305,843,009,213,693,951 bytes in size.
I hope this information helps. I also hope it is correct. I think it is, but I've been wrong before.
CPython used to have a GIL; since the latest version, 3.14, that is no longer the case.
Other Pythons (Jython, GraalPy, PyPy) never had a GIL.
Languages and implementations aren't the same.
However, it is still a CPython issue; it is already available for those who install the right version, and it is no longer considered experimental.
1. It's dynamic.
2. It's compiled.
3. An Elixir script is just a file with Elixir code that compiles and runs right away.
4. I've been writing Elixir for 7 years and barely know any Erlang. I've even submitted bugs to the OTP team with reproductions in Elixir; they're chill.
5. Preemptive scheduler, immutable data.
Elixir and Erlang are dynamic compiled languages.
The actor model being built in to the runtime offers many benefits, all of which I cannot enumerate here, but prominent among them are the ability for the VM to preemptively schedule actors, and the fact that actors are independent in memory and cannot mess with the internal state of other actors.
The jump from Elixir to Erlang or vice versa is small.
The hardest (and most rewarding) part is learning OTP and the whole BEAM runtime system, which you can do with either language.
Erlang and Elixir are slightly different syntaxes for the same semantics, and if you know one you can learn to read and probably write the other in less than a day.
It isn’t like Clojure and Java where Clojure is significantly higher level than Java in many ways. Elixir adds a few things to Erlang but is otherwise the same programming model.
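The shared programming model is easy to see in a few lines; this sketch is near-identical in Erlang apart from syntax:

```elixir
# Spawn a lightweight BEAM process, send it a message, await the reply.
parent = self()

pid = spawn(fn ->
  receive do
    {:ping, from} -> send(from, :pong)
  end
end)

send(pid, {:ping, parent})

reply =
  receive do
    :pong -> :pong
  after
    1_000 -> :timeout
  end

IO.inspect(reply)
```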
> I just don't know which niche applications Elixir targets and excels at.
Pretty much any application where concurrent IO and state management are the main problems. Web applications, proxies/brokers, long running network stuff, semi-embedded low power devices that are hard to physically access and must remain reliable for years at a time, that kind of thing.
I actually agree with you that the ecosystem and tooling and especially the value proposition are confusing, and the sales pitch over the years has often been poor.
The whole BEAM community would do well to speak more plainly about the concrete benefits for programmers and companies, and the existing successful applications rather than the theoretical beauty of the syntax or the actor model.
I hope this helps.
For me some serious elixir adventure is high up in my todo list. But I remain suspicious if I can ever fully enjoy myself with a dynamic language - I think gleam and elixir do cater to different crowds. Gleam is pure minimalism (just pattern matching really), but elixir doesn't seem bloated either.
I am so happy that both languages exist and give alternatives in times of hundreds of node deps for any basic slob webapp.
- A language (Elixir/Erlang) and runtime (BEAM) - The concurrency standard library (OTP)
The language and runtime give you the low-level concurrency primitives: spawn a process, send a message, etc. But to build actual applications, you rarely use those primitives directly; instead you use GenServers and supervisors. That gives you a way to manage state, turns message passing into function calls, restarts things when they crash, etc.
Gleam compiles to the same runtime, but doesn't provide OTP in the same way: static types aren't easy to make work with the free-form message-passing world of OTP. They are implementing a lot of the same concepts, but that's work that Gleam has to do.
(Elixir provides some thin API wrappers over the Erlang OTP APIs, and then also provides some additional capabilities like Tasks).
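A minimal GenServer illustrates the point above: the behaviour turns raw message passing into plain function calls and keeps the state for you (Counter is a made-up example module):

```elixir
defmodule Counter do
  use GenServer

  # Client API: callers see ordinary functions, not messages.
  def start_link(initial), do: GenServer.start_link(__MODULE__, initial)
  def increment(pid), do: GenServer.call(pid, :increment)

  # Server callbacks: the behaviour routes messages here.
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_call(:increment, _from, n), do: {:reply, n + 1, n + 1}
end

{:ok, pid} = Counter.start_link(0)
IO.inspect(Counter.increment(pid))
```

In a real application the same module would be started under a supervisor, which is what provides the restart-on-crash behavior mentioned above.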
My last team wanted to port all remaining Java code to Kotlin, because they just enjoyed working with it so much more.
While porting I've regularly replaced 5-8 lines of Java with a single line of Kotlin.
Kotlin needs Java like Ruby needs C or Elixir needs Erlang.
Granted, it's alpha software and it's currently embedded in the Hologram framework, but still.
https://hologram.page/
In case it wasn't clear: a zero-share tx being a tombstone is a stock-accounting convention, not a choice of that particular software project.
To wit: anything that calculates an average over non-fixed data cardinality for business logic is potentially at risk of a serious logic error in Gleam.
What's worse, this is a non-obvious error. LLMs will likely make it a lot. A code review is likely to miss it.
What's even worse is that the author refuses to acknowledge this and digs in deeper (because, well, it's a "principled" choice that got made, and changing it now is breaking, and it breaks a core "feature" of the language). 1/0 = 0 is fine for a language like Idris, which is used for theorem proving but never for real-world things. It's inappropriate for deployment when, for example, real money might be at stake.
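The failure mode described above can be sketched in Elixir by simulating Gleam's total-division convention (`safe_div` here is a stand-in for Gleam's `int.divide`, which returns 0 on a zero divisor; Elixir's own `div/2` would raise instead):

```elixir
# Simulate division that returns 0 instead of raising on a zero divisor.
safe_div = fn
  _, 0 -> 0
  a, b -> div(a, b)
end

# An "average" built on that convention silently returns 0 for an empty
# list, which is indistinguishable from a legitimate average of 0.
avg = fn nums -> safe_div.(Enum.sum(nums), length(nums)) end

IO.inspect(avg.([2, 4]))  # a real average
IO.inspect(avg.([]))      # 0, with no error raised
```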
I personally do not like type systems, and still code in JS, not TS. Any JS artifacts I produce are untyped. Yet even my Elixir-code is nearly type ready.
So while TS is fighting an uphill battle, I think Elixir is working downhill.
It's never been a wat-language in the style of JavaScript.
I was a fan of Ruby, because of its pragmatism and subjective beauty, but then I got into type systems.
Elixir now also has a type system and, so does Ruby...
Though I now program in Kotlin, which syntax-wise is very much a "typed Ruby".
The language itself is maybe OK but the overall experience is not.
On a production build, stack traces look like Erlang code, which is the weird syntax that Elixir tried to improve upon.
Then you have macros, which make code unmaintainable at the 10k SLOC mark, and increasingly harder to maintain as projects get larger.
Running "mix xref graph" on most Elixir projects shows a spaghetti mess.
The toolchain has much room for improvement. Editing, debugging, profiling, unit testing, or pretty much any basic routine development task, involves a tool that's decades behind the state of the art. Even Borland tools from the 80s have a better toolchain.
Building a team around Elixir is hard. You have to train people on the job and they will probably not write idiomatic code that takes advantage of the language. Or deal with people that won't stop selling you how great the language is.
And the documentation for most of the projects you will use is full of noise, with few workable examples, grandiose claims of performance and fantastic treasures, and the articles are a great read if you want to waste your entire evening.
Support for massive concurrency is nice but you are realistically not going to need it. If you do need it then yes, Elixir can be a good tool for the job.
ExUnit has been hands down the most impressive testing library I've ever worked with, and the debugging, profiling, analytics, introspection, observability, etc of the BEAM is unbeatable.
Documentation of elixir, elixir deps, and elixir code is also far and above any language I've ever seen.
And the struggles I had supporting minimal concurrency in python were completely alleviated - so even if you don't need massive concurrency, elixir has a good chance of massively simplifying anything that needs minimal concurrency (which is probably most web related projects).
I have a lot of respect for the community behind it but the experience is still not there.
Not only does this article dump significant cognitive load on the reader, it's not well digested and not a soft landing into the subject. Worst of all, many Elixir articles assume familiarity with GenStage.
Compare it to this, which is not the best example but is a much softer landing: https://doc.akka.io/libraries/akka-core/current/typed/actors...
I complain about OCaml docs all the time. But Elixir? no way.
Do you have an example? There are some cases that I can think of where the application dumps some foreign-looking data structures if the release fails to start, but that's very rare and usually the actual error is somewhere near the beginning like "eaddrinuse" here:
Here's how runtime errors are normally reported (in a `MIX_ENV=prod mix release` build):

> Then you have macros, which make code unmaintainable at the 10k SLOC mark, and increasingly harder to maintain as projects get larger.

Absolutely, so don't write macro-heavy code. This is mentioned in the first paragraph of the official Macro documentation and documented as an anti-pattern in the official documentation.
> The toolchain has much room for improvement.
I agree that editing experience (due to lacklustre language server support which is now being worked on officially), and debugging tools are lagging behind.
> And the documentation for most of the projects you will use is full of noise, with few workable examples, grandiose claims of performance and fantastic treasures, and the articles are a great read if you want to waste your entire evening.
I don't agree with this at all.
> On a production build, stack traces look like Erlang code...
Elixir has the most readable stacktraces of any language I've used. Here's an example (which is color-coded in the terminal for even more clarity and, as you can see, doesn't contain any Erlang code):
It's easy to see that the issue is on line 4, and that it is a missing curly brace.

> Then you have macros, which make code unmaintainable...
Elixir gives you full access to the AST, making macros extremely easy to read and reason through. The point of the Lisp-style macros Elixir uses is to simplify your code. If your code becomes unmaintainable due to your use of macros, you're probably misusing them. I'd have to see a sample to make that determination, though.
> Running "mix xref graph" on most Elixir projects shows a spaghetti mess.
Spaghetti? It's a simple two-level tree that is in alphabetical order. Here's an example, and it is like this all the way down:
To me, this output seems extremely accessible.

> The toolchain has much room for improvement...
Having developed many Windows apps using Borland's tools in the 80s and 90s, I disagree with this statement for these reasons:
• Mix is one of the best and most integrated build tools/task runners I've used. For example, you can create, migrate, and reset databases, execute tests, lint code, generate projects, compile, build assets, install packages, pull dependencies, etc.
• ExUnit is a great testing framework that handles all kinds of tests in an easy-to-read DSL similar to Ruby's RSpec.
• IEx is a fantastic REPL.
• Elixir's debugging tools are excellent. For example, IEx.pry lets you stop all processes and interact with your system in that frozen state in the REPL. You can watch variables, run functions, and even create new functions on the fly to interact with your data to see how it behaves in different scenarios.
> Building a team around Elixir is hard.
Why is it hard? I've worked exclusively on Elixir projects for both start-ups and large companies with hundreds of engineers for over ten years now, and never had a problem with hiring teams.
> And the documentation for most of the projects you will use is full of noise, with few workable examples, grandiose claims of performance, and fantastic treasures, and the articles are a great read if you want to waste your entire evening.
All technical documentation could be improved, but Elixir's is already quite good. See for yourself: https://hexdocs.pm/elixir/1.19.0/Kernel.html.
Furthermore, from within IEx, you can type `h` before any function name and see an explanation of what the function does and examples of how to use it.

> Support for massive concurrency is nice, but you are realistically not going to need it...
Elixir supports minute concurrency as well, and yes, I do need it. For example, in Ruby on Rails, which has a GIL, I'd have to use a gem like Sidekiq to push long-running processes into Redis so they can be processed in the background. In Elixir, I can just run them in a separate, concurrent process, which is simple.
Here's an example that takes a collection of users and a function and then runs each user through that function, each in a separate thread:
Here's the same Elixir code running in a single thread. In the first example, I could have 1 user, 1,000 users, or 1,000,000 users, and the code would run as optimally as possible on all the cores in my CPU, or across all the cores in all the CPUs in my multi-server BEAM cluster. No extra programs or libraries are needed. In the second example, users are processed one at a time, as in languages like Python, Ruby, JavaScript, PHP, and Perl.

Given the simplicity of writing parallel code in Elixir, why would I limit myself to one CPU core to perform a task when I can use all cores simultaneously?
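The two code examples referred to above were not preserved in this snapshot; here is a hedged reconstruction of their likely shape, with a placeholder user list and function (`Task.async_stream` for the parallel case, `Enum.map` for the sequential one):

```elixir
users = ["ann", "bob", "cyd"]
work = fn user -> String.upcase(user) end

# Parallel: one lightweight process per user, with concurrency bounded
# by the number of scheduler cores by default.
parallel =
  users
  |> Task.async_stream(work)
  |> Enum.map(fn {:ok, result} -> result end)

# Sequential: the same logic, one user at a time in the current process.
sequential = Enum.map(users, work)

IO.inspect(parallel == sequential)
```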
> Or deal with people that won't stop selling you how great the language is.
The reason they won't stop selling you on Elixir is that Elixir is a fantastic language. I hope you take the time to revisit it in the future. It really is much better than most things out there.
dynamic() > boolean()
integer() > boolean()
I absolutely love the language, the way it fits into the Erlang runtime, and especially Jose's stewardship. But Phoenix/LV don't jive with my brain nearly as well as Elixir itself does. Additionally, the push towards native development never evolved to a place where it could realistically supplant Expo & RN for me.
This probably sounds insane to anyone who hates how the TS/JS community has a million different frameworks, but I think the upside of all that chaos is that a plethora of new ideas that get explored, and the truly exceptional ideas end up getting adopted anywhere.
My gut feeling is that the Elixir world has a TON of amazing ideas that have yet to be explored.
I have a question about how the type inference works. Dialyzer, which also attempts to do type inference, uses "success" typing which means it will not flag something if it could work. It tries to minimize false positives. In practice, this means it hardly ever catches anything for me (and even when it does warn about something, it's usually not real anyway!), so I don't find it that useful.
Does this approach use "success" typing as well? I found the `String.Chars` protocol example interesting, since I've had my fair share of crashes from bad string interpolations. But in the example, it's _clearly_ wrong, and will fail every time. That's not that useful to me because any time that code is exercised (e.g. in a simple test) it would be caught. What's more useful is if some particular code path results in trying to interpolate something wrong.
I know under the hood, the Elixir type system has something to do with unions of possible values, so it is tracking all the things that "could" be passed to a function. Will it warn only if all of them fail, or if any of them fail?
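For context, this is the runtime failure mode behind the `String.Chars` example above; a small sketch (the checker's job is to flag code like this at compile time when inference can prove the interpolated value is a tuple):

```elixir
# Tuples have no String.Chars implementation, so interpolating one
# raises Protocol.UndefinedError at runtime.
value = {:ok, 42}

result =
  try do
    "got: #{value}"
  rescue
    Protocol.UndefinedError -> :caught
  end

IO.inspect(result)
```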
My app is a mix of real-time and REST endpoints, and there's no heavy computation; even if there were, I could just do that one-off in Go or something.
Would Phoenix make sense for me? I have some cool collaborative features in the works.
Having the $lang_ecosystem address this sounds like a godsend. Unfortunately we don't use Elixir at $work.