Julia 1.12 Highlights
Mood: excited
Sentiment: positive
Category: other
Key topics: The Julia 1.12 release brings several exciting features, including struct redefinition, package apps, and the experimental `--trim` option for creating smaller binaries, sparking enthusiasm and discussion among the community about the language's potential and future directions.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 7m after posting
- Peak period: 90 comments in Day 1
- Avg / period: 23.5
Based on 94 loaded comments
Key moments
- Story posted: Oct 8, 2025 at 2:42 PM EDT (about 2 months ago)
- First comment: Oct 8, 2025 at 2:49 PM EDT (7m after posting)
- Peak activity: 90 comments in Day 1 (the hottest window of the conversation)
- Latest activity: Oct 18, 2025 at 12:57 PM EDT (about 1 month ago)
So for now we will continue rewriting code that needs to run on small systems rather than deploy the entire julia environment, but I am excited about the progress that has been made in creating standalone executables, and can't wait to see what the next release holds.
But all of this is more about maturing the ecosystem to be more amenable to static compilation and analysis. The whole SciML stack had an initiative starting at the beginning of this summer to add JET.jl testing for type inference everywhere and enforcing this to pass as part of standard unit tests, and using AllocCheck.jl for static allocation-free testing of inner solver loops. With this we have been growing the surface of tools that have static and real-time guarantees. Not done yet, some had to be marked as `@test_broken` for having some branch that can allocate if condition number hits a numerical fallback and such, but generally it's getting locked down. Think of it as "prototype in Julia, deploy in Rust", except instead of re-writing into Rust we're just locking down the behavior with incrementally enforcing the package internals to satisfy more and more static guarantees.
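The kind of enforcement described above can be sketched in a test suite. This is a hedged illustration, not SciML's actual tests: `axpy!` is a hypothetical stand-in for a solver inner loop, and the JET/AllocCheck calls are the standard public entry points of those packages.

```julia
using Test, LinearAlgebra
using JET          # static type-inference / optimization analysis
using AllocCheck   # static proof that a compiled method cannot allocate

# Hypothetical inner-loop kernel standing in for a solver step.
function axpy!(y::Vector{Float64}, a::Float64, x::Vector{Float64})
    @inbounds for i in eachindex(y, x)
        y[i] += a * x[i]
    end
    return y
end

@testset "static guarantees" begin
    # JET: fail the test if inference finds runtime dispatch or errors.
    @test_opt axpy!(zeros(8), 2.0, ones(8))
    # AllocCheck: fail the test if any allocation site survives compilation.
    @test isempty(check_allocs(axpy!, (Vector{Float64}, Float64, Vector{Float64})))
end
```

A branch that can allocate on a numerical fallback would show up as a non-empty `check_allocs` result, which is exactly the kind of case the comment says gets marked `@test_broken` until it is locked down.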
> We want a language that's open source, with a liberal license. We want the speed of C with the dynamism of Ruby. We want a language that's homoiconic, with true macros like Lisp, but with obvious, familiar mathematical notation like Matlab. We want something as usable for general programming as Python, as easy for statistics as R, as natural for string processing as Perl, as powerful for linear algebra as Matlab, as good at gluing programs together as the shell. Something that is dirt simple to learn, yet keeps the most serious hackers happy. We want it interactive and we want it compiled. (Did we mention it should be as fast as C?). While we're being demanding, we want something that provides the distributed power of Hadoop — without the kilobytes of boilerplate Java and XML
> We are power Matlab users. Some of us are Lisp hackers. Some are Pythonistas, others Rubyists, still others Perl hackers. There are those of us who used Mathematica before we could grow facial hair. There are those who still can't grow facial hair. We've generated more R plots than any sane person should. C is our desert island programming language.
When I first heard about Julia I understood it to be a faster alternative to Python. As I started to learn it I realised that's really not what it's about: it's trying hard to compete simultaneously with Matlab and R and Fortran and C++ (and the template metaprogramming language hiding in C++) and APL and Lisp and maybe OCaml, just as much as with Python (but not Rust or Java or Agda), and I can't even speak to the other languages mentioned.
At the time we were part of the wave I suppose that was trying to convince people that open source Python was a better prospect than MATLAB which was where many people in physics/engineering were on interpreted languages. At least in my view, it wasn’t until much more recently that Julia became a workable alternative to those, regardless of the performance benefits (which were largely workable in Python and MATLAB anyway - and for us at least we were happy developing extension modules in C for the flexibility that the Python interface gave us over the top).
They still have to this day.
Any thoughts from someone more plugged in to the community today?
Python started in 1989, it also took its time.
Are you using Julia?
It does have some well-known issues (like slow startup/compilation time) but if you're using it for long-running data pipelines it's great.
If you can't talk about library stacks, it'd be at least interesting to hear your thoughts about how you minimize memory allocation.
We control memory allocation the boring, manual way – we preallocate all our arrays, and then just modify them, so that we have very little continued allocation in production.
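That boring, manual approach can be sketched as follows. This is an illustrative pattern under assumed names (`step!`, the buffer sizes), not the commenter's actual code: allocate once up front, then only mutate with in-place library calls and fused broadcast assignment.

```julia
using LinearAlgebra

const N = 1024
const A = rand(N, N)
const x = rand(N)
const y = zeros(N)       # preallocated output buffer, reused every call

function step!(y, A, x)
    mul!(y, A, x)        # writes A*x into y; no new array is created
    y .= max.(y, 0.0)    # fused in-place broadcast; no allocation
    return y
end
```

After warm-up, repeated calls to `step!` allocate nothing, which is what keeps continued allocation near zero in production.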
Worse, there are still way too many compilation traps. Splatted a large collection into a function? Compiler chokes. Your code accidentally moves a value from the value domain to the type domain? You end up with millions of new types, and the compiler chokes. Accidentally pirate a method? Huge latency. Chose to write type-unstable code? Invalidations tank your latency.
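The first two traps above can be shown as toy examples (illustrative only; `tag` is a made-up name):

```julia
# Trap 1: splatting specializes the call on the collection's length.
xs = rand(100)
+(xs...)            # compiles a 100-argument method; at ~10_000 elements
                    # the compiler effectively chokes - prefer sum(xs)

# Trap 2: lifting runtime values into the type domain mints a type per value.
tag(v) = Val(v)     # tag(1), tag(2), ... each force compilation for a fresh Val{N}
```

Both are legal Julia; the cost only shows up as compile-time latency, which is what makes them easy to hit by accident.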
It's the most annoying thing about HN that people will regularly proclaim something like this as if it's a Nobel-prize-winning discovery when it's actually just some incremental improvement. I have no idea how this works in these people's lives - aren't we all SWEs, where the specifics actually matter? My hypothesis is that these people are just really bad SWEs.
The problem comes from Julia trying to be two languages at once -- the dynamic language that is useful for quickly generating plots and prototyping; and the production language that's running production code on the backend server, or running the HPC simulation on the supercomputer. They've deliberately staked out the middle point here, which comes with the benefit of speed but the tradeoff is in the ttfp latency. It might be considered the leak of the multiple dispatch abstraction. Yes it can feel like magic when it works, but when it doesn't it manifests as spikes in latency and an explosion of complexity.
In the end I don't know how big the ttfp issue is for Julia. But they've certainly branded it, and the existence of the problem has made its way to people who don't use the language, which is an issue for community growth. They've also left themselves open for a language to come along that's "Julia but without the ttfp issues".
All I can say is that many of "us" live in that tension between high level and low level every day. It's actually going to become more pronounced with `--trim` and the efforts on static compilation in the near term. The fact that Julia can span both is why I'm a part of it.
$ time julia -e "exit"
real 0m0.156s
user 0m0.096s
sys 0m0.100s
$ time julia -e "using Plots"
real 0m1.219s
user 0m0.981s
sys 0m0.408s
$ time julia -e "using Plots; display(plot(rand(10)))"
real 0m1.581s
user 0m1.160s
sys 0m0.400s
Not a super fair test since everything was already hot in the I/O cache, but it still shows how much things have improved.
- Plots.jl, 1.4 seconds (including package loading)
- CairoMakie.jl, 4 seconds (including package loading)
julia> @time @eval (using Plots; display(plot(rand(3))))
1.477268 seconds (1.40 M allocations: 89.648 MiB, 2.70% gc time, 7.16% compilation time: 5% of which was recompilation)
The other day that old article "Why I no longer recommend Julia" got passed around. On the very same day I encountered my own bug in the Julia ecosystem, in JuliaFormatter, that silently poisoned my results. I went to the GitHub issues and someone else encountered it on the same day. I'm sure they will fix it (they haven't yet, JuliaFormatter at this very moment is a subtle codebase-destroyer) but as a newcomer to the ecosystem I am not prepared to understand which bog standard packages can be trusted and which cannot. As an experiment I switched to R and the language is absolute filth compared to Julia, but I haven't seen anyone complain about bugs (the opposite, in fact) and the packages install fast without needing to ship prebuilt sysimages like I do in Julia. Those are the only two good things about R but they're really important.
I think Julia will get there once they have more time in the oven for everything to stabilize and become battle hardened, and then Julia will be a force to be reckoned with. An actually good language for analysis! Amazing!
The codebase destruction warning was not super loud, though. Obviously I missed it despite using JuliaFormatter constantly. It doesn't get printed when you install the package nor when you use it. It's not on the docs webpage for JuliaFormatter. 2.x is still the version you get when you install JuliaFormatter without specifying a version. The disclaimer is only in the GitHub readme, and I was reading the docs. What other packages have disclaimers that I'm not seeing because I'm "only" reading the user documentation and not the GitHub developer readme?
I don't think this is an accurate summary. the bug here is that JuliaFormatter should put a <=1.9 compatibility bound in its Project.toml if it isn't correct with JuliaSyntax.jl
OffsetArrays was different because it exposed a bunch of buggy and common code patterns that relied on (incorrect) assumptions about the array interface.
it can be used for deep learning but you probably shouldn't, currently, except as a small piece of a large problem where you want Julia for other reasons (e.g. scientific machine learning). They do keep improving this and it will probably be great eventually.
i don't know what the experience is like using it for traditional data science tasks. the plotting libraries are actually pretty nicely designed and no longer have horrible compilation delays.
people who like type systems tend to dislike Julia's type system.
they still have the problem of important packages being maintained by PhD students who graduate and disappear.
as a language it promises a lot and mostly delivers, but those compromises where it can't deliver can be really frustrating. this also produces a social dynamic of disillusioned former true believers.
This is true. As far as I understand it, there is not a type theory basis for Julia's design (type theory seems to have little to say about subtyping type lattices). Relatedly, another comment mentioned that Julia needs sum types.
- simply typed lambda calculus
- System F
- dependent type theory (MLTT)
- linear types
- row types
- and so on
But it's subtle to talk about. It's not like there is a single type theory that underlies TypeScript or Rust, either. These practical languages have partial (and somewhat post-hoc) formalizations of their systems.
"On the use of LISP in implementing denotational semantics"
https://dl.acm.org/doi/10.1145/319838.319866
Type theory in CS isn't a synonymous with whatever Haskell happens to do.
As for the actual numerical stuff I tend to roll my own implementations of most algorithms to better control relevant tradeoffs. There are sometimes issues where a particular algorithm is implemented by a Julia package, but has performance issues / bugs in edge cases. For example, in my testing I wasn't able to get ImageContrastAdjustment CLAHE to run very fast, and it had an issue where it throws an exception with an image of all zeros. You also can't easily call the OpenCV version, as CLAHE is implemented in OpenCV using an object which doesn't have a binding available in Julia. After not getting anywhere within the ecosystem I just wrote my own optimized CLAHE implementation in Julia, which I'm very happy with; this is truly where Julia shines. It's worth noting however that there are many excellent packages to build on such as InterprocessCommunication, ResumableFunctions, StaticArrays, ThreadPinning, Makie, and more. If you don't mind filling in some gaps here and there it's completely serviceable.
As for the core language and runtime we are deploying a Julia service to production next release and haven't had any stability/GC/runtime issues after a fairly extensive testing period. All of the Python code we replaced led to a ~40% speedup while improvements to numerical precision led to measurably improved predictions. Development with Revise takes some getting used to but once you get familiar with it you will miss it in other languages. All in all it feels like the language is in a good place currently and is only getting better. I'd like to eventually contribute back to help with some of the ecosystem gaps that impacted me.
Which is a shame, because now Python has all the same problems with the long startup time. On my computer, it takes almost 15 seconds just to import all the machine-learning libraries. And I have to do that on every app relaunch.
Of course I would also rather be doing all of the above in Julia instead of Python ;)
Of course there are caveats - it won't update actively running code, but if your code is structured reasonably and you are aware of Revise's API and the very basics of Julia's world age, you can do it pretty easily IME.
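The basic Revise workflow looks roughly like this (hedged sketch; `MyPackage` and its `run` function are hypothetical placeholders for a package under development):

```julia
using Revise            # must be loaded before the code you want to track
using MyPackage         # hypothetical package being developed

MyPackage.run(data)     # first call
# ...edit a method in src/MyPackage.jl and save the file...
MyPackage.run(data)     # next call runs the revised method - no REPL restart
```

The world-age caveat is the one above: a loop already executing when you save the edit keeps running the old method; only the next top-level call picks up the change.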
> To build a BOLT-optimized Julia, run the following commands
Is BOLT the default build (eg. fetched by juliaup) on the supported Linux x86_64 and aarch64? I'm assuming not, based on the wording here, but I'm interested in what the blocker is and whether there's plans to make it part of the default build process. Is it considered as yet immature? Are there other downsides to it than the harmless warnings the post mentions?
Though it is quite some progress after years of insisting that this additional package, PackageCompiler.jl, is all you need.
For what it's worth, I am able to generate a non-small (>1 GiB) binary with 1.11 that runs on other people's machines. Not shipping in production, but it could be if you're willing to put in the effort. So in a sense, PackageCompiler.jl is all you need. ;)
1) Install juliaup following the instructions for your platform from julialang.org
2) Install julia 1.12 with `juliaup add 1.12` and switch to it `juliaup default 1.12`.
3) Start a standard julia 1.12 REPL with `julia --project=.` This is the "Outer" project that will do the compiling (PackageCompiler.jl terminology, might not be necessary now that juliac can run from the command line).
4) Install juliac with `] app add JuliaC`. The ']' character opens the package manager. You might need to add Julia's app location to your PATH in .zshrc with `export PATH="/home/ccp/.julia/bin:$PATH"` and restart your shell (I had this already from earlier attempts with 1.12 beta).
5) Create a new project with `] generate AppProject`. This is the "Inner" project that will be compiled. Put the example AppProject code from the julia 1.12 announcement in src/AppProject.jl
6) Run the example command `juliac --output-exe app_test_exe --bundle build --trim=safe --experimental ./AppProject` while in the "Outer" directory. You should be able to run it as in the announcement `./build/bin/app_test_exe`
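For reference, the inner project's src/AppProject.jl has roughly this shape; the greeting string is illustrative, but the `(@main)(args)` entry-point convention is what juliac compiles:

```julia
module AppProject

# Entry point recognized by julia/juliac: a function bound via the
# @main convention, taking the argument vector and returning an exit code.
function (@main)(args)
    println("Hello from a trimmed binary!")
    return 0
end

end # module
```

With `--trim=safe`, everything the entry point cannot reach is stripped from the bundle, which is where the small binary sizes come from.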
In 2020, I thought Julia would be _the_ language to use in 2025. Today I think that won't happen until 2030, if even then. The community is growing too slowly, core packages have extremely few maintainers, and Python and Rust are sucking the air out of the room. This JuliaCon talk was a good account of how developers found Rust so much more productive than Julia that they switched away from Julia:
https://www.youtube.com/watch?v=gspuMS1hSQo
Which is pretty telling. It takes overcoming a certain inertia to move from any language.
Given all that, outside of depending heavily on DifferentialEquations.jl, I don't know why someone would pick Julia over Python + Rust.
> Given all that, outside of depending heavily on DifferentialEquations.jl, I don't know why someone would pick Julia over Python + Rust.
See his last slide. And no, they didn't replace their Julia use in its entirety with Rust, despite his organization being a Rust shop. Considering Rust as a replacement for Julia makes as much sense to me as considering C as a replacement for Mathematica; Julia and Mathematica are domain-specific (scientific computation) languages, not general systems programming languages.
Neither Julia nor Mathematica is a good fit for embedded device programming.
I also find it amusing how you criticize Julia while praising Python (which was originally a "toy" scripting language succeeding ABC, but found some accidental "gaps" to fit in historically) within the narrative that you built.
> In any non-toy Julia program that's not going to be the case.
Why?
"Ecosystem" is not a part of the language, and in any case, the Python ecosystem is not written in Python, because Python is not a suitable language for scientific computing, which is unsurprising because that's not what it was designed for.
It is ironic you bring up hype to criticize Julia while praising Python which found popularity thanks to hype rather than technical merit.
What promise are you referring to? Who promised you what? It's a programming language.
Doesn't matter. Languages do not matter, ecosystems do, for they determine what is practically achievable.
And it doesn't matter that the Python ecosystem relies on huge amounts of C/C++ code. Python people made the effort to wrap this code, document it, and maintain those wrappers. Other people use such code through Python APIs. Yes, every language with FFI can do the same. For some reason, none has achieved that.
Even people using Julia use PythonCall.jl, that's how much Python is unsuitable.
> What promise are you referring to? Who promised you what? It's a programming language.
Acting dumb is a poor rhetorical strategy, and it ignores such nice rhetorical advice as the principle of charity - it is quite obvious that I didn't mean that the programming language made any promise. Making a promise is something only people can do. And Julia's creators and the people promoting it made quite bombastic claims throughout the years that turned out to not have much support in reality.
I leave your assumptions about my age or other properties to you.
Language matters, and the two-language problem is a real problem, and you can't make it go away by closing your ears and chanting "doesn't matter! doesn't matter!"
Julia is a real step toward solving this problem, and allows you to interact with libraries/packages in ways that is not possible in Python + Fortran + C/C++ + others. You are free to keep pretending that problem doesn't exist.
You are making disparaging and hyperbolic claims about hyperbolic claims without proper attribution, and when asked for a source, you cry foul and sadly try to appear smart by saying "you're acting dumb". You should take your own advice and, instead of "acting dumb", explicitly cite what "promises" or "bombastic claims" you are referring to. This is what I asked you to do, but instead you are doing what you are doing, which is interesting.
The fact that you can use those nice numerical and scientific libraries from a language that also has a tremendous number of nice libraries from other domains, wide and good IDE support, and is very well documented with countless tutorials and books available... is an argument against that language? Because you can easily use Fortran code from Fortran?
Nice.
> You don't need FFI to use a Fortran library from Fortran
Wow. Didn't know that.
> And no, many other scripting languages have wrappers,
Always less complete, less documented, with fewer teaching materials available, etc.
But sure, many other languages have wrappers. Julia for example wraps Python API.
> and no, scientific computing is not restricted to ML
Never said it is. I don't do ML, by the way.
> You are making disparaging and hyperbolic claims about hyperbolic claims without proper attribution, and when asked for source, you cry foul
Yeah, yeah. My claims about marketing like "Julia writes like Python, runs like C" are hyperbolic and require explicit citation, even though everyone who has had any exposure to this language knows these and similar catch-phrases.
Look, you like Julia, good for you. Have fun with it.
This is why I see Julia as the Java for technical computing. It’s tackling a domain that’s more numeric and math-heavy, not your average data pipeline, and while it hasn’t yet reached the same breadth as Python, the potential is there. Hopefully, over time, its ecosystem will blossom in the same way.
To clarify exactly where I'm coming from, I'm going to expand on my thoughts here.
What is Julia's central conceit? It aims to solve "the two language" problem, i.e. the problem where prototyping or rapid development is done in a dynamic and interactive language like Python or MATLAB, and then moved for production to a faster and less flexible language like Rust or C++.
This is exactly what the speaker in the talk addresses. They are still using Julia for prototyping, but their production use of Julia was replaced with Rust. I've heard several more anecdotal stories of the exact same thing occurring. Here's another high profile instance of Julia not making it to production:
https://discourse.julialang.org/t/julia-used-to-prototype-wh...
Julia is failing at its core conceit.
The Julia community has to start thinking about what makes a language successful in production.
Quote from the talk:
> "(developers) really love writing Rust ... and I get where they are coming from, especially around the tooling."
Julia's tooling is ... just not good. Try working on a several-hundred-thousand-line project in Julia and it is painful for so many reasons.
If you don't have a REPL open all the time with the state of your program loaded in the REPL and in your head, Julia becomes painful to work in. The language server crashes all the time, completion is slow, linting has so many false positives, TDD is barebones etc. It's far too easy to write type unstable code. And the worst part is you can write code that you think is type stable, but with a minor refactor your performance can just completely tank. Optimizing for maintaining Julia code over a long period of time with a team just feels futile.
That said, is Python perfect? Absolutely not. There's so many things I wish were different.
But Python was designed (or at the very least evolved) to be a glue language. Being able to write user friendly interfaces to performant C or C++ code was the reason the language took off the way it did.
And the Python language keeps evolving to make it easier to write correct Python code. Type hinting is awesome and Python has much better error messages (static and runtime). I'm far more productive prototyping in Python, even if executing code is slower. When I want to make it fast, it is almost trivial to use PyO3 with Rust to make what I want to run fast. Rust is starting to build up packages used for scientific computing. There's also Numba and Cython, which are pretty awesome and have saved me in a pickle.
As a glue language Python is amazing. And jumping into a million line project still feels practical (Julia's `include` feature alone would prevent this from being tenable). The community is growing still, and projects like `uv` and `ty` are only going to make Python proliferate more.
I do think Julia is ideal for an individual researcher, where one person can keep every line of code in their head and for code that is written to be thrown away. But I'm certainly not betting the near future on this language.
Julia is the language to use in 2025 if what you’re looking for is a JIT-compiled, multiple-dispatch language that lets you write high-performance technical computing code to run on a cluster or on your laptop for quick experimentation, while also being metaprogrammable and highly interactive, whether for modelling, simulation, optimisation, image processing etc.
https://www.hpcwire.com/off-the-wire/julia-joins-petaflop-cl...
For large-scale physics simulations, Fortran or Julia are the obvious choices.
More and more over time, I’ve begun to think that the method JIT architecture is a mistake, that subtyping is a mistake.
Subtyping makes abundant sense when paired with multiple dispatch — so perhaps my qualms are not precise there … but it also seems like several designs for static interfaces have sort of bounced off the type system. Not sure, and can’t defend my claims very well.
Julia has much right, but a few things feel wrong in ways that spiral up to the limitations in features like this one.
Anyways, excited to check back next year to see myself proven wrong.
Like, what exactly is the alternative? Python? Too slow. Static languages? Unusable for interactive exploration and data science.
That leaves you with hybrids, like Python/Cython, Python/Rust, or Numba, but taken on their own terms, these are absolutely terrible languages. Python/Rust is not safe (due to FFI), certainly not pleasant to develop in, and no matter how you cut your code between the languages, you always lose. You always want your Python part to be in Rust so you get static analysis, safety, and speed. You always want your Rust part to be in Python, so you can experiment with it more easily and introspect.
I largely think multiple dispatch works well in Julia, and it enables writing performant code in an elegant manner. I mostly have smaller gripes about subtyping and the patterns it encourages with multiple dispatch in Julia, and larger gripes about the lack of tooling in Julia.
But multiple dispatch is also a hammer where every problem in Julia looks like a nail. And there isn't enough discussion, official or community driven, that expands on this. In my experience the average developer to Julia tends to reach for multiple dispatch without understanding why, mostly because people keep saying it is the best thing since sliced bread.
With regard to hybrid languages, honestly, I think Python/Cython is extremely underrated. Sure, you can design an entirely new language like Mojo or Julia, but IMO it offers only incremental value over Python/Cython. I would love to peek into another universe where all that money, time, and effort for Mojo and Julia went to Cython instead.
And I personally don't think Python/Rust is as bad. With a little discipline (and some tests), you can ensure your boundary is safe, for you and your team. Rust offers so much value that I would take on the pain of going through FFI. PyO3 simplifies this significantly. The development of `polars` is a good case study for how Rust empowers Python.
I think the Julia community could use some reflection on why it hasn't produced the next `polars`. My personal experience with Julia developers (both in-person and online) is that they often believe multiple dispatch is so compelling that any person that "saw the light" would obviously naturally flock to Julia. Instead, I think the real challenge is meeting users where they are and addressing their needs directly. The fastest way to grow Julia as a language is to tag along Python's success.
Would I prefer a single language that solves all my problems? Yes. But that single language is not Julia, yet, for me.
PS: I really enjoy your blog posts and comments.
Polars is a dataframe library. Yes, it features vectorized operations, but it is focused on columnar data manipulation, not numerical algorithm development. I might say that this is narrow framing, people are looking at Julia through the lens of a data scientist and not of an engineer or computational scientist.
Most "data scientist" code is exploratory in nature (a prototype or a script for a one-off exploration). And my main gripe is that making that code production-ready and maintainable over a long period of time is so difficult that I would switch to Rust instead. If I were going to switch to Rust, I might as well start with Python.
That’s not what I meant by “method JIT architecture” — I meant calling back into the compiler at runtime to specialize code when the types are known.
I am so excited - well done everyone!
I can't wait to try out trimming and see how well it actually works in its current experimental instantiation.
Congrats Julia team!