Is Fortran Better Than Python for Teaching Basics of Numerical Linear Algebra?
Posted 3 months ago · Active 3 months ago
Source: loiseaujc.github.io (tech story, high profile)
Sentiment: heated/mixed · Debate intensity: 85/100
Key topics
- Numerical Linear Algebra
- Programming Languages
- Education
The article compares Fortran and Python for teaching numerical linear algebra, sparking a debate among commenters about the best language for education and the trade-offs between different programming languages.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion: first comment 33 seconds after posting; peak of 116 comments in Day 1; average of 25.2 comments per period; 126 comments loaded.
Key moments
- Story posted: Sep 23, 2025 at 3:29 PM EDT (3 months ago)
- First comment: Sep 23, 2025 at 3:29 PM EDT (33 seconds after posting)
- Peak activity: 116 comments in Day 1, the hottest window of the conversation
- Latest activity: Oct 7, 2025 at 12:38 AM EDT (3 months ago)
ID: 45351624 · Type: story · Last synced: 11/20/2025, 4:44:33 PM
Step 1) Write the function using high-level abstractions. Step 2) Glance over the generated assembly and make sure that it vectorized the way you wanted.
Isn't that something you would also need to do in Fortran? IMO Julia makes this so easy with its `@code_*` macros, and that's one of the main reasons why I use it.
Julia, on the other hand, often puts out very unoptimized code.
Mind you, the last time I looked at Julia was 2-3 years ago, so maybe things have changed.
But indeed, there are almost certainly fewer performance surprises in Fortran.
People like to complain about MATLAB as a programming language, but if you're using it that way, you're doing it wrong.
MATLAB (the core language) is awesome for expressing matrices and vectors and their operations, as well as for visualizing the results. Matrix expressions in MATLAB look almost identical to how they look in mathematical notation (or how one might write them in an email). You shouldn't be using programming-language flow control (or any of the other programming-language features); you should be learning how to express for loops as vector and matrix operations, and learning from the excellent toolboxes.
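The same habit carries over to NumPy; a minimal sketch (illustrative values, NumPy rather than MATLAB since the thread's focus is Python):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, 2.0])

# Loop style, as a beginner might write it:
y = np.zeros(2)
for i in range(2):
    for j in range(2):
        y[i] += A[i, j] * x[j]

# Vectorized style, close to the blackboard notation y = A x:
y2 = A @ x

assert np.allclose(y, y2)
```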
When you care about the math, Mathematica. It's a replacement for several pages of hand-written math, or a chalkboard.
When you care about the result, MATLAB. It's a replacement for your calculator, and maybe Excel.
When you care about the resulting software? Python/Julia/Fortran.
Unicode support and a few other syntax niceties make translation from the blackboard to the editor nice and clean. Fortran is great, but legibility and easy tooling like (reproducible) package managers are paramount in teaching.
It's far more legible for numerics than a lot of languages, except maybe Julia and Chapel. Julia was driven in large part by teaching mathematics at MIT, and I think that shows.
Julia has the fancy stuff as well as being very legible, and it also has a nice REPL for instant feedback, which is useful for people learning (as well as multiple notebook implementations, Pluto.jl or Jupyter). Chapel has even more of the fancy stuff, like multiple loop types for different kinds of parallelism, which is just wildly cool.
But they all have Opinions, which they are compelled to share.
Why should I use Fortran for anything that isn't maintaining legacy code?
When is that fully bootstrapped rustc coming?
Calling it a DSL is a bit rich given the history of computing, but at least that way CS and developer types will know how to regard it.
For teaching linear algebra, MATLAB is unironically the best choice - as the language was originally designed for that exact purpose. The problem is that outside of a numerical methods class, MATLAB is a profound step backwards.
After using Character Map or other "user friendly" methods to enter Greek letters as symbols on a computer, I would say, yes, people struggle with the use of Greek letters. Unless, of course, one has a Greek keyboard.
It's best used for internal calculations where the symbols better match the actual math, and makes it easier to compare with the original reference.
I'm not dismissing Julia. Actually, the first lines of my conclusions are
> In the end, when it comes to teaching the basics of numerical linear algebra, Python and Fortran are not that different. And in that regard, neither is Julia which I really like as well.
I feel like people got the impression that I'm saying they *should* use Fortran for teaching. That ain't the point, but maybe I did not convey it as clearly as I would have liked. The point is: a programming language with strong typing, clear begin/end constructs, a guarantee that inputs to a function cannot be accidentally modified (otherwise it has to be a subroutine), etc., actually makes it easier for the students to effectively *learn* computational thinking rather than having to battle with syntax errors and the strange intricacies of a general-purpose language. Fortran is just an example, which happens to be historically related to number crunching.
Unicode support and Greek letters sure can be useful when presenting code snippets in your slides, but it essentially is syntactic sugar coating. And, unfortunately, many students have no idea how to spell Greek letters (e.g. \to for \tau, \fi for \phi, etc.) and just end up losing time on aesthetic details rather than focusing on the learning objective.
Finally, you may not know it but Fortran does have a package manager. Check out fpm (https://github.com/fortran-lang/fpm). It basically is just like Pkg for Julia, also using a toml manifest, can be installed via conda, makes sure things are reproducible across compilers and platforms, etc.
- Matlab in the first few science lab courses + first CS course.
- C++ in second CS course
- Fortran for the scientific computing course
I found Fortran worse than MATLAB. The error messages were difficult to parse, and it was much more difficult to do step-through debugging like in MATLAB.
But later I learned Python, and now use it professionally to do scientific computing, and I would give anything to go back to Fortran. Or use Rust or Julia. Or Wolfram/Mathematica, if that were possible. Anything but Python.
The fundamental problem with Python is that all the math is hacked into it, unlike Julia/Matlab/Mathematica where the math takes first priority and other values are secondary.
Julia would be easier to switch, but it's still months of work to port over existing libraries.
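A minimal sketch of the "math is hacked into it" point above, assuming nothing beyond base Python and NumPy:

```python
# Base Python arithmetic on sequences is not mathematical:
v = [1, 2, 3]
print(2 * v)    # [1, 2, 3, 1, 2, 3] -- repetition, not scaling

# The math has to be bolted on through a library:
import numpy as np
w = np.array([1, 2, 3])
print(2 * w)    # [2 4 6] -- scalar multiplication at last
```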
Teach them numerical algorithms and have the students contribute to a better language's BLAS, LAPACK, or numerical library like numpy, jax, scipy, etc. Be part of the solution.
But for anything with a userbase of more than ~15 people, C/C++ are widely preferred.
First time I saw this claim was over 9 years ago.
I will bet you any amount of money you like that Python will be more popular than Fortran in 5, 10, 20, and 30 years.
Also, the downside is that Fortran does not have nice plotting capabilities without an external tool. At least I know of no nice libraries like matplotlib, which again is a point in favor of just teaching them a more general-purpose language from the get-go, so they can plot and code in the same language... or perhaps MATLAB/Octave et al., as others suggested. I feel like the niceness of Fortran (namely, well-defined type-safe function/subroutine interfaces and an easy path to writing performant code) that isn't offered by Python is only useful after the first threshold of learning to put an algorithm to paper. The second (and arguably, for some fields, even more important) task of actually plotting results doesn't have the same convenience as the intrinsic procedures in Fortran, whereas if they had learned Julia or Python, these tools would at the very least be at the same convenience level as the array facilities, essentially behind a rather easy[^] to use library. In fact, in Julia, you're already mostly there, although it's not my cup of tea. Perhaps the answer is Julia after all.
Do OP's courses just use an external program (like gnuplot) or a black box to plot?
[^] easy to use once you know how to program a little, of course.
Personally, I've only used ogpf, which is a single-file library, making it easier to run for beginners.
[0] https://fortran-lang.org/packages/graphics/
That said, the point about these being external libraries, and thus a bit less convenient, still sort of stands: external libraries need to be linked, which exposes more CS-tier stuff (installing libraries, makefiles, etc.) that distracts from just learning to code, which again motivates using a tool that abstracts some of that behind a managed package and library system.
I'm assuming you could use things like LFortran in Jupyter, which I imagine might allow these things to be bundled, although I haven't followed that effort as of late.
Anyway, this discussion is all for the sake of teaching students. For a student learning Python, I assume they do not start from zero and instead are told to use Anaconda or some build script that gives them a standard Jupyter install. From then on, in the code they use, matplotlib and numpy appear on equal footing: a set of function calls that just have different prefixes, in their eyes. This surface-level similarity is what I mean by them appearing to have the same convenience level. The fact that Jupyter installs are pretty standard and have loads of documentation helps ameliorate potential issues during installation.
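A minimal sketch of that "equal footing" from the student's perspective (a generic plot, not from the course):

```python
import numpy as np               # arrays: one import
import matplotlib.pyplot as plt  # plotting: another import, same ritual

x = np.linspace(0.0, 2.0 * np.pi, 100)
plt.plot(x, np.sin(x))           # array math and plotting, side by side
plt.show()
```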
In Fortran, on the other hand, you do not need an external module for the array facilities (things like shape, dot_product, things for initialising arrays, etc.) given the built-ins and the first-class nature of arrays. However, you will need to `use` a module for plotting (fairly easy, essentially one line for importing), and link it and add it to the module path (potentially fraught). This is what I mean about them appearing on different footing, from a naive student's perspective.
While there are attempts to provide a nice package system for Fortran, it's generally the wild west out there, just like it is for C++ and C, so unless the instructor essentially has them work only on lab machines they control, using external libraries seems to me to be a huge source of headaches once students go off to install things themselves.
Mathematica, matlab, maple, octave, etc.
I agree array languages would be a great fit for GPGPU; however, they mostly seem to be interpreter-based implementations.
Not at all. Co-dfns came out ~5 years ago. Here's a discussion including the author: https://www.reddit.com/r/ProgrammingLanguages/comments/k258e...
Here's an article about many others: https://codereport.github.io/GPUArrayLanguages/ including interviews with some of their makers.
Why? In which uni do you not have an introductory programming class, which is a prerequisite for every other software-based class?
Just use the language of your introductory class; everyone will know it and everyone can think about the algorithms.
...however, discussion of Fortran is inevitably dominated by those who once saw some F77 code years back and associate Fortran with punch cards.
Fortran 90 was a big update of the language, and modern Fortran versions and compilers are very good indeed. Not to mention they can be used with the old tried and tested libraries which are bombproof.
There is a reason Fortran is still used. It's simple, and you can *really* trust it.
I should get back to Fortran but it has changed a lot over the years.
I'm with Dijkstra on this one. https://www.cs.utexas.edu/~EWD/transcriptions/EWD08xx/EWD831...
(Now, if you had a proposal for switching math over to 0-based indexing ...)
But I have also seen places where 1-based indexing was used despite being "obviously wrong". I don't quite recall what it was, but there was a sequence of objects A_1, A_2, ... and a natural way of combining A_k and A_l to get A_(k + l - 1). Had the indices been shifted by 1 to be 0-based, the result would have been A_(k + l), which would be much nicer to work with.
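The commenter doesn't recall the exact example, but polynomial multiplication is a plausible instance of this pattern: with 0-based coefficient indices, terms at indices k and l contribute to index k + l, with no off-by-one correction. A hypothetical sketch:

```python
def poly_mul(a, b):
    """Multiply polynomials given as 0-based coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for k, ak in enumerate(a):
        for l, bl in enumerate(b):
            out[k + l] += ak * bl   # indices simply add: no "-1" bookkeeping
    return out

# (1 + x) * (1 + x) = 1 + 2x + x^2
print(poly_mul([1, 1], [1, 1]))     # [1, 2, 1]
```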
> The above has been triggered by a recent incident, when, in an emotional outburst, one of my mathematical colleagues at the University —not a computing scientist— accused a number of younger computing scientists of "pedantry" because —as they do by habit— they started numbering at zero.
C++ also has by far the best support for graphics (OpenGL/Vulkan) and GUI toolkits like Qt.
That's what sucks in Python as a newbie. I wanted to play with random, which the Python docs say is a builtin. Obscure error message. Searched the internet, no fix in sight. After more searching I found out that I actually have to "import random" (builtin??) and use a method? Wtf.
So definitely Python is not for newbies. Fortran code is much easier to understand.
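For the record, the stumbling block is that "built in" means "ships with Python", not "in scope by default"; the module still has to be imported:

```python
try:
    random.random()          # used before importing, as a newbie might try
except NameError as err:
    print(err)               # name 'random' is not defined

import random                # the fix: "built in" still means "must import"
print(random.random())       # a float in [0.0, 1.0)
```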
We learned the value of modularity decades ago. This is a sign of Fortran's age, not a good thing. Very few modern languages that aren't heavily domain-specific will have more than a bare minimum of functions in scope by default, without needing to import them or a module containing them. The only notable counterexample I can think of is PHP, and even it has grown namespaces over the years.
As someone who regularly teaches intro programming using Python, I assure you that students learning Python need to worry both about types and about syntax, and the fact that both are invisible does them fewer favours than you might think. Type errors happen all the time in Python, but they aren't caught until runtime and only when given the right test cases, and the error message points somewhere in the program that may be quite distant from the place where the problem actually is. Syntax errors are less common for experienced programmers, but newcomers struggle just as much with syntax in Python as they do in languages like C++ and Java (both of which I've also used to teach intro programming).
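A small sketch of the kind of runtime-only type error meant here (illustrative, not from the course):

```python
def norm(v):
    # Intended for a sequence of numbers.
    return sum(x * x for x in v) ** 0.5

print(norm([3.0, 4.0]))      # 5.0 -- works fine with good input

try:
    norm("34")               # a string slips through: no complaint until now
except TypeError as err:
    print(err)               # can't multiply sequence by non-int of type 'str'
```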
Pure Python has a tendency to silently widen every floating-point type to double. NumPy overlays a C ABI on top of Python's oversimplified type system, which complicates matters further.
I wouldn’t teach numerical linear algebra in any weakly typed language.
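A minimal sketch of the widening behaviour described here, assuming only NumPy and the standard library:

```python
import math
import numpy as np

x32 = np.ones(3, dtype=np.float32)
x64 = np.ones(3)                  # NumPy's default dtype is float64

y = x32 + x64                     # mixing the two...
print(y.dtype)                    # float64 -- the float32 side was promoted

s = math.sqrt(x32[0])             # the stdlib always hands back a
print(type(s))                    # <class 'float'>, i.e. a C double
```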
The professor is an excellent teacher and students love him. But other faculty worry that our students leave without any real exposure to Python, and that they keep reproducing this heavy OOP style even in situations where a simple lambda or a few lines of NumPy would be far more natural. (Lambdas are shown at the very end of the course, but mostly as a curiosity.)
That’s why I found the blog post so interesting: it shows how natural code can look in Python or Fortran. By contrast, our students’ code is weighed down by boilerplate. It makes me think that sometimes the real difficulty in teaching numerical analysis isn’t the language itself, or whether arrays start at 0 or 1, but the teaching approach that frames every problem through layers of abstraction.
Using C++ as a pedagogical tool for general teaching of such methods would be quite a choice.
That’s why some of us wonder whether a lighter approach (lambdas, NumPy in Python, etc.) might let students focus more directly on the numerical methods without so much boilerplate.
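A hypothetical before/after of that lighter approach (the class and names are made up for illustration):

```python
import numpy as np

# The heavy style: a class where a function would do.
class Integrand:
    def evaluate(self, x):
        return np.sin(x) ** 2

# The lighter style: the function *is* the object.
f = lambda x: np.sin(x) ** 2

x = np.linspace(0.0, np.pi, 100_001)
assert np.allclose(Integrand().evaluate(x), f(x))

dx = x[1] - x[0]
print(np.sum(f(x)) * dx)          # ~ pi/2, via a crude Riemann sum
```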
Have you seen this? https://cppyy.readthedocs.io/en/latest/
At the same time, I find most of the arguments bizarre, and others harmful.
From the bizarre region: students asking "What is 'import numpy as np'?". A legit question. If you want to avoid all technicalities, pen and paper is the right approach. All in all, they need to run code somehow, and there will be questions like "What is 'implicit none'?"
If they are not asking such questions, it is not because they focus so much on linear algebra. It's because they are lost, or at the very least have lost interest.
From the ones that are (in my opinion) actively harmful:
> It’s about teaching the basics of scientific computing to engineering students with a limited programming experience.
This is a big red flag. If they have little background in programming, teach good standards of Python. Otherwise they will pick up the worst patterns and think that GOTO is the preferred way of coding.
If you wonder why a lot of academic code looks like a mess, one of the reasons is using archaic tools and techniques as a standard.
Also, if your focus is linear algebra and you want to avoid magic, you CAN do it without any numpy, just Python lists.
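For instance, a matrix-vector product needs nothing but lists and a loop; a minimal sketch:

```python
def matvec(A, x):
    """Multiply a matrix (a list of row lists) by a vector (a list)."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[2.0, 0.0],
     [1.0, 3.0]]
print(matvec(A, [1.0, 1.0]))   # [2.0, 4.0]
```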
> This is a big red flag. If they have little background in programming, teach good standards of Python.
That is a big issue, I'll admit it. And actually, in my uni, I am a very strong proponent of making an intro-to-programming class mandatory before even being able to enroll in any other engineering classes. Unfortunately, I don't see this happening anytime soon, mostly because the higher-ups have a distorted view of what programming actually is. And I believe it is the same in many French universities. Maybe it's different elsewhere, I don't know, but in the meantime I have to make do with what I have.
> Otherwise they will get worst patterns, and think that GOTO is the preferred way of coding.
People keep referring to `goto`. But that is a construct that has been considered bad since the Fortran 90 standard, 35 years ago. Fair enough, there are plenty of legacy codes using it. Just like there is plenty of C code written 35 years ago which is just terrible by today's standards. Modern Fortran (and by modern I mean anything following the 2003 standard and more recent) does not use `goto`. And academics writing Fortran code today do not use `goto`, nor do they teach it.
> If you think why a lot of academic code looks like mess, one of the reason is using archaic tools and techniques as a standard.
Again, I think this is a very distorted view. When you need to run your code on 1000+ CPUs, which is what I and my colleagues do, you had better make sure your code follows the most recent good practices and uses industry-standard tooling, so as not to waste CPU hours because of a stupid bug or whatnot.
The "numerical" part is a minefield because it will take all your math and demolish it. just about every theoretical result you proved out above will hold not true and require extra-special-handholding in code to retain _some_ utility.
As such, I think a language which enables you to go as fast as possible from an idea to seeing if it crosses the numerical minefield unscathed is the one to use, and these days that is Python. It is just so fast to test a concept, get immediate feedback in the form of plotting or just plain dumb logging if you like, and you can nearly instantly share this with someone even if you're on ARM + Linux and they are Intel + Windows.
The most problematic issue with Python & numpy, as it relates to learning the _numerical_ side of linear algebra, is making sure you haven't unintentionally promoted a floating-point precision somewhere (for example: if your key claim is that an algorithm works entirely in a working precision of 32 bits but Python silently promoted your key accumulator to 64 bits, you might get a misleading idea of how effective the algorithm was). But these promotions don't happen in a vacuum, and if you understand how the language works, they won't happen.
edit: & I have worked professionally with fortran for a long time, having known some of the committee members and BLAS working group. so I have no particular bias against the language
Fortran is fantastic for numerical linear algebra because you see the interplay of hardware and algorithm design.
Scilab is fantastic to see how all the pieces come together.
This was the path I took, before going to Python, Go, and Rust.
Also, the swapping of u and tmp doesn't work like that in Python. It might in Fortran.
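Presumably the pitfall is that rebinding names in Python does not copy array data; a hypothetical reconstruction (the article's snippet isn't quoted in the thread):

```python
import numpy as np

u = np.zeros(3)
tmp = np.ones(3)

u = tmp             # rebinds the name: u and tmp are now the SAME array
tmp[0] = 99.0
print(u[0])         # 99.0 -- u "changed" too; nothing was copied

u = tmp.copy()      # an explicit copy (or u[:] = tmp) gives value semantics
tmp[0] = -1.0
print(u[0])         # 99.0 -- u is now independent
```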
They always want to teach the more elegant and divine method that, in reality, nobody uses.
I studied Pascal and Fortran when they could have taught me more widely used languages. Shame on them.
If Fortran is so useful and clear, just offer some lessons on it. Surely students will be enlightened and captivated by it.
Practicality beats purity.