Why Study Programming Languages (2022)
Posted 3 months ago · Active 3 months ago
Source: people.csail.mit.edu · Tech story · High profile
Tone: calm / mixed · Debate: 70/100
Key topics
Programming Languages
Language Design
Software Development
The article discusses the value of studying programming languages, sparking a discussion on the motivations behind designing new languages and the relevance of language study in the era of LLMs.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h after posting
Peak period: 51 comments in the 6-12h window
Average per period: 15.6 comments
Comment distribution: 109 data points (based on 109 loaded comments)
Key moments
- Story posted: Oct 14, 2025 at 1:36 AM EDT (3 months ago)
- First comment: Oct 14, 2025 at 2:47 AM EDT (1h after posting)
- Peak activity: 51 comments in the 6-12h window, the hottest stretch of the conversation
- Latest activity: Oct 16, 2025 at 12:33 PM EDT (3 months ago)
ID: 45576623 · Type: story · Last synced: 11/20/2025, 4:17:19 PM
Because we can. Because a compiler is nothing more than a fancy text translator.
Outside affine types, all the praise for Rust's type system traces back to Standard ML from 1976.
The heresy of doing systems programming in GC-enabled programming languages (GC in the CS sense, including RC) goes back to research at Xerox PARC, DEC, and ETHZ in the late 1970s and early 1980s.
Other things that we know, like dependent types, effects, formal proofs, capabilities, and linear and affine type systems, are equally decades old, dating from the 1980s and early 1990s.
Unfortunately, while we have progressed, it seems easier to sell stuff like ChatGPT than better ways to approach software development.
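For readers who have not used an ML-family language, here is a minimal, purely illustrative sketch (in Rust, since it comes up throughout the thread) of the features the comment traces back to ML: algebraic data types, exhaustive pattern matching, and type inference. The types and names are invented for the example.

```rust
// Illustrative only: ML-heritage features as they surface in Rust.
// Sum types, exhaustive pattern matching, and type inference all
// predate Rust by decades (the ML family, 1970s-80s).

enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // The compiler checks that every variant is handled
    // (exhaustiveness), a property inherited from ML-style sum types.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    // The element type of `shapes` is inferred; no annotation needed.
    let shapes = vec![
        Shape::Circle { radius: 1.0 },
        Shape::Rect { w: 2.0, h: 3.0 },
    ];
    let total: f64 = shapes.iter().map(area).sum();
    println!("total area = {total}");
}
```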
Rather from the sixties. E.g. OOP, including dynamic dispatch, late binding, GC, etc., appeared in 1967 with Simula.
Edit: yes, yes it does describe the garbage collector.
The internet needs wires and routers, distributed computing needs a good network (i.e. the internet), current-day AI needs GPUs, and GPUs need silicon chips that defy the laws of physics. Really, looking at the EUV lithography process makes all of computer science feel insignificant by comparison; everything about it is absurd.
The real progress is that now we can implement the ideas from the 70s, the good ones at least. I don't want to diminish the work of the founders of computer science, there is real genius there, but out of the billions of people on this planet, individual geniuses are not in short supply. The real progress comes from the millions of people who worked on the industrial complex, supply chains, and trade that led to modern GPUs, among everything else that defines modern computing.
For this, we could look at intellectual property laws. Ideas are not protected: neither by patents, nor copyright, nor trademark. If you want your idea to count in the eyes of the law, you have to "fully specify" it, turning it into an invention (patent) or code (copyright).
Not to promote FP, but imperative/stateful versus closure/function-oriented programming is quite a strong example of that.
A different paradigm can really be a massive intellectual tool.
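As a concrete (and deliberately small) illustration of that contrast, here is a sketch in Rust; the function names are invented for the example, not taken from the thread.

```rust
// Imperative, stateful style: mutate an accumulator step by step.
fn sum_of_even_squares_imperative(xs: &[i64]) -> i64 {
    let mut total = 0;
    for &x in xs {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

// Closure/function-oriented style: describe the computation as a
// pipeline of small functions; no mutable state visible to the caller.
fn sum_of_even_squares_functional(xs: &[i64]) -> i64 {
    xs.iter()
        .filter(|&&x| x % 2 == 0)
        .map(|&x| x * x)
        .sum()
}

fn main() {
    let data: [i64; 6] = [1, 2, 3, 4, 5, 6];
    assert_eq!(
        sum_of_even_squares_imperative(&data),
        sum_of_even_squares_functional(&data)
    );
}
```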
As it became easier to abstract away from pure assembly, people did just that. Some ideas stuck, some converged, and the ones with the most tenacious, adaptable and attractive communities remained.
Before I learned to code, no programming language was even remotely readable to me. But the more I learned, the more I could shed the notion that this was purely my fault, and accept that sometimes things are a certain way because someone found it interesting or useful. Applies to natural languages and mathematics, too.
It's super important because those concepts get measured, and absorbed into existing languages (as best they can), but that wouldn't have happened without the new languages
New concepts like Rust's "ownership model", Smalltalk's "Object Orientation", Lisp's "Functional programming", Haskell's "Lazy evaluation", Java's "Green threads"
Rust's "ownership model" is a simplification of Cyclone, AT&T's research on a better C, based on a mix of affine and linear type systems.
https://en.wikipedia.org/wiki/Cyclone_(programming_language)
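A minimal sketch of the affine ("use at most once") discipline behind Rust's ownership model, which the comment traces to Cyclone and to linear/affine type systems; the example itself is illustrative, not drawn from Cyclone.

```rust
fn consume(s: String) -> usize {
    // Taking `s` by value moves ownership into this function.
    s.len()
}

fn main() {
    let message = String::from("hello");
    let n = consume(message); // ownership moves here
    println!("{n}");

    // Using `message` again would be rejected at compile time:
    // println!("{message}");   // error[E0382]: borrow of moved value
}
```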
Haskell's "Lazy evaluation" was present in Miranda, before all the related researchers came up with Haskell as a common playground.
https://en.wikipedia.org/wiki/Miranda_(programming_language)
"History of Haskell"
https://www.microsoft.com/en-us/research/wp-content/uploads/...
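To make the laziness idea concrete without switching languages, here is a small Rust sketch: Rust's iterators are lazy by design, so an unbounded sequence is fine as long as only a finite prefix is demanded. (In Haskell and Miranda, laziness is pervasive rather than opt-in; this is only an approximation of the concept.)

```rust
fn main() {
    // Nothing is computed yet: this only builds a description of the work.
    let squares = (1u64..).map(|n| {
        println!("computing {n}^2");
        n * n
    });

    // Only the first three elements are ever evaluated.
    let first_three: Vec<u64> = squares.take(3).collect();
    println!("{first_three:?}");
}
```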
Java's "Green threads" go back to systems like Concurrent Pascal.
https://dl.acm.org/doi/10.1145/775332.775335
And, I think, being better for what that audience is trying to do than existing tools. (But maybe that was implied in your statement.) This also implies adequate tooling and libraries.
And publicity, to reach that audience (though viral is better than corporate).
I sincerely want to ask if this was an ironic comment given the topic? Because obviously none of these core concepts were really new to the languages you ascribe them to.
To avoid feature bloat from unconnected pieces of ad-hoc syntax, Java does the right thing and focuses on expressing easily composable language building blocks.
As a last-adopter language, Java has the luxury of cherry-picking good features from other languages, but I think their line of thinking should be the reference moving forward.
>we create programming languages to experience new ideas; ideas that would have remained inaccessible had we stayed with the old languages.
A lot of my early expertise in performance analysis was heavily informed by my SIGPLAN membership. Many of the improvements showing up in compilers and interpreters would show up in some form there, and of course those developers were either involved in the papers or had access to the same research. So when some new version came out with a big explanation of what it did, I already had a reasonably good notion of how it worked.
It was a dark day when they got rid of the paper proceedings.
Did you.... just quote Blade? :-)
But my favorite will always be
“Some motherfuckers are always trying to ice skate uphill.”
> I encourage everyone to create the most absurd, implausible, and impractical languages. Chasing the measurable is often useful, expressing the expressible is insightful, but never forget the true goal of language design: to explore and create what isn’t.
Sorry, but this sounds more like an arts class to me. Don't get me wrong, there was a point in time where exploration of the unknown was the only way to move forward. But these days we would need greater insight into higher-level language semantics and inherent tradeoffs to guide language design and evolution.
There is plenty to choose from and one can learn already so much just by reading up on the Java-EG mailing lists. Brian Goetz has a true academic mindset and I frequently feel inspired when I read his reasoning which is both highly structured and accessible.
Otherwise we would just be left with another compiler class. Compiler basics really aren't that difficult.
Indeed, it is, and that's the point! Being interfaces to computers for humans, programming languages sit at the intersection of computer science and humanities. Lots of people like to treat programming languages like they're math class, but that's only half the picture. The other half is usability, ergonomics, learnability, and especially community. Not to mention the form of the language is all about aesthetics. How many times has someone on Hacker News called a language "beautiful" or "ugly" referring to the way it looks? When people praise Python they talk about how easy it is to read and how pleasant it is to look at compared to C++. Or look at what people say about Elm error messages versus C++ template errors. Actually a lot of what's wrong with C++ could have been averted if the designers had paid more attention in art class.
> But these days we would need greater insights into higher-level language semantics and inherent tradeoffs to guide language-design and language evolution.
Here's a talk that argues there's much more fertile ground for language ideas outside of the "programming languages are math" area, which has been thoroughly strip-mined for decades:
https://medium.com/bits-and-behavior/my-splash-2016-keynote-...
This author takes the perspective that programming languages are much greater than the sum of syntax + semantics + toolchain + libraries, and that treating them as merely that sum limits their potential.
That just is not true at all. These are all legitimate engineering tradeoffs, which any serious project has to balance. Calling this "aesthetics" is completely dishonest. These aren't arbitrary categories, these are meaningful distinctions engineers use when evaluating tools to write software. I think the students better understand what programming languages are than the teacher.
If you accept that a programming language is a tool and not just an academic game of terms, then all these questions have clear answers.
Agree, and we actually have both the standards and the established methods to conduct representative tradeoff studies. But this knowledge is mostly ignored by CS and by programming language design. Even for Ada, there was little empirical evidence for specific design decisions. A systematic survey found only 22 randomized controlled trials of textual programming language features conducted between the early 1950s and 2012, across six decades. This staggering scarcity explains why language designers rely heavily on intuition rather than evidence (see e.g. https://www.cs.cmu.edu/~NatProg/programminglanguageusability...).
No, actually. Why is that important? I don't quite see why that is relevant. Could you elaborate?
Firstly, when using them to create software it's pretty obvious that experienced devs and people who understand theory have a greater ability to guide, curate and control them.
Secondly, as they improve in ability we can see a paradigm change for people using them at least as significant as the jump from assembly to high level languages. Most programmers would have no need to study assembly these days.
Either way, their omission (while appropriate for the year, if somewhat lacking in foresight) is a significant one that renders it somewhat dated already.
Edit: I assume this comment gets downvoted because people don't like where we are heading, not because they really think LLM programming capabilities won't continue to improve at a staggering pace.
The error rate of models makes language design, tooling, testing methodology, and human review more important than ever before. This demands language evolution. You could get far with lax testing and language tooling given enough caution and skill, but when LLMs enter the picture, that no longer flies.
We need tooling, static analysis, testing paradigms, and language design that restrict how dangerously the LLM is allowed to act.
Natural language is far too fuzzy to replace programming (system specification is already a famously impossible thing to do right). If you think it truly will replace code, I strongly suspect you work in web design, where testing and reliability were always a secondary concern.
And even then, I think we're already on the convergence plateau of LLM code. The companies are raising prices amid diminishing improvements and ballooning compute costs.
I think it is more interesting to see which languages are still used today and how popular these are. Because this is also tied to the human user/developer.
For instance, I used BASIC when I was young - not as a professional but as a hobbyist. I liked it too. I wouldn't use BASIC today because it would be entirely useless and inefficient.
I started with BASIC too. Also enjoyed BlitzBasic2 for a long time on the Amiga. That's where I learned programming… back then when programming was still fun.
It fell out of fashion, along with Pascal, Perl, and Ruby, but that's just fashion.
Because BASIC simply doesn't have first-class functions, and they would be quite hard to represent in a BASIC-like syntax while keeping the language idiomatic. Even the unreasonably clunky C pattern of having a pointer to a function taking void* as its first argument (to account for closure captures) gets you a whole lot closer to functional programming than even the fanciest BASICs.
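A small sketch of the contrast that comment describes, written in Rust for illustration: an explicit context-plus-function-pointer callback (the C-style pattern) next to a higher-order function taking a closure that captures its environment directly. All names are invented for the example.

```rust
// C-style: the callback cannot capture anything, so state has to be
// threaded through an explicit context argument.
fn for_each_c_style(xs: &[i32], ctx: &mut i32, callback: fn(&mut i32, i32)) {
    for &x in xs {
        callback(ctx, x);
    }
}

// With first-class functions: the closure carries its own captured state.
fn for_each_closure(xs: &[i32], mut callback: impl FnMut(i32)) {
    for &x in xs {
        callback(x);
    }
}

fn add(acc: &mut i32, x: i32) {
    *acc += x;
}

fn main() {
    let xs = [1, 2, 3];

    let mut total_c = 0;
    for_each_c_style(&xs, &mut total_c, add);

    let mut total = 0;
    for_each_closure(&xs, |x| total += x);

    assert_eq!(total_c, total);
}
```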
I don't have to relearn natural language every 5-10 years, but for some reason I'm expected to when it comes to programming.
How can a language be "inefficient"? You could say it lacked expressiveness. Maybe it was too verbose? But I would not place BASIC in the "verbose" category.
https://learn.microsoft.com/en-us/dotnet/visual-basic/
https://www.xojo.com/
https://www.mikroe.com/mikrobasic-pic
Just the three that come to my mind, of BASIC toolchains that people actually pay real money to use.
One thing I've learned over the years is that the language is almost irrelevant, the tooling and 3rd-party library support are much more important.
I wish even only half the OOP world actually understood it as the above.
IMHO, it's not that the ideas were bad, but that the execution of them was. The ideas were too difficult/unfinished/not battle-tested at the time. A desire for premature optimisation without a full understanding of the problem space. The problem is that most programmers are beginners, many teachers are intermediate programmers at best, and managers don't understand what programmers actually do. Skill issues abound. "Drive a nail with a screwdriver" indeed.
Nowadays, Round-Trip Engineering might be ready for a new try.
There is no reason to study programming languages in 2025, other than as a historical curiosity - the same way one may study languages equally as pitiable as e.g. COBOL, Lisp, or MIPS assembly.
I call it, "wetware LLM prompt engineering".
Wouldn't LLM need to directly output machine code for that to be true?
[0] https://modelcontextprotocol.io/specification/2025-06-18/ser...
> Wouldn't LLM need to directly output the correct machine code for that to be true?
Why? Consider Gemini's recent performance at the International Collegiate Programming Contest [0], in which it solved a problem that no other human team was able to solve.
Wetware intelligence is itself obsolete, at least as concerns the domain of computing.
[0] https://deepmind.google/discover/blog/gemini-achieves-gold-l...
That problem: https://worldfinals.icpc.global/problems/2025/finals/problem...
Well LLMs finally offer that, and what they are proving is what programmers have known for decades -- natural language is a terrible way to specify a program to a computer. So what is happening in the LLM world is they are reinventing programming languages and software engineering. They're just calling it "prompt engineering" and "context engineering".
What this tells us is that natural languages are not only insufficient for the task of programming; to make them sufficient you need to bring back all the properties you lost by ditching the programming language. Things like reliability, reproducibility, determinism, and unambiguity are thrown away when you use an LLM, and context engineering / prompt engineering are ways of trying to get them back. They won't work well. What you really want is a programming language.
Downthread there is an example of an ICPC problem statement [0], given as natural language (modulo some inequalities and example program inputs/outputs), which was sufficient for Gemini to devise and implement the correct solution where no human team could.
[0] https://worldfinals.icpc.global/problems/2025/finals/problem...
I also see two graphics, and several formal mathematical expressions. You can't modulo away all the not-natural language and then claim natural language alone was sufficient. I presume these things were added by the authors to increase clarity of the problem statement, and I agree with them. They used formal languages to specify all the important parameters in an unambiguous way, which was the right call! Otherwise we would all be left wondering at the semantics.
Anyway, I don't think this really responds to my point, because competition prompts are designed to be self-contained problem statements that are as clear and unambiguous as possible for competition purposes. And in this case, they switched to speaking in a formal language when being precise and unambiguous was most important.
On the other hand, my statement was about the task of programming, which typically involves solving ill-defined problems with specifications and requirements that shift over time. I've been in programming competitions, I've also been a programmer, and I don't find one to be really related to the other.
Human factors are very well studied and standardized, and there is a well-established discipline called "Human Factors Engineering", which also provides established test and evaluation methods. Human Factors research is considered solid and well-established because it has been built on rigorous experimental psychology and engineering principles developed over more than a century, with systematic methodology and empirical validation. Even if much of it is unknown or ignored by computer science or programming language design, there are many disciplines where Human Factors Engineering is critical (see e.g. ANSI/AAMI HE75).
Usability is therefore neither ill-defined nor hard to measure. Several ISO 9241 series standards address textual and command-based interaction directly relevant to programming language assessment. ETSI EG 202 116 provides extensive human factors guidelines for command language style. ITU-T Recommendation Z.361 provides guidelines for Human-Computer Interfaces in telecommunications management, incorporating the ISO 9241-10 dialogue principles. ISO/IEC TR 24772 addresses programming language vulnerabilities specifically from a human factors perspective.
E.g. Ada did have substantial human factors considerations documented in its design rationale, directly contradicting the notion that such principles don't apply to professional developers or programming languages. It's rather that computer science seems to continue ignoring established fields ("Not invented here"). Human factors in software development have been overlooked by researchers in the software engineering and development research areas, despite their obvious relevance. So what is lacking is primarily interest or willingness, not the fundamentals and means.
Also Alan Kay and the Xerox PARC team designed Smalltalk (as Papert did before with Logo) with profound human-centered considerations, and they even "tested" their early concepts with children.
Also, some other languages explicitly state human-centered design goals (e.g. Python, Eiffel), but as with Pascal or Ada the approach was based more on expert judgment, formal analysis, and established principles than on practical studies.
People who’ve spent a long time programming have spent a long time optimizing everything about their work, and they’re willing to talk about it (most of them won’t shut up about it, even).
They may not have used standards such as the gp comment mentions, but they definitely considered human factors a lot.
E.g. TIMTOWTDI - There Is More Than One Way To Do It.
But that's not the only area in which they applied it.
Yes. In fact, in the early days, there were tools with names like sed2perl and awk2perl, or similar, IIRC. And those could convert code between their respective source and target languages.
And even easier if you know C and Unix.
A far more important measure is something like function points per month of developer time. See page 49 of https://www.ifpug.org/wp-content/uploads/2017/04/IYSM.-Thirt... for some data on that. Ada 95 did pretty well, at around 11. Perl beat it with 15.
Of course that measure isn't perfect. Excel trounced both with over 30. But Excel is only appropriate for some kinds of projects.
This kind of seems like it's focusing too much on my exact word choice and less on the actual intent of the question behind it. The question I have is why following established principles should matter; I don't think it should be particularly surprising that someone might assume that making a language more usable for humans would be related to the number of humans who end up deciding to use it, and if that's not the case, I wanted to understand why my intuition is wrong.
> there are still industries where it is important to reduce the human tendency to make mistakes by taking appropriate measures. Ada was created for such an industry from the very beginning
This is a good point that I hadn't considered; it definitely makes sense to me that some domains might be less tolerant to human errors than others, and those domains would better reflect how well-designed a language is for humans.
> The "level of suffering" experienced by most people is probably simply not great enough to systematically take such aspects into account. But there are still industries where it is important to reduce the human tendency to make mistakes by taking appropriate measures.
Reading this part a couple of times, I think this might be where the nuance lies. My colloquial understanding of what it means for something to be ergonomic (and even of what "level of suffering" would mean) isn't quite the same as the measurement of how likely something is to induce human error. This might just be a case where the common use of the term isn't the same as how it's used inside the field of study, but I would have expected that the ergonomics of a language and the measurement of the "level of suffering" would be with respect to the programmer, not the one experiencing the use of the software that's developed as a result. That isn't to say I disagree with the idea that the end-user experience should ultimately be more important, but I think that might account for the disconnect between what you're describing here and what I would have expected from a discussion around "programming language ergonomics" (which also might explain the difference between Ada and the other languages mentioned in this thread).
Apparently I still don't understand your question, sorry. For what I understand, following established principles is part of the engineering profession; it has proven to be the right thing to do over decades, and it is part of engineering education.
> I would have expected that the ergonomics of a language and measurement of the "level of suffering" would be with respect to the programmer, not the one experiencing the use of the software that's developed as a result.
Usually it is not the "level of suffering" that is measured in human factors engineering, but the time needed and the degree of fulfillment of typical tasks that a typical representative of a test group is supposed to perform. You can do that with different designs and can then conclude which one meets the performance requirements best. Human factors typically enter a specification as performance requirements (what functions shall the system implement and how well). Given a programming language, you could measure how long a typical programmer requires to complete a specific task and how many errors the implementation has in its first version.
I agree that following established principles is important, but my understanding is that the principles get established because they're better at leading to desirable outcomes. I'm trying to understand what the outcomes are that the principles you describe are intended to lead to. From your most recent two replies, my best interpretation is that it leads to fewer errors overall in the programs produced, but that wasn't as apparent to me from your first comment. I do think I understand now though.
The application of the principles of Human factors engineering to the design of systems reduces human errors, increases productivity, and enhances safety, health and comfort when interacting with these systems. For a programming language, taking human factors into account appropriately means that the target group of language users (i.e. programmers) is sufficiently capable of performing their tasks in all phases of a program's life cycle, e.g., they are not cognitively overwhelmed, and the likelihood of misunderstandings or mistakes is reduced. However, they should neither be unnecessarily restricted or hindered in their work, because also this creates unnecessary extraneous cognitive load that exhausts the programmer's limited working memory capacity. Human working memory can hold only 3-5 "chunks" of information simultaneously. This is a well-documented, hard biological constraint; when programming languages impose excessive formalism, they force programmers to juggle more mental "chunks" than working memory can handle. Self-explanatory code (which includes avoiding incomprehensible abbreviations or confusing syntax) reduces the cognitive load on the programmer. Ada's explicit human factors principle states: "Code is read more than written"; over a program's lifetime, especially in large, long-lived systems, code is read orders of magnitude more often than it's written; Ada's formalism optimizes for the more frequent activity (reading and maintenance) at the expense of the less frequent activity (initial writing). As a language designer, you therefore have to find the right balance, which of course is a function of your target audience, and the primary activities they will perform with the language.
Ada was designed with the goal of improving software reliability, potentially at the cost of other factors, like programming speed. Real-life projects using it demonstrated that the error rate of large Ada projects was around half that of equivalent C or FORTRAN projects.
However popularity is determined by other factors. Such as personal productivity, and accessibility for novices. These other factors are often in direct opposition to long-term maintainability. It is good for productivity to be able to do things in whatever way is convenient. But that flexibility is a burden for the maintenance programmer. Likewise novices frequently create useful code, which becomes hard to maintain down the line. (Excel spreadsheets are a classic example of this.)
The result is that Ada was a good fit if you had a large project, time to market was not your top issue, and reliability and maintainability were top concerns. Many defense projects have these exact characteristics, which is why Ada was developed and used there.
But consider a startup. Projects are small. The top concern is time to market. Maintainability will only become an issue if your product launches, gets to market, and succeeds. That's a future problem that can take care of itself.
That's why, when you look at startups, you find a high density of scripting languages. Which language has changed over time. For example Amazon used Perl, Facebook used PHP, and Instagram used Python. But all scripting languages share the similar characteristic of having fast initial development times, and poor long-term maintainability. (Yes, even Python. Internal data showing that is why Google began walking away from Python around 15 years ago.)
The main answer is that we have only a limited ability to modernize existing programming languages. For example, most languages are not null safe, because most languages are old and we can't make them null safe without breaking backward compatibility with most existing code. And we can't break backward compatibility for practical reasons. So Java will never be null safe, PHP will never be strongly or statically typed, etc.
So for fundamental language features, replacing older languages is the only way to achieve progress. Unfortunately that's a very slow process. Python, the currently most popular language, is already over 30 years old.
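As a sketch of what null safety buys when it is built into the type system (shown here with Rust's Option purely for illustration): the possible absence of a value is part of the signature, so forgetting to handle it is a compile-time error rather than a runtime null dereference.

```rust
fn find_user(id: u32) -> Option<String> {
    // Stand-in lookup; in a null-unsafe language this would return a
    // possibly-null reference with nothing in the signature saying so.
    if id == 42 {
        Some("alice".to_string())
    } else {
        None
    }
}

fn main() {
    // The compiler forces the caller to acknowledge the None case.
    match find_user(7) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }
}
```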
But that turns into the trap of short-term thinking - eventually you reach the point where you would have been better off throwing it away and starting over. You don't reach that in the year you throw it away, though, nor in the year after.
The catch is you will also have a bit of old code that cannot reasonably be modernized, and new code has to somehow interoperate with it. This means languages can't break anything, because it might be the one thing you can't figure out how to stop using, even though you know better and would do it differently if you started today. Worse, often the problem is an early design decision, so the bad practice is everywhere and you can't get rid of it in any one place because everything depends on it.
“This class is about the study of programming languages.”
Where is that class?
This reminds me of recreational math & gamedev, you simply do whatever you feel is fun and design it exactly as you'd like it to be.
When I was learning Rust I started out just associating patterns with lib types. Need to dynamically hold items? Vec. Need a global mutex? Install lazy_static.
This is fine if you're beginning, but at some point you need to read about why people choose this. 9/10 times there's a more elegant option you didn't know about because you just did what everyone else does. This separates programmers from coders.
The only reason I learned this was because my current company has programmers, not coders. I learned a ton from them
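One concrete instance of the "more elegant option" point: on recent Rust toolchains a global mutex no longer needs the lazy_static crate, because Mutex::new and OnceLock can be used in statics. This is a sketch assuming Rust 1.70 or newer; the globals themselves are made up for the example.

```rust
use std::sync::{Mutex, OnceLock};

// Const-initialized global: no macro, no extra crate.
static EVENTS: Mutex<Vec<String>> = Mutex::new(Vec::new());

// Lazily initialized global, for values that need runtime work.
static CONFIG: OnceLock<String> = OnceLock::new();

fn main() {
    EVENTS.lock().unwrap().push("started".to_string());

    let mode = CONFIG.get_or_init(|| {
        std::env::var("APP_MODE").unwrap_or_else(|_| "dev".into())
    });
    println!("mode = {mode}, events = {:?}", EVENTS.lock().unwrap());
}
```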
However, as a trench-line coder, I enjoy dabbling in languages to learn different techniques for achieving a similar set of goals without sacrificing pragmatism. In that sense, I rarely have the luxury to explore purely for exploration’s sake. So I wouldn’t describe abstraction, performance, or usability as “aesthetics,” nor would I spend time on a frivolous language that I know won’t gain much traction outside academia.
I like reading the perspectives of academics just to see how wildly different they are from those of the people I work with in the industry. This is probably a good thing.
Been there, done that: https://esolangs.org/wiki/Ziim
So to me, the study of languages was interesting from this DSL perspective.
5 more comments available on Hacker News