Scheme Reports at Fifty
Posted 3 months ago · Active 2 months ago
crumbles.blog · Tech · story
Tone: calm / mixed · Debate: 40/100
Key topics
- Scheme Programming Language
- Language Design
- Standardization
The article 'Scheme Reports at Fifty' reflects on the history and future of Scheme language standardization, sparking discussion on the language's design and the value of standardization.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
First comment: 4h after posting
Peak period: 6 comments in 6-8h
Avg / period: 3.6
Comment distribution: 25 data points (based on 25 loaded comments)
Key moments
- 01 Story posted: Oct 19, 2025 at 10:45 AM EDT (3 months ago)
- 02 First comment: Oct 19, 2025 at 2:15 PM EDT (4h after posting)
- 03 Peak activity: 6 comments in 6-8h (the hottest window of the conversation)
- 04 Latest activity: Oct 20, 2025 at 8:44 AM EDT (2 months ago)
ID: 45634528 · Type: story · Last synced: 11/20/2025, 8:42:02 PM
The parens are so hard for me to follow, and always have been. I have yet to find an editor that fixes that. Perhaps I did not try enough, or am not smart enough to actually use the editors correctly.
Anyway, interesting read, I think.
Working through all the exercises in "The Little Schemer" was a huge help for me when getting started. You start with a few primitives and build up all common tools from those with recursion, like how to build an addition function using just `add1` as an early example from the book.
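For readers who haven't seen the book, a rough sketch of that early exercise might look like the following; `add1` and `sub1` are not standard Scheme, so stand-in definitions are included, and `o+` is the name the book uses for the hand-rolled addition.

```scheme
;; Sketch in the spirit of The Little Schemer's early chapters.
;; add1/sub1 are treated as primitives in the book; the definitions
;; below are stand-ins so this runs in a plain R7RS Scheme.
(define (add1 n) (+ n 1))
(define (sub1 n) (- n 1))

;; Addition built from add1 alone, recursing on the second argument:
;; n + m = add1(n + (m - 1)), bottoming out when m reaches zero.
(define (o+ n m)
  (if (zero? m)
      n
      (add1 (o+ n (sub1 m)))))

;; (o+ 3 4) => 7
```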
And you're right—working through "The Little Schemer" was a game-changer for me too. There's something about gradually building up to complex concepts that really clicks, right? I wonder if there could be a way to create more beginner-friendly editors that visually guide you through the syntax while you code. Or even some sort of interactive tutorial embedded in the editor that helps by showing expected patterns in real-time.
The tension between users wanting features and implementers wanting simplicity is so prevalent in so many languages, isn't it? Makes me think about how important community feedback is in shaping a language's evolution. What do you all think would be a good compromise for Scheme—more features or a leaner report?
Editor-wise, you want an editor that does automatic indenting and some kind of matching parentheses highlighting. Emacs is one. (Once you've learned the language, you can use a fancy structural editor, but maybe don't confuse yourself with too many new things at once.)
Not quite structural editing, minor annoyances, but pretty decent.
Emacs was purpose-built for working in Lisp. Out-of-the-box it really helps with paren-matching by highlighting the matched bracket (of any type) when you cursor over a bracket (also works by highlighting the open when you type the close) and providing commands for traversing and selecting whole sexps. Those alone, combined with its smart indentation, will get you pretty far. Add something like Paredit or Parinfer if you want even more assistance with sexp manipulation.
They're fine with:
print(foo)
and get stuck at:
(print foo)
I've always found the parentheses comforting; we know where they start and where they end.
I personally don't mind s-expr syntax, but not having any infix expressions or precedence levels means a lot more parentheses in idiomatic code.
foo
and
(foo)
mean something different. I now understand it similarly to the way that, in set theory, x and {x} are different, but one is not used to the ordinary parenthesis symbol behaving in this way.
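A tiny, purely illustrative example of that distinction (the bindings here are hypothetical): a bare name evaluates to whatever it is bound to, while wrapping it in parentheses applies it as a procedure.

```scheme
;; Hypothetical bindings to illustrate foo versus (foo).
(define foo 42)        ; foo names a number
(define (bar) 42)      ; bar names a procedure of no arguments

foo      ; => 42                 the value bound to foo
bar      ; => #<procedure bar>   the procedure itself
(bar)    ; => 42                 calling the procedure
;; (foo) ; => error: 42 is not applicable

;; And the "more parentheses" point: infix 1 + 2 * 3 becomes
(+ 1 (* 2 3))   ; => 7, explicit nesting instead of precedence rules
```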
https://www.gnu.org/software/emacs/manual/eintr.html
Why? What are the advantages, in practice, of lazy evaluation as the default?
Having laziness by default means that functions compose properly by default; you don't have to worry about libraries providing an interface to your chosen incremental streaming library or whatever. I've seen friends working in strict dialects of Haskell forced to write out each combination of list functions by hand because otherwise they'd have to materialise large intermediate data structures where regular lazy Haskell simply wouldn't.
Ed Kmett has a couple of great posts about the value he's realised from laziness:
https://www.reddit.com/r/haskell/comments/l98v73/can_you_sha...
https://www.reddit.com/r/haskell/comments/5xge0v/today_i_use...
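Scheme itself is strict, but the shape of the argument above can be sketched with R7RS `delay`/`force` (from `(scheme lazy)`): lazily built streams let `stream-map` and `stream-take` compose without ever materialising an intermediate list. This is only an illustrative sketch, not code from the linked posts.

```scheme
;; Illustrative lazy streams via delay/force.
(define-syntax cons-stream
  (syntax-rules ()
    ((_ head tail) (cons head (delay tail)))))

(define (stream-car s) (car s))
(define (stream-cdr s) (force (cdr s)))

;; An infinite stream of integers starting at n.
(define (integers-from n)
  (cons-stream n (integers-from (+ n 1))))

(define (stream-map f s)
  (cons-stream (f (stream-car s)) (stream-map f (stream-cdr s))))

(define (stream-take s n)
  (if (zero? n)
      '()
      (cons (stream-car s) (stream-take (stream-cdr s) (- n 1)))))

;; Only five squares are ever computed; no intermediate list is built:
;; (stream-take (stream-map (lambda (x) (* x x)) (integers-from 0)) 5)
;; => (0 1 4 9 16)
```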
As for your first point, I think it's self-defeating: You claim "you don't have to worry about libraries providing an interface to your chosen incremental streaming library", but this requires "a careful choice of implementation" in those libraries with your chosen incremental streaming semantics in mind, which is the same thing but less explicit! And as long as mere mortals can't figure out the magic implementation of `sort` which makes incremental streaming work without explicit bindings, then what's the point?
Haskell is a great language for consuming libraries written by Ed Kmett, as your link demonstrates. Otherwise, it's difficult to work with.
This all assumes that you're willing to buy into a language that does immutable data structures by default, and are willing to rely on the work of people like Okasaki who had to work out how to do performant purely functional data structures. If you're willing to admit more mutability (I'm not), then you sit at different points in the design space.
It doesn't have laziness, nor is it a functional programming language, but it does scratch that itch a bit.
> ‘No’ is an entirely possible answer to this question. Already in the R6RS and R7RS small days, people were arguing that Scheme standardization should stop.
> If we went this way then, just like Racket in its default mode no longer claims to be a Scheme report implementation, Schemes would slowly diverge into different languages. Guile Scheme would one day simply be Guile; Chicken Scheme would be Chicken, and so on. Like the many descendants of Algol 60 and 68, and the many dialects of those descendants, each of these languages would have a strongly recognizable common ancestor, but each would still be distinct and, ultimately, likely incompatible.
This would doom all of those variants to irrelevance even more than they already are.
To the degree that people want Scheme to be a useful language for writing programs that solve real-world problems, it needs an ecosystem. And in order to compete with other languages, that ecosystem needs to be commensurate with the scale that those other languages have. Otherwise, it doesn't matter how elegant the syntax is or how powerful the macro system is. If a user needs to talk to a database and there isn't a good database library, they aren't going to pick the language.
The Scheme ecosystem is already tiny even when you lump all of the implementations and their packages together. Fragment that, and you're probably below viability for all of them.
Now, it is fine if the goal of Scheme is not writing programs to solve real-world problems. It may be just a teaching language. But the evidence seems to be that it's hard to motivate programming students to learn a language that they ultimately won't end up using.