Étude in C Minor (2020)
Posted 2 months ago · Active about 2 months ago
zserge.com · Tech story
Key topics
Algorithmic Music
Sound Synthesis
Programming
The article explores generating music using simple algorithms and code, sparking discussion on sound synthesis, music generation, and related tools and techniques.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
First comment: 7d after posting
Peak period: 9 comments in the 156-168h window
Average per period: 5 comments
Comment distribution: 15 data points (based on 15 loaded comments)
Key moments
- Story posted: Nov 4, 2025 at 6:18 PM EST (2 months ago)
- First comment: Nov 11, 2025 at 3:07 PM EST (7d after posting)
- Peak activity: 9 comments in the 156-168h window (hottest window of the conversation)
- Latest activity: Nov 12, 2025 at 11:14 AM EST (about 2 months ago)
ID: 45817027 · Type: story · Last synced: 11/20/2025, 12:23:31 PM
Not exactly the point of the article, but this is all sort of wrong. CDs use a sample rate of 44.1 kHz per channel, not 22 kHz. I'd hazard this cuts down on rounding errors from having only one sample per 22 kHz range. DAT used 48 kHz, I believe, to align evenly with film's 24 frames per second. 96 kHz is commonly used for audio production today, and the additional accuracy is useful when editing samples without producing dithering artifacts within the human hearing range.
20 kHz is the top of the human hearing range, and picking something a little higher than 40 kHz gives you room to smoothly roll off frequencies above the audible range without needing an extremely steep filter that would create a large phase shift.
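The Nyquist point made here is easy to demonstrate: a tone above half the sample rate is indistinguishable, sample for sample, from a lower-frequency alias. A minimal Python sketch (not from the article; frequencies chosen for illustration):

```python
import math

FS = 44_100  # CD sample rate (Hz)

def sample_tone(freq_hz, n_samples, fs=FS):
    """Sample a sine tone at the given rate."""
    return [math.sin(2 * math.pi * freq_hz * n / fs) for n in range(n_samples)]

# A 25 kHz tone is above the Nyquist limit (fs/2 = 22.05 kHz), so its
# samples match those of its 19.1 kHz alias exactly, up to a sign flip:
# sin(2*pi*f*n/fs) == -sin(2*pi*(fs - f)*n/fs) for integer n.
above = sample_tone(25_000, 64)
alias = sample_tone(FS - 25_000, 64)  # 19_100 Hz

assert all(math.isclose(a, -b, abs_tol=1e-9) for a, b in zip(above, alias))
```

This is why anything above fs/2 must be filtered out before sampling rather than recovered afterwards: once sampled, the two tones are the same data.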
It's practically impossible to design an artefact-free filter with a roll-off as steep as that. Every single person who says that 44.1k is enough "because Nyquist" has failed to understand this.
You can trade off delay against various artefacts, including passband ripple, non-linear phase smearing, and others. But the shorter the delay, the less true it is that you get out exactly what you put in.
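The delay-versus-steepness tradeoff can be seen concretely with a windowed-sinc FIR lowpass: a steeper transition band needs more taps, and a linear-phase FIR of N taps delays the signal by (N-1)/2 samples. A stdlib-only Python sketch (the filter design and cutoff values are illustrative assumptions, not from the thread):

```python
import cmath
import math

def sinc_lowpass(cutoff_hz, fs, num_taps):
    """Windowed-sinc (Hamming) FIR lowpass. Group delay is (num_taps-1)/2 samples."""
    fc = cutoff_hz / fs
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        x = n - mid
        h = 2 * fc if x == 0 else math.sin(2 * math.pi * fc * x) / (math.pi * x)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # Hamming window
        taps.append(h * w)
    return taps

def magnitude_at(taps, freq_hz, fs):
    """|H(f)| evaluated directly from the tap values."""
    w = 2 * math.pi * freq_hz / fs
    return abs(sum(t * cmath.exp(-1j * w * n) for n, t in enumerate(taps)))

fs = 44_100
short = sinc_lowpass(20_000, fs, 31)    # ~15-sample delay, shallow roll-off
long_ = sinc_lowpass(20_000, fs, 511)   # ~255-sample delay, steep roll-off

# Only the long (high-delay) filter achieves serious rejection in the
# narrow 20 kHz .. 22.05 kHz transition band that 44.1 kHz leaves you:
assert magnitude_at(long_, 22_000, fs) < magnitude_at(short, 22_000, fs)
```

Cramming the same rejection into the 31-tap filter would require giving up linear phase or accepting ripple, which is exactly the artefact tradeoff described above.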
It's a small thing. But if you're going to say you have something to say about sound, give me some sound to demonstrate your point.
Stagnated is not quite the right word. I think what computer music has been doing in the last couple of decades is establishing its primary instruments and techniques, the various audio DSLs, which is a fairly important thing musically speaking: it builds the culture and repertoire. Computer music is strongly rooted in how the musician interacts with the code; it is the strings of their guitar, and I think we have barely begun exploring that relationship. What is the prepared piano of computer music? How do I stick a matchbook between the strings of the code, or weave a piece of yarn through it?
I hope more people go back to exploring these very basic and simple ways of generating sound with computers and start experimenting with them; there is more out there than just ugens.
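One such "basic and simple" approach is bytebeat, the genre the linked player hosts: a single integer expression of the sample counter t, taken modulo 256, is the whole instrument. A minimal Python sketch using one of the well-known early formulas (the filename and 5-second duration are arbitrary choices):

```python
import wave

def bytebeat(t):
    """One of the classic bytebeat one-liners, evaluated per sample."""
    return (t * ((t >> 12 | t >> 8) & 63 & t >> 4)) & 0xFF

# 5 seconds of 8-bit unsigned samples at the traditional 8 kHz rate.
samples = bytes(bytebeat(t) for t in range(8_000 * 5))

with wave.open("etude.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(1)      # 8-bit unsigned, the native bytebeat format
    f.setframerate(8_000)  # traditional bytebeat sample rate
    f.writeframes(samples)
```

The entire "score" lives in the bit arithmetic of the expression; swapping shift amounts or masks changes the piece, which is very much the matchbook-between-the-strings spirit asked for above.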
HAM RADIO stuff
https://dollchan.net/bytebeat/#4AAAA+kUli10OgjAQhK/Ci3R3XXTb...
How to create minimal music with code in any programming language - https://news.ycombinator.com/item?id=24940624 - Oct 2020 (78 comments)