Libcube: Find New Sounds From Audio Synths Easier
Posted 3 months ago · Active 3 months ago · Source: github.com
Key topics
Audio Synthesis
Sound Design
Machine Learning
LibCube is a tool that uses machine learning to help users discover new sounds from audio synthesizers, sparking discussion about the complexity of synthesizers and the potential for AI-generated sounds.
Snapshot generated from the HN discussion
Discussion Activity
- Status: active discussion
- First comment: 7m after posting
- Peak period: 12 comments in the 108-120h window
- Average per period: 4 comments
- Comment distribution: 16 data points (based on 16 loaded comments)
Key moments
1. Story posted: Oct 18, 2025 at 11:37 AM EDT (3 months ago)
2. First comment: Oct 18, 2025 at 11:43 AM EDT (7m after posting)
3. Peak activity: 12 comments in the 108-120h window (hottest stretch of the conversation)
4. Latest activity: Oct 23, 2025 at 4:09 PM EDT (3 months ago)
ID: 45628169 · Type: story · Last synced: 11/20/2025, 12:47:39 PM
Really? If you know about synths then it isn’t really that difficult. You have one or more sound generators: sine wave, sawtooth, etc. You have attack (how fast the sound reaches full level), decay (how long that sound is played before it drops to sustain), sustain (how long the sound lingers), release (how long until the sound fades away to nothing).
You have filters for the sound (low-pass, high-pass, distortion, etc.) that process the audio on its way to the output.
It’s basic sound engineering. That said, using a neural net to find interesting combinations and label them would be a really fun way to build a catalog of dreamscapes or tones.
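To make those envelope stages concrete, here is a minimal NumPy sketch of a linear ADSR applied to a sawtooth. The function and its parameter names are illustrative only, not anything from LibCube:

```python
import numpy as np

def adsr_envelope(attack, decay, sustain_level, release,
                  gate_time, sample_rate=44100):
    """Piecewise-linear ADSR. attack/decay/release are times in seconds;
    sustain_level is an amplitude in [0, 1], held while the gate is open."""
    a = np.linspace(0.0, 1.0, int(attack * sample_rate), endpoint=False)
    d = np.linspace(1.0, sustain_level, int(decay * sample_rate), endpoint=False)
    sustain_samples = max(0, int(gate_time * sample_rate) - len(a) - len(d))
    s = np.full(sustain_samples, sustain_level)
    r = np.linspace(sustain_level, 0.0, int(release * sample_rate))
    return np.concatenate([a, d, s, r])

# Shape a 220 Hz sawtooth with the envelope.
sr = 44100
env = adsr_envelope(attack=0.01, decay=0.1, sustain_level=0.6,
                    release=0.3, gate_time=1.0, sample_rate=sr)
t = np.arange(len(env)) / sr
saw = 2.0 * (220.0 * t % 1.0) - 1.0   # naive (aliasing) sawtooth in [-1, 1]
signal = saw * env                     # enveloped output buffer
```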
“How long the sound lingers”? It’s definitely time…
The envelope is a level, but the knob’s use is time: “how long the sound lingers before release”.
Some synths may have this as a pedal or a key you can hold, but the purpose is definitely to lengthen the tone prior to release, which is always a matter of time.
There’s a gate signal, typically activated by holding a key (though in a modular synth, there are many other potential gate sources). While the gate is open an ADSR envelope progresses from Attack -> Decay -> Sustain. It then remains on Sustain until the gate closes, at which point it enters the Release phase. So the amount of time it remains in Sustain is dictated by the gate signal. Notice there’s no G in ADSR, because the gate doesn’t come from the envelope.
What you’re describing is Hold, which some envelopes (AHDSR is one popular flavor) can do. Many Elektron groove boxes have hold stages on their envelopes.
In AHDSR, an open gate goes from Attack into Hold, where it retains its value for a set period of time after the attack, and then goes into the Decay phase and continues from there.
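One way to see the sustain-vs-hold distinction is to sketch the envelope as a gate-driven state machine. This is an illustrative toy assuming NumPy, not any particular synth's implementation; note that hold takes a time parameter while sustain does not:

```python
import numpy as np

def ahdsr(gate, attack, hold, decay, sustain_level, release, sr=44100):
    """Gate-driven AHDSR envelope. Sustain has no time parameter of its
    own: it lasts exactly as long as the gate stays open, whereas hold
    is a fixed, timed stage."""
    out = np.zeros(len(gate))
    level, stage, timer, rel_start = 0.0, "attack", 0, 0.0
    for i, g in enumerate(gate):
        if not g and stage != "release":
            stage, timer, rel_start = "release", 0, level  # gate closed
        if stage == "attack":
            level = min(1.0, level + 1.0 / (attack * sr))
            if level >= 1.0:
                stage, timer = "hold", 0
        elif stage == "hold":
            timer += 1                      # hold is timed...
            if timer >= hold * sr:
                stage = "decay"
        elif stage == "decay":
            step = (1.0 - sustain_level) / (decay * sr)
            level = max(sustain_level, level - step)
            if level <= sustain_level:
                stage = "sustain"
        elif stage == "sustain":
            level = sustain_level           # ...sustain just waits for the gate
        else:                               # release
            timer += 1
            level = max(0.0, rel_start * (1.0 - timer / (release * sr)))
        out[i] = level
    return out

# One-second note (gate open), then half a second of release tail.
gate = np.concatenate([np.ones(44100, dtype=bool), np.zeros(22050, dtype=bool)])
env = ahdsr(gate, attack=0.01, hold=0.05, decay=0.1,
            sustain_level=0.6, release=0.3)
```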
There are plenty of other kinds of envelope, plus things that live somewhere between LFOs and envelopes called function generators, which are often AR envelopes that can be looped. Then there are complex many-stage envelopes that were especially popular when digital synths were first coming onto the scene.
I’d also add that the description you gave of a synth architecture is generally considered East Coast synthesis. One or more oscillators going into a mixer/VCA, then into a filter and possibly some effects, is a very popular architecture made famous by Bob Moog. But there are other forms, like West Coast synthesis, where instead of having filters you run gentler waveforms like a triangle through a wavefolder, and/or a complex oscillator where a pair of oscillators can cross-modulate one another. So where East Coast synthesis takes harmonically rich waveforms and cuts harmonics away with filters, West Coast synthesis starts with harmonically tame waveforms and adds harmonics through various flavors of FM, wave shaping, etc.
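For anyone who hasn't met a wavefolder: it overdrives the signal and reflects the overshoot back into range, adding harmonics rather than removing them. A rough NumPy sketch of a simple triangle fold (one of several common folding curves, not a specific hardware model):

```python
import numpy as np

def wavefold(x, gain=4.0):
    """Overdrive the signal past [-1, 1] and reflect ("fold") the
    overshoot back into range. More gain means more folds and a
    brighter, more harmonically rich output."""
    u = (x * gain + 1.0) / 4.0
    # Map onto a period-4 triangle: identity on [-1, 1], mirrored outside.
    return 4.0 * np.abs(u - np.floor(u + 0.5)) - 1.0

sr = 44100
t = np.arange(sr) / sr
# A harmonically tame 110 Hz triangle wave...
tri = 2.0 * np.abs(2.0 * (110.0 * t % 1.0) - 1.0) - 1.0
# ...gains harmonics through folding instead of losing them to a filter.
folded = wavefold(tri, gain=4.0)
```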
Then you’ve got samplers, granular synthesis, physical modeling, additive synthesis, and a bunch of other types as well. The East Coast architecture is popular, but there’s a lot more out there.
Even the quote you use is intentionally incomplete. The crucial part of the parent quote is 'is determined by how long you "hold the key"'.
Go play with a synth, or even look at an ADSR envelope tutorial, and you'll see you were wrong. And not just wrong, but condescending and wrong.
Whether it’s a sine wave, a waveform, a bass line, a kick sample, a sawtooth, or a square wave doesn’t matter.
These problems were solved decades ago by Moog and by Yamaha’s DX7.
What most people find confusing isn’t the ADSR but the fact that there are multiple oscillators, each with their own volumes, patch bays, inputs, and outputs.
At the end of the day, they are all very much the same.
Sound Gen/Sample -> ADSR -> Filter chain -> patch out -> in -> master out.
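Roughly, that chain in code, as a toy NumPy sketch: the "filter chain" here is a single one-pole low-pass standing in for real filters, and the patch out/in stage is reduced to a master gain, so constants and structure are illustrative only:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr

# Sound gen: a 110 Hz sawtooth buffer (one second).
x = 2.0 * (110.0 * t % 1.0) - 1.0

# ADSR: crude linear segments; the gate is held so sustain fills the middle.
a, d, r = int(0.01 * sr), int(0.2 * sr), int(0.3 * sr)
env = np.concatenate([
    np.linspace(0.0, 1.0, a, endpoint=False),   # attack
    np.linspace(1.0, 0.6, d, endpoint=False),   # decay
    np.full(len(x) - a - d - r, 0.6),           # sustain
    np.linspace(0.6, 0.0, r),                   # release
])
x = x * env

# Filter chain: a one-pole low-pass with cutoff around 800 Hz.
alpha = 1.0 - np.exp(-2.0 * np.pi * 800.0 / sr)
y = np.empty_like(x)
state = 0.0
for i, s in enumerate(x):
    state += alpha * (s - state)
    y[i] = state

# Patch out -> in -> master out: modeled here as a single master gain.
master = 0.8 * y
```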
> FM is Frequency Modulation which is what you can do with oscillators and the above mentioned envelope points.
This isn't what FM is. FM involves using one oscillator output to modify (modulate) the frequency of another oscillator. ADSRs can be used to shape the degree of modulation over time, but the sound isn't shaped through the ADSR in this case; rather the ADSR controls the sound generation stage by modifying the frequency modulation effect in the time domain.
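A tiny two-operator sketch (NumPy, with illustrative constants) shows the distinction: the envelope here scales the modulation index, shaping timbre over time, while the sound itself comes from the modulator perturbing the carrier's phase:

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
carrier_hz = 220.0
ratio = 2.0                                  # modulator runs at 440 Hz

# An envelope on the modulation *index*: the timbre starts bright and
# mellows out, even though output amplitude is untouched here.
index_env = 5.0 * np.exp(-4.0 * t)

modulator = np.sin(2.0 * np.pi * carrier_hz * ratio * t)
# The modulator perturbs the carrier's phase (the DX7 actually does
# phase modulation, which serves the same purpose here).
fm = np.sin(2.0 * np.pi * carrier_hz * t + index_env * modulator)
```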
That’s no different from what I said. It’s an oscillator…
OP’s project looks like a cool way for plugin devs to make a demo slider.
https://magazine.raspberrypi.com/articles/aphex-twin-midimut...
Wondering if it’s as fun to listen to the generation process as I imagine…