Fast Fourier Transforms Part 1: Cooley-Tukey
Source: connorboyle.io
Fast Fourier TransformAlgorithmsSignal Processing
The post discusses the Cooley-Tukey algorithm for Fast Fourier Transform, sparking a discussion on its history, implementation, and related topics.
Snapshot generated from the HN discussion
Key moments
- Story posted: Sep 18, 2025 at 5:28 AM EDT
- First comment: Sep 18, 2025 at 7:14 AM EDT (2h after posting)
- Peak activity: 23 comments in the first 12 hours
- Latest activity: Sep 25, 2025 at 11:28 AM EDT
ID: 45287513 · Type: story · Last synced: 11/20/2025, 8:52:00 PM
Efficient decoding of LDPC codes also uses the same idea. LDPCs were quite a revolution (pun intended) in coding/information theory.
On a completely random note: a few days ago I found out that Tukey (then a professor) and Feynman (then a student), along with other students, were so enamored of and intrigued by flexagons that they set up an informal committee to understand them. Unfortunately, their technical report was never published because the war intervened.
Strangely, it does not find a mention in Surely You're Joking.
https://www.cs.ubc.ca/~murphyk/Teaching/Papers/GDL.pdf
Check out the first paragraph
Two extremely influential papers appeared back to back in the IEEE Transactions on Information Theory. This is one of them. The other is
https://vision.unipv.it/IA2/Factor graphs and the sum-product algorithm.pdf
Both are absolute gems of papers. The editor made sure that both appear in the same volume.
I agree both FFT and belief propagation can be expressed as message passing algorithms.
Doing the summation the naive way is exponential in the number of variables. The goal is to do this efficiently by exploiting the distributive property and any symmetry, much as in the FFT case.
This can be done efficiently, for example, when the graph is a tree. (Even if it isn't, one can pretend as if it is. Surprisingly that often works very well but that's a different topic entirely)
Read the paper; it's not difficult to follow.
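As a toy illustration of the message-passing idea (my own sketch, not from the paper): for a hypothetical chain of three binary variables with made-up factors f1 and f2, summing out variables one at a time gives the same marginal as the naive sum over all joint assignments, with far fewer operations on longer chains.

```python
# Hypothetical factors on a 3-variable chain x1 - x2 - x3, each variable in {0, 1}.
def f1(a, b):
    return 1.0 + a + 2 * b

def f2(b, c):
    return 1.0 + 3 * b * c

vals = [0, 1]

# Naive marginal of x1: sum over every joint assignment (exponential in #variables).
naive = {a: sum(f1(a, b) * f2(b, c) for b in vals for c in vals) for a in vals}

# Message passing: sum out x3 first, then x2 (work grows linearly with chain length).
m32 = {b: sum(f2(b, c) for c in vals) for b in vals}  # message from x3 to x2
passed = {a: sum(f1(a, b) * m32[b] for b in vals) for a in vals}

assert naive == passed
```

The distributive law is what lets the inner sum over x3 be computed once per value of x2 and reused, instead of once per joint assignment.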
Actually... no? That's a constant-factor optimization; the second expression has 75% of the operations of the first. The FFT is algorithmically faster: O(N·log2(N)) in the number of samples instead of O(N²).
That property doesn't come from factorization per se, but from the fact that the factorization can be applied recursively by creatively ordering the terms.
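The recursive structure can be sketched in a few lines (a minimal radix-2 Cooley-Tukey example of my own, assuming the input length is a power of two; real libraries do much more):

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    # Recurse on even- and odd-indexed halves, then combine with twiddle factors.
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out
```

Each level of recursion does O(N) work and there are log2(N) levels, which is where the N·log2(N) count comes from.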
Meaning that this takes k summations and one multiplication rather than k multiplications and k summations, where k is the number of terms.
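Concretely, that trade is just the distributive law; here is a trivial check for k = 4 (values chosen so the floating-point results are exact):

```python
xs = [2.0, 3.0, 5.0, 7.0]
a = 1.5

# k multiplications and k-1 summations:
lhs = sum(a * x for x in xs)

# k-1 summations and one multiplication:
rhs = a * sum(xs)

assert lhs == rhs
```

The FFT applies this factoring recursively across the twiddle factors, which is what turns a constant-factor saving into an asymptotic one.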
Obviously in practice these are implemented as machine words (pairs of them for a complex FFT, though real-valued DCTs are much more common), and modern multipliers and adders pipeline at one operation per cycle.
This is just my 2 cents, but I don’t have an intuition built for complex numbers.
Edit: as always, Wikipedia is a better source than comment pedantry: https://en.wikipedia.org/wiki/Fast_Fourier_transform#History
"Gauss wanted to interpolate the orbits from sample observations; his method was very similar to the one that would be published in 1965 by James Cooley and John Tukey, who are generally credited for the invention of the modern generic FFT algorithm."
> Freaking out about naming and attribution isn't really very informative.
It matters who gets the credit for an original idea. Cooley and Tukey are lionized as pioneers, but they are not.
Some personal preference:
I find the grey text on a white background hard to read. It's probably just a fundamental limit of Firefox's reader mode, but it doesn't render MathML correctly. To read it I zoomed in, but then CSS issues made the content overlap the sidebar.
While |x| is common notation for the cardinality of a set, I haven't really seen it used for the number of elements of a vector in the fields where I've used discrete Fourier transforms; I've always just defined N as the length of my vector. I honestly first read it as the norm of x and the norm of F{x}, and thought you might be about to talk about Parseval's theorem.
Enjoyable enough and accurate article though. Thanks!
I used the "|x|" notation because I don't like introducing new unknown names if I don't have to. Too bad the notation is ambiguous; I'll make a note about it.
If you right-click on the math blocks, you can change some of the parameters of the MathJax renderer. One feature I've found helpful is "click to zoom", which can be activated by following `Math Settings -> Zoom Trigger -> Click`.
I tried changing the text color. How does it look to you now?
Math notation is not great generally. There are canonical notations for some things, and sometimes they're overloaded. Not much to do about it other than be aware of it.
Annoyingly you have to "know your audience" to get your math notation right for who you're presenting to. (You can never really do that on the Internet)
As an electrical engineer who's done a lot of DSP and worked with mathematicians I can point out some things that look either odd or normal depending on who I'm talking to. You can never really win with notation -- you'll always be wrong to someone =), but there are choices that are maybe less wrong for one discipline or another.
All that to say keep writing! You're doing pretty well!
And the problem is IMO too small to justify large dependencies. I only needed something like a 200×400 FFT as a minor component of a larger piece of software.
Preach it, brother.