Random Attractors – Found Using Lyapunov Exponents (2001)
Source: paulbourke.net (story)
Key topics
Chaos Theory
Fractals
Attractors
The post shares visualizations of random attractors found using Lyapunov exponents, sparking discussion on their beauty, applications, and connections to other fields like neural networks and music.
Snapshot generated from the HN discussion
Discussion Activity
- First comment: 38m after posting
- Peak period: 20 comments in 0-6h
- Average per period: 7 comments
- Comment distribution: 28 data points (based on 28 loaded comments)
Key moments
1. Story posted: Sep 30, 2025 at 11:50 AM EDT (3 months ago)
2. First comment: Sep 30, 2025 at 12:28 PM EDT (38m after posting)
3. Peak activity: 20 comments in 0-6h (hottest window of the conversation)
4. Latest activity: Oct 2, 2025 at 7:16 PM EDT (3 months ago)
ID: 45427059 · Type: story · Last synced: 11/20/2025, 5:42:25 PM
As a counter, I found that if you add an incorrect statement or fact that lies completely outside the realm of the logic-attractor for a given topic, the output is severely degraded. More precisely, a statement or fact that's "orthogonal" to the logic-attractor for that topic. Very much as if it's struggling to stay on the logic-attractor path but the outlier fact causes it to stray.
Sometimes less is more.
I've only skimmed it but it very much looks like what I've been imagining. It'd be cool to see more research into this area.
1: https://news.ycombinator.com/item?id=45427778 2: https://towardsdatascience.com/attractors-in-neural-network-...
I was wondering about LLMs specifically.
> It may diverge to infinity, for the range (+- 2) used here for each parameter this is the most likely event. These are also easy to detect and discard, indeed they need to be in order to avoid numerical errors.
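For flavor, the selection procedure the quote describes can be sketched as a random search over 2-D quadratic maps: draw coefficients uniformly from (±2), estimate the largest Lyapunov exponent with a two-trajectory (Benettin-style) method, discard orbits that diverge to infinity before they cause numerical errors, and keep maps with a positive exponent. The map form, thresholds, and starting point here are my assumptions, not the article's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(x, y, a):
    # 2-D quadratic map: each coordinate is a full quadratic in (x, y),
    # parameterized by 12 coefficients (an assumed, common choice).
    xn = a[0] + a[1]*x + a[2]*x*x + a[3]*x*y + a[4]*y + a[5]*y*y
    yn = a[6] + a[7]*x + a[8]*x*x + a[9]*x*y + a[10]*y + a[11]*y*y
    return xn, yn

def lyapunov(a, n=2000, eps=1e-8):
    """Benettin-style estimate of the largest Lyapunov exponent.

    Returns None if the orbit diverges (or the companion collapses),
    mirroring the article's detect-and-discard step.
    """
    x, y = 0.05, 0.05
    xe, ye = x + eps, y          # companion trajectory, offset by eps
    total = 0.0
    for _ in range(n):
        x, y = step(x, y, a)
        xe, ye = step(xe, ye, a)
        if abs(x) > 1e6 or abs(y) > 1e6:
            return None          # diverged to infinity: discard early
        d = np.hypot(xe - x, ye - y)
        if d == 0.0:
            return None
        total += np.log(d / eps)
        # Renormalize the companion back to distance eps along the
        # current separation direction.
        xe = x + eps * (xe - x) / d
        ye = y + eps * (ye - y) / d
    return total / n

found = None
for _ in range(1000):
    a = rng.uniform(-2, 2, 12)
    lam = lyapunov(a)
    if lam is not None and lam > 0.01:   # positive exponent: chaotic
        found = a
        break
```

Iterating a `found` coefficient set and scattering the (x, y) orbit is then all it takes to render the attractor.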
https://superliminal.com/fractals/bbrot/
The above image shows the entire Buddhabrot object. Producing the image requires only some simple modifications to the traditional Mandelbrot rendering technique: instead of selecting one initial point on the complex plane per pixel, initial points are selected randomly from the image region (or a larger one as needed). Each initial point is then iterated using the standard Mandelbrot function to first test whether or not it escapes from the region near the origin. Only those that do escape are re-iterated in a second pass (the ones that don't escape, i.e. those believed to be within the Mandelbrot Set, are ignored). During re-iteration, I increment a counter for each pixel the orbit lands on before eventually exiting. Every so often, the current array of "hit counts" is output as a grayscale image. Eventually, successive images barely differ from each other, ultimately converging on the one above.
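The procedure described above can be condensed into a short sketch. Image size, sample count, and the folding of the two passes into one orbit-recording loop are my choices here, not the original implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
W = H = 200          # image size (illustrative)
MAX_ITER = 500       # iteration cap; non-escaping points are ignored
N_SAMPLES = 20000    # random initial points instead of one per pixel

counts = np.zeros((H, W), dtype=np.int64)

def to_pixel(z):
    """Map a complex point in [-2, 2] x [-2, 2] to image coordinates."""
    x = int((z.real + 2.0) / 4.0 * (W - 1))
    y = int((z.imag + 2.0) / 4.0 * (H - 1))
    return (y, x) if 0 <= x < W and 0 <= y < H else None

for _ in range(N_SAMPLES):
    c = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
    z, orbit = 0j, []
    for _ in range(MAX_ITER):
        z = z * z + c            # standard Mandelbrot iteration
        orbit.append(z)
        if abs(z) > 2.0:
            # Escaping orbit: increment a counter for every pixel it
            # visited (the "second pass", folded in by recording the
            # orbit as we test for escape).
            for p in orbit:
                px = to_pixel(p)
                if px is not None:
                    counts[px] += 1
            break

# `counts`, normalized, is the grayscale hit-count image that converges
# to the Buddhabrot as N_SAMPLES grows.
```

Recording the orbit during the escape test trades a little memory for skipping the literal re-iteration pass; the resulting histogram is the same.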
Is it possible to use the Buddhabrot technique on the Lyapunov fractals?
I've personally found the technique very versatile and have had a lot of fun playing around with it and exploring different variations. Was excited enough about the whole thing that I created a website for sharing some of my explorations: https://www.fractal4d.net/ (shameless self-advertisement)
With the exception of some Mandelbrot-style images all the rest are produced by splatting complex-valued orbit points onto an image in one way or another.
And then it sort of fizzled out, because while it's interesting and gives us a bit of additional philosophical insight into certain problems, it doesn't do anything especially useful. You can use it to draw cool space-filling shapes.
To @esafak I suggest following @westurner’s post.
I like the concept of Stable Manifolds. Classifying types of them is interesting. Group symmetries on the phase space are interesting. Explaining this and more is not work I’m prepared to do here. Use Wikipedia, ask ChatGPT, enrol in a course on Chaos and Fractal Dynamics, etc.
The Wikipedia list you're indirectly referencing is basically a fantasy wishlist of the areas where we expected chaos theory to revolutionize things, with little to show for it. "Chaos theory cryptography", come on.
I don't see how better understanding of non-linear systems and global dynamics could not be considered useful. For starters, better control of nonlinear systems, keeping them from turning chaotic, is incredibly useful. So many hard problems can be approximately reduced to "keep this non-linear system stable." Staying in the "edge of chaos" regime has proven to be an optimal choice for a plethora of problems.
So it's sort of like saying that the physics of black holes is very useful to us day-to-day because we want to make sure we don't fall into any black holes.
I'm not saying that chaos theory isn't interesting. It's just that it's pretty hard to find any concrete application of it, beyond hand-wavy stuff like "oh, it somehow helped us understand weather".
Multibody orbits are one such chaotic system, which means you can take advantage of that chaos to redirect your space probe from one orbit to another using virtually zero fuel, as NASA did with its ISEE-3 spacecraft.
Has progress stalled in this area? I don't know, but surely there are people working on it. In fact I recently saw an interesting post on HN about a new technique that among other things enables faster estimation of Lyapunov exponents: https://news.ycombinator.com/item?id=45374706 (search for "Lyapunov" on the github page).
Just because we haven't seen much progress, doesn't mean we won't see more. Progress never happens on a predictable schedule.
The code and text are at https://gitlab.com/fraserphysics/hmmds. From a Nix command line, "make book" builds the book in about 10 hours.
I'd be grateful for any feedback on the book or the software.
These techniques are the key unlocks to robustifying AI and creating certifiable trust in their behavior.
From pre-deep-neural-network era stuff like LQR-RRT trees, to today's hot topics of contraction theory and control barrier certificates in autonomous vehicles.
People use chaos theory to make predictions about attractor systems that have lower error than other models.
It's fun (errr... for me at least) to translate the ancient basic code into a modern implementation and play around.
The article mentions that it's interesting how the 2D functions can look 3D. That's definitely true. But there's also no reason why you can't add on however many dimensions you want and get real many-dimensional structures to noodle around with in visualizations and animations.
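As a sketch of that idea, here is a hypothetical N-dimensional quadratic map with random coefficients: every output coordinate is a quadratic form over all inputs, and the resulting orbit is a point cloud you can project down to 2-D or 3-D for plotting. All names, coefficient ranges, and thresholds here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def random_quadratic_map(n_dim):
    """Build a random quadratic map R^n -> R^n.

    Each output coordinate = constant + linear terms + all pairwise
    products, with coefficients drawn uniformly from [-1.2, 1.2]
    (an assumed range; narrower ranges diverge less often).
    """
    const = rng.uniform(-1.2, 1.2, n_dim)
    lin = rng.uniform(-1.2, 1.2, (n_dim, n_dim))
    quad = rng.uniform(-1.2, 1.2, (n_dim, n_dim, n_dim))
    def f(v):
        return const + lin @ v + np.einsum('ijk,j,k->i', quad, v, v)
    return f

f = random_quadratic_map(4)     # a 4-D example
v = np.full(4, 0.05)
points = []
for _ in range(1000):
    v = f(v)
    if not np.all(np.isfinite(v)) or np.max(np.abs(v)) > 1e6:
        break                   # diverged; in practice draw a new map
    points.append(v.copy())
points = np.asarray(points)
# When non-empty, points[:, :3] could feed a 3-D scatter plot, and the
# fourth coordinate could drive color or an animation parameter.
```

Most randomly drawn maps either diverge or collapse to a fixed point, so a real explorer would loop until a bounded, chaotic orbit turns up, exactly as in the 2-D case.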