[HN Gopher] Visualizations of Random Attractors Found Using Lyap...
       ___________________________________________________________________
        
       Visualizations of Random Attractors Found Using Lyapunov Exponents
        
       Author : cs702
       Score  : 98 points
       Date   : 2025-09-30 15:50 UTC (7 hours ago)
        
 (HTM) web link (paulbourke.net)
 (TXT) w3m dump (paulbourke.net)
        
       | zparky wrote:
        | A similar post on the Hénon attractor from 4 hours ago:
       | https://news.ycombinator.com/item?id=45424223
        
         | cs702 wrote:
         | Also, from that page:
         | https://towardsdatascience.com/attractors-in-neural-network-...
        
           | dtj1123 wrote:
           | In the event that the first post led to this one, I'd be
           | curious to know what the intermediate internet rabbithole
           | consisted of.
        
       | AfterHIA wrote:
       | These visualizations are beautiful. I'm a musician at heart so I
       | really geek out about bifurcation maps. You get to see the
       | exquisite relationship between chaos and form. It's like nature
       | and math producing visual jazz. Thanks for a kick ass addition
       | cs702!
        
       | elcritch wrote:
        | This is how I envision LLMs working, to some extent: the
        | "logic paths" follow something like this, with the Markov-
        | chain-esque probabilities jumping around the vector space. It
        | reminds me that to get the answer I want, I need to set up the
        | prompt to land near the right "attractor logic" pathway. Once
        | in a close enough ballpark, they'll bounce to the right path.
       | 
        | As a counter, I found that if you add an incorrect statement
        | or fact that lies completely outside the realm of the logic-
        | attractor for a given topic, the output is severely degraded.
        | More precisely, a statement or fact that's "orthogonal" to the
        | logic-attractor for that topic. It's very much as if the model
        | is struggling to stay on the logic-attractor path but the
        | outlier fact causes it to stray.
       | 
       | Sometimes less is more.
        
         | cs702 wrote:
         | Interesting. Nothing prohibits us from thinking of pretrained
         | LLMs as dynamical systems that take a token state and compute
          | an updated token state: _x_{n+1} = LLM(x_n)_, starting from
          | an initial token state _x_0_. Surely we can compute
          | trajectories (without sampling, for determinism) and study
          | whether LLMs exhibit chaotic behavior. I don't think I've
          | seen research
         | along those lines before. Has anyone here?
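The dynamical-systems framing above can at least be exercised on a toy map. The sketch below is not an LLM; it uses the one-dimensional logistic map as a stand-in for _x_{n+1} = f(x_n)_ and estimates the Lyapunov exponent by averaging log|f'(x)| along a single deterministic trajectory, which is the basic recipe any "is this iteration chaotic?" test would follow:

```python
import math

def lyapunov_logistic(r, x0=0.31, n_warmup=200, n_iter=5000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| along one deterministic trajectory."""
    x = x0
    for _ in range(n_warmup):      # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(r * (1 - 2 * x)))  # log|f'(x)|
        x = r * x * (1 - x)
    return total / n_iter

# r = 4.0 is fully chaotic; the exact exponent is ln 2 ≈ 0.693
print(lyapunov_logistic(4.0))
```

A positive result indicates exponential sensitivity to the initial state. Doing the analogous measurement for an LLM would require choosing a distance on token states, which is an open design question.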
        
           | elcritch wrote:
           | Looks like @cs702 [1] posted a related link where a NN
           | follows an attractor pattern!
           | 
           | I've only skimmed it but it very much looks like what I've
           | been imagining. It'd be cool to see more research into this
           | area.
           | 
            | 1: https://news.ycombinator.com/item?id=45427778
            | 2: https://towardsdatascience.com/attractors-in-neural-network-...
        
             | cs702 wrote:
             | That's for a small and shallow neural network.
             | 
             | I was wondering about LLMs specifically.
        
               | elcritch wrote:
               | Well me too, but it shows that there is some basis for
               | the thinking. It's sort of surprising there's not more
               | exploration into the area.
        
       | cantor_S_drug wrote:
       | https://paulbourke.net/fractals/lyapunov/
       | 
       | > It may diverge to infinity, for the range (+- 2) used here for
       | each parameter this is the most likely event. These are also easy
       | to detect and discard, indeed they need to be in order to avoid
       | numerical errors.
       | 
       | https://superliminal.com/fractals/bbrot/
       | 
       | The above image shows the overall entire Buddhabrot object. To
       | produce the image only requires some very simple modifications to
       | the traditional mandelbrot rendering technique: Instead of
       | selecting initial points on the real-complex plane one for each
       | pixel, initial points are selected randomly from the image region
       | or larger as needed. Then, each initial point is iterated using
       | the standard mandelbrot function in order to first test whether
       | it escapes from the region near the origin or not. Only those
        | that do escape are then re-iterated in a second pass. (The
        | ones that don't escape - i.e. which are believed to be within
        | the Mandelbrot Set - are ignored). During re-iteration, I
        | increment a
       | counter for each pixel that it lands on before eventually
       | exiting. Every so often, the current array of "hit counts" is
       | output as a grayscale image. Eventually, successive images barely
       | differ from each other, ultimately converging on the one above.
       | 
        | Is it possible to use the Buddhabrot technique on the Lyapunov
        | fractals?
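The procedure quoted above translates almost line-for-line into Python. This is a minimal, unoptimized sketch; the image size, sample count, and iteration cap are arbitrary illustrative choices, not values from the original description:

```python
import random

def buddhabrot(width=64, height=64, samples=20000, max_iter=200, seed=1):
    """Accumulate Buddhabrot hit counts: sample random points c, keep
    only orbits of z -> z^2 + c that escape |z| > 2, then re-iterate
    those orbits and increment a counter for every pixel visited."""
    rng = random.Random(seed)
    counts = [[0] * width for _ in range(height)]

    def to_pixel(z):
        # map the region [-2, 2] x [-2, 2] onto the image grid
        px = int((z.real + 2.0) / 4.0 * width)
        py = int((z.imag + 2.0) / 4.0 * height)
        return (px, py) if 0 <= px < width and 0 <= py < height else None

    for _ in range(samples):
        c = complex(rng.uniform(-2, 2), rng.uniform(-2, 2))
        # first pass: test whether the orbit escapes
        z, escaped = 0j, False
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                escaped = True
                break
        if not escaped:
            continue  # assumed inside the Mandelbrot set: ignored
        # second pass: re-iterate and splat every visited point
        z = 0j
        for _ in range(max_iter):
            z = z * z + c
            if abs(z) > 2:
                break
            p = to_pixel(z)
            if p:
                counts[p[1]][p[0]] += 1
    return counts
```

Rendering the returned counts as a grayscale image (brighter for higher counts) converges on the familiar Buddhabrot as the sample count grows.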
        
         | fractal4d wrote:
          | Seems to me that the images on Bourke's site _are_ produced
          | using the general "Buddhabrot" technique (splatting points
          | onto an image), although each image appears to represent
          | only a single orbit sequence, and the reject condition is
          | inverted so that only stable orbits are shown.
         | 
         | I've personally found the technique very versatile and have had
         | a lot of fun playing around with it and exploring different
         | variations. Was excited enough about the whole thing that I
         | created a website for sharing some of my explorations:
         | https://www.fractal4d.net/ (shameless self-advertisement)
         | 
         | With the exception of some Mandelbrot-style images all the rest
         | are produced by splatting complex-valued orbit points onto an
         | image in one way or another.
        
       | esafak wrote:
       | Is anyone doing anything besides visualizations with this chaos
       | stuff? I liked the article linked below depicting the state space
       | of artificial neurons: https://towardsdatascience.com/attractors-
       | in-neural-network-...
        
         | MountDoom wrote:
          | Not really. Fractals and chaos theory were a bit like
          | blockchain: they were billed as a "new kind of science" that
          | was supposed to explain everything, and you could buy pop-
          | science books talking about the implications.
         | 
          | And then it sort of fizzled out, because while it's
          | interesting and gives us a bit of additional philosophical
          | insight into certain problems, it doesn't _do_ anything
          | especially useful. You can use it to draw cool space-filling
          | shapes.
        
           | sxzygz wrote:
           | I don't think you're remotely correct, but I also don't know
           | how to dispute your ignorance in any useful way.
           | 
           | To @esafak I suggest following @westurner's post.
           | 
           | I like the concept of Stable Manifolds. Classifying types of
           | them is interesting. Group symmetries on the phase space are
           | interesting. Explaining this and more is not work I'm
           | prepared to do here. Use Wikipedia, ask ChatGPT, enrol in a
           | course on Chaos and Fractal Dynamics, etc.
        
             | MountDoom wrote:
              | I am quite familiar with this space and I will reassert
              | that its most significant application by far is making
              | pretty pictures.
             | 
              | The Wikipedia list you're indirectly referencing is
              | basically a fantasy wishlist of the areas where we
              | expected chaos theory to revolutionize things, with
              | little to show for it. "Chaos theory cryptography", come
              | on.
        
         | westurner wrote:
         | Chaos theory > Applications:
         | https://en.wikipedia.org/wiki/Chaos_theory#Applications
         | 
         | People use chaos theory to make predictions about attractor
         | systems that have lower error than other models.
        
         | cs702 wrote:
         | Well, engineers building physical systems like airplanes and
         | rockets use Lyapunov exponents to _avoid_ chaotic behavior. No
         | one sane wants airplanes or rockets that exhibit chaotic
         | aerodynamics!
         | 
         | Has progress stalled in this area? I don't know, but surely
         | there are people working on it. In fact I recently saw an
         | interesting post on HN about a new technique that among other
         | things enables faster estimation of Lyapunov exponents:
          | https://news.ycombinator.com/item?id=45374706 (search for
          | "Lyapunov" on the GitHub page).
         | 
          | Just because we haven't seen much progress doesn't mean we
          | won't see more. Progress never happens on a predictable
          | schedule.
        
           | DavidSJ wrote:
            | To add to this, a moderate amount of turbulence (a type
            | of chaotic fluid flow) is sometimes deliberately
            | engineered into engines and wing surfaces to improve
            | combustion efficiency and lift, and chaotic flow can also
            | induce better mixing in heat exchangers and microfluidic
            | systems.
        
         | poslathian wrote:
         | Absolutely!
         | 
         | These techniques are the key unlocks to robustifying AI and
         | creating certifiable trust in their behavior.
         | 
          | This runs from pre-deep-neural-network-era stuff like LQR-
          | RRT trees to today's hot topics of contraction theory and
          | control barrier certificates in autonomous vehicles.
        
         | throwaway173738 wrote:
          | From what I understand, chaos is an important part of
          | control systems theory.
        
       | jheitmann wrote:
       | There's a book covering this and more from 1993 called "Strange
       | Attractors: Creating Patterns in Chaos" by Julian C. Sprott
       | that's freely available here:
       | https://sprott.physics.wisc.edu/SA.HTM
       | 
        | It's fun (errr... for me at least) to translate the ancient
        | BASIC code into a modern implementation and play around.
       | 
        | The article mentions that it's interesting how the 2D
        | functions can look 3D. That's definitely true. But there's
        | also no reason why you can't just add on however many
        | dimensions you want and get genuinely many-dimensional
        | structures to noodle around with in visualizations and
        | animations.
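As a taste of what such a translation can look like, here is a hedged Python sketch of the screening step at the heart of Sprott's search: estimate the largest Lyapunov exponent of a 2D map by following two nearby trajectories and renormalizing their separation each step. The Hénon map is used only as a well-known chaotic sanity check; it is not code from the book:

```python
import math

def largest_lyapunov(step, x0, y0, n_warmup=500, n_iter=20000, d0=1e-8):
    """Estimate the largest Lyapunov exponent of a 2D map by tracking a
    nearby companion trajectory and renormalizing its separation back
    to d0 after every step."""
    x, y = x0, y0
    for _ in range(n_warmup):            # settle onto the attractor
        x, y = step(x, y)
    xs, ys = x + d0, y                   # perturbed companion trajectory
    total = 0.0
    for _ in range(n_iter):
        x, y = step(x, y)
        xs, ys = step(xs, ys)
        d = math.hypot(xs - x, ys - y)
        if not (0 < d < 1e6):            # diverged or collapsed: bail out
            return float("-inf")
        total += math.log(d / d0)
        # renormalize the separation along its current direction
        xs = x + (xs - x) * d0 / d
        ys = y + (ys - y) * d0 / d
    return total / n_iter

# sanity check on the Hénon map, whose largest exponent is known ~0.42
henon = lambda x, y: (1.0 - 1.4 * x * x + y, 0.3 * x)
print(largest_lyapunov(henon, 0.1, 0.1))
```

In a full Sprott-style search you would generate random map coefficients, run this estimator on each candidate, discard unbounded or non-positive-exponent cases, and plot the survivors.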
        
         | throwaway173738 wrote:
         | As an undergraduate I worked with some other Physics students
         | to construct an analog circuit using op amps that modeled one
         | of Sprott's equations and we confirmed experimentally that the
         | system exhibited chaotic behavior. We also used a
         | transconductance amplifier as a control parameter and swept
         | through the different states (chaotic, period windows) of the
         | circuit. We did not go as far as comparing the experimental and
         | predicted period windows while I was there but it was an
          | interesting project for us. At one point I turned up an
          | article in Physica D describing how to calculate the first
          | Lyapunov exponent from small data sets, which we used to
          | determine whether we were in a period window or not.
        
       ___________________________________________________________________
       (page generated 2025-09-30 23:00 UTC)