[HN Gopher] Mandelbrot deep zoom theory and practice (2021)
___________________________________________________________________
Mandelbrot deep zoom theory and practice (2021)
Author : fanf2
Score : 172 points
Date : 2025-01-03 15:42 UTC (1 day ago)
(HTM) web link (mathr.co.uk)
(TXT) w3m dump (mathr.co.uk)
| swayvil wrote:
| Lucid. But a few pictures would be nice and relevant.
| r721 wrote:
| Illustrative YouTube video:
| https://www.youtube.com/watch?v=0jGaio87u3A
| yzdbgd wrote:
| It's so humbling to read how complex such calculations can get. I
| took a crack at making a JS client side zooming app a while back
| and it was miserably slow and would run out of memory as the
| number precision was limited by JS's max float size...
|
| Here it is nonetheless if anyone's curious :
|
| App : https://yzdbg.github.io/mandelbrotExplorer/
|
| Repo : https://github.com/yzdbg/mandelbrotExplorer
| epistasis wrote:
| What a great summary of deep knowledge for newcomers to quickly
| digest.
|
| I had just been (re)watching the Numberphile and 3blue1brown
| fractal videos this morning so this is a great complement.
| mg wrote:
| I have yet to see a Mandelbrot explorer written in Javascript
| that allows infinite zoom without losing detail and a good UI
| that works on desktop and mobile.
|
| Does anybody know one?
|
| If there is none, I would build one this year. If anyone wants to
| join forces, let me know.
| ccvannorman wrote:
| This is right up my alley :-) I'll message you
| QuadmasterXLII wrote:
| I made https://mandeljs.hgreer.com
|
| The real glory of it is the math - it's using WebAssembly to
| calculate the reference orbit, and then the GPU to calculate
| all the pixels, but with an enormous amount of fussing to get
| around the fact that shaders only have 32-bit floats. The
| interface works on mobile and desktop, but if you have any tips
| on how to polish it, let me know.
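|
| For anyone wondering what the "enormous amount of fussing" looks
| like: the usual trick is perturbation. One reference orbit is
| iterated at high precision, and each pixel only iterates its tiny
| offset from that orbit, which fits in low precision. A minimal
| Python sketch of the idea (mpmath stands in for the high-precision
| pass and plain doubles for the shader's 32-bit floats; this is an
| illustration, not the mandeljs code):
|
|     from mpmath import mp, mpc, mpf
|
|     def reference_orbit(c_ref, max_iter):
|         """High-precision reference orbit Z_n, downcast to plain
|         doubles for the cheap per-pixel pass."""
|         z, orbit = mpc(0), []
|         for _ in range(max_iter):
|             orbit.append(complex(z))
|             z = z * z + c_ref
|         return orbit
|
|     def pixel_iterations(orbit, dc):
|         """Per-pixel perturbation: iterate the small offset
|         d_{n+1} = 2*Z_n*d_n + d_n**2 + dc in low precision,
|         where dc = c - c_ref and z_n = Z_n + d_n."""
|         d = 0j
|         for n, Zn in enumerate(orbit):
|             if abs(Zn + d) > 2:        # this pixel's z_n escaped
|                 return n
|             d = 2 * Zn * d + d * d + dc
|         return len(orbit)              # never escaped
|
|     mp.prec = 200                          # bits for the reference
|     c_ref = mpc(mpf("-1.25"), mpf("0"))    # a reference in the set
|     orbit = reference_orbit(c_ref, 1000)
|     print(pixel_iterations(orbit, 1e-12 + 3e-13j))  # nearby pixel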
| mg wrote:
| Hey, this is pretty cool!
|
| Have you considered publishing it under an open source
| license?
|
| Then I could see myself working on some features like:
| Selectable color palette, drag&drop and pinch-to-zoom on
| mobile and fractional rendering (so that when you move the
| position, only the new pixels get calculated).
| QuadmasterXLII wrote:
| https://github.com/HastingsGreer/mandeljs
|
| I'll whack a license on it later today
| morphle wrote:
| You could polish it by using variable-precision floats or at
| least quadruple-precision (128-bit) floating point. This
| requires you to create a programming language compiler or use
| my parallel Squeak programming language (it is portable) and
| have that run on WebAssembly or WebGL. It would be easier to
| have it run directly on CPU, GPU and Neural Engine hardware.
| The cheapest hardware today would be the M4 Mac mini or
| design your own chips (see my other post in this thread).
|
| An example of this polished solution is [1] but this example
| does not yet use high precision floating point [2].
|
| [1] https://tinlizzie.org/~ohshima/shadama2/
|
| [2] https://github.com/yoshikiohshima/Shadama
| QuadmasterXLII wrote:
| This is already implemented - I just did it with pen and paper
| and wrote the shader directly from my results instead of
| writing a language first.
|
| https://www.hgreer.com/JavascriptMandelbrot/#an_ugly_hack_t
| h...
| dspillett wrote:
| _> to get around the fact that shaders only have 32 bit
| floats_
|
| I wonder if there are places around the set where rounding
| through the iterations depending on the number format chosen,
| materially affects the shape (rather than just changing many
| pixels a bit so some smoothness or definition is lost).
| fanf2 wrote:
| You get effects somewhat like that from perturbation theory
| glitches, as discussed in the article.
| foobarrio2 wrote:
| I lost access to my original HN account so I created this one
| just to give you a heads up: I'll be sending an email!
| dwaltrip wrote:
| I'm building one.
|
| I have an old "beta" release of sorts that's live:
| https://fracvizzy.com/. Change the color mode to "histogram",
| it creates much more interesting pictures imo. Doesn't really
| work on mobile yet fyi.
|
| I just got back into the project recently. I'm almost done
| implementing smooth, "google maps" style continuous zoom. I
| have lots of ideas for smoother, more efficient exploration as
| well as expanded coloring approaches and visualization styles.
| I'm also working on features for posting / sharing what you
| find, so you can see the beautiful locations that others find
| and the visualization parameters they chose. As well as making
| it easy to bookmark your own finds.
|
| Infinite zoom is probably a long ways out (if ever), but with
| JS numbers you can zoom pretty far before hitting precision
| issues. There's _a lot_ to explore there. I'd love to get
| infinite zoom someday though.
|
| Here's a few example locations I found quickly (I have more
| links saved somewhere....):
|
| *
| https://fracvizzy.com/?pos[r]=-0.863847413354&pos[i]=-0.2309...
|
| *
| https://fracvizzy.com/?pos[r]=-1.364501&pos[i]=-0.037646&z=1...
|
| *
| https://fracvizzy.com/?pos[r]=-0.73801&pos[i]=-0.18899&z=12&...
| jderick wrote:
| I wonder if this can generalize to the Mandelbulb?
| crazygringo wrote:
| In practice, the Mandelbulb is usually only computed to a few
| iterations (e.g. 20) in order to maintain smooth surfaces and
| prevent a lot of surfaces from dissolving into ~disconnected
| "froth".
|
| So deep zooms and deep iterations aren't really done for it.
|
| Also, it's generally rendered using signed distance functions
| which is a little bit more complicated. I haven't looked at the
| equations though to figure out if perturbation theory is easy
| to apply -- I'm guessing it would be, as the general principle
| would seem to apply.
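|
| For reference, the distance estimator that usually gets
| ray-marched for the power-8 Mandelbulb looks roughly like this (a
| common formulation, sketched in Python rather than shader code;
| the low iteration count echoes the point above):
|
|     import math
|
|     def mandelbulb_de(cx, cy, cz, power=8, iters=20, bailout=2.0):
|         """Approximate distance from (cx, cy, cz) to the Mandelbulb,
|         using the usual running-derivative estimate."""
|         x, y, z = cx, cy, cz
|         dr, r = 1.0, math.sqrt(cx*cx + cy*cy + cz*cz)
|         for _ in range(iters):
|             r = math.sqrt(x*x + y*y + z*z)
|             if r > bailout:
|                 break
|             theta = math.acos(z / r) if r > 0 else 0.0
|             phi = math.atan2(y, x)
|             dr = power * r**(power - 1) * dr + 1.0
|             zr = r**power
|             x = zr * math.sin(power*theta) * math.cos(power*phi) + cx
|             y = zr * math.sin(power*theta) * math.sin(power*phi) + cy
|             z = zr * math.cos(power*theta) + cz
|         # positive outside, roughly zero on the surface
|         return 0.5 * math.log(max(r, 1e-12)) * r / dr
|
|     print(mandelbulb_de(1.2, 0.0, 0.0))   # a point just outside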
| morphle wrote:
| In 1986 we wrote a parallel Mandelbrot program in assembly
| instructions in the 2K or 4K on-chip SRAM of 17 x T414 Transputer
| chips linked together with 4 x 10 Mbps links each into a cheap
| supercomputer [1]. It drew the 512 x 342 pictures on a Mac 128K
| used as a terminal, at around 10 seconds per picture.
|
| I later wrote the quadruple-precision floating-point calculations
| in microcode [2] to speed it up by a factor of 10 and combined 52
| x T414 with 20 x T800 Transputers with floating point hardware
| into a larger supercomputer costing around $50K.
|
| With this cheap 72 core supercomputer it still would have taken
| years to produce the deep zoom of the Mandelbrot set [4] that
| took them 6 months with 12 CPU cores running 24/7 in 2010 [3]. In
| 2024 we can buy a $499 M4 Mac mini (20-36 'cores') and calculate
| it in a few days. If I link a few M4s together with 3x32 Gbps
| Thunderbolt links into a cheap supercomputer and write the
| assembly code for all the 36 cores (CPU+GPU+Neural Engine) I can
| render the deep zoom Mandelbrot almost in realtime (30 frames per
| second).
|
| That is Moore's law in practice. The T414 had 900,000
| transistors, the M4 has 28 billion transistors at 3nm (roughly
| 31,000 times as many at 50% of the price).
|
| The M2 Ultra with 134 billion transistors and M4 Max (estimate
| 100 billion transistors) are larger chips than a M4 but they are
| relatively more expensive than the M4, so it is cheaper and faster
| to link together 13 x M4 than buy 1 x M2 Ultra or 6 x M4 instead
| of 1 x M4 Max. Cerebras or NVidia also make larger chips, but
| again, not as cheap and fast as the M4.
| Price/performance/Watt/dollar is what matters, you want the
| lowest energy (OPEX) to calculate as many floating point numbers
| as possible at the lowest purchase cost (CAPEX), you do not want
| the fastest chips.
|
| You will want to rewrite your software to optimize for the
| hardware. Even better would be to write the optimum software (for
| example in variable precision floating point and large integers
| in Squeak Smalltalk) and then design the hardware to execute that
| program with the lowest cost. To do that I designed my own
| runtime reconfigurable chips with reconfigurable core and
| floating point hardware precision.
|
| I designed a 48 trillion transistor Wafer Scale Integration (WSI)
| at 3nm with almost a million cores and a few hundred gigabyte
| SRAM on the wafer [5][6]. This unchipped wafer would cost around
| $30K. It would cost over $130 million to manufacture it at TSMC.
| This WSI would have 1714 times more transistors but cost only 60
| times as much, a 28 times improvement, but it is an apples and
| oranges comparison. It would be more like a 100 times improvement
| because of the larger SRAM, faster on-chip links and lower energy
| cost of the WSI over the M4.
|
| The largest fastest supercomputers [7] cost $600 million. To
| match that with a cluster of M4 would cost around $300 million.
| To match it with my WSI design would cost $140 million total. For
| $230 million you get a cheap 3000 x WSI = 144 quadrillion
| transistor supercomputer immersed in a 10mx10mx10m swimming pool
| that is orders of magnitude faster than the largest, fastest
| supercomputer and it would be at orders of magnitude lower cost,
| especially if you would run it on solar energy only [8], even if
| you would buy three 3000 wafer scale integration supercomputers
| ($410 million) and only run it during daylight hours and space it
| evenly around the equator in cloudless deserts. Energy cost
| dominates hardware costs over the lifetime of a supercomputer.
|
| All the numbers I mentioned are rounded off or estimates; to be
| accurate would require me to first define every part of the floating
| point math, describe the software calculations exactly, make
| accurate hardware definitions and would take me several
| scientific papers and several weeks to write.
|
| [1]
| https://www.bighole.nl//pub/mirror/homepage.ntlworld.com/kry...
|
| [2]
| https://sites.google.com/site/transputeremulator/Home/inmos-...
|
| [3] http://fractaljourney.blogspot.com
|
| [4] https://www.youtube.com/watch?v=0jGaio87u3A
|
| Maybe https://www.youtube.com/watch?v=zXTpASSd9xE took more
| calculations, it is unclear.
|
| [5] Smalltalk and Self Hardware
| https://www.youtube.com/watch?v=vbqKClBwFwI
|
| [6] Smalltalk and Self
| Hardware https://www.youtube.com/watch?v=wDhnjEQyuDk
|
| [7] https://en.wikipedia.org/wiki/El_Capitan_(supercomputer)
|
| [8] https://www.researchgate.net/profile/Merik-
| Voswinkel/publica...
| morphle wrote:
| There are virtually no limits (for Mandelbrot and computing in
| general) because there are few limits on the growth of
| knowledge [5].
|
| In a few decades we will have learned to take CO2 (carbon
| dioxide) molecules out of the air [4] and rearrange the carbon
| atoms in 3D structures atom by atom [6]. We will be able to
| grow the transistors and the solar cells virtually for free.
| Energy will be virtually free, a squandrable abundance of free
| and clean energy [2]. At that point we will start automatically
| self-assembling Dyson Swarm constructions of solar cells with
| transistors on the back [1] on the Quebibyte scale to capture
| all the solar output of the sun [3] and get near-infinite
| compute for free. We would finally be able to explore the
| Mandelbrot space at full depth within our lifetime.
|
| [1] https://gwern.net/doc/ai/scaling/hardware/1999-bradbury-
| matr... and https://en.wikipedia.org/wiki/Matrioshka_brain
|
| [2] Bob Metcalfe Ethernet
| https://www.youtube.com/watch?v=axfsqdpHVFU
|
| [3] https://en.wikipedia.org/wiki/Kardashev_scale
|
| [4] Richard Feynman Plenty of Room at the Bottom
| https://en.wikipedia.org/wiki/There%27s_Plenty_of_Room_at_th...
|
| [5] David Deutsch: Chemical scum that dream of distant quasars
| https://www.youtube.com/watch?v=gQliI_WGaGk
|
| [6] https://www.youtube.com/watch?v=Spr5PWiuRaY and
| https://www.youtube.com/watch?v=r1ebzezSV6s
| LargoLasskhyfv wrote:
| I'd rather prefer to pursue the path of producing potent
| phytochemicals to unleash perfect psionic powers, thereby
| shortcutting the need for all these boring physical
| procedures, instead persisting mind over matter as an
| afterthought.
| fluoridation wrote:
| LOL.
|
| >We would finally be able to explore the Mandelbrot space at
| full depth
|
| What does that mean? The Mandelbrot set is infinitely
| intricate.
| mikestorrent wrote:
| This is the future I want to live in. Bountiful cheap energy
| is much more attractive than the futures most people write
| about now, that vary between some sort of managed decline,
| dystopianism, or other negativity.
|
| I hope to see you or your faithful recreation in the
| Matrioshka brain one day.
| morphle wrote:
| > I hope to see you or your faithful recreation in the
| Matrioshka brain one day.
|
| Those were my earliest thoughts, at 11 years old, that got me
| into computing: upload my brain into a Matrioshka brain or
| at least study the human brain by building a supercomputer
| to simulate it. I'm still working on building cheaper
| supercomputers and cheaper clean energy 50 years later.
| Deep zoom Mandel calculations are still a good benchmark to
| measure my progress.
|
| Three Sci-Fi stories that describe faithful recreation of a
| human in a computer [1][2][3].
|
| [1] https://en.wikipedia.org/wiki/Accelerando
|
| [2] https://en.wikipedia.org/wiki/The_Annals_of_the_Heechee
| and the last part of
| https://en.wikipedia.org/wiki/Heechee_Rendezvous
|
| [3] https://en.wikipedia.org/wiki/3001:_The_Final_Odyssey
| morphle wrote:
| >Bountiful cheap energy is much more attractive than the
| futures most people write about
|
| A squandrable abundance of free and clean energy means we
| solve the climate change and sixth mass-extinction crises
| in the next few years! It should be the only future we fund
| today. Just cheaper solar cells would do that. No new
| inventions needed.
|
| Cheap clean energy is the most important next step for
| humanity's survival. It will only cost 100 million, maybe a
| few hundred million dollars at most [1].
|
| It also means we would get a money-free economy as depicted
| in Star Trek: The Next Generation. Even interstellar travel
| by solar laser and solar sail would become possible if
| energy is nearly free.
|
| [1] Alan Kay, How? When "What Will It Take?" Seems Beyond
| Possible, We Need To Study How _Immense Challenges_ Have
| Been Successfully Dealt With In The Past
| https://internetat50.com/references/Kay_How.pdf
| SideQuark wrote:
| There are fundamental physical limits to information density
| and computing power, which will limit growth of knowledge.
| mikestorrent wrote:
| Exceedingly interesting! Say, I have a board from back in the
| 80s that you may know about - nobody else I've asked has any
| idea. It's a "Parallon" ISA card from a company called Human
| Devices, that has something like 8 NEC V20s on it. I think it's
| an early attempt at an accelerator card, maybe for neural
| networks, not sure.
|
| Some reference about its existence here, in a magazine that
| (ironically? serendipitously?) features a fractal on the cover:
| http://www.bitsavers.org/magazines/Micro_Cornucopia/Micro_Co...
|
| Ever heard of such a thing? I think at this point, I'm trying
| to find someone who wants it, whether for historical purposes
| or actually to use.
| morphle wrote:
| Yes, I've heard of such a thing [1]; it is probably worth $50.
| The board is just a cluster of 8 V20 (Intel 8088 compatible)
| 16-bit processors, nothing to write home about. It
| is not considered an early attempt at an accelerator card.
| Depending on your definition many were done earlier [2] going
| back to the earliest computers 2000 years ago. My favorite
| would be the 16 processor Alto [3].
|
| In 1989 I built my 4th Transputer supercomputer for a
| customer who programmed binary neural networks.
|
| In those early days everyone would use Mandelbrot and Neural
| Networks as simple demos and benchmarks of any chip or
| computer, especially supercomputers. So it is not ironic or
| serendipitous that a magazine would have a Mandelbrot and an
| article on a microprocessor in the same issue. My Byte
| Magazine article on Transputer and DIY supercomputers also
| described both together.
|
| [1] https://en.wikipedia.org/wiki/NEC_V20
|
| [2] https://en.wikipedia.org/wiki/History_of_supercomputing#:
| ~:t....
|
| [3] https://en.wikipedia.org/wiki/Xerox_Alto
| mikestorrent wrote:
| Thanks for answering, appreciated. I suppose I will just
| hang onto it as an interesting piece of history, though I
| never thought it would be worth much - more I'm just hoping
| to find someone out there who wants it for some personal
| reason so I can "send it home", so to speak.
|
| Is anyone doing anything with transputer technology now? Do
| you think it has a chance at resurgence?
|
| > the earliest computers 2000 years ago
|
| Typo, exaggeration, or a reference to something like the
| Antikythera Mechanism?
| morphle wrote:
| >Is anyone doing anything with transputer technology now?
|
| Yes, our Morphle Engine Wafer Scale Integration and our
| earlier SiliconSqueak microprocessor designs and
| supercomputers borrow many special features of the
| Transputer designs by David May. Also of the Alto design
| by Chuck Thacker, SOAR RISC by David Ungar and a few of
| the B5000-B6500 designs by Bob Barton. Most of all they
| build on the design of the Smalltalk, Squeak and
| reflective Squeak VM software designs by Alan Kay and Dan
| Ingalls.
|
| >reference to Antikythera Mechanism [8]?
|
| Yes. I could have referenced the Jacquard Loom [1],
| Babbage's Analytical Engine or the earlier Difference Engine
| [2], Alan Turing, Ada Lovelace and dozens of other
| contestants for 'first' and 'oldest' computational
| machines, they are all inaccurate, as are these
| lists[3][4]. Or I could have only referenced the ones
| from my own country [6].
|
| >Do you think it has a chance at resurgence?
|
| Yes, I hope my chips and WSI are that resurgence of
| European microprocessor chips. I just need less than a
| million euros in funding to start production of prototype
| WSIs. Even just a single customer buying a $130 million
| supercomputer is enough. YCombinator should fund me. A
| science grant or a little support from ASML R&D might
| also be enough to complete our resurgence. It will take
| 2-3 years from the moment of funding to go into mass
| production [5].
|
| [1] https://en.wikipedia.org/wiki/Jacquard_machine
|
| [2] https://en.wikipedia.org/wiki/Difference_engine
|
| [3] https://www.oldest.org/technology/computers/
|
| [4] http://www.computerhistories.org/
|
| [5] https://www.youtube.com/watch?v=wDhnjEQyuDk
|
| [8] https://en.wikipedia.org/wiki/Antikythera_mechanism
|
| [6] Oldest Dutch computers? https://www.cwi.nl/en/about/h
| istory/#:~:text=CWI%20developed....
|
| [7] https://www.youtube.com/watch?v=vbqKClBwFwI or
| https://www.youtube.com/watch?v=wDhnjEQyuDk or
| https://www.uksmalltalk.org/2022/06/smalltalk-and-self-
| hardw...
| ccvannorman wrote:
| Wonderful article on fractals and fractal zooming/rendering! I
| had never considered the inherent limitations and complications
| of maintaining accuracy when doing deep zooms. Some questions
| that came up for me while reading the article:
|
| 1. What are the fundamental limits on how deeply a fractal can be
| accurately zoomed? What's the best way to understand and map this
| limit mathematically?
|
| 2. Is it possible to renormalize a fractal (perhaps only "well
| behaved"/"clean" fractals like Mandelbrot) at an arbitrary level
| of zoom by deriving a new formula for the fractal at that level
| of zoom? (Intuition says No, well, maybe but with additional
| complexities/limitations; perhaps just pushing the problem
| around). ( _My experience with fractal math is limited._ ) I'll
| admit this is where I hit the limits of my own knowledge in the
| article, as it discussed this in terms of normalizing the
| mantissa, and the limit is that you then need to compute each
| pixel on the CPU.
|
| 3. If we assume that there are fundamental limits on zoom,
| mathematically speaking, then should we consider an alternative
| that _looks_ perfect with no artifacts (though it would not be
| technically accurate) at arbitrarily deep levels of zoom? Is it
| in principle possible to have the mega-zoomed-in fractal appear
| flawless, or is it provable that at some level of zoom there is
| simply no way to render any coherent fractal or appearance of
| one?
|
| I always thought of fractals as a view into infinity from the 2D
| plane (indeed the term "fractal" is meant to convey a fractional,
| non-integer dimension). But I never considered our limits as
| sentient beings with physical computers that would never be able
| to fully explore a fractal, thus it is only an infinity in idea,
| and not in reality, to us.
| ttoinou wrote:
| 1. No limit. But you need to find an interesting point, the
| information is encoded in the numerous digits of this (x,y)
| point for Mandelbrot. Otherwise you'll end up in a flat space
| at some point when zooming
|
| 2. Renormalization to do what? In the case of Mandelbrot you
| can use a neighbor point to create the Julia of it and have
| similar patterns in a more predictable way
|
| 3. You can compute the perfect version but it takes more time,
| this article discusses optimizations and shortcuts
| ccvannorman wrote:
| 1. There must be a limit; there are only around 10^80 atoms
| in our universe, so even a universe-sized supercomputer could
| not calculate an arbitrarily deep zoom that required 10^81
| bits of precision. Right?
|
| 2. Renormalization just "moves the problem around" since you
| lose precision when you recalculate the image algorithm at a
| specific zoom level. This would create discrepancies as you
| zoom in and out.
|
| 3. You cannot; because of the fundamental limits on computing
| power. I _think_ you cannot compute a mathematically accurate
| and perfect Mandelbrot set at an arbitrarily high level of
| zoom, say 10^81, because we don't have enough compute or
| memory available to have the required precision
| ttoinou wrote:
| 1. Mandelbrot is infinite. The number pi is infinite too
| and contains more information than the universe
|
| 2. I don't know what you mean or look for with normalization
| so I can't answer more
|
| 3. It depends on what you mean by computing Mandelbrot. We
| are always making approximations for visualisation by
| humans, that's what we're talking about here. If you mean
| we will never discover more digits of pi than there are
| atoms in the universe, then yes I agree, but that's a
| different problem
| adrianN wrote:
| Pi doesn't contain a lot of information since it can be
| computed with a reasonably small program. For numbers
| with high information content you want other examples
| like Chaitin's constant.
| thaumasiotes wrote:
| > Pi doesn't contain a lot of information since it can be
| computed with a reasonably small program.
|
| It can be described with a small program. But it contains
| more information than that. You can only compute finite
| approximations, but the quantity of information in pi is
| infinite.
|
| The computation is fooling you because the digits of pi
| are not all equally significant. This is irrelevant to
| the information theory.
| SideQuark wrote:
| No, it does not contain more information than the
| smallest representation. This is fundamental, and follows
| from many arguments, e.g., Shannon information,
| compression, Chaitin's work, Kolmogorov complexity,
| entropy, and more.
|
| The phrase "infinite number of 0's" does not contain
| infinite information. It contains at most what it took to
| describe it.
| thaumasiotes wrote:
| Descriptions are not all equally informative. "Infinite
| number of 0s" will let you instantly know the value of
| any part of the string that you might want to know.
|
| The smallest representation of Chaitin's constant is "Ω".
| This matches the smallest representation of pi.
| morphle wrote:
| We can create enough compute and SRAM memory for a few
| hundred million dollars. If we apply science there are
| virtually no limits within a few years.
|
| See my other post in this discussion.
| fluoridation wrote:
| 1. You asked about the fundamental limits, not the
| practical limits. Obviously practically you're limited by
| how much memory you have and how much time you're willing
| to let the computer run to draw the fractal.
| earnestinger wrote:
| > could not calculate an arbitrarily deep zoom that
| required 10^81 bits of precision. Right?
|
| I'm here to nitpick.
|
| Number of bits is not strictly 1:1 to number of particles.
| I would propose to use distances between particles to
| encode information.
| Quekid5 wrote:
| ... and how would you _decode_ that information?
| Heisenberg sends his regards.
|
| EDIT: ... and of course the point isn't that it's 1:1
| wrt. bits and atoms, but I think the point was that there
| is obviously some maximum information density -- too much
| information in "one place" leads to a black hole.
| immibis wrote:
| Fun fact: the maximum amount of information you can store
| in a place is the entropy of a black hole, and it's
| proportional to the surface area, not the volume.
| immibis wrote:
| 10^81 zoom is easy. You only run out of bits at around 2^(10^81),
| i.e. 2 raised to a 1 followed by 81 zeros.
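|
| Put differently, the precision you need grows with the log of the
| zoom, not with the zoom itself. A rough check (figures are
| approximate):
|
|     import math
|     # mantissa bits needed to separate pixels at a 10^81 zoom:
|     print(81 * math.log2(10))        # ~269 bits for the scale alone
|     print(81 * math.log2(10) + 12)   # ~281 bits with a 4096-px view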
| pepinator wrote:
| In the case of Mandelbrot, there is a self similar
| renormalization process, so you can obtain such a formula. For
| the "fixed points" of the renormalization process, the formula
| is super simple; for other points, you might need more
| computations, but it's nevertheless an efficient method. There
| is a paper by Bartholdi where he explains this in terms of
| automata.
| LegionMammal978 wrote:
| As for practical limits, if you do the arithmetic naively, then
| you'll generally need O(_n_) memory to capture a region of
| size 10^-_n_ (or 2^-_n_, or any other base). It seems to be
| the exception rather than the rule when it's possible to use
| less than O(_n_) memory.
|
| For instance, there's no known practical way to compute the
| 10^100th bit of sqrt(2), despite how simple the number is. (Or
| at least, a thorough search yielded nothing better than
| Newton's method and its variations, which must compute all the
| bits. It's even worse than pi with its BBP formula.)
|
| Of course, there may be tricks with self-similarity that can
| speed up the computation, but I'd be very surprised if you
| could get past the O(_n_) memory requirement just to
| represent the coordinates.
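|
| To make the contrast concrete: the BBP formula lets you pull out
| the d-th hexadecimal digit of pi with modular arithmetic and
| essentially constant memory, without computing the digits before
| it. A small Python sketch of the standard digit-extraction
| algorithm (only reliable for modest d, since the fractional parts
| are accumulated in doubles):
|
|     import math
|
|     def bbp_term(d, j):
|         """Fractional part of sum_k 16^(d-k) / (8k+j)."""
|         s = 0.0
|         for k in range(d + 1):            # large, integral powers
|             s += pow(16, d - k, 8 * k + j) / (8 * k + j)
|             s -= int(s)
|         k, t = d + 1, 0.0
|         while True:                       # small tail terms
|             term = 16.0 ** (d - k) / (8 * k + j)
|             if term < 1e-17:
|                 break
|             t += term
|             k += 1
|         return s + t
|
|     def pi_hex_digit(d):
|         """The (d+1)-th hexadecimal digit of pi after the point."""
|         x = (4 * bbp_term(d, 1) - 2 * bbp_term(d, 4)
|              - bbp_term(d, 5) - bbp_term(d, 6))
|         x -= math.floor(x)
|         return "0123456789abcdef"[int(16 * x)]
|
|     print("".join(pi_hex_digit(d) for d in range(8)))  # 243f6a88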
| calibas wrote:
| > What are the fundamental limits on how deeply a fractal can
| be accurately zoomed?
|
| This question is causing all sorts of confusion.
|
| There is no _fundamental_ limit on how much detail a fractal
| contains, but if you want to render it, there 's always going
| to be a _practical_ limit on how far it can accurately be
| zoomed.
|
| Our current computers kinda struggle with hexadecuple precision
| floats (512-bit).
| 65 wrote:
| How does an article about visualizing fractals manage to have
| ZERO images in it?
| ttoinou wrote:
| Well this is aimed at fractalers like me who want to implement
| deep zooms ourselves, rather than a tutorial for newbies
| fractalf wrote:
| Haha my exact same reaction (and I've programmed a few)
| PaulHoule wrote:
| I have been thinking about the (I think) still unsolved problem
| that I got told about in grad school and recently saw mentioned
| in a 1987 issue of _Byte_ magazine.
|
| Namely people make these Poincare section plots for Hamiltonian
| systems like
|
| https://mathematica.stackexchange.com/questions/61637/poinca...
|
| That section that looks like a bunch of random dots is where
| chaotic motion is observed. There's a lot of reason to think that
| area should have more structure in it because the proof that
| there are an infinite number of unstable periodic orbits in there
| starts with knowing there are an infinite number of stable
| periodic orbits and that there is an unstable orbit on the
| separatrix between them. Those plots are probably not accurate at
| all because the finite numeric precision interacts with the
| sensitivity to initial conditions. The _Byte_ article suggests
| that it ought to be possible to use variable precision math,
| bounding boxes and such to make a better plot but so far as I
| know it hasn't been done.
|
| (At this point I care less about the science and more about
| showing people an image they haven't seen before.)
| loxias wrote:
| You subtly referenced one of my favorite facts.
|
| "In 1920, Pierre Fatou expressed the conjecture that -- except
| for special cases -- all critical points of a rational map of
| the Riemann sphere tend to _periodic_ orbits under iteration.
| ... can be interpreted to mean that for a dense set of
| parameters 'a', an attracting periodic orbit exists [in the
| logistic map]."
|
| Friggin' blew my mind at age 20. Still kinda does. :)
| thaumasiotes wrote:
| > There's a lot of reason to think that area should have more
| structure in it because the proof that there are an infinite
| number of unstable periodic orbits in there starts with knowing
| there are an infinite number of stable periodic orbits and that
| there is an unstable orbit on the separatrix between them.
|
| An analogous result is true for the number line: between any
| two rational numbers, there is an irrational number.
|
| But that isn't cause to suspect that, if you look at a random
| piece of the number line, you should be able to see a lot of
| rationals. They're _there_. But you can't see them. The odds
| of any given number you look at being irrational are 1.
| PaulHoule wrote:
| That is very relevant.
|
| In a plot like that one, the groups of circles that you see
| are resonances where the winding ratio on the torus (how many
| degrees you go around per turn) is rational. The area between
| two resonances (where the behavior is non-resonant) has a little
| bit of chaos if you look closely at those regular regions: there
| is a certain chaotic zone. Those areas of pervasive chaos happen
| when the chaos area around the rationals covers everything,
| see
|
| https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold%E2%8.
| ..
|
| which involves doing a sum over all the rationals.
|
| In the 2 degree of freedom case those tori are a solid wall
| so even if you have a chaotic zone around a resonance the
| motion is constrained by the tori. For N>2 though there are
| more dimensions and the path can go "around" the tori. You
| could picture our solar system of 8 planets having 24 degrees
| of freedom (although the problem is terribly non-generic
| because the three kinds of motion in a 1/r^2 field all have
| the same period). It sure seems that we are in a regular
| regime like one of the circles in that plot, but we cannot
| rule out that, over the course of billions of years,
| Neptune will get ejected. See
|
| https://en.wikipedia.org/wiki/Arnold_diffusion
|
| which is poorly understood because nobody has found an attack
| on it. You have the same problem with numerical work that you
| do in the plot because sensitive dependence on initial
| conditions magnifies rounding error. Worse than that, normal
| integrators like Runge-Kutta don't preserve all the geometric
| invariants of
|
| https://en.wikipedia.org/wiki/Symplectic_geometry
|
| so you know you get the wrong results. There is
|
| https://en.wikipedia.org/wiki/Symplectic_integrator
|
| but other than preserving that invariance those perform much
| worse than normal integrators. This is one of the reasons why
| the field has been stuck since before I got into it. This
|
| https://en.wikipedia.org/wiki/Interplanetary_Transport_Netwo.
| ..
|
| came out of people learning how to find chaotic trajectories
| to use for transfer orbits and is an exciting development
| though. Practically, you don't want to take a low-energy,
| long-duration trajectory to Mars because you'll get your
| health wrecked by radiation.
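|
| For anyone who hasn't met one: a symplectic integrator can be as
| simple as the leapfrog / Stormer-Verlet scheme below (a toy Python
| sketch for a pendulum with H = p^2/2 - cos(q); each step is an
| exact area-preserving map of phase space, so the energy error
| stays bounded instead of drifting the way it does with plain
| Runge-Kutta):
|
|     import math
|
|     def leapfrog(q, p, dt, steps, dV):
|         """Stormer-Verlet steps for H = p**2/2 + V(q)."""
|         for _ in range(steps):
|             p -= 0.5 * dt * dV(q)     # half kick
|             q += dt * p               # drift
|             p -= 0.5 * dt * dV(q)     # half kick
|         return q, p
|
|     # Pendulum: V(q) = -cos(q), so dV/dq = sin(q)
|     q, p = 1.0, 0.0
|     q, p = leapfrog(q, p, dt=0.01, steps=200000, dV=math.sin)
|     print(0.5 * p * p - math.cos(q))  # stays near -cos(1) ~ -0.5403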
| colordrops wrote:
| Has there been any progress on the analysis or formalization of
| the how or why of the Mandelbrot set's infinite beauty and
| complexity from such a simple formula? Sorry for the poor framing
| of my question... Just seems there is something to be learned
| from the set beyond just that it exists and looks cool.
| TacticalCoder wrote:
| I remember fractal zooms in real time on the Amiga 500 and they
| were using a trick: they'd only recompute a few horizontal and
| vertical lines at each frame, with most of the screen just being
| a copy/blit of the previous frame, shifted. After a few frames
| all the pixels from _n_ frames back were discarded.
|
| This was a nice optimization and the resulting animation was
| still looking nice. Good memories.
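|
| The reuse idea is easy to sketch for a pan (a zoom scales the blit
| instead of shifting it, but the bookkeeping is similar in spirit):
| only the newly exposed strip is actually iterated, everything else
| is copied from the previous frame. A toy numpy version, purely
| illustrative:
|
|     import numpy as np
|
|     def pan_right_one_pixel(buf, new_column):
|         """Reuse the previous frame: blit it one pixel to the left
|         and compute only the newly exposed right-hand column."""
|         buf = np.roll(buf, -1, axis=1)   # the cheap copy/blit
|         buf[:, -1] = new_column          # the only strip recomputed
|         return buf
|
|     frame = np.zeros((240, 320), dtype=np.uint16)
|     fresh = np.arange(240, dtype=np.uint16)    # stand-in for newly
|     frame = pan_right_one_pixel(frame, fresh)  # computed counts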
___________________________________________________________________
(page generated 2025-01-04 23:01 UTC)