[HN Gopher] Mandelbrot deep zoom theory and practice (2021)
       ___________________________________________________________________
        
       Mandelbrot deep zoom theory and practice (2021)
        
       Author : fanf2
       Score  : 108 points
       Date   : 2025-01-03 15:42 UTC (7 hours ago)
        
 (HTM) web link (mathr.co.uk)
 (TXT) w3m dump (mathr.co.uk)
        
       | swayvil wrote:
       | Lucid. But a few pictures would be nice and relevant.
        
         | r721 wrote:
         | Illustrative YouTube video:
         | https://www.youtube.com/watch?v=0jGaio87u3A
        
       | yzdbgd wrote:
        | It's so humbling to read how complex such calculations can get. I
        | took a crack at making a client-side JS zooming app a while back
        | and it was miserably slow, would run out of memory, and the
        | precision was limited by JS's 64-bit floats...
       | 
       | Here it is nonetheless if anyone's curious :
       | 
       | App : https://yzdbg.github.io/mandelbrotExplorer/
       | 
       | Repo : https://github.com/yzdbg/mandelbrotExplorer
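        | 
        | For a rough sense of where plain float64 coordinates give out:
        | once the per-pixel step falls below the float's relative
        | precision times the coordinate magnitude, adjacent pixels
        | collapse to the same value. A back-of-the-envelope sketch
        | (assuming a ~1000 pixel wide view, nothing specific to my app):
        | 
        |     // Where a plain-float64 explorer runs out of resolution.
        |     const viewportPixels = 1000;   // pixels across the view
        |     const eps = Number.EPSILON;    // 2^-52, relative spacing of float64
        |     
        |     // Adjacent pixels differ by width / viewportPixels; they merge once
        |     // that step drops below eps * |coordinate| (coordinates are O(1)).
        |     const minUsableWidth = eps * viewportPixels;   // ~2.2e-13
        |     const maxZoom = 4 / minUsableWidth;            // from the ~4-wide full set
        |     
        |     console.log(`pixels merge below width ~${minUsableWidth.toExponential(2)}`);
        |     console.log(`roughly ${maxZoom.toExponential(2)}x magnification`); // ~1.8e13x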
        
       | epistasis wrote:
       | What a great summary of deep knowledge for newcomers to quickly
       | digest.
       | 
       | I had just been (re)watching the Numberphile and 3blue1brown
       | fractal videos this morning so this is a great complement.
        
       | mg wrote:
        | I have yet to see a Mandelbrot explorer written in JavaScript
        | that allows infinite zoom without losing detail and has a good UI
        | that works on desktop and mobile.
       | 
       | Does anybody know one?
       | 
       | If there is none, I would build one this year. If anyone wants to
       | join forces, let me know.
        
         | ccvannorman wrote:
         | This is right up my alley :-) I'll message you
        
         | QuadmasterXLII wrote:
         | I made https://mandeljs.hgreer.com
         | 
          | The real glory of it is the math - it's using WebAssembly to
         | calculate the reference orbit, and then the GPU to calculate
         | all the pixels, but with an enormous amount of fussing to get
         | around the fact that shaders only have 32 bit floats. The
         | interface works on mobile and desktop, but if you have any tips
         | on how to polish it, let me know.
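          | 
          | Roughly, the per-pixel math looks like this (a simplified
          | sketch of the perturbation idea in plain TypeScript, not the
          | literal WASM/shader code): the reference orbit Z_n is computed
          | once at high precision, and each pixel only iterates its tiny
          | offset delta_n, which stays representable in low precision.
          | 
          |     type Cx = { re: number; im: number };
          |     const add = (a: Cx, b: Cx): Cx => ({ re: a.re + b.re, im: a.im + b.im });
          |     const mul = (a: Cx, b: Cx): Cx =>
          |       ({ re: a.re * b.re - a.im * b.im, im: a.re * b.im + a.im * b.re });
          |     const abs2 = (a: Cx): number => a.re * a.re + a.im * a.im;
          |     
          |     // 1. Reference orbit Z_0, Z_1, ... for the centre point c_ref
          |     //    (plain doubles here stand in for arbitrary precision).
          |     function referenceOrbit(cRef: Cx, maxIter: number): Cx[] {
          |       const orbit: Cx[] = [];
          |       let z: Cx = { re: 0, im: 0 };
          |       for (let n = 0; n < maxIter && abs2(z) <= 4; n++) {
          |         orbit.push(z);
          |         z = add(mul(z, z), cRef);
          |       }
          |       return orbit;
          |     }
          |     
          |     // 2. Per pixel: delta_{n+1} = 2*Z_n*delta_n + delta_n^2 + dc,
          |     //    where dc is the pixel's (small) offset from c_ref.
          |     function perturbedEscape(orbit: Cx[], dc: Cx): number {
          |       let d: Cx = { re: 0, im: 0 };
          |       for (let n = 0; n < orbit.length; n++) {
          |         const Zn = orbit[n];
          |         if (abs2(add(Zn, d)) > 4) return n;   // bailout on the full value
          |         d = add(add(mul({ re: 2 * Zn.re, im: 2 * Zn.im }, d), mul(d, d)), dc);
          |       }
          |       return orbit.length;                    // "inside" (or glitch candidate)
          |     }
          | 
          | Glitches (as discussed in the article) show up roughly when
          | delta grows to the same order of magnitude as Z_n and the
          | low-precision delta stops being a good correction; detecting
          | and re-basing those pixels is the fiddly part.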
        
           | mg wrote:
           | Hey, this is pretty cool!
           | 
           | Have you considered publishing it under an open source
           | license?
           | 
            | Then I could see myself working on some features like a
            | selectable color palette, drag-and-drop and pinch-to-zoom on
            | mobile, and fractional rendering (so that when you move the
            | position, only the new pixels get calculated).
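            | 
            | Roughly what I mean by fractional rendering, as a sketch
            | (computePixel is a placeholder for whatever escape-time
            | routine the renderer already uses):
            | 
            |     // After panning by (dx, dy) whole pixels, reuse the overlapping
            |     // part of the old iteration buffer and only compute the strip of
            |     // newly exposed pixels.
            |     function panAndReuse(
            |       oldBuf: Uint32Array, width: number, height: number,
            |       dx: number, dy: number,
            |       computePixel: (x: number, y: number) => number
            |     ): Uint32Array {
            |       const newBuf = new Uint32Array(width * height);
            |       for (let y = 0; y < height; y++) {
            |         for (let x = 0; x < width; x++) {
            |           const ox = x + dx;   // where this pixel sat in the old view
            |           const oy = y + dy;
            |           newBuf[y * width + x] =
            |             ox >= 0 && ox < width && oy >= 0 && oy < height
            |               ? oldBuf[oy * width + ox]   // reuse old value
            |               : computePixel(x, y);       // compute only the new strip
            |         }
            |       }
            |       return newBuf;
            |     }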
        
             | QuadmasterXLII wrote:
             | https://github.com/HastingsGreer/mandeljs
             | 
             | i'll whack a license on it later today
        
           | morphle wrote:
           | You could polish it by using variable size precision floats
           | or at least quadruple size 128 bit floating point. This
           | requires you to create a programming language compiler or use
           | my parallel Squeak programming language (it is portable) and
           | have that run on Webassembly or WebGL. It would be easier to
           | have it run directly on CPU, GPU and Neural Engine hardware.
           | The cheapest hardware today would be the M4 Mac mini or
           | design your own chips (see my other post in this thread).
           | 
           | An example of this polished solution is [1] but this example
           | does not yet use high precision floating point [2].
           | 
           | [1] https://tinlizzie.org/~ohshima/shadama2/
           | 
           | [2] https://github.com/yoshikiohshima/Shadama
        
             | QuadmasterXLII wrote:
              | This is already implemented - I just did it with pen and
              | paper and wrote the shader directly from my results instead
              | of writing a language first.
             | 
              | https://www.hgreer.com/JavascriptMandelbrot/#an_ugly_hack_th...
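              | 
              | For anyone curious, the standard building block for this
              | kind of hack is "float-float" (double-single) arithmetic:
              | each value is an unevaluated sum hi + lo of two 32-bit
              | floats. A generic sketch, with Math.fround standing in for
              | the GPU's 32-bit rounding (the write-up above has the
              | details of what the shader actually does):
              | 
              |     const f = Math.fround;
              |     type DS = { hi: number; lo: number };
              |     
              |     // Error-free addition of two float32s (Knuth two-sum).
              |     function twoSum(a: number, b: number): DS {
              |       const s = f(a + b);
              |       const v = f(s - a);
              |       const e = f(f(a - f(s - v)) + f(b - v));
              |       return { hi: s, lo: e };
              |     }
              |     
              |     // Add two float-float numbers and renormalize.
              |     function dsAdd(a: DS, b: DS): DS {
              |       const t = twoSum(a.hi, b.hi);
              |       const e = f(t.lo + f(a.lo + b.lo));
              |       const hi = f(t.hi + e);
              |       return { hi, lo: f(e - f(hi - t.hi)) };
              |     }
              |     
              |     // Split a float64 coordinate into hi + lo float32 parts.
              |     function split(x: number): DS {
              |       const hi = f(x);
              |       return { hi, lo: f(x - hi) };
              |     }
              |     
              |     // The sum keeps digits a single float32 would round away.
              |     const c = dsAdd(split(0.123456789012345), split(1e-9));
              |     console.log(c.hi + c.lo);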
        
           | dspillett wrote:
           | _> to get around the fact that shaders only have 32 bit
           | floats_
           | 
            | I wonder if there are places around the set where rounding
            | through the iterations, depending on the number format
            | chosen, materially affects the shape (rather than just
            | changing many pixels a bit so some smoothness or definition
            | is lost).
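            | 
            | One quick way to poke at this (just a sketch; Math.fround
            | stands in for 32-bit rounding, and the point is an arbitrary
            | one near the boundary, not anywhere special):
            | 
            |     // Iterate the same c with float64 and with simulated float32 and
            |     // compare escape counts; near the boundary the rounding errors
            |     // grow with iteration count, so the two can disagree.
            |     function escape(cr: number, ci: number, maxIter: number,
            |                     r: (x: number) => number): number {
            |       const cR = r(cr), cI = r(ci);
            |       let zr = 0, zi = 0;
            |       for (let n = 0; n < maxIter; n++) {
            |         const zr2 = r(zr * zr), zi2 = r(zi * zi);
            |         if (zr2 + zi2 > 4) return n;
            |         const t = r(r(2 * zr * zi) + cI);
            |         zr = r(r(zr2 - zi2) + cR);
            |         zi = t;
            |       }
            |       return maxIter;
            |     }
            |     
            |     const cr = -0.7453, ci = 0.1127;
            |     console.log("float64:", escape(cr, ci, 100000, x => x));
            |     console.log("float32:", escape(cr, ci, 100000, Math.fround));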
        
             | fanf2 wrote:
             | You get effects somewhat like that from perturbation theory
             | glitches, as discussed in the article.
        
         | foobarrio2 wrote:
         | I lost access to my original hn acct so I created this one just
         | give you a heads up I'll be sending an email!
        
         | dwaltrip wrote:
         | I'm building one.
         | 
         | I have an old "beta" release of sorts that's live:
          | https://fracvizzy.com/. Change the color mode to "histogram";
          | it creates much more interesting pictures imo. It doesn't
          | really work on mobile yet, fyi.
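          | 
          | Roughly, histogram mode colors by the rank of each pixel's
          | iteration count rather than the raw count, so the palette is
          | spread evenly over the values that actually occur. A sketch of
          | the usual technique (not the exact fracvizzy code):
          | 
          |     // Map each escaped pixel's iteration count to its cumulative rank
          |     // in [0, 1]; feed the result to any color ramp.
          |     function histogramShade(counts: Uint32Array, maxIter: number): Float64Array {
          |       const hist = new Uint32Array(maxIter + 1);
          |       let total = 0;
          |       for (let i = 0; i < counts.length; i++) {
          |         if (counts[i] < maxIter) { hist[counts[i]]++; total++; }
          |       }
          |       const cdf = new Float64Array(maxIter + 1);
          |       let running = 0;
          |       for (let n = 0; n <= maxIter; n++) {
          |         running += hist[n];
          |         cdf[n] = total > 0 ? running / total : 0;
          |       }
          |       const shade = new Float64Array(counts.length);
          |       for (let i = 0; i < counts.length; i++) {
          |         shade[i] = counts[i] < maxIter ? cdf[counts[i]] : 0; // interior stays 0
          |       }
          |       return shade;
          |     }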
         | 
          | I just got back into the project recently. I'm almost done
          | implementing smooth, "Google Maps"-style continuous zoom. I
         | have lots of ideas for smoother, more efficient exploration as
         | well as expanded coloring approaches and visualization
         | "styles". I'm also working on features for posting / sharing
         | what you find, so you can see the beautiful locations that
         | others find and the visualization parameters they chose. As
         | well as making it easy to bookmark your own finds.
         | 
          | Infinite zoom is probably a long way out (if ever), but with
          | JS numbers you can zoom pretty far before hitting precision
          | issues. There's _a lot_ to explore there. I'd love to get
          | infinite zoom someday though.
         | 
          | Here are a few example locations I found quickly (I have more
          | links saved somewhere....):
         | 
          | * https://fracvizzy.com/?pos[r]=-0.863847413354&pos[i]=-0.2309...
          | 
          | * https://fracvizzy.com/?pos[r]=-1.364501&pos[i]=-0.037646&z=1...
          | 
          | * https://fracvizzy.com/?pos[r]=-0.73801&pos[i]=-0.18899&z=12&...
        
       | jderick wrote:
        | I wonder if this can generalize to the Mandelbulb?
        
       | morphle wrote:
       | In 1986 we wrote a parallel Mandelbrot program in assembly
       | instructions in the 2K or 4K on-chip SRAM of 17 x T414 Transputer
       | chips linked together with 4 x 10 Mbps links each into a cheap
        | supercomputer [1]. It drew 512 x 342 pictures on a Mac 128K used
        | as a terminal, at around 10 seconds per picture.
       | 
       | I later wrote the quadruple-precision floating-point calculations
       | in microcode [2] to speed it up by a factor of 10 and combined 52
       | x T414 with 20 x T800 Transputers with floating point hardware
       | into a larger supercomputer costing around $50K.
       | 
        | With this cheap 72-core supercomputer it still would have taken
        | years to produce the deep zoom of the Mandelbrot set [4] that
        | took them 6 months with 12 CPU cores running 24/7 in 2010 [3]. In
        | 2024 we can buy a $499 M4 Mac mini (20-36 'cores') and calculate
        | it in a few days. If I link a few M4s together with 3x32 Gbps
        | Thunderbolt links into a cheap supercomputer and write the
        | assembly code for all 36 cores (CPU+GPU+Neural Engine), I can
        | render the deep zoom Mandelbrot almost in real time (30 frames
        | per second).
       | 
        | That is Moore's law in practice. The T414 had 900,000
        | transistors; the M4 has 28 billion transistors at 3nm (about
        | 31,000 times as many, at 50% of the price).
       | 
        | The M2 Ultra with 134 billion transistors and the M4 Max (an
        | estimated 100 billion transistors) are larger chips than an M4,
        | but they are relatively more expensive than the M4, so it is
        | cheaper and faster to link together 13 x M4 than to buy 1 x M2
        | Ultra, or 6 x M4 instead of 1 x M4 Max. Cerebras and Nvidia also
        | make larger chips, but again, not as cheap and fast as the M4.
        | Price/performance per watt per dollar is what matters: you want
        | the lowest energy (OPEX) to calculate as many floating point
        | numbers as possible at the lowest purchase cost (CAPEX); you do
        | not want the fastest chips.
       | 
        | You will want to rewrite your software to optimize for the
        | hardware. Even better would be to write the optimum software (for
        | example with variable-precision floating point and large integers
        | in Squeak Smalltalk) and then design the hardware to execute that
        | program at the lowest cost. To do that I designed my own
        | runtime-reconfigurable chips with reconfigurable cores and
        | floating-point hardware precision.
       | 
        | I designed a 48 trillion transistor Wafer Scale Integration (WSI)
        | at 3nm with almost a million cores and a few hundred gigabytes of
        | SRAM on the wafer [5][6]. This undiced wafer would cost around
        | $30K. It would cost over $130 million to manufacture it at TSMC.
        | Compared to the M4, this WSI would have 1714 times as many
        | transistors but cost only 60 times as much, a 28-fold
        | improvement, though it is an apples and oranges comparison. It
        | would be more like a 100-fold improvement because of the larger
        | SRAM, faster on-chip links and lower energy cost of the WSI over
        | the M4.
       | 
        | The largest, fastest supercomputers [7] cost $600 million. To
        | match that with a cluster of M4s would cost around $300 million.
        | To match it with my WSI design would cost $140 million total. For
        | $230 million you get a cheap 3000 x WSI = 144 quadrillion
        | transistor supercomputer immersed in a 10m x 10m x 10m swimming
        | pool that is orders of magnitude faster than the largest, fastest
        | supercomputer, at orders of magnitude lower cost, especially if
        | you run it on solar energy only [8] - even if you bought three
        | 3000-wafer WSI supercomputers ($410 million), ran them only
        | during daylight hours, and spaced them evenly around the equator
        | in cloudless deserts. Energy cost dominates hardware cost over
        | the lifetime of a supercomputer.
       | 
        | All the numbers I mentioned are rounded off or estimates; to be
        | accurate I would first have to define every part of the floating
        | point math, describe the software calculations exactly, and make
        | accurate hardware definitions, which would take me several
        | scientific papers and several weeks to write.
       | 
       | [1]
       | https://www.bighole.nl//pub/mirror/homepage.ntlworld.com/kry...
       | 
       | [2]
       | https://sites.google.com/site/transputeremulator/Home/inmos-...
       | 
       | [3] http://fractaljourney.blogspot.com
       | 
       | [4] https://www.youtube.com/watch?v=0jGaio87u3A
       | 
       | Maybe https://www.youtube.com/watch?v=zXTpASSd9xE took more
       | calculations, it is unclear.
       | 
       | [5] Smalltalk and Self Hardware
       | https://www.youtube.com/watch?v=vbqKClBwFwI
       | 
        | [6] Smalltalk and Self Hardware
        | https://www.youtube.com/watch?v=wDhnjEQyuDk
       | 
       | [7] https://en.wikipedia.org/wiki/El_Capitan_(supercomputer)
       | 
       | [8] https://www.researchgate.net/profile/Merik-
       | Voswinkel/publica...
        
         | morphle wrote:
         | There are virtually no limits (for Mandelbrot and computing in
         | general) because there are few limits on the growth of
         | knowledge [5].
         | 
          | In a few decades we will have learned to take CO2 (carbon
          | dioxide) molecules out of the air [4] and rearrange the carbon
          | atoms into 3D structures atom by atom [6]. We will be able to
          | grow transistors and solar cells virtually for free. Energy
          | will be virtually free, a squandrable abundance of free and
          | clean energy [2]. At that point we will start automatically
          | self-assembling Dyson swarm constructions of solar cells with
          | transistors on the back [1] on the Quebibyte scale to capture
          | all the solar output of the sun [3] and get near-infinite
          | compute for free. We would finally be able to explore the
          | Mandelbrot space at full depth within our lifetimes.
         | 
         | [1] https://gwern.net/doc/ai/scaling/hardware/1999-bradbury-
         | matr... and https://en.wikipedia.org/wiki/Matrioshka_brain
         | 
         | [2] Bob Metcalfe Ethernet
         | https://www.youtube.com/watch?v=axfsqdpHVFU
         | 
         | [3] https://en.wikipedia.org/wiki/Kardashev_scale
         | 
         | [4] Richard Feynman Plenty of Room at the Bottom
         | https://en.wikipedia.org/wiki/There%27s_Plenty_of_Room_at_th...
         | 
         | [5] David Deutsch: Chemical scum that dream of distant quasars
         | https://www.youtube.com/watch?v=gQliI_WGaGk
         | 
         | [6] https://www.youtube.com/watch?v=Spr5PWiuRaY and
         | https://www.youtube.com/watch?v=r1ebzezSV6s
        
           | LargoLasskhyfv wrote:
           | I'd rather prefer to pursue the path of producing potent
           | phytochemicals to unleash perfect psionic powers, thereby
           | shortcutting the need for all these boring physical
           | procedures, instead persisting mind over matter as an
           | afterthought.
        
           | fluoridation wrote:
           | LOL.
           | 
           | >We would finally be able to explore the Mandelbrot space at
           | full depth
           | 
           | What does that mean? The Mandelbrot set is infinitely
           | intricate.
        
           | mikestorrent wrote:
           | This is the future I want to live in. Bountiful cheap energy
           | is much more attractive than the futures most people write
           | about now, that vary between some sort of managed decline,
           | dystopianism, or other negativity.
           | 
            | I hope to see you or your faithful recreation in the
            | Matrioshka brain one day.
        
         | mikestorrent wrote:
         | Exceedingly interesting! Say, I have a board from back in the
         | 80s that you may know about - nobody else I've asked has any
         | idea. It's a "Parallon" ISA card from a company called Human
         | Devices, that has something like 8 NEC V20s on it. I think it's
         | an early attempt at an accelerator card, maybe for neural
         | networks, not sure.
         | 
         | Some reference about its existence here, in a magazine that
         | (ironically? serendipitously?) features a fractal on the cover:
         | http://www.bitsavers.org/magazines/Micro_Cornucopia/Micro_Co...
         | 
         | Ever heard of such a thing? I think at this point, I'm trying
         | to find someone who wants it, whether for historical purposes
         | or actually to use.
        
           | morphle wrote:
            | Yes, I've heard of such a thing [1]; it is probably worth
            | $50. This PCB is just a cluster of 8 V20 (Intel
            | 8088-compatible) 16-bit processors, nothing to write home
            | about. It is not considered an early attempt at an
            | accelerator card. Depending on your definition, many were
            | done earlier [2], going back to the earliest computers 2000
            | years ago. My favorite would be the 16-processor Alto [3].
           | 
            | In 1989 I built my 4th Transputer supercomputer for a
            | customer who programmed binary neural networks.
           | 
            | In those early days everyone would use the Mandelbrot set and
            | neural networks as simple demos and benchmarks of any chip or
            | computer, especially supercomputers. So it is not ironic or
            | serendipitous that a magazine would have a Mandelbrot image
            | and an article on a microprocessor in the same issue. My Byte
            | Magazine article on Transputers and DIY supercomputers also
            | described both together.
           | 
           | [1] https://en.wikipedia.org/wiki/NEC_V20
           | 
           | [2] https://en.wikipedia.org/wiki/History_of_supercomputing#:
           | ~:t....
           | 
           | [3] https://en.wikipedia.org/wiki/Xerox_Alto
        
             | mikestorrent wrote:
             | Thanks for answering, appreciated. I suppose I will just
             | hang onto it as an interesting piece of history, though I
              | never thought it would be worth much - I'm mostly just
              | hoping to find someone out there who wants it for some
              | personal reason so I can "send it home", so to speak.
             | 
             | Is anyone doing anything with transputer technology now? Do
             | you think it has a chance at resurgence?
             | 
             | > the earliest computers 2000 years ago
             | 
             | Typo, exaggeration, or a reference to something like the
             | Antikythera Mechanism?
        
               | morphle wrote:
               | >Is anyone doing anything with transputer technology now?
               | 
                | Yes, our Morphle Engine Wafer Scale Integration and our
                | earlier SiliconSqueak microprocessor designs and
                | supercomputers borrow many special features from the
                | Transputer designs by David May, and also from the Alto
                | design by Chuck Thacker, the SOAR RISC by David Ungar and
                | a few of the B5000-B6500 designs by Bob Barton. Most of
                | all they build on the Smalltalk, Squeak and reflective
                | Squeak VM software designs by Alan Kay and Dan Ingalls.
               | 
               | >reference to Antikythera Mechanism[5]?
               | 
                | Yes. I could have referenced the Jacquard Loom [1],
                | Babbage's Analytical Engine or Difference Engine [2],
                | Alan Turing, Ada Lovelace and dozens of other contenders
                | for 'first' and 'oldest' computational machines; they are
                | all inaccurate, as are these lists [3][4]. Or I could
                | have referenced only the ones from my own country [6].
               | 
               | [1] https://en.wikipedia.org/wiki/Jacquard_machine
               | 
               | [2] https://en.wikipedia.org/wiki/Difference_engine
               | 
               | [3] https://www.oldest.org/technology/computers/
               | 
               | [4] http://www.computerhistories.org/
               | 
               | [5] https://en.wikipedia.org/wiki/Antikythera_mechanism
               | 
                | [6] Oldest Dutch computers?
                | https://www.cwi.nl/en/about/history/#:~:text=CWI%20developed....
        
       | ccvannorman wrote:
       | Wonderful article on fractals and fractal zooming/rendering! I
       | had never considered the inherent limitations and complications
       | of maintaining accuracy when doing deep zooms. Some questions
       | that came up for me while reading the article:
       | 
       | 1. What are the fundamental limits on how deeply a fractal can be
       | accurately zoomed? What's the best way to understand and map this
       | limit mathematically?
       | 
        | 2. Is it possible to renormalize a fractal (perhaps only "well
        | behaved"/"clean" fractals like the Mandelbrot set) at an
        | arbitrary level of zoom by deriving a new formula for the
        | fractal at that level of zoom? (Intuition says no; well, maybe,
        | but with additional complexities/limitations; perhaps it just
        | pushes the problem around.) ( _My experience with fractal math
        | is limited._ ) I'll admit this is where I hit my own limits of
        | knowledge in the article, where it discussed normalizing the
        | mantissa, with the limitation that you then need to compute each
        | pixel on the CPU.
       | 
       | 3. If we assume that there are fundamental limits on zoom,
       | mathematically speaking, then should we consider an alternative
       | that _looks_ perfect with no artifacts (though it would not be
       | technically accurate) at arbitrarily deep levels of zoom? Is it
       | in principle possible to have the mega-zoomed-in fractal appear
       | flawless, or is it provable that at some level of zoom there is
       | simply no way to render any coherent fractal or appearance of
       | one?
       | 
        | I always thought of fractals as a view into infinity from the 2D
        | plane (indeed the term "fractal" is meant to convey a fractional,
        | non-integer dimension). But I never considered our limits as
       | sentient beings with physical computers that would never be able
       | to fully explore a fractal, thus it is only an infinity in idea,
       | and not in reality, to us.
        
         | ttoinou wrote:
          | 1. No limit. But you need to find an interesting point; for the
          | Mandelbrot set the information is encoded in the many digits of
          | that (x, y) point. Otherwise you'll end up in a flat region at
          | some point when zooming.
         | 
          | 2. Renormalization to do what? In the case of the Mandelbrot
          | set you can use a nearby point to generate its Julia set and
          | get similar patterns in a more predictable way.
         | 
          | 3. You can compute the perfect version, but it takes more time;
          | this article discusses optimizations and shortcuts.
        
           | ccvannorman wrote:
           | 1. There must be a limit; there are only around 10^80 atoms
           | in our universe, so even a universe-sized supercomputer could
           | not calculate an arbitrarily deep zoom that required 10^81
           | bits of precision. Right?
           | 
           | 2. Renormalization just "moves the problem around" since you
           | lose precision when you recalculate the image algorithm at a
           | specific zoom level. This would create discrepancies as you
           | zoom in and out.
           | 
            | 3. You cannot, because of the fundamental limits on computing
            | power. I _think_ you cannot compute a mathematically accurate
            | and perfect Mandelbrot set at an arbitrarily high level of
            | zoom, say 10^81, because we don't have enough compute or
            | memory available to have the required precision.
        
             | ttoinou wrote:
              | 1. The Mandelbrot set is infinite. The number pi is
              | infinite too and contains more information than the
              | universe.
             | 
              | 2. I don't know what you mean or are looking for with
              | renormalization, so I can't answer further.
             | 
              | 3. It depends on what you mean by computing the Mandelbrot
              | set. We are always making approximations for visualisation
              | by humans; that's what we're talking about here. If you
              | mean we will never discover more digits of pi than there
              | are atoms in the universe, then yes, I agree, but that's a
              | different problem.
        
             | morphle wrote:
              | We can create enough compute and SRAM memory for a few
              | hundred million dollars. If we apply science there are
              | virtually no limits within a few years.
             | 
             | See my other post in this discussion.
        
             | fluoridation wrote:
             | 1. You asked about the fundamental limits, not the
             | practical limits. Obviously practically you're limited by
             | how much memory you have and how much time you're willing
             | to let the computer run to draw the fractal.
        
         | pepinator wrote:
          | In the case of the Mandelbrot set, there is a self-similar
          | renormalization process, so you can obtain such a formula. For
          | the "fixed points" of the renormalization process, the formula
          | is super simple; for other points, you might need more
          | computation, but it's nevertheless an efficient method. There
          | is a paper by Bartholdi where he explains this in terms of
          | automata.
        
         | LegionMammal978 wrote:
         | As for practical limits, if you do the arithmetic naively, then
         | you'll generally need O( _n_ ) memory to capture a region of
         | size 10^- _n_ (or 2^- _n_ , or any other base). It seems to be
         | the exception rather than the rule when it's possible to use
         | less than O( _n_ ) memory.
         | 
         | For instance, there's no known practical way to compute the
         | 10^100th bit of sqrt(2), despite how simple the number is. (Or
         | at least, a thorough search yielded nothing better than
         | Newton's method and its variations, which must compute all the
          | bits. It's even worse than pi with its BBP formula.)
         | 
         | Of course, there may be tricks with self-similarity that can
         | speed up the computation, but I'd be very surprised if you
         | could get past the O( _n_ ) memory requirement just to
         | represent the coordinates.
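          | 
          | To make the O( _n_ ) coordinate cost concrete, a toy
          | fixed-point iteration with BigInt (nothing optimized; fracBits
          | just has to grow linearly with the zoom depth):
          | 
          |     // Fixed-point Mandelbrot step with fracBits fractional bits per
          |     // coordinate: resolving a region of width 2^-z needs roughly z
          |     // (plus guard) bits, so memory and per-op cost grow as O(z).
          |     function escapeFixedPoint(cr: bigint, ci: bigint,
          |                               fracBits: bigint, maxIter: number): number {
          |       const four = 4n << (2n * fracBits); // 4.0 at the doubled scale
          |       let zr = 0n, zi = 0n;
          |       for (let n = 0; n < maxIter; n++) {
          |         const zr2 = zr * zr, zi2 = zi * zi; // these carry 2*fracBits
          |         if (zr2 + zi2 > four) return n;
          |         const t = ((2n * zr * zi) >> fracBits) + ci;
          |         zr = ((zr2 - zi2) >> fracBits) + cr;
          |         zi = t;
          |       }
          |       return maxIter;
          |     }
          |     
          |     // ~200 fractional bits is enough for a ~2^-190 wide view.
          |     const fracBits = 200n;
          |     const scale = 1n << fracBits;
          |     const cr = (-3n * scale) / 4n;          // c ~ -0.75 + 0.1i
          |     const ci = scale / 10n;
          |     console.log(escapeFixedPoint(cr, ci, fracBits, 1000));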
        
         | calibas wrote:
         | > What are the fundamental limits on how deeply a fractal can
         | be accurately zoomed?
         | 
         | This question is causing all sorts of confusion.
         | 
          | There is no _fundamental_ limit on how much detail a fractal
          | contains, but if you want to render it, there's always going
          | to be a _practical_ limit on how far it can accurately be
          | zoomed.
         | 
         | Our current computers kinda struggle with hexadecuple precision
         | floats (512-bit).
        
       | 65 wrote:
       | How does an article about visualizing fractals manage to have
       | ZERO images in it?
        
         | ttoinou wrote:
          | Well, this is aimed at fractalers like me who want to implement
          | deep zooms ourselves, rather than being a tutorial for newbies.
        
       | PaulHoule wrote:
        | I have been thinking about the (I think) still unsolved problem
        | that I was told about in grad school and recently saw mentioned
        | in a 1987 issue of _Byte_ magazine.
       | 
       | Namely people make these Poincare section plots for Hamiltonian
       | systems like
       | 
       | https://mathematica.stackexchange.com/questions/61637/poinca...
       | 
        | The section that looks like a bunch of random dots is where
        | chaotic motion is observed. There's a lot of reason to think that
        | area should have more structure in it, because the proof that
        | there are an infinite number of unstable periodic orbits in there
        | starts with knowing there are an infinite number of stable
        | periodic orbits and that there is an unstable orbit on the
        | separatrix between them. Those plots are probably not accurate at
        | all because the finite numeric precision interacts with the
        | sensitivity to initial conditions. The _Byte_ article suggests
        | that it ought to be possible to use variable-precision math,
        | bounding boxes and such to make a better plot, but so far as I
        | know it hasn't been done.
       | 
       | (At this point I care less about the science and more about
       | showing people an image they haven't seen before.)
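        | 
        | For anyone who hasn't seen how these plots are made, a minimal
        | sketch for the Henon-Heiles Hamiltonian (plain RK4 in float64,
        | i.e. exactly the kind of finite-precision computation whose
        | accuracy in the chaotic region is in question here):
        | 
        |     // H = (px^2 + py^2)/2 + (x^2 + y^2)/2 + x^2*y - y^3/3.
        |     // Record (y, py) each time the orbit crosses x = 0 with px > 0.
        |     type State = [number, number, number, number]; // [x, y, px, py]
        |     
        |     const deriv = ([x, y, px, py]: State): State =>
        |       [px, py, -x - 2 * x * y, -y - x * x + y * y];
        |     
        |     function rk4(s: State, h: number): State {
        |       const ax = (a: State, b: State, k: number): State =>
        |         [a[0] + k * b[0], a[1] + k * b[1], a[2] + k * b[2], a[3] + k * b[3]];
        |       const k1 = deriv(s);
        |       const k2 = deriv(ax(s, k1, h / 2));
        |       const k3 = deriv(ax(s, k2, h / 2));
        |       const k4 = deriv(ax(s, k3, h));
        |       return [0, 1, 2, 3].map(
        |         i => s[i] + (h / 6) * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
        |       ) as State;
        |     }
        |     
        |     function section(s: State, h: number, steps: number): [number, number][] {
        |       const pts: [number, number][] = [];
        |       for (let i = 0; i < steps; i++) {
        |         const next = rk4(s, h);
        |         if (s[0] < 0 && next[0] >= 0 && next[2] > 0) {
        |           const t = -s[0] / (next[0] - s[0]);  // interpolate to x = 0
        |           pts.push([s[1] + t * (next[1] - s[1]), s[3] + t * (next[3] - s[3])]);
        |         }
        |         s = next;
        |       }
        |       return pts;
        |     }
        |     
        |     // One orbit at energy ~1/8, where regular and chaotic regions coexist.
        |     console.log(section([0, 0.1, 0.49, 0], 0.01, 200000).length);
        | 
        | Every dot in the chaotic band comes from a long float64
        | trajectory with sensitive dependence on initial conditions,
        | which is why its accuracy is suspect and why the
        | variable-precision idea is appealing.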
        
       ___________________________________________________________________
       (page generated 2025-01-03 23:00 UTC)