[HN Gopher] Numbers Are Leaves
       ___________________________________________________________________
        
       Numbers Are Leaves
        
       Author : nodar-d
       Score  : 154 points
       Date   : 2024-12-26 14:23 UTC (2 days ago)
        
 (HTM) web link (www.christo.sh)
 (TXT) w3m dump (www.christo.sh)
        
       | max_entropy wrote:
       | This talks about representing numbers as graphs whilst showing
       | that the graphs can appear to be reminiscent of leaves, not that
       | the numbers are leaf nodes of graphs, which is where I thought it
       | was going at first.
        
       | willvarfar wrote:
       | Beautiful!
       | 
       | (And now we all go cross eyed trying to see if there's a pattern
       | for factoring in there somewhere...)
        
       | nyrikki wrote:
       | > I would like to understand why numbers looks like leaves and
       | not some other fractal like snowflakes for example (the inverse
       | question, why biological leaves have this specific self-similar
       | structure may yield more fruit).
       | 
        | Snowflakes are dendrite formations highly influenced by how
        | nucleation happens; here, the graph layout choices heavily
        | impact what you will see.
       | 
        | Von Neumann ordinals are possibly better approached as
        | hierarchical branching processes; Tokunaga self-similarity is
        | one fairly elegant path to follow in a deterministic case such
        | as this. (IMHO)
        
       | aithrowawaycomm wrote:
       | > I think a fair thing to ask is if the force-directed layout
       | engine is uniquely responsible for the leaf-like structure and
       | this has nothing to do with the Von Neumann ordinals themselves.
       | 
       | > To check this I generated some Collatz trees and they ended up
       | looking like microbes. I think it's safe to say the answer is no.
       | 
       | "Uniquely responsible" seems to be doing too much work. These
       | structures are reminiscent of dendritic fractals coming about
       | from diffusion-limited aggregation in electrochemical
       | deposition[1], which is a transitional phenomenon with a phase
        | diagram (molar concentration of the electrolyte vs. voltage).
        | There is a sweet spot in the middle where you get intricate
        | dendritic crystals; outside of that you get smooth layers or
        | big blobs /
       | spikes without much internal structure.
       | 
       | I suspect it is primarily the modeling of the forces which is
       | responsible for the behavior, with the topological structure of
       | the tree somewhat indirectly corresponding to the chemical and
       | electrical parameters. I am by no means an expert but it seems
        | likely to me that the contrast between these von Neumann
        | dendrites and the Collatz trees reflects a fairly shallow
        | relationship: lots of trees with similar graph-theoretic
        | properties to the von Neumann trees would also produce the
        | leaf-like fractals, but with no
       | meaningful relationship to the natural numbers. But it would be
       | interesting to make this more precise.
       | 
       | [1] https://en.wikipedia.org/wiki/Diffusion-limited_aggregation
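        | 
        | For intuition, a minimal on-lattice DLA sketch (not the
        | electrochemical model above, and the parameters are arbitrary):
        | random walkers are released outside the cluster and stick on
        | contact, which is enough to grow the familiar dendrites.
        | 
        |     import math, random
        | 
        |     def dla(n_particles=500, kill=20):
        |         cluster = {(0, 0)}          # seed particle at the origin
        |         max_r = 1.0
        |         for _ in range(n_particles):
        |             # release each walker on a circle outside the cluster
        |             r = max_r + 5
        |             a = random.uniform(0, 2 * math.pi)
        |             x, y = int(r * math.cos(a)), int(r * math.sin(a))
        |             while x * x + y * y <= (r + kill) ** 2:
        |                 nbrs = {(x+1, y), (x-1, y), (x, y+1), (x, y-1)}
        |                 if nbrs & cluster:  # touched the aggregate: stick
        |                     cluster.add((x, y))
        |                     max_r = max(max_r, math.hypot(x, y))
        |                     break
        |                 x += random.choice((-1, 0, 1))
        |                 y += random.choice((-1, 0, 1))
        |         return cluster              # walkers that drift off are lost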
        
       | ben_w wrote:
       | > why biological leaves have this specific self-similar structure
       | may yield more fruit
       | 
       | I hope the pun was intentional :)
       | 
       | That said, I think the actual answer is in part due to the choice
       | of presentation rather than the mathematical structure -- Force-
       | Directed graph layout -- despite the line immediately after this
       | quote.
       | 
       | The outer surface of a leaf doesn't physically blend into other
       | leaves that it touches, so it gets repelled mechanically during
       | growth, and it is also connected mechanically to the parts it
       | grew from.
       | 
       | Consider the failure of Collatz trees to look like leaves as a
       | counterpoint to the failure of the `dot` graphs to look like
       | leaves.
        
       | 1970-01-01 wrote:
       | Too few examples. Get to 2039484 and report back if it looks like
       | a leaf or a blob.
        
         | openquery wrote:
         | The representation of n has 2^n vertices. 2^2039484 might be a
         | bit too large to render...
        
       | philipov wrote:
       | I went into it thinking they meant numbers are leaf nodes of a
       | graph, but no: they meant the entire graph _looks like_ a leaf.
        
       | TZubiri wrote:
       | > I would like to understand why numbers looks like leaves
       | 
        | First of all, they don't. The graphs don't look like pinnate or
        | palmate leaves. There is some resemblance to an alternating
        | arrangement of leaves, but that's the distribution of leaves
        | rather than the shape of a leaf itself, and it's a stretch.
       | 
        | Secondly, I'll take the generous interpretation of the question,
        | which is why the graph mathematically looks like leaves rather
        | than a question of biology; whatever resemblance there is, is a
        | coincidence. Which brings us to
       | 
        | Third, OP is playing a game with numbers with no objective goal
        | (yes, all maths is this, but this is straight-up chmess); there
        | is no ultimate meaning to be derived from it.
        
         | Etheryte wrote:
         | I think the last point doesn't really hold its own in any way.
         | Many discoveries throughout history have started from someone
         | just playing around with an idea, toying with it at first, but
         | eventually becoming obsessed. The thing is, there's no way to
         | tell beforehand. It might be a toy with no ultimate use or
         | meaning, or it might lead to something entirely novel somewhere
         | down the line. That's why play, in a very broad sense, is a
         | core part of science and invention.
        
           | TZubiri wrote:
            | If 1 out of 10 chmessicians discovers something useful, the
           | discoverer had good taste and deserves credit.
           | 
           | There is no insurance redistributing credit amongst all of
           | the pointless searches.
           | 
           | The guys who invented imaginary numbers or eigenvectors
           | weren't just throwing darts at a board and got "lucky".
        
         | ulbu wrote:
         | abstraction for abstraction's sake? pure abstraction?
         | abstracted abstraction?
        
       | isoprophlex wrote:
       | This has almost nothing to do with numbers or ZFC and almost
        | everything to do with how graph layout algos produce their
       | outputs...
        
       | dr_dshiv wrote:
       | I rather like the idea that deep in the platonic realm of pure
       | mathematics, there is something that is _alive._
        
       | putzdown wrote:
       | Am I wrong in thinking that the visual below "Finally with 5 the
       | self-similarity continues as expected" is wrong? I'm thinking
       | that the second-to-right and rightmost subtrees need to be
       | larger, to have a sequence of four and five nodes respectively in
       | their rightmost branches.
        
       | hulium wrote:
       | These are just binomial trees, which might be familiar to those
       | who know their classic data structures.
       | 
       | https://en.wikipedia.org/wiki/Binomial_heap
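        | 
        | To make the connection concrete, a tiny check (my own sketch, not
        | from the article): the unshared tree for the ordinal n is a root
        | whose subtrees are the trees for 0, ..., n-1 -- exactly the
        | recursive shape of the binomial tree B_n -- so it has 2^n nodes.
        | 
        |     def node_count(n):
        |         # root plus one subtree per predecessor 0, ..., n-1,
        |         # the same recurrence satisfied by binomial trees B_n
        |         return 1 + sum(node_count(k) for k in range(n))
        | 
        |     assert [node_count(n) for n in range(6)] == [1, 2, 4, 8, 16, 32]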
        
       | macawfish wrote:
        | I love the imagination here, and imagination in general, but the
        | framing really stretches it... I think this whole article would
        | be more substantive if it were a little more grounded in the
        | concept of induction.
       | 
        | And it would help if it admitted very openly that in the sentence
        | "numbers are leaves" the word "are" is about the existence of an
        | isomorphism between the natural numbers and a series of nested
        | sets... but that it's _far_ from the only isomorphism; induction
        | is _everywhere_ in math and computer science. I imagine some
        | little kid reading this and clinging to a reductionist idea that
        | "numbers are (only) leaves", but they aren't _just_ leaves.
       | 
        | Anyhow, the setup is really nice and inspiring; it just ended up
        | feeling like a tease. Hope to see a follow-up!
        
         | openquery wrote:
         | Author here. Wasn't expecting to see this on the front page!
         | 
          | I'm really very far from a mathematician and this was a
          | write-up of a fun side project. I think the title would be
          | unforgivably misleading in a formal context (if this were a
          | paper claiming any new insights), but really it was a fun side
          | project I wanted to write about. Maybe you read this and
          | learned a little bit about set theory if you had no idea what
          | it was (much like me).
         | 
          | In general I resent popular science (especially in theoretical
          | physics) which tries to reduce deep and interesting topics to
          | poorly-thought-out analogies, or Michio Kaku-style orating
          | which assumes string theory a priori, so that later you have
          | conversations with people who think string theory is
          | established and tested because they watched a 40-minute video
          | of him on YT - but again, my positioning here is not to
          | educate per se.
         | 
          | Having said all this, I need to get better at giving titles to
          | the things I write - my other post about trying to build AGI
          | in Rust got similar criticism.
         | 
         | Either way thanks for the feedback!
        
           | Vox_Leone wrote:
            | I'm under the impression that, at least theoretically, Von
            | Neumann's principles of self-replication, game theory, or
            | optimization could be applied in the context of designing
            | neural network structures.
           | 
           | You could think about organizing a neural network with layers
           | or nodes that are indexed by Von Neumann ordinals, where the
           | structure of the network follows the natural progression of
           | ordinals. For example:
           | 
           | Each layer or node in the neural network could correspond to
           | a finite ordinal (such as 0, 1, 2, etc.) or transfinite
            | ordinal (like ω, ω+1, etc.). The way the network expands
           | and evolves could follow the ordering and progression
           | inherent in the Von Neumann ordinal system.
           | 
           | This could lead to an architecture where early layers (low
           | ordinals) represent simpler, more basic computations (e.g.,
           | feature extraction or basic transformations). Later layers
           | (higher ordinals) could correspond to more complex, abstract
           | processing or deeper, more abstract representations.
           | 
           | But I'm afraid there is no hardware substrate upon which to
           | build such a thing.
        
       | nthingtohide wrote:
       | I think people will like the following tangent.
       | 
       | https://en.m.wikipedia.org/wiki/Benacerraf%27s_identificatio...
       | 
       | In the philosophy of mathematics, Benacerraf's identification
       | problem is a philosophical argument developed by Paul Benacerraf
       | against set-theoretic Platonism and published in 1965 in an
       | article entitled "What Numbers Could Not Be". Historically, the
       | work became a significant catalyst in motivating the development
       | of mathematical structuralism.
       | 
       | The identification problem argues that there exists a fundamental
       | problem in reducing natural numbers to pure sets. Since there
       | exists an infinite number of ways of identifying the natural
       | numbers with pure sets, no particular set-theoretic method can be
       | determined as the "true" reduction.
       | 
       | What Numbers Could Not Be
       | 
       | https://youtu.be/H5SocLNkT9M?si=Fk2Hmpw3yOtDW7GS
        
         | bubblyworld wrote:
         | Thanks, that was an interesting rabbit hole. Although I can't
         | help but feel there's a philosophical map/territory confusion
         | here. Like, sure, numbers can't possibly just "be" sets because
         | there are many different models of the naturals in (ZF) set
         | theory, even in higher order logics. But I feel like a
         | Platonist would just counter that of course this is the case -
          | we are simply _modelling_ the properties of the "true"
         | naturals with these sets, in the same way that differential
         | equations can model the behaviour of fluids without "being"
         | water. Nobody writes down Navier-Stokes and expects to get wet!
        
       | itishappy wrote:
        | Because the representation is a ragged tree. If it were of
        | uniform depth it would look like a blob, but because different
        | arms have different lengths, they create structure in the layout
        | algorithm.
        
       | diyseguy wrote:
        | Perhaps if he were able to use some kind of force-directed 3-D
       | algorithm, instead of a 2-D one, they might resemble something
       | other than leaves, which could be interesting.
        
       | epgui wrote:
       | Setting aside the rest of the article, there is one thing I've
       | never really understood the motivation for, and I think this
       | article really highlights it well.
       | 
       | > "Well congratulations this works! We can represent numbers
       | using singleton sets (sets with one element). However, it would
       | be nice if our sets had some more structure. Specifically we
       | would like the set corresponding to the number n to have n
       | elements."
       | 
       | Why? What's the motivation here?
       | 
        | It seems to me like `next(x) = {x}` is simpler than `next(x) = x
        | ∪ {x}`, and I'm not totally clear on what the extra complexity
       | buys us.
       | 
        | I am of course familiar with the structure `next(x) = x ∪ {x}`,
       | having seen it in textbooks and in a set theory / mathematical
       | foundations class, but I feel like I've never really understood
       | what insight this structure captured. It seems like it's always
       | presented matter-of-factly.
       | 
       | Anyone?
        
         | tromp wrote:
         | Representing the natural number n by a set of size n turns out
         | to be rather useful. And the obvious choice for a set of n
         | elements is the representations of the n natural numbers
         | 0..n-1.
        
           | epgui wrote:
           | I think I can see why... But the obvious question to me
           | becomes:
           | 
           | - What do the operations of addition and multiplication look
           | like with this structure? Doesn't this seem a bit complex?
        
             | bpoudev wrote:
             | I think it's (like so many things) a question of tradeoffs.
             | Programmers often think of complexity (and hence
             | performance) of operations, but that is not important to a
             | mathematician.
             | 
             | The fundamental operation, the successor function, does not
              | look much different: S(n) = n ∪ {n} vs S(n) = {n}.
              | Mathematics usually defines addition in terms of this
              | function, so that n + m = S((n-1) + m) for n > 0 and
              | 0 + m = m. This can be done via induction and works
              | equally well, regardless of which "implementation" we
              | choose. Similarly, multiplication is repeated addition.
              | Seen in this way, both "implementations" of natural
              | numbers lead to horribly
             | inefficient, but ultimately very similar, addition and
             | multiplication operations.
             | 
              | However, the representation S(n) = n ∪ {n} leads to a
              | very simple definition of "a finite set of size n": it
              | is simply a set that admits a bijection with n. This,
              | in turn, leads to a much easier arithmetic. Instead of
             | manipulating a specific set representing a given number, we
             | can say that any set of size n can represent the number n.
             | Then addition simply becomes disjoint union, and
             | multiplication becomes Cartesian product, from which things
              | like associativity and commutativity can be proven much
              | more easily than with the inductive definition.
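              | 
              | A quick sketch of the two successor functions with Python
              | frozensets (my own illustration; succ_z / succ_v / encode
              | are just ad-hoc names). The asserts show the cardinality
              | property that makes the von Neumann version convenient:
              | 
              |     E = frozenset()             # the empty set, i.e. 0
              | 
              |     def succ_z(n):              # S(n) = {n}
              |         return frozenset({n})
              | 
              |     def succ_v(n):              # S(n) = n ∪ {n}
              |         return n | frozenset({n})
              | 
              |     def encode(k, succ):
              |         n = E
              |         for _ in range(k):
              |             n = succ(n)
              |         return n
              | 
              |     # von Neumann: the set for k has k elements and is
              |     # literally the set of its predecessors
              |     assert len(encode(4, succ_v)) == 4
              |     assert encode(3, succ_v) == frozenset(
              |         {encode(i, succ_v) for i in range(3)})
              | 
              |     # Zermelo-style: always a singleton
              |     assert len(encode(4, succ_z)) == 1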
        
         | xanderlewis wrote:
         | Look up 'transitive sets' if you haven't heard of them already.
         | 
         | The primary answer to your question is that in the von Neumann
         | definition the ordinals are transitive and well ordered by the
         | epsilon (membership) relation, which is a pair of stipulations
         | that can then be used as a _definition_ of the ordinals if you
         | like. This in turn is nice for many other reasons!
         | 
         | Also, you can make convenient definitions like defining the
         | supremum of a set of ordinals to be the union of that set. The
         | union of two of your ordinals usually won't be an ordinal.
         | 
         | In general, it's nice to have an ordinal simply be _the set of
          | its predecessors_, which is something that this definition
         | implies.
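          | 
          | To see that last point concretely, a quick sketch (my own; the
          | helper z() builds the singleton-style encoding from the parent
          | comment):
          | 
          |     def z(k):                  # next(x) = {x} encoding
          |         n = frozenset()
          |         for _ in range(k):
          |             n = frozenset({n})
          |         return n
          | 
          |     # the union of two distinct singleton-style numbers has
          |     # two elements, so it is never such a number itself
          |     assert z(2) | z(3) not in {z(k) for k in range(10)}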
        
           | epgui wrote:
           | Thanks for the pointers!
        
         | noam_k wrote:
         | Not a professional mathematician, but you have the benefit of
         | set operations mapping to functions you're familiar with.
         | 
         | For example, set union becomes the max function.
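          | 
          | A tiny self-contained check (my own sketch; vn() just encodes
          | an int as a von Neumann natural using Python frozensets):
          | 
          |     def vn(k):
          |         n = frozenset()
          |         for _ in range(k):
          |             n = n | frozenset({n})
          |         return n
          | 
          |     assert vn(3) | vn(5) == vn(5)   # union acts as max
          |     assert vn(3) & vn(5) == vn(3)   # intersection acts as min
          |     assert vn(4) < vn(6)            # proper subset = numeric <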
        
           | epgui wrote:
           | That one is neat, but how do you define addition and
           | multiplication on these structures?
        
             | xanderlewis wrote:
             | You use (transfinite, if your ordinals are large enough)
             | recursion! Just define a + 1 to be the successor of a --
             | succ(a) -- and then, assuming we've defined a + b, define
             | 
             | a + (b + 1) := (a + b) + 1 = succ(a + b)
             | 
             | (it's only slightly more complicated for infinite ordinals)
             | 
             | You can do a similar thing for multiplication, and
             | exponents, and so on.
             | 
             | Technically, you have to use induction to prove that this
             | definition indeed works to define the operations for all
             | ordinals.
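              | 
              | For finite ordinals you can even run this recursion
              | directly on the von Neumann sets themselves. A small
              | sketch (my own, not from the article), using the fact that
              | a finite nonzero ordinal is succ of the union of its
              | elements:
              | 
              |     def succ(n):
              |         return n | frozenset({n})
              | 
              |     def vn(k):             # int -> von Neumann natural
              |         n = frozenset()
              |         for _ in range(k):
              |             n = succ(n)
              |         return n
              | 
              |     def add(a, b):
              |         if not b:          # b is 0, the empty set
              |             return a
              |         # union of b's elements = b's predecessor p
              |         p = frozenset().union(*b)
              |         return succ(add(a, p))   # a + succ(p) = succ(a + p)
              | 
              |     assert add(vn(2), vn(3)) == vn(5)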
        
         | Chinjut wrote:
         | Encoding n as the set of all m < n is called "von Neumann
         | ordinals". Encoding (positive) n as just the singleton set
         | containing n's immediate predecessor is called "Zermelo
         | ordinals". The main advantage of using the former
         | representation rather than the latter is that it allows
         | uniformly encoding not just finite ordinals, but also
         | transfinite ordinals, many of which do not have an immediate
         | predecessor. E.g., in the von Neumann ordinal system, the
         | infinite set of all finite ordinals may itself be interpreted
         | as an ordinal value larger than every finite one. (And then the
          | set of finite ordinals ∪ {the set of finite ordinals} becomes
         | yet a larger transfinite ordinal still, and so on...)
        
         | d-lisp wrote:
          | It always felt arbitrary to me :
          | 
          |     next(x)={x} would give
          |     1={0}
          |     2={{0}}
          |     3={{{0}}}
          | 
          |     Kuratowski's encoding gives :
          |     0=O=()
          |     1={O}=(0)
          |     2={O,{O}}=(0,1)
          |     3={O,{O},{O,{O}}}=(0,1,2)
          | 
          |     The cardinal of N is n and
          |     every element in N are the predecessors of n.
          | 
          |     Von Neuman's encoding gives :
          |     0=O
          |     1=0U{0}={O,{O}}
          |     2=1U{1}={O,{O},{O,{O}}}
          | 
          |     Now the cardinal of N is n+1, and n is the maximum
          |     of the set N defining n.
          | 
          | Both Von Neuman's and Kuratowski's encoding allows us to define
          | ordered tuples, but I cannot understand how to write the tuples
          | for Von Neuman's in the context of natural numbers.
         | 
          | 2 is {O,{O},{O,{O}}} with Von Neuman's; we can recognize 0 and
          | 1 as the first and second elements of the tuple: what is the
          | third one?
        
           | openquery wrote:
           | 2 = {0, 1} = {O,{O}} by the Von Neumann ordinal definition.
        
           | epgui wrote:
            | For Von Neuman:
            | 
            |     1 = O U {O} = {O}
            |     2 = 1 U {1} = {O,{O}}
        
             | d-lisp wrote:
              | True; then it doesn't differ from Kuratowski's encoding,
              | or am I missing something?
        
       | sagebird wrote:
        | A more compact and beautiful relation exists between integers
        | and finite rooted trees, imo.
       | 
       | David W. Matula found a correspondence between trees and integers
       | using prime factorization, and reported it in 1968 in SIAM: "A
       | Natural Rooted Tree Enumeration by Prime Factorization", SIAM
       | Rev. 10, 1968, p.273 [1]
       | 
        | Others have commented on it before; search the web for Matula
        | Numbers.
       | 
       | I independently found this relation when working on a bar code
       | system that was topologically robust to deformation. I wrote a
       | document that explained this relation here[2].
       | 
       | I created an interactive javascript notebook that draws related
       | topological diagrams for numbers. [3]
       | 
       | [1] http://williamsharkey.com/matulaSIAM.png
       | 
       | [2] https://williamsharkey.com/integer-tree-isomorphism.pdf
       | 
       | [3]
       | https://williamsharkey.com/MatulaExplorer/MatulaExplorer.htm...
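        | 
        | For anyone curious, here is a small sketch of the decoding
        | direction (my own code, assuming the usual convention: 1 is the
        | one-node tree, and n = p_a1 * p_a2 * ... maps to a root whose
        | child subtrees are the trees for a1, a2, ..., where p_i is the
        | i-th prime):
        | 
        |     def prime_factors(n):
        |         fs, d = [], 2
        |         while d * d <= n:
        |             while n % d == 0:
        |                 fs.append(d)
        |                 n //= d
        |             d += 1
        |         if n > 1:
        |             fs.append(n)
        |         return fs
        | 
        |     def prime_index(p):        # 2 -> 1, 3 -> 2, 5 -> 3, ...
        |         idx, q = 0, 1
        |         while q < p:
        |             q += 1
        |             if all(q % d for d in range(2, int(q ** 0.5) + 1)):
        |                 idx += 1
        |         return idx
        | 
        |     def matula_tree(n):        # tree as a tuple of child subtrees
        |         if n == 1:
        |             return ()
        |         return tuple(matula_tree(prime_index(p))
        |                      for p in prime_factors(n))
        | 
        |     assert matula_tree(4) == ((), ())   # 4 = 2*2: two leaf children
        |     assert matula_tree(3) == (((),),)   # 3 = p_2: a three-node chain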
        
         | sagebird wrote:
         | Sorry - I believe I am off topic as this is not relevant given:
         | 
         | "This indirectly enforces the idea that sets cannot have
         | duplicate elements, as set membership is defined purely by the
         | presence or absence of elements. For example:"
         | 
          | So there is a constraint on what sort of trees are allowed in
          | this -forest-, which would preclude most finite rooted trees.
        
       | photonthug wrote:
       | A related rabbit hole that you can jump down in FoM is that
        | Zermelo's ordinals and von Neumann ordinals cannot both be what
        | the natural numbers really are. Wikipedia's intro to the topic
        | [0] is a starting place; see also [1], which might be more in
        | depth.
       | 
       | [0]
       | https://en.m.wikipedia.org/wiki/Benacerraf%27s_identificatio...
       | [1] https://plato.stanford.edu/entries/philosophy-
       | mathematics/#W...
       | 
       | > If you don't know why set theory is important, it is because
       | set theory is the foundation of all of mathematics.
       | 
        | Nitpick maybe, but the types and categories people might prefer
        | sets as "a" foundation instead of "the"? These three (sets,
        | types, categories) are the most useful, get the most attention,
        | and have benefited from the most serious efforts. But IMHO one
        | of the cool things about math
       | is that if you're willing to squint and work at it, then many
       | alternative foundations are possible. For example Conway's
       | surreals[2] hint that you can get numbers/sets by starting with
        | even _games_ as a primitive. I can't quickly find refs, but the
       | visualizations here hint that starting with graph theoretic
       | axioms can lead to sets instead of vice-versa and I think people
       | have worked on that too. Who knows whether alien math builds
       | everything else up starting from geometry or probability, etc.
       | 
       | [2] https://en.wikipedia.org/wiki/Surreal_number
        
       | sproutini wrote:
       | > If you don't know why set theory is important, it is because
       | set theory is the foundation of all of mathematics.
       | 
       | Sorry to burst your bubble, but as far as we know, that isn't
       | true in the slightest. It's a logical positivist view abandoned
       | after Goedel and Turing.
       | 
        | At best, what we can hope is that there is often some axiomatic
        | system in which a specific mathematical lemma makes sense when
        | redefined into something similar but not the same.
        
       | afpx wrote:
       | You may be interested in percolation theory
       | 
       | https://en.m.wikipedia.org/wiki/Percolation_theory
        
       | some-unique-x wrote:
       | Despite several negative comments, I thought the author did a
       | great job of explaining and playing with set theory (which, as
       | can be seen by the response, is good fun.)
       | 
        | I do take some issue with interpreting set theory's membership
       | relation in terms of the tree child relation, though.
       | 
       | First, the child relation is presumably transitive, whilst set
       | membership is not. (The _subset_ relation is transitive.
        | Presumably it is the 'direct child' relation we have in mind
        | here.)
       | 
       | Second, as seen in the third diagram, nodes don't map well to set
       | entities, because the same entity can be a member of distinct
       | sets, but these would count as distinct nodes on some trees.
       | E.g., in the diagram both leaf nodes are distinct, but they both
       | represent the empty set, and hence should be identical. So the
       | identity of sets is not preserved in the tree encoding.
       | 
       | But this is picky -- a lovely read. Thanks author!
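        | 
        | A small sketch of the second point (my own code, not the
        | article's): counting nodes of the unshared tree versus distinct
        | sets occurring in it shows how much the tree encoding duplicates.
        | 
        |     def vn(k):                 # von Neumann natural as a frozenset
        |         n = frozenset()
        |         for _ in range(k):
        |             n = n | frozenset({n})
        |         return n
        | 
        |     def tree_nodes(s):         # count nodes, duplicates included
        |         return 1 + sum(tree_nodes(x) for x in s)
        | 
        |     def distinct_sets(s, seen=None):
        |         seen = set() if seen is None else seen
        |         seen.add(s)
        |         for x in s:
        |             distinct_sets(x, seen)
        |         return len(seen)
        | 
        |     assert tree_nodes(vn(4)) == 16     # 2^4 nodes in the drawn tree
        |     assert distinct_sets(vn(4)) == 5   # but only the sets 0..4 occur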
        
       ___________________________________________________________________
       (page generated 2024-12-28 23:00 UTC)