[HN Gopher] Percolation Theory [pdf]
___________________________________________________________________
Percolation Theory [pdf]
Author : mindcrime
Score : 31 points
Date : 2025-03-11 18:43 UTC (4 hours ago)
(HTM) web link (web.mit.edu)
(TXT) w3m dump (web.mit.edu)
| esafak wrote:
| Relevant to the study of phase transitions in machine learning;
| e.g., https://openreview.net/forum?id=0pLCDJVVRD
| abetusk wrote:
| Christensen is one of the authors of "Complexity and
| Criticality", which I highly recommend to anyone interested in
| percolation theory, the Ising model, self-organized criticality,
| and other models [0].
|
| [0]
| https://www.worldscientific.com/worldscibooks/10.1142/p365#t...
| user070223 wrote:
| Awesome primer on percolation
| https://www.youtube.com/watch?v=a-767WnbaCQ
| nighthawk454 wrote:
| This was a great YouTube video on the subject:
| https://www.youtube.com/watch?v=a-767WnbaCQ
|
| A submission to 3blue1brown's SoME (Summer of Math Exposition)
| competition.
| bob1029 wrote:
| I was recently struggling with the best way to randomly construct
| a well-connected, recurrent topology of neurons until I
| encountered percolation theory.
|
| There is a basic natural-log scaling rule that essentially
| guarantees a well-connected topology (even with random
| connections), as long as you ensure a minimum number of
| connections is assigned to each element.
|
| The required fanout at each order of magnitude of network size
| goes something like:
|
|                  10: ~3 connections
|                 100: ~5 connections
|               1,000: ~7 connections
|              10,000: ~10 connections
|             100,000: ~12 connections
|           1,000,000: ~14 connections
|     100,000,000,000: ~26 connections
|
| I've been able to avoid a lot of complicated code by leveraging
| this.
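|
| A minimal sketch of the rule in Python, assuming the fanout is
| ceil(ln(n)), which reproduces the figures above; min_fanout and
| random_topology are illustrative names, not from any library:
|
|     import math
|     import random
|
|     def min_fanout(n: int) -> int:
|         # ~ln(n) connections per node reproduces the table above
|         return max(1, math.ceil(math.log(n)))
|
|     def random_topology(n: int, fanout: int) -> list[set[int]]:
|         # give each of n nodes `fanout` distinct random links
|         adj = [set() for _ in range(n)]
|         for u in range(n):
|             while len(adj[u]) < fanout:
|                 v = random.randrange(n)
|                 if v != u:
|                     adj[u].add(v)
|         return adj
|
|     for n in (10, 100, 1_000, 10_000, 100_000):
|         print(f"{n:>7}: ~{min_fanout(n)} connections")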
| C-x_C-f wrote:
| What do you mean by well-connected topology? If you mean that
| you can reach every neuron from any neuron then the number of
| connections you need is asymptotically n log n / 2 (not up to a
| constant factor or anything, just n log n / 2 on the nose, it's
| a sharp threshold), see [0]. In general when percolation is
| done on just n nodes without extra structure, it's called the
| Erdős–Rényi model [0], and most mathematicians even just call
| this "the" random graph model.
|
| [0] https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_...
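|
| The sharpness is easy to see empirically; here is a sketch
| using networkx (the choice of n = 2000, 50 trials, and the
| 0.7x/1.4x multipliers is arbitrary, just for illustration):
|
|     import math
|     import networkx as nx
|
|     n = 2000
|     m_star = round(n * math.log(n) / 2)  # threshold ~ n ln n / 2
|
|     for m in (round(0.7 * m_star), m_star, round(1.4 * m_star)):
|         trials = 50
|         connected = sum(
|             nx.is_connected(nx.gnm_random_graph(n, m, seed=s))
|             for s in range(trials)
|         )
|         # fraction of connected samples: ~0 below the threshold,
|         # ~1/e at it, and close to 1 above it
|         print(m, connected / trials)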
| bob1029 wrote:
| I think we are on the same page.
|
| https://en.wikipedia.org/wiki/Giant_component#Giant_componen...
| C-x_C-f wrote:
| Ah, but that's a bit different. The giant component doesn't
| connect all the neurons, only a fraction. The wiki page
| doesn't say this, but if you have c * n / 2 edges (mean degree
| c), the fraction S of neurons in the giant component solves
| S = 1 - exp(-c * S), giving S = 1 + W(-c * exp(-c))/c, where
| W is the Lambert W function [0], also called the product
| logarithm. As c tends to infinity this fraction tends to 1,
| but it's never 1.
|
| [0] https://en.wikipedia.org/wiki/Lambert_W_function
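|
| A numerical check of that formula, sketched with SciPy's
| lambertw (giant_fraction is an illustrative name; the principal
| branch of W is real here for c > 1):
|
|     import math
|     from scipy.special import lambertw
|
|     def giant_fraction(c: float) -> float:
|         # solves S = 1 - exp(-c*S) for the giant-component
|         # fraction, via S = 1 + W(-c * exp(-c)) / c
|         return 1 + lambertw(-c * math.exp(-c)).real / c
|
|     for c in (1.5, 2.0, 5.0, 10.0):
|         print(c, round(giant_fraction(c), 6))
|     # the fraction approaches 1 as c grows but never reaches it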
| physicsguy wrote:
| I loved studying this stuff as an undergrad. It was one of my
| favourite courses.
___________________________________________________________________
(page generated 2025-03-11 23:00 UTC)