[HN Gopher] Conway's Gradient of Life
___________________________________________________________________
Conway's Gradient of Life
Author : networked
Score : 381 points
Date : 2024-10-09 09:45 UTC (3 days ago)
(HTM) web link (hardmath123.github.io)
(TXT) w3m dump (hardmath123.github.io)
| dvh wrote:
| A few days ago I posted this:
| https://news.ycombinator.com/item?id=41743887
| mbauman wrote:
| Atavising was new to me. From
| https://nbickford.wordpress.com/2012/04/15/reversing-the-gam... :
|
| > First of all, while I said "Predecessorifier" in the talk,
| "Ataviser" seems to be the accepted word, coming from "Atavism",
| which the online Merriam-Webster dictionary defines as
| "recurrence of or reversion to a past style, manner, outlook,
| approach, or activity".
| Jerrrrrrry wrote:
| "it's like showing a solved Rubik's cube and asking what the
| scramble was"
|
| ^ this analogy may be the best I've seen in a long time.
| Etherlord87 wrote:
| I disagree. Getting a scramble is trivial for a Rubik's cube
| (because any scramble will do, and unlike with Conway's Game of
| Life, going in reverse is simple here). But if there was one
| particular scramble you want to recover, you just can't do it
| without some additional information.
| scotty79 wrote:
| It's a good analogy for someone who doesn't know how to solve
| a Rubik's cube.
| Jerrrrrrry wrote:
| Breaking analogies is the only reason people comment at
| this point.
|
| I've caught myself doing this.
| xfreesoft wrote:
| https://github.com/vtempest/Automated-Automata One idea is to
| have the gradient patterns emerge from simple rules computed
| from prior states, with no need for hand-of-God intervention.
| Hugsun wrote:
| Interesting. I can't zoom on mobile, which is frustrating.
| conorpo wrote:
| Seeing a lot of parallels to the article by Stephen Wolfram
| posted a few days back on the fundamental nature of time.
| sleepingreset wrote:
| which?
| andai wrote:
| https://news.ycombinator.com/item?id=41782534
| jwood27 wrote:
| I made a related crossword puzzle. You can find it here if you
| want to give it a try! https://jacobw.xyz/projects/crossword/
| versteegen wrote:
| The objective function here defines a Markov random field (MRF)
| with boolean random variables and certain local statistics of
| nearest neighbours, either uniform if the target is a white
| image, or varying with location to produce an image. MRFs define
| Gibbs probability distributions, which you can sample from (which
| will already produce a good image here) or perform gradient
| ascent on to reach a local maximum. The negative log-likelihood of
| the MRF distribution is equal to the loss function of the
| original optimisation problem, so the maximum likelihood estimate
| (MLE) (there will often be multiple due to symmetry) of the MRF
| is the optimal solution(s) to the original problem. (But in
| general the MLE can look completely different to a sample.)
|
| The statistics are 9th-order (over 3x3 blocks of pixels) but of
| such a simple form that they are hardly more expressive than the
| well-known 2nd-order nearest-neighbour statistics (in terms of
| the different textures they can reproduce). In the approximate
| case where you only care about the average value of each pixel I
| think it would collapse to 2nd-order. Texture synthesis with MRFs
| with local statistics is discretized (in space) Turing reaction-
| diffusion. I did my PhD on this topic.
|
| Probably the most influential early paper on this kind of simple
| texture model, where you will see similar patterns, is:
|
| Cross & Jain, 1983, PAMI, Markov Random Field Texture Models
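| To make the MRF picture concrete, here is a minimal sketch (my
| own illustration, not code from the article): the GoL-mismatch
| count acts as an energy E(x), and the Gibbs distribution
| p(x) ~ exp(-E(x)/T) assigns higher probability to configurations
| whose successor is closer to the target. The grid size and the
| toroidal wrapping are assumptions chosen for the example.

```python
import math

N = 8  # small toroidal grid, chosen only for the example


def step(grid):
    """One Game of Life step on an N x N toroidal grid of 0/1 ints."""
    nxt = [[0] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            alive = sum(grid[(r + dr) % N][(c + dc) % N]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if alive == 3 or (grid[r][c] and alive == 2) else 0
    return nxt


def energy(x, target):
    """E(x): number of cells where step(x) disagrees with the target."""
    y = step(x)
    return sum(y[r][c] != target[r][c] for r in range(N) for c in range(N))


def gibbs_weight(x, target, temp=1.0):
    """Unnormalised probability under the equivalent Gibbs distribution."""
    return math.exp(-energy(x, target) / temp)


# A perfect predecessor of the all-dead target vs. a 2x2 still-life
# block, which survives the step and therefore mismatches in 4 cells.
target = [[0] * N for _ in range(N)]
dead = [[0] * N for _ in range(N)]
block = [[0] * N for _ in range(N)]
for r, c in ((1, 1), (1, 2), (2, 1), (2, 2)):
    block[r][c] = 1
e_dead, e_block = energy(dead, target), energy(block, target)
```

| Sampling from this distribution (e.g. by MCMC) and lowering the
| temperature recovers the annealing view discussed elsewhere in
| the thread.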
| pyinstallwoes wrote:
| Anything you've found to be additionally interesting or curious
| along this path or different but somewhat related?
|
| Are you still working on this topic or other things now?
| versteegen wrote:
| I should mention Restricted Boltzmann Machines (RBMs, which
| Hinton got the Nobel for): it's really natural to use hidden
| variables in formulating a problem like this: one lattice of
| binary variables for GoL time step 0, a second for timestep
| 1, and optionally a third of continuous variables (or eg
| 256-level greyscale) for the unquantised target image. (Keep
| stacking RBMs for more timesteps.) Then you have a bi/tri-
| partite undirected graphical model. RBMs can be generalised
| in many ways and one of them is to replace the 2nd-order
| (pairwise) interactions between variables with higher-order
| interactions like these 9th order ones -- their most
| essential property is being bipartite so you can infer each
| variable on a layer independently given the other (or
| neighbouring) layers. The article didn't say how the target
| (timestep 1) configuration was picked, but you can
| simultaneously optimise for the timestep 0->1 error and the
| timestep 1->image distance, which could give a better looking
| result.
|
| And when you have a (two-layer) RBM you can integrate out the
| hidden variables and get a MRF without them (as I first
| described), or vice versa. Which is more useful depends on
| the problem.
|
| Or if you're more interested in textures, RBMs are used for
| texture and image modelling (e.g. [1]), and higher order
| statistics are far more interesting. In virtually all papers
| they take the form of the responses of linear filters (like
| that 3x3 filter in OP, often wavelets for natural images).
| But you can use completely different statistics. I found that
| long-range local binary patterns (LBPs), used for texture
| recognition, are good for texture generation too.
|
| I've switched away from textures, but energy-based models are
| harder to escape. A surprising intersection of old and new
| topics at [2].
|
| [1] R Liao, S Kornblith, M Ren, D Fleet & G Hinton, 2022,
| "Gaussian-Bernoulli RBMs Without Tears"
| https://arxiv.org/pdf/2210.10318
|
| [2] C Zhang, B Jia, Y Zhu & S-C Zhu, 2024, "Human-level few-
| shot concept induction through minimax entropy learning"
| https://www.science.org/doi/10.1126/sciadv.adg2488
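| The suggestion above -- simultaneously optimising the timestep
| 0->1 error and the timestep 1->image distance -- can be sketched
| as greedy flips against a joint loss over two binary layers.
| Everything here (grid size, the first-improvement flip schedule,
| the random stand-in image) is an assumed toy setup, not the
| article's method.

```python
import random

N = 8  # toy grid size (assumption)


def step(grid):
    """One GoL step on an N x N toroidal 0/1 grid."""
    nxt = [[0] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            alive = sum(grid[(r + dr) % N][(c + dc) % N]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if alive == 3 or (grid[r][c] and alive == 2) else 0
    return nxt


def joint_loss(x0, x1, image):
    """Timestep 0->1 mismatch plus timestep 1 vs. image distance."""
    y = step(x0)
    a = sum(y[r][c] != x1[r][c] for r in range(N) for c in range(N))
    b = sum(x1[r][c] != image[r][c] for r in range(N) for c in range(N))
    return a + b


def refine(x0, x1, image, rounds=10):
    """First-improvement greedy flips over both layers; never worsens."""
    for _ in range(rounds):
        improved = False
        for layer in (x0, x1):
            for r in range(N):
                for c in range(N):
                    before = joint_loss(x0, x1, image)
                    layer[r][c] ^= 1
                    if joint_loss(x0, x1, image) < before:
                        improved = True
                    else:
                        layer[r][c] ^= 1  # revert the unhelpful flip
        if not improved:
            break
    return x0, x1


rng = random.Random(0)
image = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]
x0 = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]
x1 = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]
init = joint_loss(x0, x1, image)
refine(x0, x1, image)
final = joint_loss(x0, x1, image)
```

| Since only improving flips are kept, the joint loss can never
| increase, so the timestep-1 layer is free to drift away from the
| image wherever that buys a more reachable configuration.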
| christina97 wrote:
| I feel like just doing simulated annealing on the starting grid
| would work better and be faster to implement?
|
| (Not saying the goal was working well and being fast to
| implement.)
| versteegen wrote:
| Simulated annealing (basically MCMC sampling with a temperature
| schedule) is how you optimise or sample the equivalent MRF,
| which I discussed in my other comment. You can hope to escape
| local minima using annealing, and lower the temperature to zero
| to fall into a local minimum, minimising the noise added by
| annealing. In practice if you're trying to produce something
| that looks like a target image as in the article I'm pretty
| sure the results will be indistinguishable. If you actually
| cared about how many individual pixels are correct, yes,
| annealing is better than gradient descent. That's why
| stochastic gradient descent is used in ML.
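| A minimal Metropolis-style annealer for this problem might look
| as follows. This is a sketch under assumed parameters (grid
| size, cooling schedule, iteration count), not a tuned
| implementation:

```python
import math
import random

N = 8  # small toroidal grid (assumption for the sketch)


def step(grid):
    """One GoL step on an N x N toroidal 0/1 grid."""
    nxt = [[0] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            alive = sum(grid[(r + dr) % N][(c + dc) % N]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if alive == 3 or (grid[r][c] and alive == 2) else 0
    return nxt


def loss(x, target):
    y = step(x)
    return sum(y[r][c] != target[r][c] for r in range(N) for c in range(N))


def anneal(target, iters=5000, t0=2.0, cooling=0.999, seed=1):
    """Single-pixel-flip Metropolis with a geometric temperature schedule."""
    rng = random.Random(seed)
    x = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]
    cur = loss(x, target)
    best, best_loss = [row[:] for row in x], cur
    t = t0
    for _ in range(iters):
        r, c = rng.randrange(N), rng.randrange(N)
        x[r][c] ^= 1  # propose a flip
        new = loss(x, target)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new  # accept: always if not worse, else with Boltzmann prob.
            if cur < best_loss:
                best, best_loss = [row[:] for row in x], cur
        else:
            x[r][c] ^= 1  # reject: revert the flip
        t *= cooling
    return best, best_loss


rng = random.Random(2)
true_pred = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]
target = step(true_pred)  # a target known to have a predecessor
best, best_loss = anneal(target)
```

| Lowering the temperature to zero turns this into pure greedy
| descent, as described above.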
| chillee wrote:
| It's always fun when people use autodiff in packages like PyTorch
| for completely unrelated usecases :)
| rustybolt wrote:
| Feels to me like there is no need for backpropagation. I think
| you can just iteratively grab a random pixel and flip it if that
| would bring you closer to the target after one step.
|
| It would probably work even better if you tweak the loss function
| with some kind of averaging/blurring filter.
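| The flip-if-it-helps idea can be made deterministic by scanning
| every single-pixel flip and taking the best one each round
| (exhaustive coordinate descent). The grid size and test setup
| are assumptions of this sketch, and the blurring-filter tweak is
| left out:

```python
import random

N = 8  # toy toroidal grid (assumption)


def step(grid):
    """One GoL step on an N x N toroidal 0/1 grid."""
    nxt = [[0] * N for _ in range(N)]
    for r in range(N):
        for c in range(N):
            alive = sum(grid[(r + dr) % N][(c + dc) % N]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
            nxt[r][c] = 1 if alive == 3 or (grid[r][c] and alive == 2) else 0
    return nxt


def loss(x, target):
    y = step(x)
    return sum(y[r][c] != target[r][c] for r in range(N) for c in range(N))


def greedy(x, target, max_rounds=50):
    """Repeatedly apply the single flip that lowers the loss the most."""
    x = [row[:] for row in x]
    cur = loss(x, target)
    for _ in range(max_rounds):
        best_flip, best = None, cur
        for r in range(N):
            for c in range(N):
                x[r][c] ^= 1  # try this flip...
                l = loss(x, target)
                if l < best:
                    best_flip, best = (r, c), l
                x[r][c] ^= 1  # ...then undo it
        if best_flip is None:
            break  # local minimum: no single flip helps
        x[best_flip[0]][best_flip[1]] ^= 1
        cur = best
    return x, cur


rng = random.Random(3)
true_pred = [[rng.randint(0, 1) for _ in range(N)] for _ in range(N)]
target = step(true_pred)
corrupted = [row[:] for row in true_pred]
corrupted[4][4] ^= 1  # damage one pixel of a known predecessor
recovered, final_loss = greedy(corrupted, target)
```

| Starting one flip away from a true predecessor, the scan always
| finds a flip reaching loss zero, so this recovers an exact
| predecessor; from a random start it gets stuck in local minima,
| which is where annealing or gradient tricks come in.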
| quantadev wrote:
| This is just the brute-force solution. I'm pretty sure that's
| no more efficient than guessing all pixels at once and checking
| whether that worked or not.
| akoboldfrying wrote:
| Guessing all pixels at once and then checking is massively
| worse, since it's basically the first step of GP's proposed
| approach -- which then iteratively changes things in a way
| that never makes it worse.
| quantadev wrote:
| It's still pure "guessing", because something that looks
| "correct" early on is meaningless two steps into the future.
| Everything will have a 50/50 probability of being "correct" in
| any given scenario. What you're saying is somewhat analogous to
| predicting that a coin flip will land on heads because it landed
| on heads on the last flip, or even on the last 20 flips in a
| row. I'm not a great statistician myself, but I think I'm right
| on this one. :)
| drofnarc wrote:
| I'm so glad to see others working on this. I've attempted it too,
| but not with any measure of success.
|
| Trying to trade space for time, I used a model that gives every
| cell a set of all 512 of the possible 3x3 neighborhoods that
| could have caused that cell's present state ("alibis"). It then
| goes to each cell, comparing its alibis to those of neighboring
| cells and eliminating mutually impossible ones from either set.
| This step has to be repeated until no more alibis are shed in a
| pass.
|
| When it finally stabilizes, the model is a solution kernel that
| can then field further manual guesses against it. If a cell's
| alibis all agree it was dead in the "before", there's no need to
| guess, but if they're not unanimous, what if we hazard a guess
| one way or the other for a bit? How does that ripple through the
| rest of the board? If any of the cells ran completely out of
| alibis given a certain guess, that guess was clearly not a proper
| solution, and it's time to back out and try a different one. If
| there's no solution at all, that's a Garden of Eden.
|
| Ultimately I wanted to generate not just one solution, but all
| the solutions for a given board. I got stumped because I wasn't
| convinced I wasn't still working in 2**(n*m) time or worse trying
| guesses against the kernel.
|
| It's a really fascinating problem, so much so that I even made a
| pico8 game about it years ago! Even the 6x6 grids are really
| tough!
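| The alibi scheme above can be sketched as constraint
| propagation: start each cell with every 3x3 pattern consistent
| with its state, then repeatedly drop patterns that no
| neighbouring cell's patterns can corroborate on the overlap.
| This version assumes a small toroidal board; it only ever
| removes impossible alibis (it is sound, but not complete by
| itself, so guessing is still needed afterwards):

```python
from itertools import product


def life_rule(p):
    """Next state of the centre of a 3x3 pattern p (9-tuple, row-major)."""
    alive = sum(p) - p[4]
    return 1 if alive == 3 or (p[4] and alive == 2) else 0


ALL_PATTERNS = list(product((0, 1), repeat=9))


def initial_alibis(board):
    """Every 3x3 predecessor pattern whose rule output matches each cell."""
    H, W = len(board), len(board[0])
    return {(r, c): {p for p in ALL_PATTERNS if life_rule(p) == board[r][c]}
            for r in range(H) for c in range(W)}


# For two cells offset by (dr, dc), the indices of the overlapping
# pattern entries, as seen from each side.
OVERLAP = {}
for dr, dc in [(a, b) for a in (-1, 0, 1) for b in (-1, 0, 1)
               if (a, b) != (0, 0)]:
    cells = [(i, j) for i in range(3) for j in range(3)
             if 0 <= i - dr < 3 and 0 <= j - dc < 3]
    OVERLAP[(dr, dc)] = ([i * 3 + j for i, j in cells],
                         [(i - dr) * 3 + (j - dc) for i, j in cells])


def propagate(board, alibis, max_sweeps=20):
    """Drop alibis that no neighbour's surviving alibis agree with."""
    H, W = len(board), len(board[0])
    for _ in range(max_sweeps):
        changed = False
        for r in range(H):
            for c in range(W):
                for (dr, dc), (ia, ib) in OVERLAP.items():
                    r2, c2 = (r + dr) % H, (c + dc) % W
                    seen = {tuple(q[k] for k in ib) for q in alibis[(r2, c2)]}
                    keep = {p for p in alibis[(r, c)]
                            if tuple(p[k] for k in ia) in seen}
                    if keep != alibis[(r, c)]:
                        alibis[(r, c)] = keep
                        changed = True
        if not changed:
            break
    return alibis


board = [[0] * 4 for _ in range(4)]  # all-dead board: trivially reversible
start = initial_alibis(board)
start_sizes = [len(s) for s in start.values()]
pruned = propagate(board, {k: set(v) for k, v in start.items()})
```

| In the worst case you still need backtracking search on top of
| the pruned kernel; propagation alone cannot rule out exponential
| guessing, which matches the 2**(n*m) worry.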
| pyinstallwoes wrote:
| What do you think might be the path / solution but needs
| playful experiments ? Ideas to seed others?
| networked wrote:
| You may like the discrete algorithm from
| https://a.tulv.in/finding-mona-lisa-in-the-game-of-life.html.
| HN discussed it at the time in
| https://news.ycombinator.com/item?id=26384403, which was where
| I found this story's link.
| ddejohn wrote:
| Really cool to see. Was curious about the author's other work and
| was very annoyed that the blog does not link to the author's
| other posts, either with a "next/previous" link, an archive, or a
| homepage where a list of posts can be viewed. The only way to see
| other posts is to subscribe to the RSS. I can't imagine there
| being a good reason for having such a bad UX.
| ngruhn wrote:
| Very cool!
| d5ve wrote:
| I saw an interesting video on the same topic last week, by Brian
| Haidet (AlphaPhoenix), on reversing the GoL - it turns out that
| NP-hard is hard! https://www.youtube.com/watch?v=g8pjrVbdafY
| OskarS wrote:
| Lovely video, and it was enormously satisfying: while he was
| describing his evolutionary algorithm I just sat there thinking
| "why not just use a SAT solver? This seems like child's play for
| z3", and then heard him say "then I asked Reddit and someone
| suggested using a SAT solver". Hell yeah, good job, brain! I
| mean, not as good a job as Brian, who actually implemented it
| and made the video and everything, but still!
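| What the SAT solver does can be imitated, for tiny boards, with
| plain backtracking over predecessor cells, checking each cell's
| constraint as soon as its whole 3x3 neighbourhood is assigned.
| This is a toy stand-in (dead boundary and a small grid are
| assumed), not Brian's actual z3 encoding:

```python
def step(grid):
    """One GoL step with a dead boundary (cells outside count as 0)."""
    H, W = len(grid), len(grid[0])

    def val(r, c):
        return grid[r][c] if 0 <= r < H and 0 <= c < W else 0

    out = [[0] * W for _ in range(H)]
    for r in range(H):
        for c in range(W):
            alive = sum(val(r + dr, c + dc)
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
            out[r][c] = 1 if alive == 3 or (grid[r][c] and alive == 2) else 0
    return out


def find_predecessor(target):
    """Backtracking search for any grid whose step equals target."""
    H, W = len(target), len(target[0])
    p = [[0] * W for _ in range(H)]

    def val(r, c):
        return p[r][c] if 0 <= r < H and 0 <= c < W else 0

    def ok(a, b):  # does candidate cell (a, b) evolve into target[a][b]?
        alive = sum(val(a + dr, b + dc)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
        nxt = 1 if alive == 3 or (p[a][b] and alive == 2) else 0
        return nxt == target[a][b]

    # trigger[k]: constraints fully determined once cell k (row-major)
    # has been assigned -- this is what lets us prune early.
    trigger = {}
    for a in range(H):
        for b in range(W):
            last = max((a + dr) * W + (b + dc)
                       for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                       if 0 <= a + dr < H and 0 <= b + dc < W)
            trigger.setdefault(last, []).append((a, b))

    def backtrack(k):
        if k == H * W:
            return True
        r, c = divmod(k, W)
        for v in (0, 1):
            p[r][c] = v
            if all(ok(a, b) for a, b in trigger.get(k, [])) and backtrack(k + 1):
                return True
        return False

    return [row[:] for row in p] if backtrack(0) else None


# A target known to be reachable: one step of a glider in a 4x4 box.
glider = [[0, 1, 0, 0], [0, 0, 1, 0], [1, 1, 1, 0], [0, 0, 0, 0]]
target = step(glider)
sol = find_predecessor(target)
```

| If backtrack exhausts every assignment, the target is a Garden
| of Eden under this boundary condition. A real SAT solver adds
| clause learning and clever branching on top of the same search.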
| andai wrote:
| Inspired by this, I stayed up all night building this!
|
| https://jsbin.com/josuxarufu/1/edit?js,console,output
|
| It tries to "evolve" the target image (right) by making random
| changes to the origin (left) -- output after 1 step is in center.
|
| It's eventually supposed to be a genetic algorithm, but for now
| it just goes through it pixel by pixel, and if the pixel doesn't
| match the target, it inverts one of its neighbors. If that's
| beneficial (checking loss in a 5x5 rect centered on the test
| pixel), we keep the change. I get loss down to about 25% then it
| plateaus.
|
| Feel free to fiddle, fork, etc! (I stayed up pretty late so the
| code is bad! Suggestions welcome!)
| andai wrote:
| Another strat I thought about is to precompute all possible
| outputs for all possible inputs. This is feasible for small
| tiles (a 5x5 grid has 2^25, about 34 million, states); maybe a
| sliding window could be used. I'm stuck on how to deal with
| tile overlap, and the fact that many outputs have more than one
| input -- you'd want to check all possible input tiles against
| all possible neighboring input tiles?
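| For the smallest case the precomputation is easy to try: a 3x3
| toroidal tile has only 2^9 = 512 inputs, and inverting the
| resulting output->inputs table shows both issues at once (many
| outputs have several preimages, and some have none at all). The
| toroidal wrap is an assumption to keep the tile self-contained:

```python
from itertools import product


def step3(cells):
    """One GoL step on a 3x3 toroidal tile, given as a 9-tuple."""
    nxt = []
    for r in range(3):
        for c in range(3):
            neigh = sum(cells[((r + dr) % 3) * 3 + (c + dc) % 3]
                        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                        if (dr, dc) != (0, 0))
            me = cells[r * 3 + c]
            nxt.append(1 if neigh == 3 or (me and neigh == 2) else 0)
    return tuple(nxt)


# Invert the map: for every reachable output tile, list all inputs.
predecessors = {}
for cells in product((0, 1), repeat=9):
    predecessors.setdefault(step3(cells), []).append(cells)
```

| Stitching tiles back together then requires agreement on the
| overlapping cells -- exactly the "alibi" consistency problem
| described elsewhere in the thread.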
| RedNifre wrote:
| Wouldn't it be better to have the target image as grayscale or
| black/white instead of dithered?
| andai wrote:
| Maybe! That's a good idea! I guess I'd get natural dithering
| as the result of running a loss function on small windows
| against a grayscale image. (Per-pixel loss would be very high
| though, I guess you'd want to take the average brightness for
| some chunk?) I'm not sure how dithering is supposed to work
| though, maybe there's some improvements there over just
| taking the average brightness?
| hansworst wrote:
| The article asks the following:
|
| > I can't help but wonder if there's a reaction-diffusion-model-
| esque effect at work here as well
|
| There are continuous approximations of the game of life that show
| this, for example this implementation:
|
| https://smooth-life.netlify.app
| kaba0 wrote:
| I found this post absolutely fascinating! It got me thinking:
| is there a Turing-complete simple abstraction that _is_
| differentiable, without approximation? I have heard about Neural
| Turing machines (but only as a layman), and from a quick peek
| they seem as hard to reason about as other NNs.
| seanhunter wrote:
| Not exactly an answer to your question, but in the ballpark,
| and hopefully you'll find it interesting:
| https://arxiv.org/pdf/2404.17625
|
| In the section on "Automatic differentiation" he considers the
| question of differentiable computation graphs.
| turnsout wrote:
| The emerging pattern looks a little like Blue Noise!
| f1shy wrote:
| Related: https://news.ycombinator.com/item?id=41817808
| alcover wrote:
| There's a continuous GoL called LENIA.
|
| https://www.youtube.com/watch?v=PlzV4aJ7iMI (in French)
|
| https://news.ycombinator.com/item?id=34108724
| atulvi wrote:
| Related: https://a.tulv.in/finding-mona-lisa-in-the-game-of-
| life.html
|
| Albeit using a different stochastic technique.
___________________________________________________________________
(page generated 2024-10-12 23:02 UTC)