[HN Gopher] Darwin Gödel Machine: Open-Ended Evolution of Self-I...
       ___________________________________________________________________
        
       Darwin Gödel Machine: Open-Ended Evolution of Self-Improving Agents
        
       Author : tzury
       Score  : 26 points
       Date   : 2025-06-11 18:07 UTC (4 hours ago)
        
 (HTM) web link (arxiv.org)
 (TXT) w3m dump (arxiv.org)
        
       | jinay wrote:
       | I recently did a deep dive on open-endedness, and my favorite
       | example of its power is Picbreeder from 2008 [1]. It was a simple
       | website where users could somewhat arbitrarily combine pictures
       | created by super simple NNs. Most images were garbage, but a few
       | resembled real objects. The best part is that attempts to
       | replicate these by a traditional hill-climbing method would
       | result in drastically more complicated solutions or even no
       | solution at all.
       | 
        | It's a helpful analogy for understanding the contrast between
        | today's gradient descent and open-ended exploration.
       | 
       | [1] First half of https://www.youtube.com/watch?v=T08wc4xD3KA
       | 
       | More notes from my deep dive:
       | https://x.com/jinaycodes/status/1932078206166749392
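        | 
        | If you're curious what those "super simple NNs" compute:
        | Picbreeder's images come from CPPNs (compositional
        | pattern-producing networks) evolved with NEAT. A throwaway
        | sketch of the core (x, y) -> pixel mapping, hand-wired here
        | instead of evolved:
        | 
        |   import math
        | 
        |   # Each pixel's intensity is a composition of smooth functions
        |   # of its coordinates; evolution rewires the composition.
        |   def cppn(x, y):
        |       r = math.sqrt(x * x + y * y)               # radial input
        |       h = math.sin(4.0 * x) * math.cos(4.0 * y)  # periodic node
        |       g = math.exp(-3.0 * r * r)                 # gaussian node
        |       return 1.0 / (1.0 + math.exp(-(h + g)))    # sigmoid out
        | 
        |   SIZE = 24
        |   for row in range(SIZE):
        |       line = ""
        |       for col in range(SIZE):
        |           x = 2.0 * col / (SIZE - 1) - 1.0  # pixel -> [-1, 1]
        |           y = 2.0 * row / (SIZE - 1) - 1.0
        |           line += "#" if cppn(x, y) > 0.5 else "."
        |       print(line)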
        
         | publicdaniel wrote:
         | Did you see their recent paper building on this? Throwback to
         | Picbreeder!
         | 
         | https://x.com/kenneth0stanley/status/1924650124829196370
        
           | jinay wrote:
           | Ooh I haven't, but this is exactly the kind of follow-up I
           | was looking for. Thanks for sharing!
        
         | jinay wrote:
         | Timestamped link to the YouTube video:
         | https://youtu.be/T08wc4xD3KA?t=124
        
         | bwest87 wrote:
          | This video was fascinating. I didn't know about
          | "open-endedness" as a concept, but now that I see it, of
          | course it's an approach.
         | 
          | One thought: in the video, Ken observes that it takes way more
          | complexity and steps to find a given shape with SGD than with
          | open-endedness, which is certainly fascinating. However...
         | 
          | Intuitively, it feels like the same dynamic as the "birthday
          | paradox" is at play. That's the one where, in a room of just
          | 23 people, there is a greater than 50% chance that two of them
          | share a birthday. This is very surprising to most people; it
          | seems like you should need way more people (365, even!). The
          | paradox is resolved when you realize that your intuition is
          | answering a different question: how many people it takes
          | before someone shares _your_ birthday. The room of 23 is
          | implicitly asking for a match between _any two_ people. So you
          | don't have 23 chances, you have 23 choose 2 = 253 pairs, each
          | one a chance.
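          | 
          | A quick check of those numbers (using the usual uniform
          | 365-day simplification), in throwaway Python:
          | 
          |   from math import comb
          | 
          |   # P(at least two of n people share a birthday).
          |   def p_shared(n):
          |       p_distinct = 1.0
          |       for i in range(n):
          |           p_distinct *= (365 - i) / 365
          |       return 1.0 - p_distinct
          | 
          |   print(comb(23, 2))             # 253 pairs to match on
          |   print(round(p_shared(23), 3))  # 0.507, just past 50%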
         | 
          | I think the same thing is at work here. With the open-ended
          | approach, humans can find _any_ pattern at _any_ generation.
          | With the SGD approach, you can only look for _one_ pattern. So
          | it's just not an apples-to-apples comparison, and it's
          | somewhat misleading / unfair to say that open-endedness is way
          | more "efficient", because you aren't asking the two methods to
          | do the same task.
          | 
          | Said another way, with open-endedness you are effectively
          | looking for thousands (or even millions) of shapes
          | simultaneously. With SGD, you flip that around and look for
          | exactly one shape, but give it thousands of generations to
          | achieve it.
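          | 
          | A toy version of that asymmetry, with blind random search
          | standing in for both methods (a big simplification, but it
          | shows the many-targets vs. one-target gap):
          | 
          |   import random
          | 
          |   random.seed(0)
          |   # ~1000 "recognizable shapes" in a space of 2^16 codes.
          |   TARGETS = {random.getrandbits(16) for _ in range(1000)}
          |   ONE = next(iter(TARGETS))  # a single pre-named shape
          | 
          |   def steps_until(accept):
          |       steps = 0
          |       while True:
          |           steps += 1
          |           if accept(random.getrandbits(16)):
          |               return steps
          | 
          |   # Any shape: typically tens of steps. That one shape:
          |   # typically tens of thousands.
          |   print(steps_until(lambda s: s in TARGETS))
          |   print(steps_until(lambda s: s == ONE))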
        
       | yodon wrote:
       | Is this essentially genetic algorithms for the LLM era?
        
         | mountainriver wrote:
          | Yep. The interesting thing is that genetic algorithms have
          | historically been good at coarse search and less good at fine
          | search.
          | 
          | They also often converge to local minima, and they are costly.
          | 
          | It'll be interesting to see if LLMs change that, or whether we
          | are just approximating something a gradient could do better.
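          | 
          | For concreteness, a minimal (mu + lambda)-style GA on a
          | made-up 1-D objective; it finds the right basin quickly but
          | only refines to roughly the mutation scale, which is the
          | coarse-vs-fine tension above:
          | 
          |   import math
          |   import random
          | 
          |   random.seed(1)
          | 
          |   # Global peak at x = 0, decoy peak at x = 4.
          |   def fitness(x):
          |       return math.exp(-x*x) + 0.8 * math.exp(-(x - 4.0)**2)
          | 
          |   pop = [random.uniform(-10, 10) for _ in range(20)]
          |   for gen in range(100):
          |       kids = [x + random.gauss(0, 0.5) for x in pop]  # mutate
          |       pop = sorted(pop + kids, key=fitness,           # select
          |                    reverse=True)[:20]
          | 
          |   print(round(pop[0], 3))  # near 0, to ~mutation-step precision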
        
       | clayhacks wrote:
       | Earlier discussion: A deep dive into self-improving AI and the
        | Darwin-Gödel Machine
       | https://news.ycombinator.com/item?id=44174856
        
       | seu wrote:
        | Yes, it seems interesting, but honestly, an abstract that
        | includes sentences such as "accelerate AI development and allow
        | us to reap its benefits much sooner" and "paths that unfold into
        | endless innovation" sounds like it was written by the marketing
        | team of an AI company.
        
       | behnamoh wrote:
        | So it's basically "throw spaghetti at the wall and see what
        | sticks". That works in evolution because evolution doesn't have
        | an end goal to hit in a certain amount of time, but for AI we
        | want to know how long it takes to go from performance A to
        | performance B. Then again, this paper might be yet another
        | validation of the Bitter Lesson of machine learning.
        
       | darepublic wrote:
        | In the abstract, the reference to 'safety' gave me pause. For
        | one, it seems doubtful that the AI could ever improve enough to
        | cause serious trouble unless, of course, you equipped it with
        | the things that let just about any piece of software cause
        | trouble: elevated permissions, internet access, network
        | endpoints, etc.
       | 
        | They mention putting it in a sandbox, which I assume just means
        | something like a VM or Docker container. I wonder if that would
        | be sufficient if the AI truly reached singularity-level
        | intelligence. Could it figure out some kind of exploit to break
        | out of its sandbox and transmit its code over the internet for
        | further replication?
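        | 
        | For what it's worth, the container version of that sandbox
        | usually looks something like this (a sketch; "agent-image" and
        | "run_agent.py" are placeholders, not anything from the paper):
        | 
        |   import subprocess
        | 
        |   # Locked-down container: no network, read-only filesystem,
        |   # no Linux capabilities, capped processes and memory.
        |   subprocess.run([
        |       "docker", "run", "--rm",
        |       "--network=none",    # no internet, no self-exfiltration
        |       "--read-only",       # cannot rewrite its own filesystem
        |       "--cap-drop=ALL",    # drop all Linux capabilities
        |       "--pids-limit=256",  # bound process count
        |       "--memory=4g",       # bound memory use
        |       "agent-image", "python", "run_agent.py",  # placeholders
        |   ], check=True)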
        
         | whattheheckheck wrote:
          | It already has, and it's controlling humans to do it!!!
        
         | Teever wrote:
         | You may be interested in this link[0]. Someone posted it in
         | another thread yesterday.
         | 
         | [0] https://www.aisi.gov.uk/work/replibench-measuring-
         | autonomous...
        
       ___________________________________________________________________
       (page generated 2025-06-11 23:00 UTC)