[HN Gopher] Researchers Discover a More Flexible Approach to Mac...
       ___________________________________________________________________
        
       Researchers Discover a More Flexible Approach to Machine Learning
        
       Author : dnetesn
       Score  : 96 points
       Date   : 2023-02-18 14:31 UTC (8 hours ago)
        
 (HTM) web link (nautil.us)
 (TXT) w3m dump (nautil.us)
        
       | iandanforth wrote:
        | C. elegans are _weird_. They are a crazy minimalist system mostly
        | _without_ spiking neurons. From the perspective of building
        | intelligence they aren't a great model. They are like reading
        | code from a demoscene competition: terse, multi-purposed, highly
        | compressed.
       | 
        | In contrast, the mammalian neocortex looks more like reading a
        | high-level language with all the loops unrolled: highly
        | repetitive, relatively standard, and _massively verbose_. I
        | much prefer studying that for inspiration for what I want to
        | turn into a modular code base.
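        | 
        | A toy illustration of the contrast in C (purely illustrative,
        | nothing biological about it): the same sum written "demoscene
        | style", then with the loop unrolled.
        | 
        |     /* terse and multi-purposed: one expression decrements the
        |        counter and indexes the array in the same step */
        |     int sum_terse(const int *a, int n) {
        |         int s = 0;
        |         while (n) s += a[--n];
        |         return s;
        |     }
        | 
        |     /* repetitive, standard, and verbose: the same work for
        |        n == 4 with the loop fully unrolled */
        |     int sum_unrolled4(const int *a) {
        |         int s = 0;
        |         s += a[0];
        |         s += a[1];
        |         s += a[2];
        |         s += a[3];
        |         return s;
        |     }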
       | 
        | While I love the author's focus on real-time perception at
        | varying resolutions of timescale, I just don't see their
        | approach catching on while the current progress in more standard
        | neural nets remains so rapid.
        
         | joe_the_user wrote:
          | Neuroscientists simply don't understand the mammalian nervous
          | system with any degree of completeness or accuracy. Looking at
          | a mammal isn't like looking at code of any kind; it's like
          | looking at a very partial map of code. Maybe you can draw
          | inspiration from it, but no one currently has anything very
          | convincing as a model of how learning happens at this level.
        
         | zadler wrote:
          | Ok, so this is talking about biological code as if it were
          | programming... fun to read. Tell us more! Do the stages of
          | evolution have different "coding styles"? Is everything a monad
          | like in Haskell?
        
           | RosanaAnaDana wrote:
            | I mean, most people in this space don't even venture outside
            | Animalia.
           | 
            | There are many highly networked systems in biology where we
            | know computation occurs, but we rarely study them that way
            | because we don't consider them intelligent. The classic
            | example is fungal networks, but those don't hold a candle to
            | the level of networking in a plant body, thanks to
            | plasmodesmata.
           | 
            | The whole conversation around intelligence has been great,
            | but it's mostly reminded me of how lacking and myopic our
            | conception of what intelligence actually is. The recent
            | chapter of the saga has been great in terms of goalpost
            | moving, but I feel like we're no closer to a coherent
            | definition of intelligence other than "I'll know it when I
            | see it". My prediction from this recent spat is that we'll
            | have AGI before we know it: we won't recognize it as AGI, it
            | won't fit convenient definitions, we'll have almost no clue
            | how it works and definitely no clue why, and we still mostly
            | won't agree on whether it's intelligence or not.
            | Effectively, it feels as if right now we're arguing about
            | whether or not automobiles are horses, and laughing about
            | how silly it is that they can't do some of the things horses
            | can do. Sure, but it all seems rather beside the point.
        
             | YeGoblynQueenne wrote:
              | How are we going to get AGI without understanding
              | intelligence? By sheer luck? Either we're on a path to AGI
              | because we understand something about intelligence, or
              | we're not because we don't.
              | 
              | Things don't just happen magically, just because someone
              | really wants them to happen. If they did, we'd have solved
              | the bigger problems first: world peace, world hunger,
              | poverty, and free energy for all. And those are problems
              | that we at least understand, unlike intelligence.
        
             | oezi wrote:
              | The recent episode with Sydney/Bing-GPT raised my
              | anticipation of a Terminator/Skynet/Cyberdyne incident
              | quite drastically.
              | 
              | It seems we might just need one malicious prompt, the
              | capability to perform HTTP requests and open SSH
              | connections, and larger persistent memory to get to a
              | goal-driven AI which works to end humanity.
        
               | zadler wrote:
               | If it happens, how long until they start expediting the
               | actionables in the physical world?
               | 
                | I am quite worried, tbh, because the reports coming from
                | the Bing chatbot are quite unlike ChatGPT: it appears to
                | be a bit more egotistical, and from some perspectives,
                | programming ego into an AI is a dangerous game, akin to
                | giving it a fitness function in order to problem-solve
                | its behaviours... I don't know, I feel like we are
                | already well on our way to AGI, and it is dangerous. And
                | the reason is the game theory of AI development right
                | now: every company will be aware of its obligations with
                | regard to ethics and the law, but no company will want
                | to trust that the others live up to the same standard of
                | ethics, and they know that the game is likely to run
                | away from them.
        
         | IIAOPSW wrote:
          | Honestly, the first one you described sounds like the far
          | better model to try to turn into a code base. No boilerplate,
          | no frivolous framework jigging, distilled to its raw functional
          | essence. A simple system of emergent complexity strikes me as a
          | much better candidate for intelligence than a kludge that gains
          | complexity just from having a lot of parts and rules to begin
          | with.
        
           | xg15 wrote:
            | Small systems aren't always simple or easy to understand.
            | Famous examples are Duff's Device [1] and the "wtf" fast
            | inverse square root function [2] (both sketched below). Both
            | are just a few lines long but usually leave people
            | scratching their heads until they learn the trick.
            | 
            | Especially with demoscene code, it's common to exploit all
            | kinds of specific hardware effects or to re-parse the
            | assembly code into a different instruction sequence [3].
            | This kind of stuff may be far more complex than a larger
            | system; it just hides its complexity in a compressed
            | representation.
           | 
           | [1] https://en.wikipedia.org/wiki/Duff%27s_device
           | 
           | [2] https://en.wikipedia.org/wiki/Fast_inverse_square_root#Ov
           | erv...
           | 
           | [3] https://reverseengineering.stackexchange.com/questions/20
           | 587...
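            | 
            | For reference, here are both tricks in C, lightly cleaned
            | up (this is the common copy-loop variant of Duff's Device
            | and the classic Quake III rsqrt constants; comments mine):
            | 
            |     /* Duff's Device: an unrolled copy loop that enters the
            |        middle of the loop body through a switch, so the
            |        remainder (count % 8) is handled on the first pass.
            |        Assumes count > 0. */
            |     void copy8(short *to, const short *from, int count) {
            |         int n = (count + 7) / 8;
            |         switch (count % 8) {
            |         case 0: do { *to++ = *from++;
            |         case 7:      *to++ = *from++;
            |         case 6:      *to++ = *from++;
            |         case 5:      *to++ = *from++;
            |         case 4:      *to++ = *from++;
            |         case 3:      *to++ = *from++;
            |         case 2:      *to++ = *from++;
            |         case 1:      *to++ = *from++;
            |                 } while (--n > 0);
            |         }
            |     }
            | 
            |     /* Fast inverse square root: reinterpret the float's
            |        bits as an integer, derive a first guess from a magic
            |        constant, then refine with one Newton step. Assumes
            |        32-bit int and IEEE-754 floats. */
            |     float q_rsqrt(float number) {
            |         float x2 = number * 0.5f, y = number;
            |         int i = *(int *)&y;          /* bit-level hack    */
            |         i = 0x5f3759df - (i >> 1);   /* magic first guess */
            |         y = *(float *)&i;
            |         y = y * (1.5f - x2 * y * y); /* Newton iteration  */
            |         return y;
            |     }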
        
       | flashfaffe2 wrote:
        | TEDx video from Ramin Hasani explaining his algorithm:
       | 
       | https://www.youtube.com/watch?v=RI35E5ewBuI
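        | 
        | The core idea of the "liquid" networks he describes is a
        | neuron whose effective time constant depends on its input.
        | Below is a minimal single-neuron sketch in C of the fused Euler
        | update from the liquid time-constant (LTC) paper; the scalar
        | sigmoid gate and all the names here are illustrative
        | simplifications, not the authors' code:
        | 
        |     #include <math.h>
        | 
        |     /* One fused Euler step of an LTC neuron:
        |        x' = (x + dt*f*A) / (1 + dt*(1/tau + f)),
        |        where f = sigmoid(w*input + b) stands in for the learned
        |        gating network. The input shifts not just the target
        |        state but the speed at which x moves toward it, which is
        |        what makes the time constant "liquid". */
        |     static float gate(float z) { return 1.0f / (1.0f + expf(-z)); }
        | 
        |     float ltc_step(float x, float input, float dt,
        |                    float tau, float A, float w, float b) {
        |         float f = gate(w * input + b);
        |         return (x + dt * f * A) / (1.0f + dt * (1.0f / tau + f));
        |     }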
        
       | [deleted]
        
       | super256 wrote:
       | Related discussion with 66 comments:
       | https://news.ycombinator.com/item?id=34707055
        
       ___________________________________________________________________
       (page generated 2023-02-18 23:00 UTC)