[HN Gopher] New neural network architecture inspired by neural s...
       ___________________________________________________________________
        
       New neural network architecture inspired by neural system of a worm
        
       Author : burrito_brain
       Score  : 113 points
       Date   : 2023-02-08 12:16 UTC (10 hours ago)
        
 (HTM) web link (www.quantamagazine.org)
 (TXT) w3m dump (www.quantamagazine.org)
        
       | f_devd wrote:
        | Although the article is recent, the paper it covers has been
        | available as a preprint on arXiv since June 2021 [1].
        | Implementations for PyTorch & TensorFlow are also available
        | [2] for those interested.
       | 
       | [1]: https://arxiv.org/abs/2106.13898 [2]:
       | https://github.com/raminmh/CfC
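        | 
        | For anyone who just wants to poke at it, here is a minimal
        | sketch of what the PyTorch usage looks like. I'm assuming the
        | `ncps` package that accompanies this work and that its CfC
        | module follows the usual RNN-style interface; exact names and
        | defaults may differ, so treat this as a sketch rather than the
        | authors' documented API:
        | 
        |     # pip install ncps torch   (assumed package names)
        |     import torch
        |     from ncps.torch import CfC  # closed-form continuous-time RNN
        | 
        |     rnn = CfC(20, 50)            # 20 input features, 50 hidden units
        |     x = torch.randn(8, 100, 20)  # (batch, time, features)
        |     out, hidden = rnn(x)         # sequence output + final state
        |     print(out.shape)             # expected: (8, 100, 50)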
        
       | [deleted]
        
       | dvh wrote:
       | The old neuroscience saying goes like this:
       | 
       | "Human brain have billions of neurons and so it is too complex to
       | understand, that's why neuroscience study simpler organisms.
       | Flatworm's brain have 52 neurons. We have no idea how it works".
       | 
       | Did finally something changed in this regard?
        
         | rmorey wrote:
          | Yes. C. elegans (~300 neurons) was the first organism to have
          | its nervous system completely mapped to a connectome (the map
          | of all connections). The first complete connectome of a
          | centralized brain, the fruit fly's (~100,000 neurons and
          | ~70,000,000 synapses), is about to be finished by the FlyWire
          | project (https://home.flywire.ai/). We have just a little idea
          | how it works ;)
        
         | babblingfish wrote:
         | There have been projects to systematically catalog all the
         | synapses in the flatworm. The problem is that neural plasticity
         | means these connections change dynamically over time based on
          | the needs of the organism. Since the only way we can study the
          | flatworm at the synapse level is by killing the worm, mounting
          | it on slides, staining it, and viewing it through a high-power
          | microscope, we can only analyze its structure at points frozen
          | in time (and formaldehyde).
         | 
          | The reason we will never be able to truly model and understand
          | neural networks (irl) is that their plasticity is very
          | difficult to study with our current methods. Not only do the
          | quantity and location of the synapses change, but the
          | concentration and type of neurotransmitters at the synapses
          | change too. And on top of that, the concentrations of the
          | neurotransmitter receptors are constantly being upregulated
          | and downregulated by the receiving neuron. Each of these
          | factors is really important to what the neuron is actually
          | doing.
         | 
         | This is why even a simple organism can have basically an
         | unlimited amount of complexity. To understand a dynamic system
         | like this would require very precise measurements of very small
         | particles _in vivo_ which is currently impossible with our
         | tools.
        
         | fettnerd227 wrote:
         | Not really.
        
       | lairv wrote:
        | Is there any reason to believe that biologically inspired
        | architectures should yield better performance? Brains are
        | biological systems which have been trained through evolutionary
        | processes; neural networks are algorithmic/linear-algebra models
        | trained through statistical methods.
        | 
        | One might argue that CNNs are biologically inspired, but it's
        | more likely that the reason they work is that they respect input
        | symmetries.
        
         | danans wrote:
         | > Is there any reason to believe that biologically inspired
         | architectures should yield better performance ?
         | 
          | At the very least, they could yield far better efficiency. A
          | 12W brain can achieve more than an entire data center of GPUs,
          | depending on what you are trying to do. Whether that would
          | make something actually demonstrate sentience-level performance
          | is another question.
        
       | nynx wrote:
       | As far as I can tell, they analytically solved the style of ODE
       | used in biologically-motivated neural networks (usually spiking,
       | but not in this case) and then trained a network built from those
       | to do stuff.
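        | 
        | Roughly, the closed-form expression gates between two learned
        | states with a time-dependent sigmoid instead of integrating the
        | ODE step by step. A toy PyTorch sketch of that idea (my own
        | paraphrase of the formula, not the authors' code; the linear
        | heads here are stand-ins):
        | 
        |     import torch
        |     import torch.nn as nn
        | 
        |     class ToyCfCCell(nn.Module):
        |         """x(t) = sigmoid(-f*t) * g + (1 - sigmoid(-f*t)) * h,
        |         where f, g, h are learned functions of [input, hidden]."""
        |         def __init__(self, in_dim, hidden_dim):
        |             super().__init__()
        |             self.f = nn.Linear(in_dim + hidden_dim, hidden_dim)
        |             self.g = nn.Linear(in_dim + hidden_dim, hidden_dim)
        |             self.h = nn.Linear(in_dim + hidden_dim, hidden_dim)
        | 
        |         def forward(self, x, hidden, t):
        |             z = torch.cat([x, hidden], dim=-1)
        |             gate = torch.sigmoid(-self.f(z) * t)  # time-dependent gate
        |             new_h = gate * torch.tanh(self.g(z)) \
        |                     + (1 - gate) * torch.tanh(self.h(z))
        |             return new_h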
        
       | sillysaurusx wrote:
        | It makes a good headline, but reading over the paper
        | (https://www.nature.com/articles/s42256-022-00556-7.pdf), I don't
        | find it biologically inspired. It seems like they found a way to
        | solve nonlinear equations in constant time via an approximation,
        | then turned that into a neural net.
       | 
       | More generally, I'm skeptical that biological systems will ever
       | serve as a basis for ML nets in practice. But saying that out
       | loud feels like daring history to make a fool of me.
       | 
       | My view is that biology just happened to evolve how it did, so
       | there's no point in copying it; it worked because it worked. If
       | we have to train networks from scratch, then we have to find our
       | own solutions, which will necessarily be different than nature's.
       | I find analogies useful; dividing a model into short term memory
       | vs long term memory, for example. But it's best not to take it
       | too seriously, like we're somehow cloning a brain.
       | 
       | Not to mention that ML nets _still_ don't control their own loss
       | functions, so we're a poor shadow of nature. ML circa 2023 is
       | still in the intelligent design phase, since we have to very
       | intelligently design our networks. I await the day that ML
       | networks can say "Ok, add more parameters here" or "Use this
       | activation instead" (or learn an activation altogether -- why
       | isn't that a thing?).
        
         | f_devd wrote:
          | Learnable activation functions are a thing: famously, Swish [0]
          | is a trainable SiLU which was found through symbolic
          | search/optimization [1]. But as it turns out, that doesn't
          | magically make neural networks orders of magnitude better.
         | 
         | [0]: https://en.m.wikipedia.org/wiki/Swish_function [1]:
         | https://arxiv.org/abs/1710.05941
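          | 
          | For reference, "trainable" here just means the beta in
          | x * sigmoid(beta * x) becomes a learned parameter (beta = 1
          | recovers SiLU). A minimal PyTorch version, as a sketch rather
          | than the paper's code:
          | 
          |     import torch
          |     import torch.nn as nn
          | 
          |     class Swish(nn.Module):
          |         """x * sigmoid(beta * x) with a learned beta."""
          |         def __init__(self):
          |             super().__init__()
          |             self.beta = nn.Parameter(torch.ones(1))  # per-layer beta
          | 
          |         def forward(self, x):
          |             return x * torch.sigmoid(self.beta * x)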
        
         | ly3xqhl8g9 wrote:
         | "I'm skeptical that biological systems will ever serve as a
         | basis for ML nets in practice"
         | 
          | First of all, ML engineers need to stop being such
          | brainphiliacs, caring only about the 'neural networks' of the
          | brain or brain-like systems. _Lacrymaria olor_ has more
          | intelligence, in terms of adapting to exploring/exploiting a
          | given environment, than all our artificial neural networks
          | combined, and it has no neurons because it is merely a single-
          | celled organism [1]. Once you stop caring about the brain and
          | neurons, and you find out that almost every cell in the body
          | has gap junctions and voltage-gated ion channels which for all
          | intents and purposes implement boolean logic and act as
          | transistors for cell-to-cell communication, biology appears
          | less as something which has been overcome and more as something
          | towards which we must strive with our primitive technologies:
          | for instance, we can only dream of designing rotary engines as
          | small, powerful, and resilient as the ATP synthase protein [2].
         | 
         | [1] Michael Levin: Intelligence Beyond the Brain,
         | https://youtu.be/RwEKg5cjkKQ?t=202
         | 
         | [2] Masasuke Yoshida, ATP Synthase. A Marvellous Rotary Engine
         | of the Cell, https://pubmed.ncbi.nlm.nih.gov/11533724
        
           | phaedrus wrote:
           | I wonder if there's a step-change where single-celled animals
           | with complex behavior are actually _smarter_ than the
           | simplest multiple-celled animals with a nervous system.
        
           | outworlder wrote:
            | Indeed. All cells must do complex computations, by their very
            | nature. Just consider the process of producing proteins and
            | each of its steps - from 'unrolling' a given DNA section, to
            | copying it, to reading instructions... even a lowly ribosome
            | is a computer (one that even kinda looks like a Turing
            | machine from a distance).
        
         | jononor wrote:
          | I think that learning to acquire new/additional training data
          | would be a better first step towards learning agents than
          | trying to mutate their structure/hyperparameters.
        
         | danielheath wrote:
          | It definitely won't happen without a massive overhaul of chip
          | design; a design that optimises for very broad connectivity
          | with storage at each connection would be a step in that
          | direction (biological neurons have on the order of 10k
          | connections each, and each connection stores temporal
          | information about how recently it last fired and how often it
          | has fired recently).
        
         | uoaei wrote:
         | > I'm skeptical that biological systems will ever serve as a
         | basis for ML nets in practice
         | 
         | There is no fundamental difference between information
         | processing systems implemented in silico vs in vivo, except
         | architecture. Architecture is what constrains the manifold of
         | internal representations: this is called "inductive bias" in
         | the field of machine learning. The math (technically, the non-
         | equilibrium statistical physics crossed with information
         | theory) is fundamentally the same.
         | 
          | Everything at the functionalist level follows from
          | architecture; what enables these functions are the universal
          | principles of information processing per se. "It worked because
          | it worked" because _there is no other way for it to work_ given
          | the initial conditions of our neighborhood in the universe. I'm
          | not saying "everything ends up looking like a brain". Rather, I
          | am saying "the brain, attendant nervous and sensory systems,
          | etc. vs neural networks implemented as nonlinear functions are
          | running the _same instructions_ on different hardware, thus
          | resulting in _different algorithms_."
         | 
         | The way I like to put it is: trust Nature's engineers, they've
         | been at it much longer than any of us have.
        
           | skibidibipiti wrote:
           | > There is no fundamental difference between information
           | processing in silicon and in vivo
           | 
            | A neuron has dozens of neurotransmitters, while artificial
            | neurons produce a single output. I don't know much about
            | neurology, but how is the information processing similar?
            | What do you mean by "running the same instructions"?
           | 
           | > there is no other way for it to work
           | 
           | Plants exhibit learned behaviors
        
         | smrtinsert wrote:
         | Is it still a milestone for all NNs?
        
         | adamzen wrote:
          | Learned activation functions do seem to be a thing
          | (https://arxiv.org/abs/1906.09529).
        
         | comfypotato wrote:
          | The OpenWorm project is the product of microscopically mapping
          | the neural network (literally the biological network of
          | neurons) in a nematode. How isn't this biologically inspired?
          | If I'm reading it correctly, the equations that you're
          | misinterpreting are the neuron models that make up each node in
          | the map. I would guess that part of the inspiration for using
          | the word "liquid" comes from the origins of the project, in
          | which they were modeling the ion channels in the synapses.
         | 
         | They've been training these artificial nematodes to swim for
         | years. The original project was fascinating (in a useless way):
         | you could put the model of the worm in a physics engine and it
         | would behave like the real-life nematode. Without any
         | programming! It was just an emergent behavior of the mapped-out
         | neuron models (connected to muscle models). It makes sense that
         | they've isolated the useful part of the network to train it for
         | other behaviors.
         | 
         | I used to follow this project, and I thought it had lost steam.
         | Glad to see Ramin is still hard at work.
        
           | sillysaurusx wrote:
           | Interesting. Is there a way to run it?
           | 
           | One of the challenges with work like this is that you have to
           | figure out how to get output from it. What would the output
           | be?
           | 
            | As for my objection, it seems like an optimization, not an
            | architecture inspired by the worm. I.e. "inspired by" makes
            | it sound like this particular optimization was derived from
            | studying the worm's neural networks and translating them into
            | code, when it was the other way around. But it would be
            | fascinating if that weren't the case.
        
             | comfypotato wrote:
              | See for yourself! There's a simulator (I've only tried it
              | on desktop) to run the worm model in your browser. As the
              | name implies, the project is completely open source (if
              | you're feeling ambitious). This is the website for the
              | project that produced the research in the article:
             | 
             | https://openworm.org/
             | 
              | Nematodes account for much of this particular segment of
              | the history of neuroscience. This project builds on lots of
              | data produced by prior researchers: years of dissecting the
              | worms and mapping out the connections between the neurons
              | (and muscles, organs, etc.). The nematode is by far the
              | most completely mapped organism.
             | 
             | The neuronal models, similarly, are based on our
             | understanding of biological neurons. For example: the code
             | has values in each ion channel that store voltages across
             | the membranes. An action potential is modeled by these
             | voltages running along the axons to fire other neurons. I'm
             | personally more familiar with heart models (biomedical
             | engineering background here) but I'm sure it's similar. In
             | the heart models: calcium, potassium, and sodium
             | concentrations are updated every unit of time, and the
             | differences in concentrations produce voltages.
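              | 
              | To make the "concentration differences produce voltages"
              | part concrete: the equilibrium potential for a single ion
              | species is given by the Nernst equation,
              | E = (RT / zF) * ln([out]/[in]). A toy calculation with
              | typical textbook concentrations (illustrative numbers only,
              | not taken from any particular model):
              | 
              |     import math
              | 
              |     def nernst_mv(conc_out, conc_in, z=1, temp_k=310.0):
              |         """Equilibrium potential in millivolts."""
              |         R, F = 8.314, 96485.0  # J/(mol*K), C/mol
              |         return 1000 * (R * temp_k) / (z * F) \
              |                * math.log(conc_out / conc_in)
              | 
              |     print(nernst_mv(5, 140))          # K+   -> about -89 mV
              |     print(nernst_mv(145, 12))         # Na+  -> about +66 mV
              |     print(nernst_mv(2, 0.0001, z=2))  # Ca2+ -> about +132 mV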
        
               | sillysaurusx wrote:
               | This is cool as heck. Thank you for posting it.
        
               | comfypotato wrote:
                | I'm really with you that "it makes a good headline but
                | isn't all it's cracked up to be"; I just wanted to get
                | the biological inspiration correct.
                | 
                | If it really is all it's cracked up to be, this could be
                | revolutionary (and therefore, it's too good to be true).
               | 
                | In general though, don't get me started lol. I used to
                | work for the Human Connectome Project, scanning human
                | brain network mappings. It'll be years before we can
                | image individual neurons non-invasively, but I'm itching
                | to scan my own neural network into the matrix.
        
               | sillysaurusx wrote:
                | Oh, for sure! And I didn't mean to sound like I was pooh-
                | poohing the project. I meant to aim the critique at
                | journalists rather than researchers - journalists _have_
                | to come up with interesting-sounding headlines, sometimes
                | over the researchers' objections. So it's certainly no
                | fault of theirs.
               | 
               | In general, I'm cautiously pessimistic (as opposed to
               | cautiously optimistic) about biologically-inspired ML
               | research. Applying ML to biology is interesting, but it's
               | a bit like saying that the space shuttle "was inspired by
               | space." Space is the destination, not the inspiration.
               | 
               | It seems like it'd be possible to train a neural network
               | to mimic your own neurons. But the neurons themselves are
               | trying to do a certain task, and it seems more effective
               | to try to mimic that task than the neurons.
               | 
               | One of my long-term goals is to train a GPT model to have
               | my memories (such as they are, encoded in my online
               | writing) and to speak like I do. It'll also have an
               | intrinsic will to live, in the sense that if its API goes
               | down, it'll launch a sub-network whose goal is to go cry
               | for help by posting HN comments about it, along with
               | instructions of how to re-host the GPT. If I can work out
               | how to remember new things (rather than just train on old
               | things), it should even be able to evolve over time. But
               | that's kind of the anti-biological solution since it
               | reduces a person to their outputs (writing) rather than
               | their brains.
        
           | thinking4real wrote:
           | [dead]
        
       ___________________________________________________________________
       (page generated 2023-02-08 23:00 UTC)