[HN Gopher] Learning produces a hippocampal cognitive map in the...
       ___________________________________________________________________
        
       Learning produces a hippocampal cognitive map in the form of a
       state machine
        
       Author : sebg
       Score  : 173 points
       Date   : 2023-08-11 01:57 UTC (2 days ago)
        
 (HTM) web link (www.biorxiv.org)
 (TXT) w3m dump (www.biorxiv.org)
        
       | Animats wrote:
       | Actual paper text.[1]
       | 
        | Here's how the mouse's brain is scanned: very intrusively.[2]
        | That's both impressive and scary. They're scanning the surface
        | of part of the brain at a good scan rate and at high detail.
        | They're seeing the activation of individual neurons. This is
        | much finer detail than non-intrusive functional MRI scans.
       | 
       | Does the data justify the conclusions? The maze being used is a
       | simple T-shaped maze. The "state machine" supposedly learned is
       | extremely simple. They conclude quite a bit about the learning
       | mechanism from that. But now that they have this experimental
       | setup working, there should be more results coming along.
       | 
       | [1]
       | https://www.biorxiv.org/content/10.1101/2023.08.03.551900v2....
       | 
       | [2]
       | https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-01...
        
         | ImHereToVote wrote:
         | So technically we have the technology to simulate a human
         | brain. Just not anywhere near real time. And not at any
         | semblance of reasonable cost. And not guaranteed to simulate
         | the important parts.
        
           | marcosdumay wrote:
            | If you mean it in the trivial sense that "computers must in
            | principle be able to simulate a brain", then yes, and that
            | has been obvious for many decades already.
           | 
            | If you want to say that we know what algorithms to use to
            | simulate a brain, then no. This paper is one advance toward
            | knowing those algorithms, but it does not go all the way
            | there.
        
           | aatd86 wrote:
           | At the civilian level. Who knows what exists out there... :|
        
             | jiggawatts wrote:
             | The civilian level is the state of the art. The chip
             | industry is at the cutting edge, there is nothing beyond it
             | that is available _at scale_ , and in this instance: scale
             | matters!
             | 
             | There are some small exceptions of course: RSFQ digital
             | logic is insanely fast (hundreds of gigahertz), but nobody
             | has scaled it to large integrated circuits.
             | 
             | Supercomputers are built with somewhat esoteric parts, but
             | not secret unobtainium. At least in principle the same RDMA
             | switches and network components are commercially available.
             | Similarly, the specialised CPUs like the NVIDIA Grace
             | Hopper are available, although I doubt any wholesalers have
             | it in stock!
             | 
              | To believe otherwise is to believe that governments
              | (plural!) have secretly hidden tens of billions in
              | cutting-edge chip fabs, tens of billions in chip design
              | shops, and more.
             | 
             | In reality the government buys their digital electronics
             | from the same commercial suppliers you and I do.
             | 
             | Only a handful of specialised circuits are made in secret,
             | such as radar amplifiers.
        
               | aatd86 wrote:
                | Are processors and processor speed the only limiting
                | factor in terms of applications? (Probably they are
                | fast enough anyway and could be a non-factor;
                | communication between neurons is not that fast compared
                | to clock speed, if I remember correctly.)
                | 
                | Especially in an era in which the recorded data can be
                | fed to an algorithm that approximates dynamic brain
                | maps with more or less accuracy?
        
               | jiggawatts wrote:
               | One concern is the lack of ethics, or more accurately,
               | the different ethical considerations in the spy agencies.
               | 
               | They have every motivation to capture personal phone
               | calls and text chats in bulk and run them all through an
               | LLM-like training regime so that they can ask it
               | questions like: "Does so-and-so plan a terrorist attack?"
               | 
               | Somewhere out there in an NSA data centre there is a
               | model being trained on your emails, right now.
        
         | 60654 wrote:
         | There appear to be two _very_ interesting results:
         | 
          | 1. We can observe how the state machine gets generated: first
          | just a jumble of locations in a hub-and-spokes topology (no
          | correlations), then some pairwise correlations start forming,
          | making a kind of beads-on-a-string topology, and then finally
          | the mental model snaps marvelously to two completely separate
          | paths that meet at the ends. It's amazing to see these mental
          | models get formed _in vivo_ out of initially unstructured
          | perceptions.
         | 
          | 2. In addition to standard HMM modeling, the authors find
          | that a "biologically plausible recurrent neural network (RNN)
          | trained using Hebbian learning" can mimic some of this (but
          | not exactly). More interestingly, they find that LSTMs and
          | transformers _cannot_. Which makes sense structurally, but
          | it's a good reminder for those who believe the
          | anthropomorphic hype that transformers have memory or the
          | like (they don't :) ).
         | 
         | The scanning is indeed very intrusive, though.
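          | 
          | The Hebbian rule quoted above can be sketched as an outer-
          | product weight update (a minimal illustration, not the
          | paper's code; the network size, learning rate, and one-hot
          | "maze states" are made up):

```python
import numpy as np

n = 32                      # number of units (illustrative size)
W = np.zeros((n, n))        # recurrent weights: no associations yet
eta = 0.1                   # learning rate (illustrative)

def hebbian_step(W, x_prev, x_curr, eta=0.1):
    """Strengthen weights between co-active units ("fire together,
    wire together"): outer product of consecutive activity vectors."""
    W = W + eta * np.outer(x_curr, x_prev)
    np.fill_diagonal(W, 0.0)    # no self-connections
    return W

# Toy trajectory: one-hot activity visiting states 0 -> 1 -> 2,
# like positions along one arm of a maze.
xs = [np.eye(n)[s] for s in (0, 1, 2)]
for x_prev, x_curr in zip(xs, xs[1:]):
    W = hebbian_step(W, x_prev, x_curr, eta)

# After learning, activity in state 0 most strongly drives state 1:
pred = W @ xs[0]
assert pred.argmax() == 1
```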
        
           | calf wrote:
           | Couldn't memoryless neural networks still possibly learn the
           | Next-State function of a Finite-State machine? Depending on
           | the training algorithm. Especially if the eventual usage of
           | such networks is to be called over and over again to generate
           | the next token; conceptually this to me seems analogous to
           | the process of finitely unrolling a while loop or a computer
           | pipeline.
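            | 
            | That unrolling idea in a toy (illustrative; the parity FSM
            | and its labels are made up): a memoryless next-state
            | function, called once per input with the state threaded
            | back in by the caller, reproduces the whole machine.

```python
# A memoryless map can implement a finite-state machine if the caller
# feeds the state back in on every call - one call per input, like one
# forward pass per generated token. Toy FSM: parity of 1-bits seen.

delta = {                   # next-state function (the only "memory"
    ("even", 0): "even",    # lives in the caller's loop variable)
    ("even", 1): "odd",
    ("odd", 0): "odd",
    ("odd", 1): "even",
}

def run(bits, state="even"):
    for b in bits:
        state = delta[(state, b)]   # memoryless call, state threaded through
    return state

assert run([1, 1, 0, 1]) == "odd"   # three 1s -> odd parity
```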
        
       | padolsey wrote:
        | It seems to reflect the general way we understand the brain,
        | right? Neurons that fire together wire together? Then ~ abra
        | cadabra ~ meaningful blobs of brain buzzy stuff emerge from
        | seemingly simple rules? It seems beautifully pure that mind
        | maps are literally "mind maps" in a sense, a bit like how we
        | have grid cells arranged proximally to mirror physical spaces
        | as we walk through them.
        
         | samstave wrote:
          | We need to study hummingbirds and ravens more deeply.
          | 
          | They have the largest hippocampus-to-brain ratio, and they
          | have 3D spatial memory of all their food source locations, as
          | well as (with ravens) of who their human enemies are.
          | 
          | We can learn a lot about memory from these birds.
        
       | pizzafeelsright wrote:
       | Teaching my kid has been fun and educational.
       | 
       | The rule in the house is we don't say "I don't know." If we don't
       | know something, we are required to think about it and then ask a
       | question.
       | 
        | Recently, he asked how an audio-recording dog trainer worked,
        | in terms of how it "went back up", because he couldn't see the
        | internals. He knew that it went down and back up, and he knew
        | that it was not electronic but mechanical. I asked him to think
        | about it. He thought, and I could see his mind working, going
        | through everything in his mind where a toy of his would go down
        | and up. He sat for around 20 seconds and asked, "Is it a
        | spring?" I was quite impressed, considering he is 4 years old,
        | that he was able to come to this conclusion.
       | 
        | There is a map we create, a list of things that go up and
        | down. From that list, knowing it was not a pulley or a plunger,
        | because it returned to its original state, he was able to
        | narrow it down to the one mechanism that would work.
       | 
        | The biggest jumps in my education have been directly related
        | to people mapping concepts and ideas instead of memorization.
        | Take the idea that everything is a file. From that framework
        | you can pull out questions like: can I read it, do I have
        | permission to read, can I write? So now when someone explains
        | some new shiny object to me, like a fancy whiz-bang database,
        | I can ask a couple of questions and generally know how it
        | works.
        
         | aio2 wrote:
          | That's something I do too, but a bit modified: instead of
          | going to someone with a problem, we try to figure it out
          | ourselves, and if we can't, we go and ask for help while
          | explaining our attempted solution.
        
       | throwaway290 wrote:
       | (in mice)
        
         | SubiculumCode wrote:
          | There are some important differences between mice and human
          | hippocampi, including different long-range connections.
          | However, the overall patterns of organization across the
          | hippocampal subfields, e.g. heavy recurrence in CA3, sparse
          | separation of signals in the dentate gyrus, etc., are very
          | similar in structure and response patterns between species.
          | Gotta love the spiking data in human epilepsy patients.
        
         | insanitybit wrote:
         | My naive perspective is that the foundational properties of the
         | brain are probably really similar between mice and humans. For
         | example, we have put human brain cells into rats and the brain
         | cells have done... something.
         | 
         | The chemistry is probably different in a bunch of ways "rats
         | evolved to use this hormone to feel X, we use it to tell us Y"
         | or some other such thing, but structurally I'd imagine that
         | neurons function similarly.
         | 
         | Anyone know more?
        
           | chaxor wrote:
            | Lack of a Wernicke's area, and less similarity in semantic
            | representation, is a big difference, for one.
        
           | convolvatron wrote:
            | Even if there are structural differences, just the fact
            | that we can establish a loose isomorphism between our
            | knowledge about computational models and the cognitive
            | processes of any living creature seems like a really
            | profound step forward for cognitive science, if it holds
            | up.
            | 
            | There isn't any issue about clinical relevance.
        
             | insanitybit wrote:
             | I agree
        
         | xjay wrote:
         | 2020-12: Some research done with human subjects regarding how
         | the brain reacts when we're reading code.
         | 
         | > The researchers saw little to no response to code in the
         | language regions of the brain. Instead, they found that the
         | coding task mainly activated the so-called multiple demand
         | network. This network, whose activity is spread throughout the
         | frontal and parietal lobes of the brain, is typically recruited
         | for tasks that require holding many pieces of information in
         | mind at once, and is responsible for our ability to perform a
         | wide variety of mental tasks. [1]
         | 
         | [1] https://news.mit.edu/2020/brain-reading-computer-code-1215
        
         | Nowado wrote:
         | It bothers me way less when it's not a medical treatment
         | research.
        
       | giardini wrote:
       | tl;dr or ELI5 please?
        
         | gridspy wrote:
          | A hidden state machine plus a neural net appears to be
          | similar to how mice learn to navigate a maze.
          | 
          | If you hold them still and probe their brain while they
          | navigate in VR, you see a state-machine map appear in their
          | mind. That map varies as the VR maze varies.
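          | 
          | A cartoon of such a hidden state machine for a T-maze (the
          | states and transition probabilities here are invented for
          | illustration, not taken from the paper):

```python
import numpy as np

# Hidden states of a toy T-maze: 0=start, 1=junction, 2=left arm,
# 3=right arm. Rows are "from" states; each row sums to 1.
T = np.array([
    [0.0, 1.0, 0.0, 0.0],   # start -> junction
    [0.0, 0.0, 0.5, 0.5],   # junction -> either arm, 50/50
    [0.0, 0.0, 1.0, 0.0],   # left arm absorbs
    [0.0, 0.0, 0.0, 1.0],   # right arm absorbs
])

p = np.array([1.0, 0.0, 0.0, 0.0])   # certainly in "start" at t=0
for _ in range(2):                    # propagate two transitions
    p = p @ T

# After two steps the mouse is in one of the arms with equal odds:
assert np.allclose(p, [0.0, 0.0, 0.5, 0.5])
```

A full HMM would add per-state emission probabilities over the
observed neural activity; this shows only the transition structure.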
        
         | x86x87 wrote:
         | The brain is just a bunch of ifs
        
       | hanniabu wrote:
       | I really hate how people always try to put programming labels on
       | brain activity
       | 
       | "This is a state machine", "this is a natural net", "it's running
       | a coroutine", "it's garbage collection"...
        
       | mdp2021 wrote:
        | Good news (is suggested), since the hippocampus is one of the
        | few places in which neural regeneration is possible.
        
       | bannedbybros wrote:
       | [dead]
        
       | ilaksh wrote:
       | They mention an RNN with Hebbian learning. What's the SOTA for
       | that? I found this: https://github.com/JonathanAMichaels/hebbRNN
       | 
       | Is there anything optimized for GPU or TPU?
        
       ___________________________________________________________________
       (page generated 2023-08-13 23:00 UTC)