[HN Gopher] How Animals Map 3D Spaces Surprises Brain Researchers
       ___________________________________________________________________
        
       How Animals Map 3D Spaces Surprises Brain Researchers
        
       Author : jonbaer
       Score  : 74 points
       Date   : 2021-10-15 11:04 UTC (11 hours ago)
        
 (HTM) web link (www.quantamagazine.org)
 (TXT) w3m dump (www.quantamagazine.org)
        
       | MR4D wrote:
       | How birds can fly into a tree at speed, and land on a branch
       | without poking an eye out (or anything else) several times per
       | day is amazing to me.
       | 
       | "Bird-brain" may be an insult to people, but I'd bet their 3-D
       | processing capabilities blow away an RTX 3090 - especially when
       | considering power use and heat.
       | 
       | We have a lot left to learn from them.
        
         | qq4 wrote:
          | Birds are also much smaller than you and me. Bugs, fish,
          | humans, etc. can all navigate a complex environment at speed.
        
         | hypertele-Xii wrote:
         | And yet you'd be hard-pressed to find a single human on the
         | planet willing to trade their brain in for a GPU.
        
           | crazysim wrote:
           | Pretty sure there are a bunch on Mechanical Turk. Slow
           | though.
        
           | trhway wrote:
            | You're actually describing NVDA's next major revenue
            | source. Initially, of course, it will just be a co-
            | processor to the wet-ware, and with time it will start to
            | beat it in computational power, features, ...
            | Upgradeability and cloud connectivity are of course among
            | its advantages from the start.
        
         | trhway wrote:
          | The trick I've observed crows doing - somewhat like the
          | opening of the Cobra maneuver - is that they come in about a
          | foot below the branch (or whatever their landing target is)
          | and, by quickly angling up, extinguish their horizontal speed
          | by trading it for that foot of height - i.e. they arrive over
          | the branch with zero horizontal speed. That way they avoid
          | the horizontal slowdown in level flight that would otherwise
          | be needed, and the associated stall risk.
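          | 
          | A rough back-of-the-envelope check of that trade (a sketch
          | assuming roughly a foot of climb and ignoring drag and any
          | wing thrust): the horizontal speed a climb of height h can
          | cancel follows from v^2 = 2gh.
          | 
          |     import math
          | 
          |     g = 9.81   # gravitational acceleration, m/s^2
          |     h = 0.3    # roughly "one foot" of climb, in metres
          | 
          |     # Horizontal speed fully traded for that climb, ignoring
          |     # drag and any force from the wings (simplifying
          |     # assumptions).
          |     v = math.sqrt(2 * g * h)
          |     print(f"{v:.1f} m/s of approach speed cancelled")  # ~2.4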
        
         | clairity wrote:
          | also amazing are gibbons. they literally fly through 3d tree-
          | space using branches that flex generously, and the split-
          | second routing calculations they make seem unbelievable.
        
         | [deleted]
        
       | doodlebugging wrote:
        | After reading this article I have decided that the researchers'
        | earlier conclusions about how rats keep track of their
        | environment suffer from a serious defect introduced by the
        | researchers at the start.
        | 
        | Their earlier work involved studying rat and bat navigation in
        | unnatural environments - a "2D" space which was really a 3D
        | space constrained to a single level. This forced, or at least
        | allowed, the rats to dumb things down to a situation where they
        | had no need or opportunity to consider anything outside that
        | unnaturally limited space, so their brains naturally optimized
        | for that simplistic situation.
       | 
        | You have dumbed down their environment to the point where only
        | a minimal set of abilities is needed to function and navigate
        | in it.
       | 
       | Then when you discover that their memory encodes navigational
       | information from this dumbed-down, unnatural environment in a
       | regularized grid, this should be no surprise. They are minimizing
       | the energy required to thrive in that environment.
       | 
        | Once you allow them to navigate in a more natural environment
        | you should expect to see this regular grid disappear since
        | their options for reaching a destination are no longer
        | constrained to a single path along most of the route to their
        | destination. Their brains will have to incorporate many more
        | clues to guide their route selection, and those clues could be
        | time-varying: when tracking the source of an odor while
        | searching for food, they will need to monitor air flow and
        | direction, odor intensity, potential obstacles between
        | themselves and the source, alternate routes that present
        | themselves as they move through space, threats from predators
        | or bait traps, etc. In short, so many variables need to be
        | evaluated once you remove the flat-plane constraints that led
        | to the grid discovery that it should be no surprise that a more
        | complex environment uses different optimizations - some encoded
        | in a regularized notation and others, probably situational
        | events on the path-picking decision tree, more random.
        
         | exporectomy wrote:
         | > you should expect to see this regular grid disappear since
         | their options for reaching a destination are no longer
         | constrained to a single path along most of the route to their
         | destination.
         | 
          | That's a post hoc rationalization. All those complicating
          | factors can exist in 2D too. Are you predicting that the 2D
          | hexagonal structure would also be lost in a 2D environment
          | with odors etc., and that the conclusion about 2D vs. 3D is
          | therefore false?
        
         | bckr wrote:
         | I think it's important to see all of these situations. The very
         | fact that the brain optimizes its maps based on constrained
         | environments is awesome. Also, the technology had to be built
         | to do the previous 2d studies, which then bootstrapped the 3d
         | studies. Knowing that under certain circumstances the neural
         | maps had a very "clean architecture" may have saved a lot of
         | time and effort once these more random 3d structures were being
         | analyzed.
        
           | doodlebugging wrote:
           | I agree. This is awesome work and is a great starting point.
           | The regular gridding to me indicates that the rats learned
           | the routes and found them easy and maybe even boring so after
           | a point, they were not challenged intellectually in the
           | navigation. Once their environment was modified, the rats
           | began to engage more of their brain to learn and manage the
           | choices that each situation required.
           | 
           | I just thought it was funny that the researchers were
           | surprised to see such a large difference in how rats handle
           | complexity when we know going in that rats are pretty
           | intelligent. Those rats probably know each researcher, have
           | favorites, remember routes, and get bored with those tasks
           | like we would be. It becomes muscle memory for them so they
           | go through the motions to get the treat they know is waiting.
        
         | jjoonathan wrote:
         | Ruthless problem reduction and minimization is good
         | experimental design. "Let's, like, let all the variables vary,
         | man" is a recipe for failing to learn anything at all, rather
         | than learning one small piece of a bigger puzzle.
        
           | doodlebugging wrote:
           | It is a lot like programming. Start off defining the
           | algorithm to handle the general case and refine it until it
           | works with no issues. Then add in edge cases so that you tune
           | it to handle more real-world situations. That is the way to
           | build a solid, fail-safe code base. If you truly understand
           | the problem you can eventually code enough to handle any
           | real-world scenario. Don't be surprised if the code looks
           | hairy and nothing like the original though.
           | 
            | In their case, they started with a simple, generalized "2D"
            | case and concluded after study that a real-world scenario
            | would look very similar, or would be a neat permutation of
            | their original grid-optimized discovery. Instead they
            | discovered that the rat brains tune their navigation to the
            | difficulty of the task, and that once you add enough
            | difficulty, the brain spreads its computational resources
            | in a different optimization to handle the newfound
            | complexity. Patterns were still detectable, but they did
            | not follow the original hexagonal grid optimization
            | discovered in the initial tests.
           | 
            | This is, as you say, a process of learning one small piece
            | of a bigger puzzle. In this case, they found that the
            | complexity of the puzzle is higher than the initial guess
            | they made after collating and analyzing the data from the
            | simple experiments. Useful things were learned, but my
            | point was that they apparently made assumptions based on a
            | very simple case that turned out not to be entirely
            | accurate - and that they seem surprised by this, instead of
            | accepting that the simplicity of the tasks used to generate
            | the data may have biased the results. No doubt they will
            | continue to add complexity, learn new things, and win more
            | awards for their pioneering work.
        
       | TheCoreh wrote:
        | Reading the article I couldn't stop thinking: isn't 3D space
        | perhaps being mapped by a "crumpled" 2D space-filling surface?
        | 
        | Real-world environments are rarely truly 3D, and a 2D surface
        | is perhaps the sweet spot in complexity/degrees of freedom for
        | most practical cases. This would explain why 3D mazes are so
        | much more disorienting than 2D mazes, for example.
        
         | JabavuAdams wrote:
         | Sorry if I'm Mx-splaining -- I don't know your background, but
         | I'm interested in/working on this stuff.
         | 
         | Your crumpled space-filling surface makes me think of
         | manifolds.
         | 
         | Machine-learning researchers often speak loosely of e.g. the
         | "manifold"* of possible images of naturalistic scenes embedded
         | within the space of all possible images. The networks are
         | presumably learning a lower-dimensional representation of the
         | world than the dimension of all possible combinations of sense
          | impressions. If you have a 100 pixel by 100 pixel image and
          | each pixel can have 256 intensity levels, then that's 256 to
          | the power of ten thousand possible distinct images. If each
          | image is a point in an abstract space of all possible images,
          | then that space has ten thousand dimensions (one per pixel)
          | and 256 to the ten-thousandth power points in it. But the
          | vast, vast, vast majority of the volume of that space
          | corresponds to images that just look like static to humans. So
         | the thinking is that we internally represent images as some
         | learned non-linear transformation to a much lower dimensional
         | set of features that actually correspond to stuff we experience
         | / see.
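          | 
          | As a toy version of that counting (a minimal sketch using the
          | same 100x100, 256-level numbers as above): each image
          | flattens to a 10,000-dimensional vector, and the number of
          | distinct images is 256^10000.
          | 
          |     import numpy as np
          | 
          |     # One 100x100 greyscale image, 256 intensity levels per
          |     # pixel.
          |     img = np.random.randint(0, 256, size=(100, 100))
          | 
          |     # As a point in "image space": a 10,000-dimensional
          |     # vector.
          |     x = img.reshape(-1)
          |     print(x.shape)  # (10000,)
          | 
          |     # The count of distinct images is 256**10000; print how
          |     # many digits that number has.
          |     print(int(10000 * np.log10(256)) + 1)  # ~24083 digits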
         | 
          | A really simple canonical example is the Swiss Roll dataset.
          | You only need two parameters (numbers) to fully specify a
          | point on it, but it is embedded (nonlinearly) in 3D space.
         | http://people.cs.uchicago.edu/~dinoj/manifold/swissroll.html
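          | 
          | A minimal sketch of that construction (parameter ranges are
          | arbitrary choices): two numbers per point, embedded
          | nonlinearly in 3D.
          | 
          |     import numpy as np
          | 
          |     rng = np.random.default_rng(0)
          |     n = 1000
          | 
          |     # Two intrinsic parameters per point.
          |     t = rng.uniform(1.5 * np.pi, 4.5 * np.pi, size=n)
          |     y = rng.uniform(0.0, 10.0, size=n)
          | 
          |     # Nonlinear embedding of the 2D parameters into 3D.
          |     roll = np.column_stack([t * np.cos(t), y, t * np.sin(t)])
          |     print(roll.shape)  # (1000, 3): 3D points, 2D structure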
         | 
         | In terms of neuroscience, there are competing ideas (as
         | always). There's a body of work that tries to show that as we
         | learn, the brain encodes our high-dimensional sense-impressions
         | in the lowest possible (most efficient) internal
         | representations. However, some recent work seems to imply that
         | instead the brain uses as high a dimension as possible, but up
         | to some limit that demarcates the transition between being
         | differentiable and not differentiable. They found a beautiful
         | power-law:
         | https://www.biorxiv.org/content/10.1101/374090v1.full
         | 
          | * When ML researchers speak of such a manifold, it's kind of
          | loosey-goosey, because a set that includes isolated points
          | without a continuous region around them isn't actually a
          | manifold. In contrast, in the computer graphics and meshing
          | literature, people speak of non-manifold geometry: isolated
          | points and lines (i.e. 0D or 1D elements) that you can't
          | triangulate with 2D or 3D elements.
        
           | whatshisface wrote:
            | > _When ML researchers speak of such a manifold, it's kind
            | of loosey-goosey, because a set that includes isolated
            | points without a continuous region around them isn't
            | actually a manifold._
           | 
              | So they mean "subset" when they say "manifold"?
        
             | JabavuAdams wrote:
             | I suppose so, but they really are trying to convey that all
             | the points lie close to some lower-dimensional crumpled
             | surface. So a subset that was e.g. just a lattice in the
             | high-dimensional space wouldn't fit this mental image.
             | 
             | E.g. Generate 3d points that are on a 2d plane +- some
             | small random offset normal to the plane. If the points are
             | isolated, without each having a local neighbourhood, it's
             | not technically a manifold. But, you could describe the
             | data-set as lying on or near a plane to within some
             | tolerance.
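              | 
              | A minimal version of that construction (plane, noise
              | scale and point count are arbitrary choices):
              | 
              |     import numpy as np
              | 
              |     rng = np.random.default_rng(0)
              |     n = 500
              | 
              |     # Points on the 2D plane z = 0 ...
              |     xy = rng.uniform(-1.0, 1.0, size=(n, 2))
              | 
              |     # ... plus a small random offset normal to it.
              |     z = rng.normal(scale=0.01, size=(n, 1))
              |     points = np.hstack([xy, z])
              | 
              |     # Isolated points, so not literally a manifold, but
              |     # all within a small tolerance of the plane z = 0.
              |     print(np.abs(points[:, 2]).max())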
             | 
             | So they're trying to describe something that is much more
             | specific / strongly constrained than an arbitrary subset,
             | but it doesn't meet the very stringent (and frankly
             | idealized) requirements of a manifold. I wonder whether
             | there is a math-object to describe this?
             | 
             | EDIT> Maybe it's just a matter of saying "close to some
             | lower-dimensional manifold", rather than "on a lower-
             | dimensional manifold."
        
               | whatshisface wrote:
               | > _I wonder whether there is a math-object to describe
               | this?_
               | 
               | Here is how I would phrase it: there is a probability
               | distribution in the higher-dimensional space that
               | expresses P(this image | given that it's a natural
               | image). The level sets of the probability distribution
               | are manifolds.
        
           | TheCoreh wrote:
           | > Sorry if I'm Mx-splaining -- I don't know your background
           | 
           | Not at all! All of this is far more advanced than my
           | knowledge on the topic, and very interesting! Thanks for
           | sharing
           | 
            | That idea of representing in the highest dimensionality
            | possible, subject to some constraint, is also very
            | interesting. In that case, perhaps 3D space is being
            | represented in a higher-dimensional form that makes it more
            | convenient for some neural processing purpose (e.g. just
            | like we use homogeneous coordinates). The 2D-2D case is
            | then just a happy coincidence where the highest
            | representation that makes sense maps 1-to-1 with the actual
            | data.
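            | 
            | For what it's worth, a minimal sketch of the homogeneous-
            | coordinates trick (standard graphics math, not from the
            | article): adding a fourth coordinate lets translation,
            | which isn't linear in 3D, become a single matrix multiply.
            | 
            |     import numpy as np
            | 
            |     p = np.array([1.0, 2.0, 3.0, 1.0])  # (1, 2, 3), w = 1
            | 
            |     # Translation by (5, 0, -2) as a 4x4 matrix; there is
            |     # no 3x3 linear map that does this.
            |     T = np.array([[1.0, 0.0, 0.0, 5.0],
            |                   [0.0, 1.0, 0.0, 0.0],
            |                   [0.0, 0.0, 1.0, -2.0],
            |                   [0.0, 0.0, 0.0, 1.0]])
            | 
            |     print(T @ p)  # [6. 2. 1. 1.] -> point (6, 2, 1)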
        
             | JabavuAdams wrote:
             | We should be careful to distinguish between the
             | dimensionality of the physical space, the dimensionality of
             | image data coming in from the retina, and the
             | dimensionality of the navigational representation.
             | 
              | Going back to the image example, a 100x100 pixel image is
              | 2D in that it can be shown on a screen, printed on a
              | page, or laid flat on a plane. But the (abstract) space
              | of all possible images contains #intensities_per_pixel to
              | the 100x100 = 10000 power distinct images.
             | 
             | It's abstract in that each point in this space is not a
             | location in the external world, but specifies one
             | particular image. If you're familiar with phase spaces or
             | configuration spaces from physics, it's like that.
             | 
              | The other thing is that we don't seem to have direct
              | access to a 2D or 3D position-tracker sense. So instead,
              | we have to build up some internal representation for
              | navigation, based on our senses, which (as outlined above
              | for images) are much higher-dimensional than the
              | physically allowable positions in the world they're
              | sensing. Robotics SLAM is one approach to this problem.
             | 
              | Then finally, there's the dimensionality of the neural
              | representation itself. Let's say your internal navigation
              | map is represented by the firing pattern of a population
              | of neurons (i.e. more than one). Define a time-step, say
              | one millisecond. For simplicity, consider each neuron to
              | just have an "activity" in the range 0.0 to 1.0 per time-
              | step, e.g. by counting the number of spikes emitted in
              | that time-step and dividing by some number to get things
              | into a nice range. So now you can represent the activity
              | of a neuron by one number per time-step. If you have a
              | population of 100 neurons, then that's a string of 100
              | numbers. But ... you can also think of it as a point in a
              | 100-dimensional space. Each point is one particular
              | pattern of neural activations. The entire space is all of
              | the possible patterns of neural activations. Again, this
              | is not a space in the sense of physical positions in the
              | world or in the brain. It's an abstract space. But all
              | the vector space math works.
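              | 
              | A minimal sketch of that encoding (bin size, population
              | size and the scaling divisor are the assumptions stated
              | above):
              | 
              |     import numpy as np
              | 
              |     rng = np.random.default_rng(0)
              |     n_neurons = 100
              |     scale = 10  # "some number" to get a nice range
              | 
              |     # Spike counts for one time-step, one per neuron.
              |     counts = rng.integers(0, scale + 1, size=n_neurons)
              | 
              |     # The population's activity pattern: one point in a
              |     # 100-dimensional space.
              |     activity = counts / scale
              |     print(activity.shape)  # (100,)
              | 
              |     # Vector-space math applies, e.g. distance between
              |     # two activity patterns.
              |     other = rng.integers(0, scale + 1, n_neurons) / scale
              |     print(np.linalg.norm(activity - other))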
             | 
              | So we're perceiving 3D by means of retinas that take way
              | more than 3 measurements at an instant, and maybe our
              | brain is finding correlations so that it can represent
              | these in a distributed way with dimensionality <<
              | input_dimensions, but > world_dimensions.
        
       | pharke wrote:
        | So are the hexagonal lattices for 2D grid cells just an
        | artifact of whatever sphere-packing[0] algorithm the neurons
        | are using?
       | 
       | [0]https://en.wikipedia.org/wiki/Sphere_packing
        
         | JabavuAdams wrote:
          | I love sphere packing, but we don't know that neurons do
          | sphere packing. Check out Shimada et al.'s Bubble Mesh for
          | some fun sphere-packing algos.
        
           | pharke wrote:
           | I find this paragraph from the article intriguing
           | 
           | > But the grid cells' firing wasn't entirely random either.
           | Instead, there was local order: For each grid cell, the
           | places where it fired weren't arranged in a perfect periodic
           | lattice, but the distances between them were too regular to
           | be merely a matter of chance. Rather than the neat stack of
           | oranges, the researchers were seeing something similar but
           | less orderly, more like marbles filling a box. "They're
           | always stuck in some local minimum, such that there is not a
           | lattice," Ulanovsky said. "On the other hand, the local
           | distances there are fixed, because all the [marbles] are sort
           | of touching their neighbors."
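            | 
            | A crude way to see that "local order without a lattice"
            | effect (purely illustrative, not the paper's analysis):
            | pack points at random subject only to a minimum
            | separation, then look at nearest-neighbour distances.
            | 
            |     import numpy as np
            | 
            |     rng = np.random.default_rng(0)
            |     min_sep = 1.0  # "marble" diameter, arbitrary units
            |     pts = []
            | 
            |     # Naive random packing: keep a candidate point only if
            |     # it stays at least min_sep from every accepted point.
            |     while len(pts) < 300:
            |         c = rng.uniform(0.0, 10.0, size=3)
            |         if all(np.linalg.norm(c - p) >= min_sep
            |                for p in pts):
            |             pts.append(c)
            | 
            |     pts = np.array(pts)
            |     d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
            |     np.fill_diagonal(d, np.inf)
            | 
            |     # Nearest-neighbour distances cluster just above
            |     # min_sep (local order) with no global lattice.
            |     print(d.min(axis=1).mean())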
        
       ___________________________________________________________________
       (page generated 2021-10-15 23:01 UTC)